|The Early Days of a Better Nation|
Tuesday, December 02, 2008
[Note: this is a lightly edited version of a talk I gave on 30 September at this year's Gartner IT Security Summit, and later reworked as a GoH talk at Octocon. The talk depends a lot on topicality, so (a) the dates matter and (b) I can't squeeze any more talks out of it, so here it is. The picture above isn't actually of me giving that talk but it does fit what I was saying: the future isn't a straight line! It's wiggly!]
Good afternoon, and thank you for coming. I was a little perplexed when I was asked to speak about the future to a conference on IT security. Having worked ten years in IT is of course no guarantee that you know anything about security, in fact I suspect the average programmer gives people like you more sleepless nights than the average user does. As for writing science fiction, it's more or less a guarantee that you're going to get the future wrong. The science fiction writer Jack Vance gave due warning about predictions:
"What are your fees?" inquired Guyal cautiously. "I respond to three questions," stated the augur. "For twenty terces I phrase the answer in clear and actionable language; for ten I use the language of cant, which occasionally admits of ambiguity; for five, I speak a parable which you must interpret as you will; and for one terce, I babble in an unknown tongue." 
This may also be true of management consultants.
Today I'm going for clear and actionable language, on the basis that more than twenty years ago I saw this day coming. Some time around about 1986 the Reader in Mechanical Engineering at Brunel University told me that his best students weren't going into engineering, they were going to work for much higher salaries in the City, which had just undergone the famous Big Bang deregulation. And it did occur to me to think that a society whose best young engineers were going into finance was laying up some trouble for itself. I must admit I didn't actually foresee reading in New Scientist in September 2008 that the graduates most in demand by the City were literally rocket scientists.
There are two problems with thinking about the future, apart from the fact that we can't see it. One is that we like stories, we're a story-telling species, and our heads are cluttered up with stories. But stories have a dramatic structure and usually a somehow satisfying ending. The future doesn't work like that. The other is that we tend to think that things will continue along the same lines, like projecting a line on a graph. But the future doesn't work like that, either.
Shares, as the small print - well, today it's big black print - always reminds us, can go down as well as up. So can trends. This is why, for example, there isn't a US and a Soviet moon base, like in 2001: A Space Odyssey. It was a perfectly reasonable projection in 1969, because we'd had decades of big-scale state-led projects: the American freeway system, the welfare state, Keynesian economics, nuclear power, the space programme, and even the Soviet Union at that time was still looking pretty solid. There seemed no reason why that trend shouldn't continue. So nearly every writer of the sixties assumed we'd have marvellous mass transit systems in 2008 and lots of people would be living in domes on the Moon. And the whole world would have one big computer. The more imaginative had one big computer for each continent. Some time in the early 80s it became clear that this was yesterday's tomorrow.
Most of you, I'm sure, are familiar with the concept of legacy code. It has various definitions, but when I was working in IT back in the 80s we tended to use it for code inherited from someone else, which in the case of London Electricity usually meant someone who had retired or was otherwise no longer available for anything short of unfeasible amounts of money because they'd gone to work in the City to write the programs to implement the complex financial instruments being designed by all those engineers and rocket scientists from Brunel. In either case, that someone would have left an incredibly complex, system-critical, uncommented and undocumented tangle of code. A perhaps over-familiar example of legacy code from that time is the so-called Millennium Bug. Late in 1989 I was modifying a billing program for London Electricity so that the date-printing module could handle the change from 80s to 90s, and I pointed out that there was going to be another decade rollover 10 years later that would change the date from 19-something to 20-something, and that while we were at it we might as well fix that too. Most of the programmers there were science fiction readers, and unlike normal people we were pretty sure the 21st century was actually going to happen one of these days. We then had a discussion, took it to our line management, and they decided to fix the problem in all the other programs. So if you want to know who saved the world from the Millennium Bug, it was science fiction fans. Actually what I'm proudest of is the comment I put beside the fix: "This will work for 2000 and 2100 but for subsequent century dates may require amendment." That's legacy code.
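For the curious, the general shape of that kind of two-digit-year fix can be sketched in a few lines. This is a hypothetical reconstruction, not the actual billing code: the function name, the pivot year, and the window are all invented for illustration.

```python
def expand_year(two_digit_year: int, pivot: int = 50) -> int:
    """Map a two-digit year to a four-digit one using a pivot window.

    Years >= pivot are assumed to mean 19xx; years below it, 20xx.
    Like the fix in the talk, this works for the rollover at hand
    but will require amendment for later century dates.
    """
    if not 0 <= two_digit_year <= 99:
        raise ValueError("expected a two-digit year")
    century = 1900 if two_digit_year >= pivot else 2000
    return century + two_digit_year

print(expand_year(89))  # -> 1989
print(expand_year(8))   # -> 2008
```

The pivot-window trick was a common cheap Y2K remedy precisely because, like the comment in the talk admits, it only defers the problem rather than solving it.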
I used the term 'legacy code' in one of my novels, and Farah Mendlesohn, a science-fiction critic who read it, thought it was a term I had made up, and she promptly adapted it for critical use as 'legacy text'. Legacy text is all the other science fiction stories that influence the story you're trying to write, and that generally clutter up your head even if you never read, let alone write, the stuff. Most of us have default images of the future that come from Star Trek or 2001 or 1984 or Dr Who or disaster movies or computer games. These in turn interact with the tendency to project trends straightforwardly into the future. The great legacy text for the past twenty years or so is William Gibson's Neuromancer. Because Gibson was the guy who saw in the early 1980s that the next twenty years weren't going to be like the fifties and sixties in space. He famously foresaw something like the World Wide Web, but more importantly his novel projects a world where the US military is up there dominating the world from above, while down below the rest of us, from multinational corporations to lone hackers, sort of scrabble around in anarchy. I've written plenty of stories along the same lines, some of which may have become legacy texts themselves.
What if all the stories whose legacy text is Neuromancer have just turned into yesterday's tomorrow? What if right now, we're at a moment when trends that looked inexorable have reached a turning point? What if the common sense of the age is about to flip from free-market capitalism to state-regulated capitalism? Of course, turning that into actual policy won't happen overnight, or smoothly - too much political legacy code - but if it does happen then over the next ten years or so we'll be in an age of big government projects, some sort of new New Deal. We'd find ourselves back in the day before yesterday's tomorrow.
What that would mean is that every problem that has loomed large recently, from global warming to terrorism to Peak Oil to poverty to obesity, suddenly starts getting tackled in a quite different way, from the top down instead of from the bottom up. Or rather, macro-managed instead of micro-managed. People might actually feel freer, because instead of ubiquitous surveillance and endless nagging and hassling of individuals to change their behaviour and worry about their diet or carbon footprint or recycling or whatever, the governments would go straight to building non-carbon and hopefully nuclear power stations and installing reliable public transport and bike lanes, and instead of having us peering at small print on labels simply force the food manufacturers to stop putting fructose and transfats and god knows what other rubbish into the food in the first place. The problems do not go away but the whole attitude towards tackling them changes. Climate change starts looking very different if it begins to be seen in terms of building massive new power sources, transport systems, various schemes of geo-engineering - terraforming Earth, as we call it in the science-fiction biz - instead of cutting back on what we've already got and stopping Africa and Asia from getting it too.
What would such a big turning point mean for IT security? Well, the IT security problem of the past twenty years or so has come from the interaction of two processes: the proliferation and cheapening of computers, and the growth of insecurity and inequality. Some computers - in fact most computers - end up in the hands of people who are clueless about computers. Others end up in the hands, or at least under the fingers, of people who are unemployed or underemployed or criminally employed, and they've progressed from writing viruses to writing ever more sophisticated hacks, spam, scams and exploits. It has been a by-product of the processes that have also led to what Misha Glenny in his book McMafia has called the globalisation of crime.
Now the problem of IT security will not go away, but the very nature of the problem changes if the education system has to adapt to preparing people for manufacture instead of McJobs (or finance), and if there are big technology-heavy projects to soak up the script kiddies and hackers and spammers and scammers into doing something more productive and useful and indeed profitable. And if the West is engaging in big development projects, it can hardly sic the IMF and World Bank on developing countries to let the market rip and shut down their public sectors and protected industries. So you might see again in the Middle East and Africa actual secular progressive regimes, which in the long run would soak up a lot of the swamp that terrorism comes from, not to mention Nigerian scams.
All this would take place against the background of one trend that's probably not going to reverse any time soon: Moore's Law, which very roughly means that next year's computer would do twice as much twice as fast for the same price if it weren't for Microsoft. There are similar trends in information storage and bandwidth. My friend Charles Stross, who knows far more about these things than I do, has written a brilliant analysis of the consequences of Moore's Law, in a talk you can find on his website, Charlie's Diary. I can't really do better than that so I'm just going to say we can assume that over the next couple of decades the amount of computing power, the amount of information that that processing power has available to process, and the amount of bandwidth available to pass information around, are all going to increase to levels that very quickly take you into science fiction territory. Say they double every two years. In twenty years you get ten doublings: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. At that point you probably hit the buffers of fundamental physical limits. But along the way you have to envisage a world within your own professional lifetime and indeed business planning horizon where most people have something like a Cray supercomputer in their pocket and access it on their glasses or contact lenses.
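The doubling arithmetic above is easy to check for yourself; here is a one-function sketch (the function name and parameters are mine, not anything from the talk):

```python
def growth_factor(years: int, doubling_period: int = 2) -> int:
    """Multiplier on capacity after `years`, doubling every `doubling_period` years."""
    return 2 ** (years // doubling_period)

# Twenty years at one doubling every two years = ten doublings.
print(growth_factor(20))  # -> 1024
```

A thousandfold increase in two decades is why the talk's pocket-supercomputer scenario is a straightforward extrapolation rather than an exotic one.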
Now if things were to continue the way they have been over the past twenty years, this is a huge problem for IT security, because you have vast quantities of personal information out there, very insecure, and vast processing resources to throw at it. This translates into immense opportunities for social engineering hacks. If you know the names and birthdays and maiden names and schools and for all I know Amazon picks of not just everyone but of everyone they know, security questions don't look like much of a barrier. The way Sarah Palin's Yahoo! mail got hacked is a very small example of how that can already be done. And to be honest it was because I was thinking along those lines and reading about the Russian Business Network and the Nigerian scam and the cyber attacks on Estonia and Georgia and so on that I came up with the title of this talk, 'All your firewall are belong to us.' But as I say, I think the nature of the problem is going to change. It changes from protecting people from their own cupidity and stupidity to protecting large infrastructure and manufacture from unintended security flaws and from very well-designed attacks, whether sabotage or espionage. Because the world I've just outlined is no more one of peace and harmony than the world of the postwar boom was. Instead of one Cold War there could be several going on simultaneously, in which all sides will be engaged in carrying out and in defending against cyber attacks.
It is, however, a world where IT is going to be even more important than it has been in the world of finance capital. And its focus is going to be different. Instead of going into the City or Wall Street, engineers are going to be working on engines, rocket scientists working on rockets. And instead of applying arcane probability calculations to designing improbable financial instruments, we could see a resurgence and elaboration of the sort of applications that in the sixties it was widely expected computers would mainly get used for: physical and economic planning. Applications like linear programming, critical path analysis, input-output tables, material requirement planning. In the early 90s when I was working for Ferranti, I mean GEC, I mean BAE Systems, I worked on the edges of an MRP II manufacturing resource planning system and I was surprised that such a thing actually existed, while we were at the same time turning over the NHS to pseudo-market systems being developed at vast expense. Robust planning systems already exist but are often disrupted by economic turbulence arising out of the activities of all those Stock Exchange rocket scientists. Which is where we came in.
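To make one of those planning applications concrete: critical path analysis boils down to finding the longest chain of dependent tasks, by total duration, through a project's dependency graph. Here is a minimal sketch; the task names, durations and function name are all invented for illustration, not taken from any real MRP system.

```python
def critical_path(durations, deps):
    """durations: {task: time}; deps: {task: [prerequisite tasks]}.
    Returns (project duration, tasks on the critical path in order)."""
    finish = {}  # earliest finish time of each task
    prev = {}    # predecessor on the longest path into each task

    def earliest_finish(task):
        if task not in finish:
            start, best = 0, None
            for d in deps.get(task, []):
                f = earliest_finish(d)
                if f > start:
                    start, best = f, d
            finish[task] = start + durations[task]
            prev[task] = best
        return finish[task]

    end = max(durations, key=earliest_finish)  # task that finishes last
    path, t = [], end
    while t is not None:       # walk predecessors back to the start
        path.append(t)
        t = prev[t]
    return finish[end], list(reversed(path))

durations = {"design": 3, "procure": 5, "build": 7, "test": 2}
deps = {"procure": ["design"], "build": ["design", "procure"], "test": ["build"]}
total, path = critical_path(durations, deps)
print(total, path)  # -> 17 ['design', 'procure', 'build', 'test']
```

The point of the critical path is managerial: any delay to a task on that chain delays the whole project, while tasks off it have slack, which is exactly the kind of question sixties-era planning software was built to answer.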
What I would like to wind up with in terms of thinking about the future is, as advertised, things that are unlikely to happen and things that look unlikely now but probably will happen and hit us all upside the head. As a science fiction writer I'm under a contractual obligation to mention aliens. I boldly say we will not encounter aliens. OK, so much for that. What about other unpredictable developments? There are plenty of catastrophic risks out there, as well as exciting possibilities. Leaving aside giant meteor impacts, local or global nuclear war, new diseases, Greenland's ice sliding into the North Atlantic in one go, or another Yellowstone supervolcano eruption - i.e. things about which the IT industry can do very little to prepare for, other than having very good off-site back-up and disaster recovery programmes in place, preferably in the middle of Australia - there are a few things we can think about that tie in rather neatly with my restrained and conservative suggestion that the developed world is about to take a massive lurch to the left.
One of the things that recent SF, the Neuromancer-type SF of recent decades, got right was that even if humanity's power over big things seemed to have slipped from its grasp, its power over small things such as molecules was increasing very quickly indeed. This is in part a consequence of Moore's Law, in that researchers in genomics, proteomics, physiomics and all the other omics found they had more and more computer resources to throw at the problem. I don't think this is going to change or that it's anywhere near its limits, so I wouldn't be at all surprised if some of us here live long enough to find that biotechnology has come up with a cure for death. Well, for aging, cancer, and most kinds of crippling injury and disability, at least. It might even be a cosmetics company that does it. Immortality - because you're worth it. This would set the whole pension funds business in something of a new light, let's say. But it would also have consequences so profound that no SF writer to my knowledge has ever actually tackled in depth the possibility of a near-future world where the most annoying generation in history - mine - is still around in 2100 if not 3100. Perhaps the prospect just doesn't bear thinking about. Nevertheless, if you do want to imagine that kind of world, the world of the next couple of decades, I can recommend works like Bruce Sterling's Distraction and Holy Fire, Charles Stross's Accelerando, and Vernor Vinge's Rainbows End.
If people are going to live longer and healthier lives the next step might be to make people smarter. That seems a bit unlikely in the next ten years but it's not impossible. I understand that physicists are already notorious for popping pills to speed up their synapses. The only part of human intelligence enhancement that really impacts IT security is that people might then stop writing down their passwords on Post-It notes. With that however I suspect I may have strayed into the realms of fantasy. So we can expect ever more emphasis on biometric security, and you can be sure that by the time it's implemented it too will be hackable: not just in terms of hacking the IT systems that store the biometric data, but the live data itself - by tweaking bits of DNA to change the expression of particular genes, perhaps to change iris patterns or fingerprints. Eventually we will find ourselves in the world of the open source human.
Another area where ever-increasing computing power is expected to have radical consequences is of course Artificial Intelligence (AI). I've been saying for twenty years that human-equivalent AI has all my life been where it was when I was twenty years old myself - twenty years away. But ethereally-networked supercomputers in every pocket, with increasingly fine-grained interfaces with our own brains and bodies, may yet have some surprises in store. The biggest surprise might be that an AI that emerged out of that wouldn't notice us at all, any more than we notice our neurons. It wouldn't come to eat our brains because our brains are already working for it. It would only be interested in talking to the aliens who we are likewise unaware of. That is in fact the end of that novel I mentioned earlier, Neuromancer, where the emergent AI tells a character about how an alien AI has already tried to make contact, but "'Til there was me, natch, there was nobody to know, nobody to answer."
Finally, we can be sure that if there is indeed a big turn away from neo-liberalism and towards regulation and large-scale projects, lots and lots of things will go horribly wrong, fortunes unimaginable today will be squandered on gigantic schemes that never pay off, and conflicts and contradictions will build up until that trend, too, is reversed or turns into something different in twenty years' time or less. And, no doubt, there'll be science fiction writers and futurologists standing up in front of conferences like this and saying that everything they've been saying for the past twenty years is all out of date and we have to brace ourselves for a sharp turn to the free market. With or without life-extension and smart pills, I hope we are all there to see it.
From "Guyal of Sfere", The Dying Earth.
Vance Wikiquote consulted 29/9/08.