The Early Days of a Better Nation
Tuesday, December 02, 2008
[Note: this is a lightly edited version of a talk I gave on 30 September at this year's Gartner IT Security Summit, and later rewired as a GoH talk at Octocon. The talk depends a lot on topicality, so (a) the dates matter and (b) I can't squeeze any more talks out of it, so here it is. The picture above isn't actually of me giving that talk but it does fit what I was saying: the future isn't a straight line! It's wiggly!]

Good afternoon, and thank you for coming. I was a little perplexed when I was asked to speak about the future to a conference on IT security. Having worked ten years in IT is of course no guarantee that you know anything about security; in fact I suspect the average programmer gives people like you more sleepless nights than the average user does. As for writing science fiction, it's more or less a guarantee that you're going to get the future wrong. The science fiction writer Jack Vance gave due warning about predictions:

"What are your fees?" inquired Guyal cautiously.

"I respond to three questions," stated the augur. "For twenty terces I phrase the answer in clear and actionable language; for ten I use the language of cant, which occasionally admits of ambiguity; for five, I speak a parable which you must interpret as you will; and for one terce, I babble in an unknown tongue." [1]

This may also be true of management consultants. Today I'm going for clear and actionable language, on the basis that more than twenty years ago I saw this day coming. Some time around about 1986 the Reader in Mechanical Engineering at Brunel University told me that his best students weren't going into engineering, they were going to work for much higher salaries in the City, which had just undergone the famous Big Bang deregulation. And it did occur to me to think that a society whose best young engineers were going into finance was laying up some trouble for itself. I must admit I didn't actually foresee reading in New Scientist in September 2008 that the graduates most in demand by the City were literally rocket scientists.

There are two problems with thinking about the future, apart from the fact that we can't see it. One is that we like stories; we're a story-telling species, and our heads are cluttered up with stories. But stories have a dramatic structure and usually a somehow satisfying ending. The future doesn't work like that. The other is that we tend to think that things will continue along the same lines, like projecting a line on a graph. But the future doesn't work like that, either. Shares, as the small print - well, today it's big black print - always reminds us, can go down as well as up. So can trends.

This is why, for example, there isn't a US and a Soviet moon base, like in 2001: A Space Odyssey. It was a perfectly reasonable projection in 1969, because we'd had decades of big-scale state-led projects: the American freeway system, the welfare state, Keynesian economics, nuclear power, the space programme, and even the Soviet Union at that time was still looking pretty solid. There seemed no reason why that trend shouldn't continue. So nearly every writer of the sixties assumed we'd have marvellous mass transit systems in 2008 and lots of people would be living in domes on the Moon. And the whole world would have one big computer. The more imaginative had one big computer for each continent.

Some time in the early 80s it became clear that this was yesterday's tomorrow. Most of you, I'm sure, are familiar with the concept of legacy code.
It has various definitions, but when I was working in IT back in the 80s we tended to use it for code inherited from someone else, which in the case of London Electricity usually meant someone who had retired or was otherwise no longer available for anything short of unfeasible amounts of money, because they'd gone to work in the City to write the programs to implement the complex financial instruments being designed by all those engineers and rocket scientists from Brunel. In either case, that someone would have left an incredibly complex, system-critical, uncommented and undocumented tangle of code.

A perhaps over-familiar example of legacy code from that time is the so-called Millennium Bug. Late in 1989 I was modifying a billing program for London Electricity so that the date-printing module could handle the change from 80s to 90s, and I pointed out that there was going to be another decade rollover 10 years later that would change the date from 19-something to 20-something, and that while we were at it we might as well fix that too. Most of the programmers there were science fiction readers, and unlike normal people we were pretty sure the 21st century was actually going to happen one of these days. We then had a discussion, took it to our line management, and they decided to fix the problem in all the other programs. So if you want to know who saved the world from the Millenium Bug, it was science fiction fans. Actually what I'm proudest of is the comment I put beside the fix: This will work for 2000 and 2100 but for subsequent century dates may require amendment. That's legacy code.

I used the term 'legacy code' in one of my novels, and Farah Mendlesohn, a science-fiction critic who read it, thought it was a term I had made up, and she promptly adapted it for critical use as 'legacy text'. Legacy text is all the other science fiction stories that influence the story you're trying to write, and that generally clutter up your head even if you never read, let alone write, the stuff. Most of us have default images of the future that come from Star Trek or 2001 or 1984 or Dr Who or disaster movies or computer games. These in turn interact with the tendency to project trends straightforwardly into the future.

The great legacy text for the past twenty years or so is William Gibson's Neuromancer. Because Gibson was the guy who saw in the early 1980s that the next twenty years weren't going to be like the fifties and sixties in space. He famously foresaw something like the World Wide Web, but more importantly his novel projects a world where the US military is up there dominating the world from above, while down below the rest of us, from multinational corporations to lone hackers, sort of scrabble around in anarchy. I've written plenty of stories along the same lines, some of which may have become legacy texts themselves.

What if all the stories whose legacy text is Neuromancer have just turned into yesterday's tomorrow? What if right now, we're at a moment when trends that looked inexorable have reached a turning point? What if the common sense of the age is about to flip from free-market capitalism to state-regulated capitalism? Of course, turning that into actual policy won't happen overnight, or smoothly - too much political legacy code - but if it does happen then over the next ten years or so we'll be in an age of big government projects, some sort of new New Deal. We'd find ourselves back in the day before yesterday's tomorrow.
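[An editorial aside on the date-rollover fix in the Millennium Bug anecdote above: a minimal sketch of that kind of two-digit-year windowing might look like the following. This is illustrative Python only; the billing program was certainly not written in Python, and the function name and pivot year here are invented for the example.]

    # Illustrative only: a two-digit-year "windowing" fix of the sort described
    # in the anecdote above. The real London Electricity code is not reproduced.

    def expand_two_digit_year(yy: int, pivot: int = 80) -> int:
        """Expand a two-digit year to four digits.

        Years from `pivot` upwards are read as 19xx, the rest as 20xx.
        Like the original fix, this copes with the 19xx-to-20xx rollover
        but would need amendment for subsequent centuries.
        """
        if not 0 <= yy <= 99:
            raise ValueError("expected a two-digit year")
        return 1900 + yy if yy >= pivot else 2000 + yy

    if __name__ == "__main__":
        assert expand_two_digit_year(89) == 1989  # the 80s-to-90s rollover
        assert expand_two_digit_year(5) == 2005   # the decade rollover ten years on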
What that would mean is that every problem that has loomed large recently, from global warming to terrorism to Peak Oil to poverty to obesity, suddenly starts getting tackled in a quite different way, from the top down instead of from the bottom up. Or rather, macro-managed instead of micro-managed. People might actually feel freer, because instead of ubiquitous surveillance and endless nagging and hassling of individuals to change their behaviour and worry about their diet or carbon footprint or recycling or whatever, the governments would go straight to building non-carbon and hopefully nuclear power stations and installing reliable public transport and bike lanes, and instead of having us peering at small print on labels simply force the food manufacturers to stop putting fructose and trans fats and god knows what other rubbish into the food in the first place. The problems do not go away but the whole attitude towards tackling them changes. Climate change starts looking very different if it begins to be seen in terms of building massive new power sources, transport systems, various schemes of geo-engineering - terraforming Earth, as we call it in the science-fiction biz - instead of cutting back on what we've already got and stopping Africa and Asia from getting it too.

What would such a big turning point mean for IT security? Well, the IT security problem of the past twenty years or so has come from the interaction of two processes: the proliferation and cheapening of computers, and the growth of insecurity and inequality. Some computers - in fact most computers - end up in the hands of people who are clueless about computers. Others end up in the hands, or at least under the fingers, of people who are unemployed or underemployed or criminally employed, and they've progressed from writing viruses to writing ever more sophisticated hacks, spam, scams and exploits. It has been a by-product of the processes that have also led to what Misha Glenny in his book McMafia has called the globalisation of crime. Now the problem of IT security will not go away, but the very nature of the problem changes if the education system has to adapt to preparing people for manufacture instead of McJobs (or finance), and if there are big technology-heavy projects to soak up the script kiddies and hackers and spammers and scammers into doing something more productive and useful and indeed profitable. And if the West is engaging in big development projects, it can hardly sic the IMF and World Bank on developing countries to let the market rip and shut down their public sectors and protected industries. So you might see again in the Middle East and Africa actual secular progressive regimes, which in the long run would soak up a lot of the swamp that terrorism comes from, not to mention Nigerian scams.

All this would take place against the background of one trend that's probably not going to reverse any time soon: Moore's Law, which very roughly means that next year's computer would do twice as much twice as fast for the same price if it weren't for Microsoft. There are similar trends in information storage and bandwidth. My friend Charles Stross, who knows far more about these things than I do, has written a brilliant analysis of the consequences of Moore's Law, in a talk you can find on his website, Charlie's Diary.
I can't really better that, so I'm just going to say we can assume that over the next couple of decades the amount of computing power, the amount of information that that processing power has available to process, and the amount of bandwidth available to pass information around, are all going to increase to levels that very quickly take you into science fiction territory. Say they double every two years. In twenty years you get ten doublings: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. At that point you probably hit the buffers of fundamental physical limits. But along the way you have to envisage a world within your own professional lifetime and indeed business planning horizon where most people have something like a Cray supercomputer in their pocket and access it on their glasses or contact lenses.

Now if things were to continue the way they have been over the past twenty years, this is a huge problem for IT security, because you have vast quantities of personal information out there, very insecure, and vast processing resources to throw at it. This translates into immense opportunities for social engineering hacks. If you know the names and birthdays and maiden names and schools and for all I know Amazon picks of not just everyone but of everyone they know, security questions don't look like much of a barrier. The way Sarah Palin's Yahoo mail got hacked is a very small example of how that can already be done. And to be honest it was because I was thinking along those lines and reading about the Russian Business Network and the Nigerian scam and the cyber attacks on Estonia and Georgia and so on that I came up with the title of this talk, 'All your firewall are belong to us.'

But as I say, I think the nature of the problem is going to change. It changes from protecting people from their own cupidity and stupidity to protecting large infrastructure and manufacture from unintended security flaws and from very well-designed attacks, whether sabotage or espionage. Because the world I've just outlined is no more one of peace and harmony than the world of the postwar boom was. Instead of one Cold War there could be several going on simultaneously, in which all sides will be engaged in carrying out and in defending against cyber attacks.

It is, however, a world where IT is going to be even more important than it has been in the world of finance capital. And its focus is going to be different. Instead of going into the City or Wall Street, engineers are going to be working on engines, rocket scientists working on rockets. And instead of applying arcane probability calculations to designing improbable financial instruments, we could see a resurgence and elaboration of the sort of applications that in the sixties it was widely expected computers would mainly get used for: physical and economic planning. Applications like linear programming, critical path analysis, input-output tables, material requirements planning. In the early 90s when I was working for Ferranti, I mean GEC, I mean BAE Systems, I worked on the edges of an MRP II (manufacturing resource planning) system and I was surprised that such a thing actually existed, while we were at the same time turning over the NHS to pseudo-market systems being developed at vast expense. Robust planning systems already exist but are often disrupted by economic turbulence arising out of the activities of all those Stock Exchange rocket scientists. Which is where we came in.
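[Editorial aside: for a flavour of what such planning applications look like in code, here is a toy linear programming sketch. The products, hours and profit figures are invented for the illustration, and it leans on SciPy's linprog rather than anything a sixties planner would have run.]

    # A toy production-planning linear programme, purely illustrative:
    # the products, resource hours and profits are invented for this sketch.
    from scipy.optimize import linprog

    # Profit per unit for two products; linprog minimises, so negate to maximise.
    profit = [-30.0, -50.0]

    # Resource usage per unit (rows: machine-hours, labour-hours) and availability.
    A_ub = [[1.0, 2.0],
            [3.0, 2.0]]
    b_ub = [100.0, 180.0]

    result = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("optimal production plan:", result.x)
    print("maximum profit:", -result.fun)

[Real planning systems are far larger and messier, but constrained optimisation of roughly this shape sits at the heart of many of them.]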
What I would like to wind up with in terms of thinking about the future is, as advertised, things that are unlikely to happen and things that look unlikely now but probably will happen and hit us all upside the head. As a science fiction writer I'm under a contractual obligation to mention aliens. I boldly say we will not encounter aliens. OK, so much for that.

What about other unpredictable developments? There are plenty of catastrophic risks out there, as well as exciting possibilities. Leaving aside giant meteor impacts, local or global nuclear war, new diseases, Greenland's ice sliding into the North Atlantic in one go, or another Yellowstone supervolcano eruption - i.e. things the IT industry can do very little to prepare for, other than having very good off-site back-up and disaster recovery programmes in place, preferably in the middle of Australia - there are a few things we can think about that tie in rather neatly with my restrained and conservative suggestion that the developed world is about to take a massive lurch to the left.

One of the things that recent SF, the Neuromancer-type SF of recent decades, got right was that even if humanity's power over big things seemed to have slipped from its grasp, its power over small things such as molecules was increasing very quickly indeed. This is in part a consequence of Moore's Law, in that researchers in genomics, proteomics, physiomics and all the other omics found they had more and more computer resources to throw at the problem. I don't think this is going to change or that it's anywhere near its limits, so I wouldn't be at all surprised if some of us here live long enough to find that biotechnology has come up with a cure for death. Well, for aging, cancer, and most kinds of crippling injury and disability, at least. It might even be a cosmetics company that does it. Immortality - because you're worth it. This would set the whole pension funds business in something of a new light, let's say. But it would also have consequences so profound that no SF writer to my knowledge has ever actually tackled in depth the possibility of a near-future world where the most annoying generation in history - mine - is still around in 2100 if not 3100. Perhaps the prospect just doesn't bear thinking about. Nevertheless, if you do want to imagine that kind of world, the world of the next couple of decades, I can recommend works like Bruce Sterling's Distraction and Holy Fire, Charles Stross's Accelerando, and Vernor Vinge's Rainbows End.

If people are going to live longer and healthier lives the next step might be to make people smarter. That seems a bit unlikely in the next ten years but it's not impossible. I understand that physicists are already notorious for popping pills to speed up their synapses. The only part of human intelligence enhancement that really impacts IT security is that people might then stop writing down their passwords on Post-It notes. With that, however, I suspect I may have strayed into the realms of fantasy.

So we can expect ever more emphasis on biometric security, and you can be sure that by the time it's implemented it too will be hackable: not just in terms of hacking the IT systems that store the biometric data, but the live data itself - by tweaking bits of DNA to change the expression of particular genes, perhaps to change iris patterns or fingerprints. Eventually we will find ourselves in the world of the open source human.
Another area where ever-increasing computing power is expected to have radical consequences is of course Artificial Intelligence (AI). I've been saying for twenty years that human-equivalent AI has all my life been where it was when I was twenty years old myself - twenty years away. But ethereally-networked supercomputers in every pocket, with increasingly fine-grained interfaces with our own brains and bodies, may yet have some surprises in store. The biggest surprise might be that an AI that emerged out of that wouldn't notice us at all, any more than we notice our neurons. It wouldn't come to eat our brains because our brains are already eating for it. It would only be interested in talking to the aliens who we are likewise unaware of. That is in fact the end of that novel I mentioned earlier, Neuromancer, where the emergent AI tells a character about how an alien AI has already tried to make contact, but ''Til there was me, natch, there was nobody to know, nobody to answer.'

Finally, we can be sure that if there is indeed a big turn away from neo-liberalism and towards regulation and large-scale projects, lots and lots of things will go horribly wrong, fortunes unimaginable today will be squandered on gigantic schemes that never pay off, and conflicts and contradictions will build up until that trend, too, is reversed or turns into something different in twenty years' time or less. And, no doubt, there'll be science fiction writers and futurologists standing up in front of conferences like this and saying that everything they've been saying for the past twenty years is all out of date and we have to brace ourselves for a sharp turn to the free market. With or without life-extension and smart pills, I hope we are all there to see it.

[1] From "Guyal of Sfere", The Dying Earth. Vance wikiquote consulted 29/9/08.

28 Comments:
not enough uranium, Ken, not enough, it should be saved for space travel, no better pack of punch per pound...
damn wrong link

If you want to save uranium for space travel then go implement big solar and wind. They complement each other nicely. Peak wind and peak sun mostly (not always) occur at different times. Use solar thermal, not photovoltaic, and you can store heat rather than electricity at the solar end. Wind tends to come and go in short bursts, so you can use comparatively small amounts of electrical storage. Connect them by long-distance transmission, and voila: stable baseload, load-following and peak power, made reliable by the addition of only a little natural gas (5% or less).
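[Editorial aside: a rough, invented-numbers sketch in Python of the wind/solar complementarity described in the comment above. The demand level, capacity factors and daily profiles are all assumptions, and it leaves out the heat storage and long-distance transmission the comment relies on, so the residual gas share it prints is far larger than the "5% or less" quoted.]

    # Toy illustration of complementary wind and solar output over one day,
    # with dispatchable gas filling whatever gap remains. All numbers invented.
    import math

    HOURS = 24
    demand = [1.0] * HOURS  # flat 1 GW demand, purely for illustration

    # Solar peaks at midday; wind in this sketch is strongest overnight.
    solar = [0.7 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(HOURS)]
    wind = [0.5 + 0.3 * math.cos(math.pi * h / 12) for h in range(HOURS)]

    # Gas fills the hour-by-hour shortfall left by the two variable sources.
    gas = [max(0.0, d - s - w) for d, s, w in zip(demand, solar, wind)]
    print(f"gas share of total demand: {sum(gas) / sum(demand):.0%}")

[Adding storage and transmission to a model like this is what the comment is counting on to push the residual gas share down towards a few per cent.]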
Millenium! He wrote Millenium!
Re. Moore's Law: '"Millenium" bug fixed' - oh, no, it isn't. You have fallen into the error of "just one small change" rather than going over everything, doing unit testing rather than system testing.
Meika wrote "heaps of thorium though, much better bet, still a good thirty years away from being a real power source though" - probably only ten years away, given past work and current efforts in places like India. It's straightforward development work from here on, not open ended research.
Utterly unrelated, but I'm happy to inform Ken MacLeod that Cosmonaut Keep has just been translated and published in Italy (the two other instalments will follow in 2009).
I agree, we need to save 'non-renewable' resources for projects that require them for the ultra long-term. Escape velocity from the gravity well requires (for now) the use of essentially irreplaceable fuels. Sunlight can make your car go, but it won't put a single-stage-to-orbit vehicle up.

Thanks, TechSlave - interesting comment. A denial of service attack on someone's clothing is sure to go into one of my stories unless Charlie gets there first.
Ken,
"Escape velocity from the gravity well requires (for now) the use of essentially irreplaceable fuels."
John,

Minor quibble: hydrazine proper is not used much these days, apart from "cold" attitude rockets. Its derivatives like monomethyl or unsymmetric dimethyl hydrazine - yes, sometimes blended with hydrazine - are far more common (and not usually burned with oxygen but with other oxidants like hydrogen peroxide or nitrogen tetroxide).

Since these fuels can be produced by electricity & electricity can be produced by nuclear power, it depends on whether nuclear should count as a renewable. Since it can be kept going for at least the next 5 billion years, if it doesn't count as such then "renewable" is merely an ideological rather than an engineering description.

It would be very nice if science-fiction writers were to stop fantasizing about what kinds of internet applications could be constructed if only packets could travel through the network at superluminal speeds. Some problems just can't really be hand-waved away by inverse multiplexing over mesh networks, and I'm getting pretty tired of seeing that trope played for more than it's worth.

Denial of service attacks on your clothing are so old school - think of man-in-the-middle attacks on your vision (or your augmented memory - shades of 1984). Woops - Vernor beat you to it, as well as invention of a new profession: code archaeology.

Ken, I didn't know that you have been discussing the failure of AI for TWENTY years. That's great! I've been having similar gripes since 1977. In that year I read a (so-called) psychology textbook called HUMAN INFORMATION PROCESSING. I got annoyed by its overwhelming neglect of physiology and its use of all sorts of AI notions AS IF they were based on solid results. A close reading (for as long as my patience lasted) showed that few if any were. I became suspicious immediately and a bit enraged at what I considered to be scientific charlatanism. I've been assured that real results, e.g. about chess-playing programs, have been attained recently. That's fine: but in most philosophy and science that's post-behaviorist, consciousness, say visual sensing, is considered to be up for grabs. Is it there? If so, how do you best describe it? What's its function? Is it computational? Is it necessary for intentional, intelligent action? etc. Nobody knows! For an excellent SF novel solidly based on all this, read Peter Watts' brilliant and intense BLINDSIGHT. You'll learn some cutting-edge biology and neuroscience.

OOPS! Ken, I am certain that you have read BLINDSIGHT and that you know lots of the cutting-edge and - often more important - durable science involved.
Your passing joke about Australia is already obsolete.
Tech Slave said:
If the Austrian (Mises, Hayek, ...) school of economics is right, the current recession and coming economic debacle have their source in [too much] government meddling in money, rather than in too little of same.
Our dear leader's current solution is to borrow another £500 billion. If the current fall in the £ & interest rates means nobody sane will lend, they now appear to be thinking about just printing the stuff ("quantitative easing"). This does indeed rather suggest an even more spectacular crash.
To expand on what Nedbrek mentioned, not only have we hit the wall in GHz on CPU speed, a Sandia white paper shows that beyond 8 cores you don't get much performance boost. So not only is Moore's Law hitting a wall, it's hitting it pretty soon.
If Moore's Law were to hit the wall it could reasonably be taken as evidence that we were reaching the upper half of the S curve of human progress. On the other hand, so far we aren't, & indeed the doubling time, which was originally 2 years, sank to 18 months & may now be nearer 1 year.
>What if the common sense of the age is about to flip from free-market capitalism to state-regulated capitalism? Of course, turning that into actual policy won't happen overnight, or smoothly - too much political legacy code - but if it does happen then over the next ten years or so we'll be in an age of big government projects, some sort of new New Deal. We'd find ourselves back in the day before yesterday's tomorrow.
See my recent post in Grist Magazine on Big Green, something I've been pushing since 1998.
By Anonymous, at Tuesday, December 02, 2008 8:02:00 pm