The following Feb. 21, 2003, New York Transhumanist Association :: World Futures Symposium keynote address, Predicting The Future, was given by Eliezer Yudkowsky. He has granted ImmInst permission to repost here.
Good afternoon. My name is Eliezer Yudkowsky, and I work at the Singularity Institute for Artificial Intelligence. My name was Eliezer Yudkowsky yesterday, and it will probably still be Eliezer Yudkowsky tomorrow. I could change my name any time I wanted to, but I'm not likely to want to, which is too bad, because it's not an easy name to pronounce. So do I wish that I wanted to change my name? No; I'm satisfied with being a person who doesn't want to change his name. I don't want to change what I want. That, more than anything, is what locks in my future. Does that make me logically consistent, or just boring? That is for you to decide.
When you heard the title of this presentation would be "Predicting the Future", you probably formed some picture in your mind of what to expect from this presentation, based on your past experiences with futurism. For example, many of you probably expect me to mention Moore's Law for the exponential doubling of computer processing power. That prediction has just come true, but it was a self-fulfilling prophecy.
There's a standard cultural mythology of a place called "the future". It has towering skyscrapers, flying cars, and, of course, slightly more advanced computers. It's strange, really; we have beliefs, sometimes strongly emotional beliefs, about a place we've never been. It's as if we all had definite opinions about the people and culture of Atlantis. This... is Hollywood's fault. Hollywood keeps showing us people living and working and having adventures in starships and space stations. Those movies become our vicarious memories; we remember what the characters said, and what they looked like, and what happened to them, in the world Hollywood tells us is "the future".
Fifty thousand years ago, there was no such thing as television; humans did not evolve in a world where you can actually see fictions. We don't have strong innate safeguards to prevent us from reasoning using fictional memories. When reporters write about advances in medical prostheses or brain-computer interfaces, they can never seem to resist mentioning the Borg. What are the Borg? A dream. Something a Star Trek scriptwriter made up. We've invented a new logical fallacy: generalization from fictional evidence. The Borg are slow, clumsy, uncompassionate, inexorable, deadly; so humans with a brain-computer interface must be like that as well. We've seen them... haven't we?
A hundred years ago it was very common to have strong negative opinions about people you'd never met. Today we call that racial stereotyping. This case of racial stereotyping is unusual because no one has ever met a real Borg, but the concept of "Borg" is still an ordinary racial stereotype that survives by the same mechanism as all other racial stereotypes; people repeating to each other what everyone knows. Since there are no real Borg to be hurt by the stereotype, what I mean to show is not that the stereotype is morally wrong, but that it's rationally a very poor way of predicting the future.
Okay, so what's a better way? Let's start by looking at the early days of true brain-computer interfacing. Actually those early days are right now. You've already got people working on visual prostheses for the blind that tap straight into the visual cortex. Currently the best visual interface I've heard of has 1024 neural taps, enough for a 32-by-32 visual resolution, which, as it turns out, is enough to let a blind man drive a car around a parking lot. But despite the ooh-ahh factor, that's just a vastly more impressive kind of hearing aid. You don't get into the truly interesting part until you start developing prostheses that actually help people think. So given that reading thoughts is a lot harder than tapping into the visual cortex, what's the first mind-computer interface likely to be like? I think it's possible to make a good case that the first direct mind interfaces won't be between humans and computers but between humans and humans. Even if you have the technology to tap neural signals, send and receive data from individual neurons, it's very hard to figure out what the heck those neurons are saying. Let's say you have two people, Bob and Chris, both with broadband, two-way brain-computer interfaces. Who'll be the first to figure out what Bob's brain is saying? Human researchers writing computer software? Or Chris's brain learning to decode Bob's brain? Chris's brain might adapt to talk to Bob's brain in a week, while the programmers might not be able to decode the traffic until a year later, much less modify it.
Now we haven't really departed from the Borg premise yet. You can look at what I've just described and say it's the beginning of the Borg groupmind. But if you do this, in real life, does it actually transform Bob and Chris into soulless drones like Star Trek scriptwriters depict? Why would that happen? I mean, cliches aside, why would it actually happen?
There's a science-fiction author named Spider Robinson who has written quite a few stories based on the premise that computer-mediated telepathy is good for you - that if you can break out of the cave of your own skull and touch another mind, this makes you more compassionate, a better person. Spider Robinson probably put a lot more work into extrapolating his future than the Star Trek scriptwriters put into dreaming up the Borg. Science fiction in literature is not the same as what you see on television. There's an ethic that serious science fiction writers have, of getting the science right, of making sure that their premises are logically consistent. Even so it's still fiction. You have to look at Spider Robinson's logic and ask how good it really is as a chain of deductions. You can't treat a science fiction writer's stories as observational evidence about a place called the future, the way that reporters covering cloning seem to think "Brave New World" is a history book. You can't weigh Spider Robinson's reports against Star Trek's reports because they are not reports; they're fiction.
The real future - the future we've never seen - is something that arises out of the present by an immensely complicated process. One human lifetime isn't enough to learn more than a tiny fraction of the rules involved, and in real life all the rules are interacting with each other. So where does our picture of the future come from?
It comes from cliches. Our picture of the future is built entirely out of cliches. Where do the big towering skyscrapers in the movies come from? Did somebody actually do an extrapolation of current trends in building size? Of course not. You can do an extrapolation of trends in building size, but it's not going to help, because the actual size of buildings a few years down the road will be determined by suburb and exurb growth, telecommuting, terrorism in cities, fear of terrorism in cities, property values, which interact with the stock market, and stuff I simply don't know about because I have absolutely no idea what developers take into account when they decide to build skyscrapers. I don't even know how little I know. Every single factor I've named could be entirely irrelevant for all I know. Skyscrapers aren't my specialty.
But skyscrapers are "futuristic", so if you want to show a future city in a movie, you've got to put in lots of great big skyscrapers. If you're showing a happy future, you've got to have lots of shiny chrome. If you have a sad future, you've got to have poor lighting and harsh neon colors. We saw it in Blade Runner so it must be true.
Star Trek's Borg, and Spider Robinson's compassionate telepaths, are both cliches. They're just different cliches. The Borg are the "machine" cliche. They move slowly and clunkily because steam shovels are slow and clunky. They dress in black. They have visible wires and make hydraulics noises. And they behave like the world's most massive bureaucracy, but heavily armed. They are the system. They are the machine. They are a three-hour line at the Department of Motor Vehicles, but with phasers. They are every computer that doesn't show your ticket and every clerk that doesn't have your reservation. That's what we've come to associate with machines; it is the cliche of machines. You can all imagine what a purple sandwich looks like, right? You combine the concepts in your mind to get the mental imagery. Purple plus sandwich equals purple sandwich. Similarly, you've never seen a cat-human in real life, but you probably find it very easy to imagine what one looks like. Pointed ears, for example. Now, in fact, if you unwisely mixed feline with human DNA, you probably wouldn't get a viable organism at all, and if you did, it wouldn't look anything like a cat-human does in your imagination. It would be controlled by the very complex rules that govern gene regulatory networks for the differentiation and development of anatomy, and you'd have to be an expert in genetics and feline anatomy and human anatomy to have the vaguest idea of what it would look like.
But we all know what a "machine human" looks like. It looks like a Borg. You take the cliche for machines, add that to human, and out come the Borg. When I say that we generalize from fictional experiences, I don't mean that we think the Borg are real. What I mean is that we get our concepts from fictional experiences, like our concept of what a "machine human" is like. We know that a futuristic city has skyscrapers, even though the real cities of the real future may look entirely different.
And Spider Robinson is also using a cliche. He's using the New Age cliche of a kindly, loving telepath. Spider Robinson's compassionate telepaths are more believable than the Borg, because Spider Robinson's story has more thinking behind it. The Borg are a nonsensical mishmash of machine stereotypes, not an extrapolation at all. Spider Robinson's logic makes a kind of sense; if you can extend a brain-computer interface into a brain-to-brain interface, two people might be able to see each other's thoughts. That's telepathy. But what exactly do you see? How much do you see? How do you react to seeing someone else's thoughts, and how do you react to knowing someone else can see your own thoughts? Spider Robinson's hypothesis is that at the end of it all, people are nicer to each other. Okay. I think he may be right about that particular aspect of it. But what else happens? Rather a lot. For example, people may get smarter. That sentence really deserves around five exclamation points at the end of it, but I'll save them for later.
The future is strange. And it is the interacting outcome of many rules that would take a human lifetime to understand just individually. Meanwhile, our cultural picture of the future is built around Hollywood recycling its own cliches and journalists recycling Hollywood. So how do we predict the future successfully? The answer, historically speaking, is that we don't; we fail. Futurism has its own history of failure. The famous physicist Lord Kelvin, in the nineteenth century, saying that physics was solved and the only frontier left was the sixth decimal place. The chairman of IBM in 1943, saying there was a world market for maybe five computers. The failure of futurism is another cliche. It's not really a fair cliche because the kind of gadget-obsessed futurism you see in the newspapers isn't the best that futurism has to offer. The media makes cliches, recycles cliches, and when gathering around the campfire and telling stories about "the future" fails as a method of prediction, they tell us how lousy futurists are, which is also a cliche. All you ever see in the media is the lowest common denominator of futurism.
Still, predicting the future is really hard. There's a way to measure how good a supposed expert is at prediction. It's called calibration. Let's say you've got a stock-market expert. The way calibration works is that instead of asking stock-market experts to predict a specific target for the Dow Jones, like nine thousand, you ask them to predict a range and then assign a confidence to that range. Like, they might say that the Dow Jones will be between eight thousand and ten thousand on January 1st, 2004, and that they're ninety percent confident of that. And then you track some predictions like that and you see whether, when the person says "ninety percent confident", the event actually happens nine times out of ten. If it happens nine times out of ten, you're well-calibrated. If it happens six times out of ten, you're overconfident. And if it happens ninety-nine times out of a hundred, you're underconfident.
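The scoring procedure described above can be sketched in a few lines of Python. The interval predictions and outcomes here are invented purely for illustration, not real market data:

```python
# A minimal sketch of calibration scoring, using invented example data.
# Each prediction is (low, high, stated_confidence); each outcome is the
# value that actually occurred.

def calibration(predictions, outcomes):
    """Return the fraction of outcomes that fell inside the predicted interval."""
    hits = sum(low <= actual <= high
               for (low, high, _conf), actual in zip(predictions, outcomes))
    return hits / len(predictions)

# Ten hypothetical "90% confident" interval predictions for some index value.
preds = [(8000, 10000, 0.90)] * 10
actuals = [8500, 9100, 9900, 7800, 8200, 9500, 8800, 10100, 9000, 8600]

hit_rate = calibration(preds, actuals)
# Well-calibrated would mean hit_rate near 0.90; here 8 of 10 outcomes
# fall inside the interval, so on this tiny sample the forecaster looks
# slightly overconfident.
print(hit_rate)  # 0.8
```

In practice you would track many predictions, bucketed by stated confidence, before calling anyone well- or poorly calibrated.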
Now there's a certain replicable result in this field which is very shocking. It shows up when you examine expert predictions across a wide variety of fields. That replicable result occurs when you ask experts in a field to give a wide enough interval that they're really, really sure - say, an upper bound where they're 99% sure the value won't be higher than that, and a lower bound where they're 99% sure the value won't be lower than that, and together those two bounds define a 98% confidence interval. For example, an expert might say they're 99% sure the Dow Jones won't be below five thousand on January 1st 2005, and that they're 99% sure the Dow Jones won't be above thirteen thousand. And those two bounds, the upper bound and the lower bound, define a 98% confidence interval.
How many times are experts wrong on their 98% confidence intervals?
The result, which is replicable across a wide range of experts and predictions, is that the actual value of the variable falls outside of the 98% confidence interval 30% of the time.
I repeat: 30%. For those of you who are skeptical and who are wondering where I'm getting this from, I was also skeptical when I first ran across this result, but it was cited as being from a very famous paper called "Judgment under uncertainty" by Tversky and Kahneman - Kahneman being the guy who just won the Nobel Prize in economics - and I checked the paper, and yeah, it's there.
As a futurist, I once gave a 90% confidence interval on something. I said I was 90% confident a certain event would happen between 2005 and 2020. Well, I don't give estimates like that any more. I've learned my lesson.
So people aren't just imperfect at predicting the future. We're extraordinarily lousy at predicting the future. We've got two modes of predicting the future. We've got people spinning dramatic campfire tales about fifty years in the future by recycling cliches, and we've got trained professional experts making very poorly calibrated guesses about the one or two numbers they understand.
Sounds like a mess, doesn't it? So how do you predict the future?
At this point Sir Karl Popper rides to our rescue. Karl Popper is the philosopher of science who became famous for pointing out that it's much easier to disprove something than prove it. Sometimes what seems like an extremely well-confirmed theory, like Newton's theory of gravity, is displaced by an even better theory, like Einstein's theory of General Relativity. But once an old theory has been disproven, it never comes back. The ratchet of science only turns in one direction.
Our picture of the future is built from cliches. What I'd like to suggest is that if you view the job of futurism as finding and knocking out those cliches, you can sometimes arrive at conclusions about the future that look almost like real predictions.
For example, one of the classic questions in futurism has always been "When will Artificial Intelligences become as smart as humans?" Now, it's already become common to answer that question by saying that AIs and humans will have different abilities, so the question is really nonsense. Now, it is possible to ask nonsensical questions, but I don't think this question deserves to be dismissed that way. Right now computer programs have very different abilities from humans, but there's no doubt that overall, they're completely inferior to human intelligence. Forget human equivalence; I don't think there's anything out there I would call a mind. I don't think there's anything out there I would describe as exhibiting mindlike behavior. We understand source code, and we write source code, but no computer program even begins to understand us. That's why I'm using the phrase "computer program" instead of "Artificial Intelligence"; I think an Artificial Intelligence isn't a great big computer program any more than a human is a great big amoeba. You can ask about a mind whose physical embodiment can be described as computer processing power, but that's around as fair as describing humans as a particularly complex kind of meat. It's not the material that's important, it's how it's shaped. If there are differences between humans and AIs, it will be because of the different shapers - humans were shaped by evolution, and AIs will be shaped by humans. If we don't have AI yet, it's not our materials that have failed us, but our craftsmanship.
There is, however, at least one famous and important difference in the materials. Neurons are slow. Transistors are fast. Neurons fire around 200 times per second. A modern CPU delivers 2 billion operations per second, which is a difference of about ten million. Now, at least in the beginning, we'll probably burn a lot of that speed difference making up for the difference in parallelism. But while you can convert fast into parallel, you can't convert parallel into fast. If you can do 2 billion operations, one after the other, in one second, then you can do 200 operations ten million times, in one second. But the reverse isn't true - you can't take ten million slow neurons and simulate one really really fast neuron. The human brain has incredible massive parallelism - something like a hundred trillion synapses. But how much of that computing power is necessary just to do anything at all in real time when your CPU speed is 200 operations per second?
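The fast-into-parallel conversion, and why it doesn't reverse, is just arithmetic. A sketch, using the round order-of-magnitude figures from the talk rather than measured hardware numbers:

```python
# Round figures from the talk: illustrative orders of magnitude only.
cpu_ops_per_sec = 2_000_000_000   # ~2 billion serial operations per second
neuron_hz = 200                   # ~200 firings per second per neuron

# Fast converts into parallel: a serial device can time-slice itself
# into many slow "neurons", each stepped 200 times per second.
simulatable_neurons = cpu_ops_per_sec // neuron_hz
print(simulatable_neurons)  # 10000000 - ten million 200 Hz units

# Parallel does not convert into fast: each *dependent* step still takes
# 1/200 of a second no matter how many neurons run side by side, so a
# chain of ten million sequential steps takes 10,000,000 / 200 seconds.
serial_chain_seconds = 10_000_000 / neuron_hz
print(serial_chain_seconds)  # 50000.0 seconds - roughly 14 hours, not one second
```

The asymmetry is the point: adding more slow units widens the computation but never shortens a chain of steps that must happen one after another.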
Every time you look at a quote android unquote in a movie, and the actor playing the android hesitates, it's a cliche. The android is hesitating for, say, one second, around as long as a human would. And I'm sure that seems natural to the actors and the scriptwriters. But why one second? Why not a million seconds, or a millionth of a second? By the time you're done porting intelligence to hardware that's massively fast instead of massively parallel, it won't be bound to anything like the humanly natural speed. From our perspective, it might be very fast, or very slow, but there's no reason why it would be bound to our speed.
Now note that I haven't said when you'll see AI. I haven't said whether it will happen in 2010 or 2050. I haven't even said whether it'll be fast or slow. What I've said is that every time AI is depicted as thinking at exactly the same rate humans do, it's a cliche. AI might be much slower, it might be much faster, it might be much slower on some things and much faster on others, but it won't be just exactly the human set of speeds.
Another thing - when I talk about AIs being slower or faster, I'm not talking about differences in speed that are like slow humans or fast humans. We spend our whole lives living and talking with other humans, and we naturally tend to focus on the kind of differences that separate humans from each other. Like, maybe most people take one second to figure something out, but one person might see it intuitively in a fifth of a second, and another person might be slower and take five seconds. We're not talking about that kind of difference. That kind of difference is tiny on the scale we're talking about. The difference in serial speed between a neuron and a CPU is a factor of ten million, and the difference in parallelism between today's computer clusters and the human brain is around ten billion to one in the other direction. Relative to the space of minds in general, all humans are the same make and model of car. That's why when you see an android whose reaction times all fall exactly into the human band of variance, it's a cliche.
Now for a much deeper cliche, one that permeates all our visions of the future, and one that means the future may be much stranger than we expect. In Hollywood, technology is wallpaper. It's part of the scenery. It's what tells you, this story is about "the future". It's very rare to find a movie where the technology is an inseparable part of the plot. Star Wars works just as well if they're fighting with swords and magic instead of lightsabers and the Force. ET works just as well if the kids find a fairy instead of an extraterrestrial. But watch a movie that was made in the 1950s. Not a movie about the 1950s, a movie that was made in the 1950s. The people in that movie are far stranger than any character you can find in today's movies. They have different fundamental assumptions about how people are supposed to act. You watch "Psycho", which was made in 1960 and packs 30 seconds of G-rated offscreen violence into a two-hour movie, and you think: "At some point in human history, this was actually a horror movie. People screamed while watching this movie because they were frightened." Aliens from the past.
And yet, things haven't really changed all that much over, say, the last fifty years. Or the last hundred years. Or the last thousand years. The brain doesn't change. The mind doesn't change. Emotions don't change. People remain people, just in different places and times. It's why, even though movies from the twentieth century seem strange, we can still understand them today.
The last time anything really changed on this planet was fifty thousand years ago, when the species Homo sapiens assumed its modern form. That was the last time anything changed here [points to brain]. And in the end, this is all that counts. All our world is just reflections of this. All the changes of the last fifty thousand years, all the advances in technology, from agriculture to the Internet, are the continuing echoes of this one change. Because of this, we can shape mountains in our image, make the sun's fire burn here on Earth, set our footprints in the dust of the moon, heal the sick. Our technology. Our culture. Our history. All of it patterned here, in the most complicated object the universe has ever produced... so far.
In 1983 a mathematician by the name of Vernor Vinge made a fundamental observation. He said that at some point in the future, technology would advance to the point of creating smarter-than-human intelligence. Augmentation of the brain, or brain-computer interfaces, or computer-mediated telepathy, or true Artificial Intelligence; but at any rate, some minds that are smarter than human. At that point, said Vernor Vinge, our model of the future breaks down. If you could predict the actions of a mind smarter than your own, you would be that smart. Vinge called this the "Singularity", by analogy with the singularity at the center of a black hole, where our current model of physics doesn't tell us what happens. Presumably something happens at the center of a black hole. It's not real physics that's incomplete, just our model of physics. So presumably something happens, but we don't know quite what.
There have been a lot of people offering up mutant new definitions of the Singularity over the years. I myself have been guilty of this. But the more I look at Vinge's original definition, the more I realize that Vinge got it exactly right on his first try. The Singularity is the breakdown that occurs in our model of the future when we try to extrapolate that model past the point where it predicts the rise of smarter-than-human intelligence. This is the ultimate cliche.
There's a famous little video, an advertisement that was shown in the 1950s, in which a young woman sings: "Everyone says, the future is strange, but I've got a feeling some things won't change." She sings this in "The Kitchen of the Future". I like to think this woman is now the CEO of a software company somewhere. In one sense she was right. Some things didn't change. But everyone was wrong about what changed. The Singularity is like that. It's possible some things won't change, but we don't know which things. There isn't anything where you can point to it and say, well, at least this isn't up for grabs. I can't think of any way that could happen. Because what you're really saying is, this can't think of any way that could happen.
So if we can't predict the future, why try?
Because we have to. If you can't see the consequences of your actions, you don't know what actions to take. Today futurism has become an industry. For most futurists, in the newspapers, on TV, in books, futurism is an extension of the industry of storytelling. That's not the purpose of prediction. Prediction is more primal than that, it has deeper roots. We each and every one of us predict the future, on a small scale, just to get through the day. We predict that we can get to work by turning left instead of right, that we can find information by searching on Google, that it's worth the inconvenience of carrying an umbrella because the probability of rain is high and the probable inconvenience of being rained on outweighs the certain inconvenience of carrying an umbrella.
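The umbrella decision above is an expected-cost comparison. A minimal sketch, with made-up cost numbers on an arbitrary "inconvenience" scale:

```python
# All numbers here are invented for illustration.
p_rain = 0.7                 # assumed probability of rain
cost_soaked = 10.0           # inconvenience of being rained on
cost_carrying = 2.0          # certain inconvenience of carrying the umbrella

# Expected inconvenience of each action:
expected_cost_without = p_rain * cost_soaked   # 0.7 * 10.0 = 7.0
expected_cost_with = cost_carrying             # paid with certainty: 2.0

# Take the umbrella when the expected cost of going without exceeds
# the certain cost of carrying it.
take_umbrella = expected_cost_with < expected_cost_without
print(take_umbrella)  # True
```

This is the everyday, small-scale version of prediction the talk is pointing at: every decision quietly weighs the forecast consequences of each available action.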
There is a moment when a previously unknown part of the future becomes known to you. It is the moment of decision. Before you decide, there's a blank spot in your model of yourself - you don't know which action you'll take. Then you decide, and one value is written onto that blank spot. What value? Which action? That depends on how your model of reality predicts the consequences of each of the possible actions you considered. Taking an umbrella versus not taking an umbrella. Taking the bus versus taking the train. Driving or flying. Pepsi or Coke, Windows or Linux, smooth or crunchy, good or evil, cash or credit, life or death, chocolate or vanilla, ice or fire, fish or cut bait, green or purple, animal or mineral or vegetable, nine to five or eleven to four, to be or not to be, that is the question. The better you can predict the consequences of your actions, the more actions you consider, the more powerful your ability to manipulate reality. Predicting the future is an inherent part of the process of trying to change the future. Self-fulfilling prophecies are the only ones worth making.
That is our power... the human gift. It's not quite unique to our species, but it is stronger in us than anywhere else on Earth. It is what lets us reach into the immense space of possibilities and choose the long and complex series of actions that ends in the creation of a car, or a space station. We are gathered here today, not to predict the future, but to change it. How do we want to change it? We're here to choose that too. It's our goals that determine which of our predictions describe "happy" futures; it's our goals that choose which futures are worth bringing into existence. But whatever we decide we want, it will be a futile wish unless we have the knowledge to choose between our present actions.
Good morning. Or good evening. I don't quite know which. My name is Eliezer Yudkowsky, and I work at the Singularity Institute for Artificial Intelligence. That was my job yesterday, and it will still be my job tomorrow. That job is not to predict the future, but to alter it. That's what I've decided to do with my life. I could decide to do something else with my life, any time I wanted to. I have that power. I choose not to exercise it. I want to want to create a better world. That, more than anything, is what locks in my future and makes it predictable. That aspect of my future is known. For many here, it is still undecided. In the moment that it becomes predictable, I hope you predict it well.