©2002 Singularity Institute for Artificial Intelligence
http://singinst.org
Ordinary technology - all the technology produced in the last ten thousand years; all the knowledge accumulated over the last fifty millennia - is the product of human intelligence. The Singularity - the technological creation of smarter-than-human intelligence - means breaking the upper limits on intelligence that have held since the dawn of the human species. But let's be specific. What happens next, and how long does it take to have a real impact on your life?
In one sense this is something like asking Benjamin Franklin in the 1700s to predict computers, Artificial Intelligence, and the Singularity, all on the basis of his experimentation with electricity. Actually, that would have been more reasonable; Benjamin Franklin was at least a human of the same type and species as ourselves. We can't say for certain what a smarter-than-human intelligence would do; to do that, we'd have to be that smart ourselves. We can't set upper limits on what a transhuman can do, or even upper limits on how fast it will happen. Any limit we set, no matter how reasonable-sounding, could turn out to have a simple workaround that we're too young as a civilization or insufficiently intelligent as a species to see in advance. What we can do is try to set lower bounds, based on the humanly comprehensible things we can almost-but-not-quite achieve with our current intelligence and technology.
Given what's currently on the horizon, this will turn out to be more than enough.
One obvious impact of the Singularity is the impact of new technologies invented by the mind or minds at the epicenter of the Singularity. It may not be the most important impact, but when it comes to establishing lower bounds, it's an impact that's relatively easy to talk about. Technology has been improving humanity's standard of living for thousands of years; in a sense we're used to it.
A conservative version of the Singularity would start with the rise of smarter-than-human intelligence in the form of enhanced humans with minds or brains that have been overclocked by purely biological means. This scenario is more "conservative" than a Singularity which takes place as a result of brain-computer interfaces or Artificial Intelligence, because all thinking is still taking place on neurons with a characteristic limiting speed of 200 operations per second; progress would still take place at a humanly comprehensible speed. In this case, the first benefits of the Singularity would probably resemble the benefits of ordinary human technological thinking, only more so. Any given scientific problem could benefit from having a few Einsteins or Edisons dumped into it, but it would still require time for research, manufacturing, commercialization, and distribution. (But note that an increasing fraction of the world economy is information, particularly software. Informational goods can be copied six billion times almost as easily as once, and distributed in a fraction of a second.)
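To put that 200-operations-per-second figure in perspective, here is a back-of-envelope sketch. The neuron rate is the figure used above; the 2 GHz clock rate is our assumption, standing in for a commodity processor, and the comparison counts only raw serial speed, ignoring everything else that separates brains from chips:

    # Raw serial speed of neurons vs. silicon. The 200 ops/sec figure is
    # the essay's; the 2 GHz clock is an assumed commodity processor.
    # This ignores parallelism, memory, and architecture entirely.
    neuron_ops_per_sec = 200
    silicon_ops_per_sec = 2_000_000_000
    print(f"serial speed ratio: {silicon_ops_per_sec // neuron_ops_per_sec:,}x")
    # -> serial speed ratio: 10,000,000x

On raw serial speed alone, silicon already has a factor of ten million in hand, which is why figures like "a million times as fast," used later in this essay, are conservative rather than exotic.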
The first effects of this version of the Singularity might or might not have an enormous impact, depending on how smart the enhanced humans were and how fast they worked, but the impact would probably still take place on a roughly human scale and within the context of the existing human system - it would just be an unusually large amount of a kind of progress we already know about. However, some of the technologies most likely to receive the attention of such early transhumans would be the technologies involved in stronger forms of the Singularity: direct brain-computer interfaces, in which some of the thinking elements are fast and reprogrammable, or Artificial Intelligence, in which thought exists on a substrate that is all fast, all reprogrammable, and growable indefinitely. Even if the Singularity starts out slow - and it might not; it might start out with a seed AI - it won't take too long to arrive at the convergent triple punch described in "What is the Singularity?": smarter-than-human intelligence, ultrafast intelligence, and recursively self-improving intelligence.
In the long run, and quite possibly the short run, the question is not how long it will take for our factories to manufacture Singularity technology, but how long it will take Singularity technology to replace our factories; not how much Singularity technologies will cost, but how long it will take Singularity technologies to make the existing economy irrelevant. It may make sense to visualize an enhanced human working in an existing research incubator to produce breakthroughs that will be commercialized and sold in the normal way, but there's something fundamentally wrong with applying this viewpoint to a superintelligence that has a thousand times human brainware capacity, thinks a million times as fast, and is improving itself exponentially. At this point one deals not with the Singularity plugging into the existing economic system but with the Singularity rendering the existing economic system irrelevant. Our manufacturing infrastructure is just not all that impressive on a cosmic scale; it's attuned to our 200Hz brains and our huge, slow hands.
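The phrase "improving itself exponentially" can be made concrete with a toy model. Everything numerical below is an illustrative assumption - the 20% gain per redesign and the one-subjective-year cycle time are ours, not the essay's - but the structure is the point: if each redesign also speeds up the designer, later improvements consume less and less external time.

    # Toy model of recursive self-improvement. All numbers are assumed:
    # each redesign takes one subjective year and makes the designer
    # 20% faster, so successive redesigns finish sooner and sooner.
    speed = 1.0        # thinking speed, in multiples of human-equivalent
    wall_clock = 0.0   # elapsed external (calendar) time, in years
    for cycle in range(30):
        wall_clock += 1.0 / speed   # external time for this redesign
        speed *= 1.2                # the redesign speeds up the next cycle
    print(f"{speed:,.0f}x human speed after {wall_clock:.1f} external years")
    # -> 237x human speed after 6.0 external years

Under these assumptions the external clock converges toward about six years no matter how long the loop runs, while the speed grows without limit - a crude picture of why "plugging into the existing economy" stops being the right frame.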
But how does a superintelligence get from the human system to somewhere interesting? Won't that take years and immense amounts of venture capital? Let's suppose for a moment that, having tried our absolute best to think of a way to bypass the system, we've failed - we just can't imagine any way to create nanotechnology that doesn't involve the newborn superintelligence (SI) directing human hands to build the tools to build the tools to build the tools to build the actual nanotechnology, over a period of what would be, to it, millions of subjective years. Well... so what? What matters is not what we can imagine but what the SI can imagine. The SI is smarter than we are.
The SI is faced with the problem of manipulating external reality into a state where that reality contains controllable technology operating on the SI's own timescale. Usually this is taken to mean nanotechnology, although nanotechnology is really just the human conception of material technology pushed to its limit, not necessarily the means a superintelligence would use. Our own preferred means of manipulating reality is large, clumsy factories, but this is not actually written into the laws of physics. Today's gleaming automated factories didn't exist a few centuries ago. We built them using tools that we built using tools that we ultimately built with our bare hands, and while that may have seemed to take a very long time to us, it happened incredibly fast by the standards of evolution, which until then was the only inventor on Earth. When we built our technology, we didn't use evolution's method of cumulative, blind incremental changes, or even evolution's favored tool of DNA. We walked around that entire paradigm using methods entirely outside evolution's experience.
Today's inventors are limited by venture capital, but that's a human limit. Consider the matter from a chimpanzee's perspective, if chimpanzees were smart enough to have perspectives. Humans may think they need "venture capital" as well as "intelligence" to power their human magic, but to a chimpanzee, "venture capital" is an incomprehensible thing that only humans have and only human-level intelligences can understand, so from that vantage point there's not much point in making the distinction. From a superintelligence's vantage point, our own limits may look equally parochial.
If a superintelligence somehow came into existence in the 1950s, it is admittedly hard for us to imagine what tools could have been used to immediately create nanotechnology. This doesn't mean there isn't any way, just that we can't think of any. After all, we humans built up tools to make tools using our bare hands, without accessing evolution's DNA-based manufacturing technology, and we did so on a timescale that was almost instantaneous by evolution's standards. But if a superintelligence came into existence today, when we do have access to evolution's DNA-based manufacturing techniques, not to mention numerous technological toys, then it is fairly straightforward to think of tools which could be used to create nanotechnology in a hurry. At least it would be a "hurry" by human standards. For example, if the protein folding problem can be cracked for artificial proteins (whose designs might be chosen expressly to make the folds easily computable), then any DNA-synthesis and peptide-sequencing laboratory, of which there are currently many online, is a tool for manufacturing arbitrary artificial proteins. It becomes relatively easy to imagine a newborn superintelligence sending out a few dozen emails and receiving as many FedExed boxes, in a few days, containing everything needed to create nanotechnology. Some online labs boast of a 48-hour turnaround time. From an SI's perspective, thinking millions of times faster than human, that might still be too slow. At a million-to-one speedup, 48 hours amounts to five or six thousand years. In that amount of time even a human would probably think of a faster method - it didn't take all that long for someone to come up with the protein-synthesis suggestion. For all we know, if you're smarter than human, you can build nanotechnology in 48 hours of subjective time. But from our perspective, "a few days of human time" is fast enough already.
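The million-to-one conversion above is easy to check. A minimal sketch, using only the essay's own figures:

    # How long does a 48-hour lab turnaround feel to a mind running a
    # million times faster than human? Both figures are the essay's.
    speedup = 1_000_000
    external_hours = 48
    subjective_years = external_hours * speedup / (24 * 365.25)
    print(f"{subjective_years:,.0f} subjective years")
    # -> 5,476 subjective years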
Except to an impatient human onlooker, it may not matter all that much - on a cosmic scale - whether bypassing the existing infrastructure takes a few hours or a few decades. It will matter a great deal to everyone who either lives to see the Singularity or dies before it, and the arrival time of the Singularity is strongly related to our probability of getting there before humanity destroys itself with biotech or nanotech weapons, but the basic crossover into the realm of transhuman intelligence and then superintelligence remains the same. The destination being the same, it would be easier to present a picture of a Singularity that operates on the years-to-decades timescale that humans are accustomed to - it would be more comfortable to think about, easier to think about, and would probably trigger less skepticism. But our best guess is that it wouldn't be true. The Singularity is not bound to operate at a human speed or to respect the limits of human imagination, any more than the 20th century was bound to the timescale of the 4th century, humans are bound to the timescale of evolution, or Homo sapiens is bound by the limits of Homo erectus's imagination. The criteria that determine what we find easiest and most comfortable to visualize are not the forces that will give rise to the actual Singularity event lying in humanity's future.
So how large is the impact of the Singularity, really? Not just large on the human scale, but larger than the human scale - outside the human scale entirely. If you consider something like nanotechnology, it sounds impressive enough on its own: Nanotech is self-replicating, so that it costs roughly as much to make six billion copies as one copy - six billion copies is only about thirty-three doublings away from the first one; nanotech is ultrafast, so that a worldwide "utility fog" could be constructed in hours or less; and nanotech operates on a scale small enough, relative to the world of biology, to offer fine-grained control at an invisibly small level. Nanotech is enough to eliminate illness and old age, take a serious potshot at death itself, and provide unlimited material wealth. And yet when you look at it closely, this is just a very large impact measured on a human scale; it provides the same kinds of things we're already used to getting as the result of technological progress, just more of them and faster. The impact of the Singularity doesn't have to be that small. What will it really be?
We don't know. We're not superintelligent.