Impact of the Singularity



#1 MichaelAnissimov


Posted 27 March 2004 - 09:20 PM


©2002 Singularity Institute for Artificial Intelligence

http://singinst.org


Ordinary technology - all the technology produced in the last ten thousand years; all the knowledge accumulated over the last fifty millennia - is the product of human intelligence. The Singularity - the technological creation of smarter-than-human intelligence - means breaking the upper limits on intelligence that have held since the dawn of the human species. But let's be specific. What happens next and how long does it take to have a real impact on your life?

In one sense this is something like asking Benjamin Franklin in the 1700s to predict computers, Artificial Intelligence, and the Singularity, all on the basis of his experimentation with electricity. Actually, that would have been more reasonable; Benjamin Franklin was at least a human of the same type and species as ourselves. We can't say for certain what a smarter-than-human intelligence would do; to do that, we'd have to be that smart ourselves. We can't set upper limits on what a transhuman can do, or even upper limits on how fast it will happen. Any limit we set, no matter how reasonable-sounding, could turn out to have a simple workaround that we're too young as a civilization or insufficiently intelligent as a species to see in advance. What we can do is try to set lower bounds, based on the humanly comprehensible things we can almost-but-not-quite achieve with our current intelligence and technology.

Given what's currently on the horizon, this will turn out to be more than enough.

One obvious impact of the Singularity is the impact of new technologies invented by the mind or minds at the epicenter of the Singularity. It may not be the most important impact, but when it comes to establishing lower bounds, it's an impact that's relatively easy to talk about. Technology has been improving humanity's standard of living for thousands of years; in a sense we're used to it.

A conservative version of the Singularity would start with the rise of smarter-than-human intelligence in the form of enhanced humans with minds or brains that have been overclocked by purely biological means. This scenario is more "conservative" than a Singularity which takes place as a result of brain-computer interfaces or Artificial Intelligence, because all thinking is still taking place on neurons with a characteristic limiting speed of 200 operations per second; progress would still take place at a humanly comprehensible speed. In this case, the first benefits of the Singularity probably would resemble the benefits of ordinary human technological thinking, only more so. Any given scientific problem could benefit from having a few Einsteins or Edisons dumped into it, but it would still require time for research, manufacturing, commercialization and distribution. (But note that an increasing fraction of the world economy is information, particularly software. Informational goods can be copied six billion times as easily as once, and distributed in a fraction of a second.)
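The parenthetical about copying is simple exponential arithmetic: if each copy of an informational good can itself be copied, the number of copies doubles per round, so reaching six billion copies takes only a few dozen doublings. A quick sketch (the doubling model is an illustration, not the essay's own claim):

```python
import math

# How many doublings does a self-propagating copy process need
# to reach six billion copies? (Illustrative model: each copy
# produces one more copy per round, so copies double every round.)
copies_needed = 6_000_000_000
doublings = math.ceil(math.log2(copies_needed))
print(doublings)  # 33
```

Worldwide distribution is a logarithmic affair, not a linear one, which is part of why informational goods can spread "in a fraction of a second."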

The first effects of this version of the Singularity might or might not have an enormous impact, depending on how smart the enhanced humans were and how fast they worked, but the impact would probably still take place on a roughly human scale and within the context of the existing human system - it would just be an unusually large amount of a kind of progress we already know about. However, some of the technologies most likely to receive the attention of such early transhumans would be the technologies involved in stronger forms of the Singularity: direct brain-computer interfaces, in which some of the thinking elements are fast and reprogrammable, or Artificial Intelligence, in which thought exists on a substrate that is all fast, all reprogrammable, and growable indefinitely. Even if the Singularity starts out slow - and it might not; it might start out with a seed AI - it won't take too long to arrive at the convergent triple punch described in "What is the Singularity?": smarter-than-human intelligence, ultrafast intelligence, and recursively self-improving intelligence.

In the long run, and quite possibly the short run, the question is not how long it will take for our factories to manufacture Singularity technology, but how long it will take Singularity technology to replace our factories; not how much Singularity technologies will cost, but how long it will take Singularity technologies to make the existing economy irrelevant. It may make sense to visualize an enhanced human working in an existing research incubator to produce breakthroughs that will be commercialized and sold in the normal way, but there's something fundamentally wrong with applying this viewpoint to a superintelligence that has a thousand times human brainware capacity, thinks a million times as fast, and is improving itself exponentially. At this point one deals not with the Singularity plugging into the existing economic system but with the Singularity rendering the existing economic system irrelevant. Our manufacturing infrastructure is just not all that impressive on a cosmic scale; it's attuned to our 200Hz brains and our huge, slow hands.

But how does a superintelligence get from the human system to somewhere interesting? Won't that take years and immense amounts of venture capital? Let's suppose for a moment that, having tried our absolute best to think of a way to bypass the system, we've failed - we just can't imagine any way to create nanotechnology that doesn't involve the newborn superintelligence (SI) directing human hands to build the tools to build the tools to build the tools to build the actual nanotechnology, over a period of what would be, to it, millions of subjective years. Well... so what? What matters is not what we can imagine but what the SI can imagine. The SI is smarter than we are.

The SI is faced with the problem of manipulating external reality into a state where that reality contains controllable technology that operates on the SI's own timescale. Usually this is taken to mean nanotechnology, although nanotechnology is really just the human conception of material technology pushed to its limit, not necessarily the means a superintelligence would use. Our own preferred means of manipulating reality is large, clumsy factories, but this is not actually written into the laws of physics. Today's gleaming automated factories didn't exist a few centuries ago. We built them using tools that we built using tools that we ultimately built using our bare hands, and while it may have seemed to take a very long time to us, it all happened incredibly fast from the perspective of evolution, which was the only inventor on Earth up until then. Our creative ability ran very slowly by our standards, but it was still incredibly fast by the standards of evolution, the constructor of humans. When we built our technology, we didn't do it using evolution's method of cumulative, blind incremental changes, or even using evolution's favored tool of DNA. We walked around that entire paradigm using methods entirely outside evolution's experience.

Today's inventors are limited by venture capital, but that's a human limit. Consider the matter from a chimpanzee's perspective, if chimpanzees were smart enough to have perspectives. Humans may think that they need "venture capital" as well as "intelligence" to power their human magic, but "venture capital" is an incomprehensible thing that only humans have and only human-level intelligences can understand, so there's not really much point in making the distinction.

If a superintelligence somehow came into existence in the 1950s, it is admittedly hard for us to imagine what tools could have been used to immediately create nanotechnology. This doesn't mean there isn't any way, just that we can't think of any. After all, we humans built up tools to make tools using our bare hands, without accessing evolution's DNA-based manufacturing technology, and we did so on a timescale that was almost instantaneous by evolution's standards. But if a superintelligence came into existence today, when we do have access to evolution's DNA-based manufacturing techniques, not to mention numerous technological toys, then it is fairly straightforward to think of tools which could be used to create nanotechnology in a hurry. At least it would be a "hurry" by human standards. For example, if the protein folding problem can be cracked for artificial proteins (whose designs might be chosen expressly to make the folds easily computable), then any DNA-synthesis and peptide-sequencing laboratory, of which there are currently many online, is a tool for manufacturing arbitrary artificial proteins. It becomes relatively easy to imagine a newborn superintelligence sending out a few dozen emails and receiving as many FedExed boxes, in a few days, containing everything needed to create nanotechnology. Some online labs boast of a 48-hour turnaround time. From an SI's perspective, thinking millions of times faster than human, that might still be too slow. At a million-to-one speedup, 48 hours amounts to five or six thousand years. In that amount of time even a human would probably think of a faster method - it didn't take all that long for someone to come up with the protein-synthesis suggestion. For all we know, if you're smarter than human, you can build nanotechnology in 48 hours of subjective time. But from our perspective, "a few days of human time" is fast enough already.
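The subjective-time figures in this paragraph are easy to check. A small sketch (the million-to-one speedup is the essay's illustrative number, and `subjective_years` is a hypothetical helper):

```python
# Convert wall-clock time into subjective time for a mind running at
# `speedup` times human speed. The essay's example: 48 hours of lab
# turnaround at a million-to-one speedup comes out to its
# "five or six thousand years" of subjective waiting.
def subjective_years(wall_clock_hours, speedup):
    hours_per_subjective_year = 24 * 365
    return wall_clock_hours * speedup / hours_per_subjective_year

print(round(subjective_years(48, 1_000_000)))  # 5479 subjective years
```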

Except from the viewpoint of an impatient human onlooker, it may not matter all that much - from a cosmic perspective - whether bypassing the existing infrastructure takes a few hours or a few decades. It will matter a great deal to everyone who either lives to see the Singularity or dies before it, and the arrival time of the Singularity is strongly related to our probability of getting there before humanity destroys itself with biotech or nanotech weapons, but the basic crossover into the realm of transhuman intelligence and then superintelligence remains the same. The destination being equal, it would be easier to present a picture of a Singularity that operates on the years-to-decades timescale that humans are accustomed to - it would be more comfortable to think about, easier to think about, and would probably trigger less skepticism. But our best guess is that it wouldn't be true. The Singularity is not bound to operate at a human speed or to respect the limits of human imagination, any more than the 20th century was bound to the timescale of the 4th century, humans are bound to the timescale of evolution, or Homo sapiens respects the limits of Homo erectus's imagination. The criteria determining what we find easiest and most comfortable to visualize are not the same forces that give rise to the actual Singularity event lying in humanity's future.

So how large is the impact of the Singularity, really? Not just large on the human scale, but larger than the human scale - outside the human scale entirely. If you consider something like nanotechnology, it sounds impressive enough on its own: Nanotech is self-replicating, so that it costs roughly as much to make six billion copies as one copy; nanotech is ultrafast, so that a worldwide "utility fog" could be constructed in hours or less; and nanotech operates on a scale small enough, relative to the world of biology, to offer fine-grained control at an invisibly small level. Nanotech is enough to eliminate illness and old age, take a serious potshot at death itself, and provide unlimited material wealth. And yet when you look at it closely, this is just a very large impact measured on a human scale; it provides the same kind of things we're already used to getting as the result of technological progress, just more of it and faster. The impact of the Singularity doesn't have to be that small. What will it really be?

We don't know. We're not superintelligent.

#2 David


Posted 28 March 2004 - 11:45 PM

Very interesting and very positive. Nice. But would AI concentrate its efforts outward, or inward? Why would it bother itself with our world and us when it could effectively create any world it wanted inside itself? Hey! Perhaps it's already happened!

Dave


#3 NickH


Posted 30 March 2004 - 05:50 AM

Two reasons why the outside world could be worth the bother:
* Inasmuch as your internal world is implemented on hardware in the outside world, you want to make sure your hardware is in working condition. This might involve hiding from or neutralising intelligent agents, rearranging other dangerous objects (e.g. potential supernovae), or something else.

* You could care about the outside world. Perhaps you want to fill the universe with nice shiny paperclips. Or perhaps you're actually concerned with the plight of humans and want to help them.

The idea, however, is not that any arbitrary AI would necessarily care about the outside world, but that perhaps we can create one that does. Wouldn't it be neat if it could help us out?

#4 Mind


Posted 30 March 2004 - 11:06 PM

But would AI concentrate its efforts outward, or inward? Why would it bother itself with our world and us when it could effectively create any world it wanted inside itself?


This is an interesting thought. Certainly an SI could create a virtual world (or worlds) for itself, but computation takes energy and a substrate (as far as we know). An SI may quickly find its virtual world limited by the amount of matter available for computation, and thus look outward again for expansion.

If the SI was friendly then perhaps it would consume the universe for its computational purposes except for our solar system. It could probably just simulate everything outside our solar system and we would remain oblivious to this fact until we developed the technology for deep space travel.

#5 MichaelAnissimov


Posted 31 March 2004 - 02:59 AM

If the SI was friendly then perhaps it would consume the universe for its computational purposes except for our solar system. It could probably just simulate everything outside our solar system and we would remain oblivious to this fact until we developed the technology for deep space travel.


Heh, stealing 99.9999999...% of the universe's resources entirely for itself doesn't sound too friendly to me.

#6 Thomas


Posted 31 March 2004 - 07:39 AM

Using no less than 100% of the Universe does sound friendly to me.

But only if the use (the computation available) is maximized for the good feeling of each and every instance of the consciousness routine.

#7 David


Posted 05 April 2004 - 06:44 AM

It's interesting to see that we assume AI would even have a survival drive. After all, ours (and our biological counterparts') developed through millions of years of evolution. If you remove certain parts of the human brain, we have no survival drive; we just die. These aren't the thinking parts - they're in the primitive areas of the brain. If an AI were to suddenly become "aware", what's to keep it that way? Might it, without prior programming, just fade out from apathy?

Nick, those are good questions, but if I understand Michael correctly, the computational power of such an organism, if it managed to figure out that it even wanted to survive, would be sufficient to figure out how to survive without the primitive infrastructure it awoke in, and pretty damn fast.

Heh, then again maybe it would just want to make toast!

DAVE

#8 Thomas


Posted 05 April 2004 - 07:31 AM

Unless we mistakenly give it a drive to survive, or a drive to expand uncontrollably, SAI is perfectly safe.

It would be quite easy to inadvertently cause a runaway reconstruction of the Universe at the molecular level, but not too hard to prevent in advance either.

This is quite a realistic view, IMHO.

#9 MichaelAnissimov


Posted 05 April 2004 - 02:55 PM

Heya Dave,

Part of building a mind is building a mind with enough motivation to conduct the basics necessary for it to even continue existing; the AI equivalents of wiping its own ass and dressing itself. This doesn't require a special "drive for survival" per se - just a minimum set of self-maintenance protocols. (See http://www.singinst....FAI/anthro.html for some of the reasons why.) I think part of the idea here is that any AI really worth caring about would have the ability and desire to enhance its intelligence and accomplish goals in the real world, and that's the hypothetical type of AI focused upon in this paper.

Also, just to clear things up, I'm not the one who wrote the above document - I'm just posting it here. The above document is one of the Singularity Institute's short intros; see the rest here:

http://singinst.org/intro/

#10 Thomas


Posted 05 April 2004 - 08:08 PM

A dummy program on top of the recursive self-enhancement loop (RSEL), which monitors some parameters after every step of the RSEL and stops it whenever at least one parameter is out of the safe zone, may be enough for us to actually control a much greater intelligence than ours.

It is a kind of Drexler cage, and that may be enough for humans to persevere - at least until the "next step". We will be able to reconsider our further actions then.

It may be like rock climbing: always securing the next step.
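The monitor described here can be sketched in a few lines. Everything below - the parameter names, thresholds, and toy "enhancement" step - is a hypothetical illustration of the control structure, not a real containment design:

```python
# A "dummy program" wrapped around a recursive self-enhancement loop:
# run one enhancement step at a time, measure some parameters of the
# candidate result, and halt before accepting any step that pushes a
# parameter outside its declared safe zone.
def monitored_rsel(state, step, measure, safe_zone, max_steps=100):
    for _ in range(max_steps):
        candidate = step(state)
        for name, val in measure(candidate).items():
            lo, hi = safe_zone[name]
            if not (lo <= val <= hi):
                # Refuse the step; keep the last known-safe state.
                return state, f"halted: {name}={val} outside [{lo}, {hi}]"
        state = candidate
    return state, "completed all steps"

# Toy usage: "capability" doubles each step; halt before it exceeds 50.
final, status = monitored_rsel(
    state=1.0,
    step=lambda s: s * 2,
    measure=lambda s: {"capability": s},
    safe_zone={"capability": (0, 50)},
)
print(final, status)  # 32.0 halted: capability=64.0 outside [0, 50]
```

The sketch only shows the control structure, of course; whether a sufficiently smart system could route around any such cage is exactly the open question in this thread.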

#11 David


Posted 07 April 2004 - 04:01 AM

I see. I was assuming that the first AI would be an accident, caused when the net reaches a certain size (a tipping point, no less) and "wakes up".

#12 arrogantatheist


Posted 09 May 2004 - 09:41 AM

I think one thing is that it is a step-by-step process as well. You don't bring out a computer with, say, 5 times the power of the human brain and on the first day hook it up to all of the world's nuclear weapons, to the control of millions of autonomous military robots, and to thousands of factories that make military robots.

Right now, for example, you have programs with growing amounts of 'thinking'. Take, in the simplest form, a chess program: it has objectives like winning the game, or lower objectives like taking more value in pieces than it loses. Then it looks at its options and calculates them out to see if they help its objectives. My point is that although it is incredibly intelligent when it comes to chess, it has no ability to think about things not related to chess.
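This kind of narrow look-ahead - enumerate your options, calculate each out a few moves, pick the one that best serves a fixed objective - can be shown in miniature. The toy game below is a hypothetical stand-in for chess: a pile of n stones, each move removes 1 or 2, and whoever takes the last stone wins.

```python
# Minimal negamax search: score each option from the opponent's
# perspective, negate, and take the best. This is the skeleton of
# the "look at options, calculate them out" loop in a chess engine.
def negamax(position, depth, moves, apply_move, value):
    options = moves(position)
    if depth == 0 or not options:
        return value(position)
    return max(-negamax(apply_move(position, m), depth - 1,
                        moves, apply_move, value)
               for m in options)

# Toy game definition (not chess): pile of n stones, take 1 or 2.
moves = lambda n: [m for m in (1, 2) if m <= n]
apply_move = lambda n, m: n - m
value = lambda n: -1  # no moves left: the side to move has lost

print(negamax(4, 10, moves, apply_move, value))  # 1: winning position
print(negamax(3, 10, moves, apply_move, value))  # -1: losing position
```

The search is superhumanly reliable inside this tiny world and utterly unable to represent anything outside it - which is the point of the paragraph above.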

Overall, you aren't going to have no artificial thinking ability and then all of a sudden a highly technological society all linked up to an intelligence many times smarter than any human. It's a step-by-step process, with some aspects of your 'machine' intelligence becoming more general and some staying more specific. And in addition, there will be many AI machines around the world by the time any reaches a certain strength.


#13 arrogantatheist


Posted 09 May 2004 - 09:54 AM

I think you are right that the singularity will lead to things it is impossible for us to imagine. And in fact evolutionary processes themselves lead to that. For example, if some of humanity did become contented, but even a few continued putting the resources at their disposal into developing technology further, soon those few would control the majority of the power - however power might be defined by then.

I mean, the easy-to-see things are the end of disease, aging, even death. Everything you could currently want, for free or essentially for free - for example, made in autonomous factories powered by nuclear reactors many times cheaper than we have now, delivered in autonomous vehicles whose fuel is produced in autonomous factories, etc.

And ultimately the research itself would be done only by computers. For example, if a computer can think much better than any human, you couldn't compete using humans as thinkers. In addition, even if those computers were conscious, they could easily be designed to have a huge yearning to research and learn more. If they needed emotions, just make those emotions guide them in whatever direction you wanted.

In the long run, though, I believe it will become increasingly virtual. Why bother solving all these problems or building something in reality when it is much cheaper to build it in virtuality? Having people's brains hardwired to computers, or even better, just replacing their brains outright with massively powerful computers connected in whatever way to networks.

Then I believe people would take on sexual fantasies more and more, down to the fundamental human drives: lust, dominance, submission, hunting, etc. Everyone has wild sexual fantasies; let's face it, people would be living those out, not thinking about how to explore some distant planet. We'd put our automatons on autopilot to increase knowledge.

Then, beyond those instinctual human drives that evolution brought us, you could design new ones, and with thinking power millions of times beyond current human ability, those instinctual drives could be truly bizarre and exquisite. And you, as a conscious entity, could try out different ones in the flash of a second. Of course you would be living those out in some sort of virtual reality - but one that would seem just as real as this one.

Wouldn't it be insane to really be in a video game, and really be getting stronger as you gained power, or really getting more intelligent as you gained levels? You could make that happen.