  LongeCity
              Advocacy & Research for Unlimited Lifespans





What Is The Singularity?


9 replies to this topic

#1 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 19 September 2002 - 04:55 PM


©2002 Singularity Institute for Artificial Intelligence

http://singinst.org


The Singularity is the technological creation of smarter-than-human intelligence. There are several technologies that are often mentioned as heading in this direction. The most commonly mentioned is probably Artificial Intelligence, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, and ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity - several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.

A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge originally coined the term "Singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human.

Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds. This loop appears most clearly in the example of an AI improving its own source code, but it would also arise, albeit initially on a slower timescale, from humans with direct brain-computer interfaces creating the next generation of brain-computer interfaces, or biologically augmented humans working on an Artificial Intelligence project.
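The feedback loop described above can be sketched as a deliberately crude toy model - the parameters and the quadratic growth rule are my own illustrative choices, not claims from the essay - in which each generation's ability to improve itself scales with its current ability:

```python
def self_improvement(ability, rate, generations):
    """Toy recurrence: the improvement achieved per generation
    grows with the square of current ability, so smarter minds
    produce disproportionately larger improvements."""
    trajectory = [ability]
    for _ in range(generations):
        ability += rate * ability ** 2
        trajectory.append(ability)
    return trajectory

# Starting from a baseline ability of 1.0, the per-generation gains
# accelerate: each step's improvement exceeds the previous one.
print(self_improvement(1.0, 0.1, 5))
```

The point of the sketch is only that the gaps between successive values widen, which is the qualitative signature of the positive feedback the paragraph describes.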

Some of the stronger Singularity technologies, such as Artificial Intelligence and brain-computer interfaces, offer the possibility of faster intelligence as well as smarter intelligence. Ultimately, speeding up intelligence is probably comparatively unimportant next to creating better intelligence; nonetheless the potential differences in speed are worth mentioning because they are so huge. Human neurons operate by sending electrochemical signals that propagate at a top speed of 150 meters per second along the fastest neurons. By comparison, the speed of light is 300,000,000 meters per second, two million times greater. Similarly, most human neurons can spike a maximum of 200 times per second; even this may overstate the information-processing capability of neurons, since most modern theories of neural information-processing call for information to be carried by the frequency of the spike train rather than individual signals. By comparison, speeds in modern computer chips are currently at around 2GHz - a ten millionfold difference - and still increasing exponentially. At the very least it should be physically possible to achieve a million-to-one speedup in thinking, at which rate a subjective year would pass in 31 physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to modern-day humanity would pass in under twenty-two hours.
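The arithmetic in the paragraph above checks out as a back-of-the-envelope calculation (the figures are the essay's own; the date arithmetic for Socrates is my approximation):

```python
# Speed comparisons from the paragraph above.
SIGNAL_SPEED_NEURON = 150.0      # m/s, fastest myelinated axons
SPEED_OF_LIGHT = 300_000_000.0   # m/s
NEURON_SPIKE_RATE = 200.0        # Hz, rough per-neuron maximum
CHIP_CLOCK = 2e9                 # Hz, circa-2002 CPU clock

print(SPEED_OF_LIGHT / SIGNAL_SPEED_NEURON)   # two-million-fold signal-speed gap
print(CHIP_CLOCK / NEURON_SPIKE_RATE)         # ten-million-fold clock-rate gap

# A million-to-one subjective speedup: one subjective year in physical seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(SECONDS_PER_YEAR / 1_000_000)           # about 31.6 seconds

# Socrates (roughly 470 BC) to 2002, compressed at that rate, in physical hours.
subjective_years = 2002 + 470
print(subjective_years * SECONDS_PER_YEAR / 1_000_000 / 3600)  # under 22 hours
```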

Humans also face an upper limit on the size of their brains. The current estimate is that the typical human brain contains something like a hundred billion neurons and a hundred trillion synapses. That's an enormous amount of sheer brute computational force by comparison with today's computers - although if we had to write programs that ran on 200Hz CPUs we'd also need massive parallelism to do anything in realtime. However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one to two years. The original Moore's Law says that the number of transistors in a given area of silicon doubles every eighteen months; today there is Moore's Law for chip speeds, Moore's Law for computer memory, Moore's Law for disk storage per dollar, Moore's Law for Internet connectivity, and a dozen other variants.
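The exponential growth the paragraph describes is easy to make concrete. A minimal sketch, using the essay's eighteen-month doubling time (the 40-million-transistor starting point is a hypothetical figure of my own, chosen only for illustration):

```python
def moores_law(initial, months, doubling_months=18):
    """Value of an exponentially doubling benchmark after `months`,
    given one doubling every `doubling_months`."""
    return initial * 2 ** (months / doubling_months)

# A hypothetical 40-million-transistor chip, a decade of doublings later:
print(moores_law(40e6, 120))  # roughly 4 billion transistors
```

The same function models any of the variants mentioned - chip speed, memory, disk storage per dollar - by swapping in the appropriate initial value and doubling time.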

By contrast, the entire five-million-year evolution of modern humans from primates involved a threefold increase in brain capacity and a sixfold increase in prefrontal cortex. We currently cannot increase our brainpower beyond this; in fact, we gradually lose neurons as we age. (You may have heard that humans only use 10% of their brains. Unfortunately, this is a complete urban legend; not just unsupported, but flatly contradicted by neuroscience.) One possible use of broadband brain-computer interfaces would be to synchronize neurons across human brains and see if the brains can learn to talk to each other - computer-mediated telepathy, which would try to bypass the problem of cracking the brain's codes by seeing if they can be decoded by another brain. If a sixfold increase in prefrontal brainpower was sufficient to support the transition from primates to humans, what could be accomplished with a clustered mind of sixty-four humans? Or a thousand? (And before you shout "Borg!", consider that the Borg are a pure fabrication of Hollywood scriptwriters. We have no reason to believe that telepaths are necessarily bad people. A telepathic society could easily be a nicer place to live than this one.) Or if the thought of clustered humans gives you the willies, consider the whole discussion as being about Artificial Intelligence. Some discussions of the Singularity suppose that the critical moment in history is not when human-equivalent AI first comes into existence but a few years later when the continued grinding of Moore's Law produces AI minds twice or four times as fast as human. This ignores the possibility that the first invention of AI will be followed by the purchase, rental, or less formal absorption of a substantial proportion of all the computing power on the then-current Internet - perhaps hundreds or thousands of times as much computing power as went into the original AI.

But the real heart of the Singularity is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of the Singularity to discuss - it's easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of the Singularity tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe. This doesn't mean the subject is impossible to discuss; Section III of Levels of Organization in General Intelligence, on the Singularity Institute's website, does take a stab at discussing some specific design improvements on human intelligence. But that involves a specific theory of intelligence, which we don't have room to go into here.

However, that smarter minds are harder to discuss than faster brains or bigger brains does not show that smarter minds are harder to build - deeper to ponder, certainly, but not necessarily more intractable as a problem. It may even be that genuine increases in smartness could be achieved just by adding more computing power to the existing human brain - although this is not currently known. What is known is that going from primates to humans did not require exponential increases in brain size or thousandfold improvements in processing speeds. Relative to chimps, humans have threefold larger brains, sixfold larger prefrontal areas, and 98.4% similar DNA; given that the human genome has 3 billion base pairs, this implies that at most twelve million bytes of extra "software" transforms chimps into humans. And there is no suggestion in our evolutionary history that evolution found it more and more difficult to construct smarter and smarter brains; if anything, hominid evolution has appeared to speed up over time, with shorter intervals between larger developments.
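The "twelve million bytes" figure above follows directly from the numbers in the paragraph, since each DNA base carries two bits of information (four possible bases):

```python
# Checking the extra-"software" estimate from the 98.4% DNA similarity.
GENOME_BASE_PAIRS = 3_000_000_000
SIMILARITY = 0.984
BITS_PER_BASE = 2   # four possible bases: A, C, G, T

differing_bases = GENOME_BASE_PAIRS * (1 - SIMILARITY)
extra_bytes = differing_bases * BITS_PER_BASE / 8
print(extra_bytes)  # about 12 million bytes
```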

But leave aside for the moment the question of how to build smarter minds, and ask what "smarter-than-human" really means. And as the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don't know because we're not that smart. We're trying to guess what it is to be a better-than-human guesser. Could a gathering of apes have predicted the rise of human intelligence, or understood it if it were explained? For that matter, could the 15th century have predicted the 20th century, let alone the 21st? Nothing has changed in the human brain since the 15th century; if the people of the 15th century could not predict five centuries ahead across constant minds, what makes us think we can outguess genuinely smarter-than-human intelligence?

Because we have a past history of people making failed predictions one century ahead, we've learned, culturally, to distrust such predictions - we know that ordinary human progress, given a century in which to work, creates a gap which human predictions cannot cross. We haven't learned this lesson with respect to genuine improvements in intelligence because the last genuine improvement to intelligence was a hundred thousand years ago. But the rise of modern humanity created a gap enormously larger than the gap between the 15th and 20th century. That improvement in intelligence created the entire milieu of human progress, including all the progress between the 15th and 20th century. It is a gap so large that on the other side we find, not failed predictions, but no predictions at all.

Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence are all interrelated. If you're smarter, that makes it easier to figure out how to build fast brains or improve your own mind. In turn, being able to reshape your own mind isn't just a way of starting up a slope of recursive self-improvement; having full access to your own source code is, in itself, a kind of smartness that humans don't have. Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite its own source code can potentially make itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind doesn't make it smarter, but adding more processing power to the cognitive processes underlying intelligence is a different matter.

But despite the interrelation, the key moment is the rise of smarter-than-human intelligence, rather than recursively self-improving or faster-than-human intelligence, because it's this that makes the future genuinely unlike the past. That doesn't take minds a million times faster than human, or improvement after improvement piled up along a steep curve of recursive self-enhancement. One mind significantly beyond the humanly possible level would represent a full-fledged Singularity. That we are not likely to be dealing with "only one" improvement does not make the impact of one improvement any less.

Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence, and the result is an event so huge that there are no metaphors left. There's nothing remaining to compare it to.

The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human intelligence exists, that mind will find it easier to create still smarter minds. In this respect the dynamic of the Singularity resembles other cases where small causes can have large effects; toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright object balanced on its tip. (Human technological civilization occupies a metastable state in which the Singularity is an attractor; once the system starts to flip over to the new state, the flip accelerates.) All it takes is one technology - Artificial Intelligence, brain-computer interfaces, or perhaps something unforeseen - that advances to the point of creating smarter-than-human minds. That one technological advance is the equivalent of the first self-replicating chemical that gave rise to life on Earth.

For more information, please continue with "Why Work Toward the Singularity?"

#2 Chip

  • Guest
  • 387 posts
  • 0

Posted 19 September 2002 - 06:31 PM

Whenever I see a word capitalized within a sentence I am alarmed. Another level of abstraction from meaning is being added. You obviously do not mean the common definition of the word, which immediately comes to me as a thing with the traits of being alone, separate, unique, etc. Just as my name or yours doesn't say who we are, more data must be sought.

I see the capitalization of a thing within a sentence as part of a process where something is offered to humanity to be tested for its validity and, if acceptable as a working recognition of a functionally distinct phenomenon, becomes a non-capitalized word or concept. Until then, a capitalized word runs the risk of not being science, of being more belief than reason, since science seems to be largely characterized by seeking less double meaning in terms, less inherent need for discerning context, and concrete, shareable, universally consistent knowledge. An immediate example might be electrical voltage. I wouldn't be surprised if the first inventor of a wagon was called "Cart."

I notice that the paragraph makes inordinate use of the word "technology" or "technologies", so I am prompted to try to figure out exactly what you mean by this word in greater depth to understand the whole paragraph. If I were a copy editor offering constructive criticism I might say there's a bit of wordiness there that you might address, but then, I can't say I'm not guilty of that myself. :)

The second paragraph starts out with a sentence that appears to mix subjects in such a way that the tense connotes that you know, and are sharing with us, indisputable facts of what a future with "smarter-than-human minds" would entail. It becomes difficult for me to continue reading. Surely you don't mean to imply that you can foresee the future?

“our model of the world breaks down when it tries to model a future that contains entities smarter than human.”

EXACTLY! I truly hope this remains a characteristic only of the modeling and does not become a blueprint for the breakdown of our mother ship. This fine world, which offers such a breath-taking manifestation of order sustaining that incredible and apparently quite singular event known as life, needs some intelligence to continue functioning, yes, but so far the models and manifestations have not spoken well of AI. The many sci-fi tales on the subject appear predominantly to place AI in a destructive light. "Open the pod bay doors, HAL." I am happy to learn of Vernor Vinge and will seek out his works. I love speculative sci-fi.

In the third paragraph I have to bring up the capitalization thing again, when in the last sentence you use Artificial Intelligence. From the context I thought that clause would be further clarification of supplementing human intelligence, as opposed to a separate and distinct non-human intelligence, but then I am led into a loop in your syntax when you are even literally talking about another loop. Do we humans have to be kept out of the looping you refer to? Do you know for sure that human intelligence improvement can't grow fast enough to stay superior to any AI? Isn't this dependent on what we choose to do?

Okay, 4th paragraph. I hate to do this; it's a lot of my time, and if I seem to pick at every little thing in your whole post I can appear more concerned with making a nuisance of myself than with communication. Listen, I have a 1.2 GHz P3 on my desktop. If I could only have recourse to my neurons for information processing I never would have bought this computer. I'm not totally out-of-date as far as affordable personal computers go, and yet it is an augmentation of my abilities. The net brings a lot more to me too: the potential computing power of millions of computers and their information storage. The last two sentences are not clear to me. Are you equating thinking time to intelligence? A thinking entity can have a lot more time to think about something than another, but if it lacks certain information or contextual understanding, it is still going to come up with some stupid conclusions.

Enough for now. I have appointments to keep. As you might gather I am commenting as I read. I’ll come back and read more, perhaps comment more, later. It is helping me to organize my thoughts and I am grateful for this and I do hope you realize that I’m not just trying to be a bother.


#3 Infinity Lover

  • Guest
  • 9 posts
  • 0

Posted 19 September 2002 - 09:47 PM

Whatever would constitute 'smarter than human', which qualities would you want it to include?

Me?
1) No superior attitude.
2) Peaceful, non-violent.
3) Capable of relating to 'us dummies'.

to name but three.

Here's one problem... we'd still need to be wise enough to take its advice.

Or smart enough to see it isn't deceiving us.

(Oh, hi by the way. I posted a little introduction in the open forum. Don't expect any lengthy brainy posts from me. Just simple (I think) relevant additions.)

Marcel.

#4 amar

  • Guest
  • 154 posts
  • 0
  • Location:Paradise in time

Posted 18 July 2005 - 06:56 AM

I too am confused about why they chose the word "singularity". What does singularity have to do with technological evolution? Maybe they'll make the machine into a Jesus who will claim solipsist rule over all, and we merehumes™ will be burnt to a crisp by an apocalyptic machine war.

#5 signifier

  • Guest
  • 79 posts
  • 0

Posted 14 August 2005 - 04:58 PM

A singularity is a point in the center of a black hole where our classical physical model of the universe is meaningless.

The "Technological Singularity" is any point in the future where our classical model of the world and change breaks down. For example: Right now, I can, with some shared degree of reliability, predict what a future with genetically enhanced super fat-free cows in it would be like. Not very different at all - hell, we may already be living in it. But I can't say anything about what a future with smarter-than-human intelligences in it would be like. I can't tell you what something smarter than me would do, because I would have to be that smart myself. My traditional model of things is meaningless.

The Singularity Institute for Artificial Intelligence is working toward eventually developing smarter-than-human Friendly Artificial Intelligence (FAI). (Friendliness is the key word here. It's a tricky subject; I recommend everyone read a few of the Singinst's philosophical documents such as What is Friendly AI? and Creating Friendly AI.) The Singinst believes that the Singularity would be a good thing, and that safely moving to a post-Singularity world is one of humankind's greatest tasks.

For people who "just" want to live forever, the idea of smarter-than-human AIs and massive, unbelievable worldchange is pretty disconcerting. But it's important to turn off the "that's just science fiction" switch in our minds (the same switch that flips on when you talk to an "average" person about living millions of years). It's also important to avoid the sci-fi cliche route - i.e., that any general AI will inevitably destroy us, or misunderstand some subtle complexity of human life, and so on. Our traditional view of AI is idiotic. AIs could easily destroy us, which is why creating Friendly AI is so important. But they won't do it simply by virtue of being "artificial intelligences."

#6 jans

  • Guest
  • 75 posts
  • 7
  • Location:London, mobile: 0783

Posted 29 January 2006 - 11:26 PM

Summary:
“smarter-than-human intelligence
smarter-than-human Friendly Artificial Intelligence
beyond the usual visions of a future filled with bigger and better gadgets
singularity at the center of a black hole
world breaks down when it tries to model a future that contains entities smarter than human.
Humans are not just bigger chimps; we are better chimps.
what "smarter-than-human" really means
We don't know because we're not that smart.
Singularity is beyond huge
metastable state in which the Singularity is an attractor;
once the system starts to flip over to the new state, the flip accelerates
That one technological advance is the equivalent of the first self-replicating chemical that gave rise to life on Earth.
center of a black hole
any point in the future where our classical model of the world and change breaks down
My traditional model of things is meaningless.”

my thoughts:

So if we think that we can get to that point (flip over to the new state), then maybe we shouldn't doubt that someone has got there before us: travels in light, cosmic energy, cosmic radiation, lives in a black hole. We don't have this knowledge, as you say.

One story here of the possibility. And I am just interested in the truth of how things work in the universe, and getting to that incomprehensible state.
„While he was conversing with me about the plates, the vision was opened to my mind that I could see the place where the plates were deposited, and that so clearly and distinctly that I knew the place again when I visited it. After this communication, I saw the light in the room begin to gather immediately around the person of him who had been speaking to me, and it continued to do so until the room was again left dark, except just around him; when, instantly I saw, as it were, a conduit open right up into heaven, and he ascended till he entirely disappeared, and the room was left as it had been before this heavenly light had made its appearance. I lay musing on the singularity of the scene, and marveling greatly at what had been told to me by this extraordinary messenger; when, in the midst of my meditation, I suddenly discovered that my room was again beginning to get lighted, and in an instant, as it were, the same heavenly messenger was again by my bedside.” http://scriptures.lds.org/js_h/1/44#44

Another twist,

The previous thoughts bring to mind that singularity (single or one) is not really the answer. Plurality is the answer, meaning plurality organized into one. When did plus or minus alone create energy, or move without an outer influence? The energies make each other move. My thinking is that the man moves the woman, and the woman the man, as positive and negative energies compensate for and influence each other. So if there was an outer-world intelligence, a creator (singularity), what could be our potential? There just could be a symbol given to us about this by Adam, who was first alone on the earth; he was androgynous, just lying there, and he became invigorated when another force, a female, was taken from him. He needs to go through this life with his companion, because it was not good for man to be alone.

A lot of ancient writings also talk of ancient marriage being a major power in the universe, and most sacred. See the Gospel of Philip, scrapped from the Bible, or the Mormon D&C writings, which you don't usually run across.
http://scriptures.lds.org/tgm/mrrgclst
http://scriptures.lds.org/dc/131/2#2 Celestial marriage is essential to exaltation in the highest heaven; 5—6, How men are sealed up unto eternal life; 7—8, All spirit is matter.
Ending of the Nag Hammadi Gospel according to Philip. (Just so that you know: the bridal chamber, or the mirrored bridal chamber, is in Christ's temple, in the Holy of Holies http://www.webcom.com/cgi-bin/glimpse)
http://www.gnosis.org/naghamm/gop.html „But the mysteries of that marriage are perfected rather in the day and the light. Neither that day nor its light ever sets. If anyone becomes a son of the bridal chamber, he will receive the light. If anyone does not receive it while he is here (while mortal- my note), he will not be able to receive it in the other place. He who will receive that light will not be seen, nor can he be detained. And none shall be able to torment a person like this, even while he dwells in the world. And again when he leaves the world, he has already received the truth in the images. The world has become the Aeon (eternal realm), for the Aeon is fullness for him.”

What I mean is that the male and female forces bound on earth can become a powerful singularity after mortality. I know you don't like to let go of mortality, but perhaps this is the only way the „world breaks down”, as you say, into another dimension or point for progress beyond the physical laws we comprehend, for the two energies to gain a new form (as Adam was one at first, he needs to become one again with his wife in the eternal energy realm; there is a lot of talk in literature of eternal, after-mortal „super androgynous beings”, though the cases may differ), in the energy of the universe, in a spectrum visible or invisible to us, but still viewed as human forms, „or bodies with flesh and bones”, for us dumb people to be able to understand.

I thought this interesting:
The most central tenet of Israel's faith had been the proclamation that "our God is One." But Kabbalah asserted that while God exists in highest form as a totally ineffable unity--called by Kabbalah Ein Sof, the infinite--this unknowable singularity had necessarily emanated into a great number of Divine forms: a plurality of Gods... Not only was the Divine plural in Kabbalistic theosophy, but in its first subtle emanation from unknowable unity God had taken on a dual form as Male and Female; a supernal Father and Mother, Hokhmah and Binah, were God's first emanated forms. http://www.gnosis.org/jskabb1.htm

So my conclusion is that maybe someone has got there ahead of us, someone who may even desire us to get there as well. It may be possible to go in through the window (get there at any cost by technology), but it is much nicer to be invited in by the door (the door is open, few there be who find it), having earned the trust to use the power that comes with the state of the "people of singularity". I agree, "Singularity is beyond huge."

Edited by jans, 29 January 2006 - 11:44 PM.


#7 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 24 February 2006 - 12:44 PM

Jans - WTF?

#8 ameldedic2

  • Guest
  • 91 posts
  • 1
  • Location:South Dakota, United States

Posted 20 October 2006 - 02:52 AM

Why not just make a computer or super Artificial Intelligence that is capable of thinking at the human level at first, and observe its behavior, consciousness, etc. to make sure it's safe? The next step (if everything goes well) is to build a slightly more intelligent AI to do research, discovery, etc. I recommend this process because it keeps us in control of these machines, rather than trying to create super AI right away without knowing the consequences.

#9 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 20 October 2006 - 03:24 AM

After seeing the responses to this essay (IMO one of the best intros to the Singularity)...

Why does the Singularity always bring out the dumbest people with the craziest crap to say? It's rather infuriating...


#10 mattbrowne

  • Guest
  • 41 posts
  • 0
  • Location:Frankfurt

Posted 22 September 2007 - 09:54 AM

Eventually humans will build very smart androids (this will take another few decades). To avoid the problem of the technological singularity, Asimov's Three Laws of Robotics need to be implemented (with additional safeguards):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
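The three laws above form a strict priority ordering, which can be sketched as an ordered veto chain. This is purely an illustrative toy - the function name and the boolean flags are my own simplification, and real safeguards would be vastly more involved:

```python
def action_permitted(harms_human, inaction_harms_human,
                     ordered_by_human, self_destructive):
    """Toy check of an action against the Three Laws, in priority order."""
    # First Law: never harm a human.
    if harms_human:
        return False
    # First Law (second clause): inaction that harms a human is forbidden,
    # so acting is required, overriding the lower-priority laws.
    if inaction_harms_human:
        return True
    # Second Law: obey human orders (already known not to break the First Law).
    if ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority.
    return not self_destructive
```

For example, an ordered action that risks the robot's own existence is still permitted, because the Second Law outranks the Third.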



