




Singularity Poll: Arrival


47 replies to this topic

Poll: Singularity Poll: Arrival (51 members have cast votes)

Singularity Poll: Arrival

  1. 2030+ (18 votes, 46.15%)
  2. 2020-2030 (5 votes, 12.82%)
  3. 2010-2020 (12 votes, 30.77%)
  4. Soon (4 votes, 10.26%)


#1 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 23 August 2002 - 07:43 AM


<- from bjklein


When will smarter-than-human intelligence be created?

#2 caliban

  • Admin, Advisor, Director
  • 9,154 posts
  • 587
  • Location:UK

Posted 25 August 2002 - 01:18 AM

I just had a chat about this issue and I am disturbed enough to comment on this. [angry]

In my humble opinion: the whole "Singularity" biz is not only inadequately named, but also a lot of hyped-up cerebral masturbation.
When it comes to truly "uploading" YOUR mind onto a computer, NONE OF US will ever see that happen. Certainly not within our natural life span.

I do not think exponential growth takes place,
even if it did, exponential growth does not equal exponential development
even if it did, exponential development does not warrant exponential connectedness
even if it did, it would not get around the laws of physics
even if it did, it is not feasible to occur in the next two decades (*hysterical giggle*)

I am not saying that "Singularity" is not feasible, or that it will never happen. I am not saying that the concept does not entail very interesting philosophical problems that are challenging (if not very useful) to explore... But to ask me and others to vote for at most "30 years+" is an insult.

Ok. Granted. People are wasting their time with worse. Far worse. But to find this crap discussed all over the place by transhumanist people in the "Immortality Institute" worries me.

Believing in, discussing, and dreaming about the "Singularity" is just like believing in God. It dulls your mind, takes away a lot of responsibility, and makes life much more bearable. Because BEHOLD: the "Singularity" will happen! (in 30+ years at worst!)
No, people, you are deluding yourselves. Big time.
Now, I have always believed that it is important to leave people their dreams, their hopes and their delusions. For that reason (but certainly not for that reason alone), I have no interest in continuing a debate about the feasibility of the "Singularity". I will stay away from this section, from the associated chat and from the associated people, just like I did in the old forum. I just wanted to make it heard once that not everybody is enthralled by the idea of what some call the "Singularity", and especially not by the notion that it is "near". (In a temporal sense; I am well aware that in this forum, at least, it is lamentably very "near" indeed.)

I now stand aside for the priests and believers of the Singularity to tear this commentary apart. (There are plenty of good angles for that, too.)
Have fun. I have made my point.

Regards and pleasant dreams
caliban

PS:
Excuse the fuming, [blink]
I had to let it out [ph34r]
It will never happen again. [blush]


#3 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,364 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 August 2002 - 01:47 AM

It is hard to imagine a scientifically inclined person unable to perceive radical changes on the way before 2030. Of course, maybe the majority of society will not notice the changes. Maybe a few scientists and programmers will go quietly into their posthuman future, leaving the rest to the biology they love.

#4 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 August 2002 - 04:34 AM

Caliban, ahh, the beauty of forums... tell us how you really feel. Maybe your reaction is more a result of human nature than a level-headed assessment of objective scientific fact. I think our first instinct as humans is to reject and discredit the enthusiasm of others simply because it's "enthusiasm".

I do not think exponential growth takes place,

Ehh?

even if it did, exponential growth does not equal exponential development

Development? As in what kind of development? What we're talking about with AI is improvement in processing speed and storage capacity; this has been increasing exponentially for a while now... just take a look at the computer you're using to read this post.
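(A rough sketch of what "increasing exponentially" cashes out to, assuming the commonly quoted doubling time of about 18 months; the real doubling time varies by metric and by era, so treat the numbers as illustrative only:)

```python
# Back-of-envelope Moore's-law projection (illustrative only).
# The 18-month doubling time is an assumption, not a measured constant.

def projected_capacity(base_capacity, years, doubling_time_years=1.5):
    """Relative capacity after `years` of steady exponential growth."""
    return base_capacity * 2 ** (years / doubling_time_years)

# Relative capacity 10 and 28 years out (roughly 2012 and 2030 from 2002).
for horizon in (10, 28):
    print(horizon, "years:", round(projected_capacity(1.0, horizon)), "x today")
```

Under that assumption, raw capacity grows roughly a hundredfold per decade, which is the sense in which hardware "sets the stage" without delivering the software by itself.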

even if it did, exponential development does not warrant exponential connectedness

We only need one good program (a Seed AI) that has the ability to write and improve its own source code; no connectedness needed.

even if it did, it would not get around the laws of physics

Who says we need to break any laws of physics? Other than the heat death problem, which is a little ways off, there are no physical laws that we need to break in order to reach the singularity.

even if it did, it is not feasible to occur in the next two decades (*hysterical giggle*)

Well, I may agree with you.. but I doubt it will take much more time after that.

I am not saying that "Singularity" is not feasible, or that it will never happen. I am not saying that the concept does not entail very interesting philosophical problems that are challenging (if not very useful) to explore... But to ask me and others to vote for at most "30 years+" is an insult.

Goodness ;)

#5 Mangala

  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 21 September 2002 - 01:23 AM

I agree with Caliban wholeheartedly. We've all seen people from the 1950s on believe that the future of flying cars, sky-high buildings, and planetary colonization was just around the corner. The truth is, although we may see some cool new product that seems futuristic now, that does not imply a general increase in advancement that is necessarily exponential, or even linear for that matter! As we advance in technology more and more, more problems become evident that do not get answered by the new technology, meaning that more problems are found than are actually overcome. In order to be totally successful in our quest to build an SI, we must doubt ourselves to the fullest; we must work our hardest not to fall victim to the attitude of "oh yeah, the singularity is coming" while we are typing away the same thing in 2030. Please, if anything, work toward life extension first, so that even if the first AI were built in 2110, we would be around to say, "Well, at least we got our priorities straight first."

I also think there might be an increased anticipation at this site from older members that spawns an earlier projection date than is realistic. This earlier projection then makes the younger members feel like, "Well, everyone else seems to think it'll happen in 2030, so it'll probably happen in 2030."

Maybe I should start a post about how to really formulate a game plan or something...

#6 Mangala

  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 21 September 2002 - 09:33 PM

Agreed! But still, the increase of universal computation is exponential.


Just because we can fit faster computers into smaller spaces does not mean that an AI will rise out of the internet randomly. In order to build a self-improving machine that actually knows what improvements are, we must organize the code, infrastructure and structure specifically so as to create a working intelligence that is anything more than a simplistic insect brain. Increases in computation and increases in organization of source code, or for that matter advanced improvement code, do not necessarily go hand in hand.

I completely agree, but (no offense), I'm not sure if you've gone over enough literature to give you a clear sense of when the Singularity is coming! See the resources above. What date do you set for the Singularity and why? Have you ever heard of "strong self-improvement"?


Maybe you are right, I've only read about 70% of singinst.org and briefly looked over kurzweilai.net. I've also been to your site and read the fanfics ;) . I will go deeply into the text.

Currently I imagine the first self-improving artificial intelligence will come about in the year 2025, and the first intelligence just capable of surpassing human intelligence in most fields will come about in...oh....60 years. I strongly believe in the idea that it's all too easy to make technological intelligence seem a function of computational improvement.

And no, I've never heard of strong self-improvement; I'll look into that too.

#7 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 22 September 2002 - 04:16 AM

Mangala wrote:
Just because we can fit faster computers into smaller spaces does not mean that an AI will rise out of the internet randomly. In order to build a self-improving machine that actually knows what improvements are, we must organize the code, infrastructure and structure specifically so as to create a working intelligence that is anything more than a simplistic insect brain. Increases in computation and increases in organization of source code, or for that matter advanced improvement code, do not necessarily go hand in hand.


Obviously! Do you happen to know why folks always assume that Singularitarians don't realize this? The only reason why I argue for the exponential growth of computation is because it's obvious and true, and is loosely correlated with the creation of greater-than-human intelligence, as in, it sets the stage for its creation. For example, it only costs $31 million as of early this year to buy human-equivalent computer power (not software complexity!) in a Beowulf cluster. See also:

SL4 Search: "crossover"
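For what it's worth, here is a hypothetical back-of-envelope of the kind that produces figures in this ballpark, in the spirit of Moravec-style estimates. Every number below is an assumption chosen for illustration; it is not the actual calculation behind the $31 million quote above.

```python
# Hypothetical "human-equivalent hardware" estimate (all numbers assumed).
BRAIN_OPS_PER_SEC = 1e14      # assumed Moravec-style brain estimate (~100 teraops)
NODE_OPS_PER_SEC = 5e9        # assumed circa-2002 cluster node (~5 gigaops)
NODE_COST_USD = 1_500         # assumed per-node cost, interconnect included

nodes_needed = BRAIN_OPS_PER_SEC / NODE_OPS_PER_SEC
total_cost = nodes_needed * NODE_COST_USD
print(f"{nodes_needed:,.0f} nodes, roughly ${total_cost / 1e6:,.0f} million")
```

The point of such an estimate is only that hardware cost is measurable and falling; it says nothing about when the software will exist.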

Currently I imagine the first self-improving artificial intelligence will come about in the year 2025


It takes human-equivalency to truly self-improve at a high level...

first intelligence just capable of surpassing human intelligence in most fields will come about in...oh....60 years.


So you think it will take 35 years for a human-level mind with complete access to its own source code and the entire Internet to surpass human intelligence? A mind that can brute-force nanotechnology and create massive amounts of supplementary architecture? Where did you get your prediction?

I strongly believe in the idea that it's all too easy to make technological intelligence seem a function of computational improvement.


Not quite sure what you mean here...do you mean "it's all too easy to make technological progress seem like moral progress?" "Computational improvement" is an objective, quantitative variable which has been analyzed thoroughly in Kurzweil's "The Singularity is Near" precis, most notably. I'm not exactly sure what you mean by "technological intelligence". And *why* would someone want to "make technological intelligence seem a function of computational improvement?" So they can give themselves a warm fuzzy feeling? I don't get it. Please clarify.

#8 Guest_Enter your name_*

  • Lurker
  • 0

Posted 22 September 2002 - 03:15 PM

For example, it only costs $31 million as of early this year to buy human-equivalent computer power (not software complexity!) in a Beowulf cluster.


Anyone who thinks a greater-than-human intelligence is not coming before 2030 should re-read the above quote.

Thanks for the rational defense of singinst.org and other like-minded individuals, Michael. I agree that it is likely some people will be seduced by the singularity concept as a religion replacement. But these are not the current group of people working towards this goal. And I do mean to call it a "goal". Improvement of our intelligence should be a goal.

#9 Psychodelirium

  • Guest Philosopher
  • 26 posts
  • 0

Posted 01 October 2002 - 05:48 PM

...quickly skimming, as in my opinion it wasn't worth total scrutiny


Well, that's a shock... Considering nothing in your reply has any connection to anything written in the paper - which I've now read for the second time, more carefully, to try and see what you were so "aghast and disgusted" about. [hmm]

But getting back to the topic, I'm going to have to agree with caliban that claims to the effect that the singularity will occur in the next decade, or within twenty years, or whatever, are really quite silly. And to make this point, I'm going to echo the most common - and the most commonly unanswered - criticism of the aforementioned predictions. Sure we have lots of computing power, and sure it's growing pretty damn quick, and sure it's going to change the world and positively affect a lot of different fields... but come on... it is a well-known fact in the IT industry that software lags behind hardware - far behind, and where software in general lags, AI in particular lags even more. The notion that our development of a human-equivalent or able-to-upgrade-itself-to-human-equivalence cognitive system will coincide with our development of the hardware requisite to successfully implement this system is - in my honest opinion - ridiculous. To condense all of the above into a single statement: I think Kurzweil might lose his bet.

And what happens then? This section of my reply really belongs in the velocity thread, but the debates seem to overlap. Apparently, certain people have got it in their heads that once we develop human-equivalent AI, everything else will magically and spontaneously unfold from that point on - the AI will somehow assume control of Drexlerian nanotech (which, I guess, we are presumed to have developed within the time it takes us to build the AI, something I am far too skeptical about), and immediately launch itself into superintelligence, taking control of global networks, and so forth. All this plus the reshaping of the 'global ontology' happens, apparently, without any informed consent on our part. That seems to me like a manifestly uncool thing for a Friendly AI to do. In short, it strikes me that all the talk about AI lifting itself into superintelligence ignores a host of variables that exist even for an "Actual Intelligence", variables like designing the technology, getting the requisite materials and facilities, obtaining access, and, needless to say, talking it over with us first.

#10 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 October 2002 - 09:38 PM

Well, Psycho, I find that we are basically making the same arguments at the same time, in different places. I also agree that it is a little silly to be taking or giving bets at this time about the fabled "Singularity".

That said, I also think that to a certain extent we will reap what we sow, meaning that much of what AI ultimately comes into existence as and acts like will be predicated on how we define this period of development.

But I, as I keep trying to emphasize to Michael, think that it is more than a little premature to rule out a quantum level advance in human cognition from the game.

"There is much more between heaven and Earth Horatio than is dreamt of in your philosophies.."

There we go, I have a better idea than Friendly AI. How about Optimistic AI? ;)

#11 Omnido

  • Guest
  • 194 posts
  • 2

Posted 02 October 2002 - 03:57 AM

I do not personally think that the "Singularity" will occur anytime soon. When I refer to "anytime," I mean not within the next century.
My reasons for this are mostly due to the obvious obstructions of human evolvement, i.e. social politics, religious dogma, and capitalistic self-interests.
Before we can expect to see a singularity, those issues will have to be addressed.
I don't see them being addressed anytime soon, due to many factors, including corruption, hedonism, greed, and selfishness.

Personally, I see no need for a singularity at all, but that's my opinion.

#12 Psychodelirium

  • Guest Philosopher
  • 26 posts
  • 0

Posted 02 October 2002 - 05:31 PM

Well, Psycho, I find that we are basically making the same arguments at the same time, in different places. I also agree that it is a little silly to be taking or giving bets at this time about the fabled "Singularity".


Actually, Kurzweil's wager is that a computer will pass the Turing test by 2030. Personally, I am completely agnostic with regard to this suggestion, but if betting on a date for AI development is kind of silly, betting on the date for the singularity would be even worse, for all the aforementioned reasons.

But I, as I keep trying to emphasize to Michael, think that it is more than a little premature to rule out a quantum level advance in human cognition from the game.


Well, we may agree on a number of points, but I don't think this is going to be one of them. In all my ponderings about minds and consciousness, I've grown increasingly intolerant of that old Penrose axe about quantum minds, and now subscribe to the theory that the phenomena in question are to be explained solely in terms of system interactivity and computation. The quantum effects on human brains simply do not occur on a high enough level to mean anything interesting, if they occur at all. Penrose has suggested that information processing in biological neural networks is somehow influenced by quantum effects occurring in the microtubules of neural cytoskeletons, but there is no evidence for this, nor would this be evidence for anything else if it turned out to be true. Most of the talk about quantum minds is premised on a hunch that computation just isn't enough, rather than on any thorough analysis of the evidence, but I digress.

It is, of course, possible that we will develop quantum computing substrates which will be radically more efficient than what we have to work with now, but once again, hardware != software.

#13 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 03 October 2002 - 03:22 PM

Omnido says:
Personally, I see no need for a singularity at all, but that's my opinion.


It isn't what we need as individuals that is driving the creation of Greater than Human Intelligence, as Michael aptly put it. It is an evolutionary imperative of cognition juxtaposed with competition for survival and the "need" of humanity as a collective to obtain an objective relative viewpoint for its social and philosophical dilemmas.

The Universe works fine with it, or without it, and if humans destroy themselves in this process the Sun will go on shining, unless we cause it to nova in the process of failing. But if we succeed then even those engines of creation will be subject to manipulative will.

#14 Mangala

  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 03 October 2002 - 03:50 PM

OK I finally found the time to write a response, but most of what I was going to say has already been written:

Sure we have lots of computing power, and sure it's growing pretty damn quick, and sure it's going to change the world and positively affect a lot of different fields... but come on... it is a well-known fact in the IT industry that software lags behind hardware - far behind, and where software in general lags, AI in particular lags even more. The notion that our development of a human-equivalent or able-to-upgrade-itself-to-human-equivalence cognitive system will coincide with our development of the hardware requisite to successfully implement this system is - in my honest opinion - ridiculous.


I fully agree. Just because we might have the hardware to power some kind of sentience in artificial form:

For example, it only costs $31 million as of early this year to buy human-equivalent computer power (not software complexity!) in a Beowulf cluster.


Anyone who thinks a greater-than-human intelligence is not coming before 2030 should re-read the above quote.


Does not necessarily mean at all that we could build a self-improving machine capable of reaching SI.

Sure, given the above quote, if you had a vast array of scientists, computer programmers, engineers, development analysts, Friendly AI evangelists, philosophers, psychologists, personnel from civic, government and military positions, security teams, and about $40 million a month to pay for these positions and house the entire AI-building operation, you could probably have a fully functioning AI on its way to superintelligent enlightenment in about 15 years.

This, however, can only happen if the majority of this country and any other powers on the planet do not have a problem with the building of a machine that has the potential to wipe out the human race.

Who would do this? Why would any senator anytime soon use government money to build such a thing? Granted, this whole scenario can be compared to the building of the first nuclear weapon during the WWII era. But obviously the reason we built such a thing is about 90% because we wanted to defeat two horrible powers: Nazi Germany and militarist Japan. Nobody would decide to start constructing the kind of AI that anyone in this forum has thought of building to reach the singularity unless there were some great reason for it to be built.

Most people know that if we built the AI just right, it would be the greatest thing that humanity has ever created; it could provide for all our needs and finally end "dying and suffering." But the whole of this planet, and more importantly this country, does not believe this. They don't care about what could be; they see too much risk in building something that could outsmart every single person they know.

And also, just because some intellectual predicted the coming of greater-than-human intelligence sometime between then and 30 years from now does not mean that there is nothing stopping such an event from happening. Mr. Vinge was just a little too optimistic.

Not to mention applying direct exponential graphs to the whole of human development is a little too simplistic.


My next question would be: Where's the business model?

My reasons for this are mostly due to the obvious obstructions of human evolvement, i.e. social politics, religious dogma, and capitalistic self-interests.
Before we can expect to see a singularity, those issues will have to be addressed.
I don't see them being addressed anytime soon, due to many factors, including corruption, hedonism, greed, and selfishness.


Exactly! As I tried to illustrate in my other post, "Socialists vs. Capitalists", no one wants to build this thing because there simply is no money to be made. How would you get a profit from building an AI? Even if a private company wanted to build an AI for a toy or something like that, it would of course have to be centered around who would buy such a thing, not who would benefit from such a thing. Nobody is going to profit from an ongoing project of constructing a greater-than-human intelligence! Just like nobody would ever profit from going to the moon. The Apollo missions were all government funded, fueled by the American psychological condition in which they believed they would prove American superiority to the Russians by getting to the moon first. In the case of the AI, no one really wants to get to SI but a few of us here, and I highly doubt any of us own a company or are prominent government figures. The AI will not be built anytime soon, not because we cannot do it, but because we will not do it. So since companies will only build toys with some semblance of artificial intelligence, as in the movie A.I., governments are really the only hope we have. But since the public form of our government sees no reason to build any such thing, they won't build it either. The only chance we have to get an AI project up and running, as I see it, is if we could get the military to see that we'd need to build an SI first, so that it could outsmart any SIs in other countries; only then might we be able to persuade the military to bring the SI into the public and scientific forum, to solve problems that would help society.

Plus there's the problem of age. Most of the senators we have in power today were born in 1950 or earlier. They still have the mindset of sci-fi movies and shows that seemed to say that if any technological version of intelligence were ever built, it would have human feelings, human wants and human needs. That is, if an AI were ever government funded and built, it would automatically try to seize power, and that is why there is absolutely no reason to build such a thing: it is way too dangerous to humanity.

Do you really think that Tom Daschle would ever try to draft a bill calling for this kind of program to be implemented anytime in the near future?

Have to go to class.

- Mangala

#15 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 03 October 2002 - 05:44 PM

Psychodelirium says:
Well, we may agree on a number of points, but I don't think this is going to be one of them. In all my ponderings about minds and consciousness, I've grown increasingly intolerant of that old Penrose axe about quantum minds, and now subscribe to the theory that the phenomena in question are to be explained solely in terms of system interactivity and computation. The quantum effects on human brains simply do not occur on a high enough level to mean anything interesting, if they occur at all. Penrose has suggested that information processing in biological neural networks is somehow influenced by quantum effects occurring in the microtubules of neural cytoskeletons, but there is no evidence for this, nor would this be evidence for anything else if it turned out to be true. Most of the talk about quantum minds is premised on a hunch that computation just isn't enough, rather than on any thorough analysis of the evidence, but I digress.


Here is what I agree with, and I think Michael will too, even many others on this list: we need a separate thread on the subject. That said, it isn't just a hunch; we are, however, still legitimately debating the validity of each other's evidence.

The Penrose model is flawed by a number of limitations in data, but as I have been pointing out, there are also tantalizing new ways of seeing the idea that are reason to return to the subject, like DNA Computational Ability and the Super Model for Collectivized Social Behavior. But it is a different path to the Singularity, at worst parallel, at best a wormhole application of "string theory". Imagine the Web Mind as profoundly more involved and developed in just a decade or two: not just an artificial construct of electronic nodes, but also of organic living brains that are woven into this tapestry to enhance comprehension and decision ability, along with the social element they bring collectively to greet the Seed AI's awakening mind.

Anyway, relax, you "Psycho"; intolerance of ideas is how we become living fossils. ;)

#16 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,364 posts
  • 2,000
  • Location:Wausau, WI

Posted 04 October 2002 - 01:55 AM

Plus there's the problem of age. Most of the senators we have in power today were born in 1950 or earlier. They still have the mindset of sci-fi movies and shows that seemed to say that if any technological version of intelligence were ever built, it would have human feelings, human wants and human needs. That is, if an AI were ever government funded and built, it would automatically try to seize power, and that is why there is absolutely no reason to build such a thing: it is way too dangerous to humanity.

Do you really think that Tom Daschle would ever try to draft a bill calling for this kind of program to be implemented anytime in the near future?


The answer is no.

This is a good point, Mangala, especially about older people in control of government. Most of them do not have the imagination to see past the next election, much less see how advancing technology will change things a couple of years from now. I was once watching a show on PBS...a bunch of artists and musicians were talking about how they lost a lot of creativity once they grew older than 50. I just do not want to believe that. However, the more I converse and live with people of all ages, the more it seems at least a little bit true. I am not sure if it is physiological or sociological, but there seems to be a trend towards less creativity with age. I find myself having to work...really work...to maintain a nimble and creative mind.

That all said, I doubt SI will come from a big government program sponsored by political leaders. However...a lot of government money does find its way into Universities. These various research projects along with private endeavors...I feel...will lead to SI sooner than 2020.

#17 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 04 October 2002 - 02:34 AM

Do I have to spell it out for everyone so clearly that I get busted for breach of national security? Do the math: one tenth of one percent of nine billion is still a budget of 9 million dollars a month that could be diverted easily for security interests.

They are talking about spending as much as nine or ten billion dollars a month to fight in Iraq, and they have spent some of that already to bring whole new areas of Defense Department allocation into AI research through DARPA and even anti-cyber-terrorism groups.

Diverting hundreds of millions of dollars into AI research is not only easy for this group, they have done it before; that is how this all began. At this point I do think it would be interesting to discuss Orson Scott Card's Ender's Game. I think that the government is very interested in AI as a vested aspect of operations and security for the core infrastructure of our entire economy. And you bet they would all like to feel they personally have their fingers on the controls, right up through the level of the Federal Reserve.

Never underestimate the ability of the government to produce seemingly unlimited wealth for a problem if they feel they must play in the game. And from the private sector, I have built houses for people that spent tens of millions of dollars on their toys, so a 31 million dollar price tag is not beyond the scope of the more serious players. Those that can pay are going to begin to force developmental trends. The "Game" is competition. Now it becomes ever more a race between competing interests to see who gets to develop Seed AI first and with what ethical architecture. I also think that poorer governments with more practical and serious commitments of GNP might achieve dominance in this area before the US even realizes that control of global web operations isn't in their hands anymore. Nations like China and India are serious players in the development of AI. It is just that most people here don't take them seriously, which may be why we will end up in a virtual Cold War leading to a nation-based struggle for AI control. Would everybody grant up front that this might better be something we did together, like the International Space Station?

#18 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 04 October 2002 - 02:36 AM

Here is what I agree with, and I think Michael will too, even many others on this list: we need a separate thread on the subject.

Yup.

#19 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 04 October 2002 - 02:47 AM

Lazarus Long wrote:
I also agree that it is a little silly to be taking or giving bets at this time about the fabled "Singularity".


"Fabled"? The approach of the Singularity is as real as your breakfast cereal, and will impact your life a whole lot more.

That said, I also think that to a certain extent we will reap what we sow, meaning that much of what AI ultimately comes into existence as and acts like will be predicated on how we define this period of development.

To a certain extent. Much of what "the AI" will behave like depends on what causes ver cognitive system generates. I would also like to point out that we're not talking about an AI here, but a superintelligence. Totally different things.

But I, as I keep trying to emphasize to Michael, think that it is more than a little premature to rule out a quantum level advance in human cognition from the game.


You mean through something like neurotechnology? AI is a quantum level advance in human cognition, too - the will of the AI will initially be an extension of the will of the programmers, but radically surpassing and encompassing it. Any line drawn in human enhancement methods is arbitrary - if an AI screws up and kills everyone, it's just like any piece of technology - carelessness or malice can happen. There was a slight risk that a nuke would ignite the atmosphere; luckily, that didn't happen.

There we go, I have a better idea than Friendly AI. How about Optimistic AI?


A truly Friendly AI would prepare for both optimistic and pessimistic possibilities.

#20 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 04 October 2002 - 02:54 AM

OmniDo wrote:
I do not personally think that the "Singularity" will occur anytime soon. When I refer to "anytime," I mean not within the next century.


Heh...do you mean the next "subjective century"? Remember that a mind running on a faster substrate than human beings could think much, much faster than us, so that this "century" you speak of would amount to a few billion subjective years for it. Also - my definition of the Singularity is the "creation of greater-than-human intelligence" - what's yours?
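(A rough sketch of that arithmetic, assuming a serial speedup somewhere around ten million times; the exact factor is an assumption based on millisecond neuron timescales versus much faster electronic switching:)

```python
# Illustrative only: subjective time for a mind on a much faster substrate.
# The speedup factor below is an assumption, not a measurement.
SPEEDUP = 3e7                 # assumed serial thinking-speed advantage
calendar_years = 100          # one objective century
subjective_years = calendar_years * SPEEDUP
print(f"about {subjective_years:.0e} subjective years")   # roughly 3 billion
```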

My reasons for this are mostly due to the obvious obstructions of human evolvement, i.e. social politics, religious dogma, and capitalistic self-interests.


And how many resources do you predict it will take to create greater-than-human intelligence?

Before we can expect to see a singularity, those issues will have to be addressed.


Why is an engineering problem dependent on social politics, religious dogma, or capitalistic self-interest?

#21 Mangala

  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 04 October 2002 - 03:23 AM

Before we can expect to see a singularity, those issues will have to be addressed.



Why is an engineering problem dependent on social politics, religious dogma, or capitalistic self-interest?


In our society engineers are paid to do their work; they don't just do it because they enjoy it (except for the retired ones). Every dime that goes into the building of a seed AI will have to come from somewhere, and that somewhere is either the government or the private sector. If no senators, CEOs, or cults want to build the AI, no one will. We alone cannot do the job, and that is why we've hardly done anything other than sit here and map out what others could do.

This reminds me of a great movie I saw the other day called "Contact." It really is a great movie, starring Jodie Foster, and is very relevant in terms of governments and companies around the world participating in the building of something that could be the best thing that humanity has ever built, or the worst. Actually, now that I think about it, it has a whole lot to do with this subject; please rent Contact.

And from the private sector, I have built houses for people that spent tens of millions of dollars on their toys, so a 31 million dollar price tag is not beyond the scope of the more serious players. Those that can pay are going to begin to force developmental trends. The "Game" is competition. Now it becomes ever more a race between competing interests to see who gets to develop Seed AI first and with what ethical architecture.


When I talk of a toy, per se, I mean something like a computer program that has some kind of complex conversation with you. A seed AI wouldn't really entertain anybody for a long time, and it would have to be constantly built upon by AI enthusiasts. There's a big difference between buying a concept car and buying a concept-car think tank.

Plus, I highly doubt you could even come close to buying the hardware needed to build a seed AI with $50 million. Not to mention the software is totally out of our league in this day and age. But believe me, I'd like to see the SI in 28 years or less as well; it just seems highly unlikely that anybody will take the idea of seriously building an AI and turn it into reality.

I have enough trouble explaining this thing to people without them waving it off as nothing but conjecture coming out of the mouth of someone "who's seen too many sci-fi movies."

- Mangala

#22 Omnido

  • Guest
  • 194 posts
  • 2

Posted 04 October 2002 - 12:58 PM

Also - my definition of the Singularity is the "creation of greater-than-human intelligence" - what's yours?


Then you are speaking of that which is a logical impossibility.
As I posted before, "Greater-than-human-intelligence" must first be quantified.
If you are referring to "Greater-than-human-speed-of-thought", then that's another subject.
As I previously outlined, anything that is a construct of humans will exist as a reflection, at least in part if not in whole, of human representative qualities.
To create something that is more intelligent than the humans who would use their own intelligence to create it is an impossibility. You might as well create a generator that requires 10 watts of power to run but yields 15 watts of power output. Such a generator could then power itself, and this is obviously an impossibility with current models of physics. Unless, of course, you are intelligent enough to bend those laws in the extreme...

Even so, if a human or group of humans creates a super-intelligence, then by logical demonstration that "super-intelligence" is nothing more than a representation of the possibilities endowed to it by its creator; namely us, the "not-so-super-intelligence."
Therefore, if a human can create such a construct, then it stands to reason that the human(s) in question would already possess the qualitative properties of the pre-defined "super-intelligence," the result of which would only be an increase of speed, not of intelligence.
Therefore, as I (and a few others) have already stated, such a super-intelligence will only "buy time". Anything that could be conceived by it could also be conceived by not-so-super-intelligent humans, as it were.

And how many resources do you predict it will take to create greater-than-human intelligence?

As already stated, such an intelligence can never objectively be realized. However, the resources are not in question, nor is the possibility of the advances that a "subjective singularity" could bring. It is a matter of what is addressed below that will determine its success or failure.

Why is an engineering problem dependent on social politics, religious dogma, or capitalistic self-interest?


Inherently it is not. However, there is a huge difference between theory and practice.
I'm sure you can come up with an idea, right? Sure, we all do. Consider the following:

I have an idea; it just popped into my head: I'm going to get a nuke and go blow up the world.
Ok, there's the idea. Well now, let's see: I need materials to do that, and a workable machine that has the power to bring about such massive destruction...
Ok, no problem, I'll design one. It's not impossible, and the laws of physics permit it, so all I have to do is follow those laws...
Ok, now I have my design. I know it works because it's mathematically flawless.
Now I need some funding...

The problem, Michael, is that ideas by themselves are wondrous and terrifying. While the engineer can figure out whether something "could be", many don't bother to consider why it "should be". To most engineers, their ideas alone are "magical" of sorts, and they merely wish to see them realized. A lot of engineers have lofty ideas, envisioning their inventions and creations accomplishing the subjective "greater good" for their fellow humans, and that is noble. But we all know the end of that story now, don't we?
Einstein and what he accomplished. And then there was the Trinity explosion...
Are you aware of the massive amount of guilt and regret that he bore after Hiroshima?
To know that the fate of a country, indeed possibly the world, lay dependent on the soundness of his calculations? Can you even begin to conceive what action you would have taken in his position? He knew his calculations were correct, but he had to demonstrate it to the scientific world during a time of crisis, so as to bring about both "salvation" and "damnation". In his mind, he had merely demonstrated an enormous energy resource that might have been used for the "greater good". Instead, hundreds of thousands died under its unforgiving precision and exactitude.

Social politics are what allow or prevent a thing or state of affairs from becoming fully realized. "Majority rules," remember? Those of us who hate that rule can vent our fury at a mirror, for most of the time it won't make a difference what we as individuals think.

As for religious dogma... I can't believe I even have to explain that one, when we have people killing people in other countries over issues of preferred belief structures and the metaphysically assumed "salvation" of their fellow humans...

As for capitalistic self-interest...
Well, there's one that ties the other two together. Money = control, control = power, and, as we all know, power corrupts. Study that model in relation to human "average personality archetypes" and tell me if you can logically disagree.
As I've quoted before: "There is no greater ruler than a benevolent king..." But as Plato, Socrates, and Aristotle all knew more than two millennia ago, "Only a philosopher possesses such benevolence, for only a philosopher can truly see all that is, as well as all that is not." This makes reference to "the Forms", a concept far too abstract for most of today's mundane, capitalistic consumer minds to comprehend, or even give a damn about.
Without "true benevolence", the likes of which exists in such rare quantity, the dream of the "Friendly" AI will never materialize. Because I understand what the human mind is capable of, and more importantly "why" it is capable, I recognize (as did the philosophers of old) the reasons why humans corrupt themselves. The system was set in motion thousands of years ago, Michael, and as its founding fathers predicted, it has done nothing but degenerate.
Most people are not aware that democracy is merely one tiny step above tyranny, and realizing that, I am in no way apt to trust anything that could be suggested and/or created under the definition of "Friendly" in this day and age. There is simply so much existing corruption amongst those who have the power to bring about such advancements that I would NEVER trust such a development.

Now let's make a huge assumption here. Let's presuppose that a truly benevolent philosopher were to obtain nearly infinite funding.
Could such a person bring about the "Friendly" AI to which you refer? Possibly.
Then again, such a person would be a threat to the corruption already pre-existing in our world, and such corruption would not stand for its easy removal. Others who had equal or greater "power" would seek to turn or eliminate whatever would be a threat to their "power", and the benevolent philosopher would fit that description from the get-go. Then all of a sudden, the benevolent philosopher must now use their resources to combat the other opposing powers, merely to keep their "Dream" and/or "Vision" if you will, alive. Still, in the end, nothing gets accomplished, and unless one of the two sides gains an edge, they will do battle with each other forever, at the cost of resources, time, and even lives.
Welcome to the real world.

I'm not saying it's completely impossible, Michael; I'm merely saying that, given the majority of "current" human motivations, interests, and concepts, the human race as a whole is not prepared.
If you wish to see such an endeavor succeed, you will have to get past the hundreds, possibly even thousands, of others who possess qualities that are anything but benevolent. Good luck is all I have to say.

#23 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 04 October 2002 - 02:54 PM

I have decided to share a little levity as a way of emphasizing the point that Omnido is making concerning the self-limiting character of humans and their subsequent social behaviors. The following stories are all true, and I remember when more than a few of them occurred. And while you read them, please reflect on how AI will incorporate itself into their lives.

TOP EIGHT IDIOTS OF THE YEAR

1. WILL THE REAL DUMMY PLEASE STAND UP?
AT&T fired President John Walter after nine months, saying he lacked intellectual leadership. He received a $26 million severance package.

Perhaps it's not Walter who's lacking intelligence.

2. WITH A LITTLE HELP FROM OUR FRIENDS
Police in Oakland, California spent two hours attempting to subdue a gunman who had barricaded himself inside his home. After firing ten tear gas canisters, officers discovered that the man was standing beside them in the police line, shouting "Please come out and give yourself up."

3. WHAT WAS PLAN B???
An Illinois man, pretending to have a gun, kidnapped a motorist and forced him to drive to two different automated teller machines, wherein the kidnapper proceeded to withdraw money from his own bank accounts.

4. THE GETAWAY!
A man walked into a Topeka, Kansas Kwik Stop, and asked for all the money in the cash drawer. Apparently, the take was too small, so he tied up the store clerk and worked the counter himself for three hours until police showed up and grabbed him.

5. DID I SAY THAT???
Police in Los Angeles had good luck with a robbery suspect who just couldn't control himself during a line-up. When detectives asked each man in the lineup to repeat the words "Give me all your money or I'll shoot", the man shouted, "That's not what I said!"

6. ARE WE COMMUNICATING??
A man spoke frantically into the phone, "My wife is pregnant and her contractions are only two minutes apart!" "Is this her first child?" the doctor asked. "No!", the man shouted, "This is her husband!".

7. NOT THE SHARPEST TOOL IN THE SHED!!
In Modesto, California, Steven Richard King was arrested for trying to hold up a Bank of America branch without a weapon. King used a thumb and a finger to simulate a gun but, unfortunately, he failed to keep his hand in his pocket. (hellllllooooooo!)

8. THE GRAND FINALE
Last summer, down on Lake Isabella, located in the high desert an hour east of Bakersfield, California, some folks, new to boating, were having a problem. No matter how hard they tried, they couldn't get their brand new 22-ft boat going. It was very sluggish in almost every maneuver, no matter how much power was applied. After about an hour of trying to make it go, they putted over to a nearby marina, thinking someone there could tell them what was wrong.

A thorough topside check revealed everything in perfect working condition: the engine ran fine, the outdrive went up and down, the prop was the correct size and pitch. So one of the marina guys jumped in the water to check underneath. He came up choking on water, he was laughing so hard.
NOW REMEMBER ...
THIS IS TRUE...
Under the boat, still strapped securely in place, was the trailer.

Perhaps, on reflection, if we really were all geniuses aided by AI, we would truly be dangerous as a species, well...

At least a lot more dangerous than we already are. [ph34r]

#24 Omnido

  • Guest
  • 194 posts
  • 2

Posted 06 October 2002 - 07:16 PM

[!] [!] [ggg] [ggg] [!] [!]

Now that is what I call humor!
Suffice it to say, I actually live in Modesto, California, and yes, I'm quite often surrounded by these "intelligent" people. Haha [roll]
Thanks for that post Laz. Brought a smile to my otherwise horrid day.

#25 Mangala

  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 07 October 2002 - 07:34 PM

I have to disagree with Omnido on a few points:

Also - my definition of the Singularity is the "creation of greater-than-human intelligence" - what's yours?



Then you are speaking of that which is a logical impossibility.
As I posted before, "Greater-than-human-intelligence" must first be quantified.
If you are referring to "Greater-than-human-speed-of-thought", then that's another subject.
As I previously outlined, anything that is a construct of humans will exist as a reflection, at least in part if not in whole, of human representative qualities.
To create something that is more intelligent than the humans who would use their own intelligence to create it is an impossibility. You might as well create a generator that requires 10 watts of power to run but yields 15 watts of power output. Such a generator could then power itself, and this is obviously an impossibility with current models of physics.


Energy usage and output are not in any way directly correlated with a less complex thing building a more complex thing. If a less intelligent creature were to build a more intelligent creature, intelligence in and of itself does not need to be "conserved." Stating that humans can only build more humans, and that is all, is one-sided. Think about a space station, or an underwater sea lab, or a light bulb. There is no part of a human being that can function in a vacuum, nor underwater for extended periods of time, and yet we have built things that can. Humans have the ability to overcome their own intellectual and physical limits simply because we create things that do the work for us that we cannot do.

For instance, when we try to launch a weather satellite, we are not the only things working on the satellite. We have computers, designed rockets, gases in complex alignment for launch and inertia. We do not just have tons of human beings around making sure every single thing is in its proper place, because we have either used manpower on a previous day or have a computer to make sure that everything is working out fine.

If we are to say that something less complex cannot, in principle, build something more complex, then all of evolution is incorrect. When a fish gives birth to another fish that has one beneficial gene, it makes a more complex and overall better fish. That fact alone defeats the logic that one thing cannot create a more complex thing.

Humans do not plan to just build a superintelligence from scratch; we're going to build a seed AI first, which probably will be less complex than even a small dog. By building a seed AI we can monitor and improve upon a self-improving creature; we can foster its own growth. And since we can build an AI that can improve upon itself, an AI capable of full self-awareness in source code, concept, thought and intelligence would be able to improve its own matrix once it got to our level. So even if it were true that human beings cannot build more intelligent things, we actually only plan to build a less intelligent thing in the first place, one that will eventually be able to improve its own intelligence. Humans on their own know that improving our brain speed by replacing our neurons with transistors would be very beneficial to us. Think of what a fully cognizant AI would be able to deduce with the knowledge of how its entire self works and maintains itself.

We can improve upon ourselves, and so can a human level AI.

Omnido, have you ever visited singinst.org?

- Mangala

#26 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,364 posts
  • 2,000
  • Location:Wausau, WI

Posted 07 October 2002 - 09:29 PM

Omnido, I have to respectfully disagree with your point about "humans not being able to produce greater than human intelligence".

We have already built a machine that can beat the world's best chess player. Now...I know this is just one specialized machine and its program. I know that anyone could argue about how Deep Blue is not intelligent...not in the sense that humans are. I am just talking about the result. Based on the results, the greatest chess player in the world is a computer. No human mind can calculate fast enough to beat Deep Blue. Humans could beat Deep Blue in any other activity you can name. We would whip Deep Blue in any other game. But not chess. We have created a machine that is better than any human at playing chess.

Now, logically extending this point: I have no doubt...with the proper resources...we could build gaming machines that could beat us in any deterministic game.

We are also starting to use genetic/evolutionary algorithms to solve problems too complex for human minds (as another example of specialized greater-than-human intelligence).
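(For anyone unfamiliar with the technique, here is a minimal toy sketch of a genetic algorithm: candidate solutions are scored, selected, recombined, and mutated over many generations. The problem and parameters below are made up for illustration and are not taken from any particular research project.)

```python
import random

# Toy objective: evolve a 20-bit string toward all ones.
TARGET = [1] * 20

def fitness(individual):
    """Count how many positions match the target."""
    return sum(a == b for a, b in zip(individual, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", len(TARGET))
```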

What do you think?

P.S....I know we can make all kinds of arguments about consciousness and intelligence and how Deep Blue is not alive...blah..blah...blah. I have engaged in many such discussions before. I just want to know what you think about the results.

#27 Omnido

  • Guest
  • 194 posts
  • 2

Posted 20 December 2002 - 03:43 PM

Hmm...perhaps I am not being clear on this.

Conservation of energy is the first law of thermodynamics: energy cannot be created nor destroyed; it merely changes form.

In my previous post, my intent was to outline that the creation of something greater than the sum of its parts is a logical contradiction and warrants being called "nearly impossible." Perhaps objective impossibility is too strong a term, but nonetheless the point is logically valid. Let me attempt to explain...

Mangala Posted
Humans have the ability to overcome their own intellectual and physical limits simply because we create things that do the work for us that we cannot do.

On the contrary, a computer cannot do that which we cannot; it can merely do what we can with a greater degree of speed. Speed is the issue here.

Mind Posted
We have already built a machine that can beat the world's best chess player. No human mind can calculate fast enough to beat Deep Blue.  We have created a machine that is better than any human at playing chess


While humans did indeed fashion a computer that defeated the world's greatest chess player, in truth it is not an accurate assessment of superiority. Deep Blue did defeat Kasparov in chess once, Kasparov stalemated the computer in one game, and forfeited the 3rd and final game. This is not a defeat; it is renouncing a game in favor of the realization that to defeat this machine would require far more time and energy than was readily available on hand, and Kasparov himself admitted that the machine would quite probably have stalemated him again if he had really desired to finish the last game. A stalemate does not imply superiority. Since chess allows for stalemates, there can be no pre-determined victory for either player other than a possible stalemate. So, in essence, the computer didn't "beat" him at all. The computer merely "out-thought" him in terms of speed.

Let me give an example:
Let's suppose we have an individual who can memorize the entire Library of Congress, word for word, page for page. Let's also suppose that a computer has been programmed to contain all the information of the Library of Congress within its digital databanks.

Now we ask both the computer and the human what is on page 125 of one of the volumes within the library. Which of the two will answer first?
This isn't rocket science. Obviously, with its billions of calculations per second, the computer will answer first. Does that make the computer a "super-intelligence"? Absolutely not. It merely performed its task faster.
Speed does not equal superior intelligence.

Now let's suppose we ask the computer and the human what the "meaning" is of a certain statement on a particular page of a volume within the library. Which of the two will answer first? Again, the computer will. It will answer "insufficient information", while the human will ponder and produce an answer that reflects human experience, imagination, and creativity. The computer has none of these; it is merely a super-fast calculator.

Now I know, I know, many will argue: "What stops the programmers from endowing the computer with those aforementioned qualities?" Time, trial, and error. Could that eventually be accomplished? Possibly. But now we begin to anthropomorphize the mind of the machine as being something similar to us. Such a machine would have to duplicate the very essence, function, and interactive complexity of a human being (all of which are possible) in order to yield "meaningful results". In the end, all we have done is create a faster human, one that has reduced the odds of making mistakes. Again, greater speed, but equal intelligence.

There is no evidence that any machine could ever acquire "Superior-than-human intelligence", insofar as the human being could not endow themselves with the same degree of intelligence as a machine. Granted, it would take far longer, but if we are talking about efficiency and "superiority", then the human has the machine beat, hands down.

Stop for a moment and think about how much energy it takes to sustain a human, indeed to cultivate and educate a human, versus a machine. Machines at present require orders of magnitude more power to operate than a human mind, but they also yield orders of magnitude greater numbers of calculations.
I would argue that if the same amount of energy could be invested in a human, the human would equal if not supersede its computer equivalent.
Now indeed, as machines become smaller and more efficient, they will consume far less power and continue to calculate ever faster. Notice the key word there: Faster. That is all. Once humans begin to augment themselves with artificial systems, organic bio-chips or whatever sci-fi equivalent, we too will become faster than our former selves, with increased abilities and capacities. Still, all we will have done is make ourselves faster, not "Superior".
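
As a rough back-of-the-envelope illustration of that scale (the wattage figures below are assumptions for illustration, not measurements from any particular machine):

[code]
# The human brain runs on roughly 20 watts; a large machine of the early-2000s
# era draws something on the order of hundreds of kilowatts. The machine figure
# is an assumed order of magnitude, used only to show the size of the gap.

BRAIN_WATTS = 20
MACHINE_WATTS = 500_000  # assumed order of magnitude, not a measured value

print(f"Machine-to-brain power ratio: roughly {MACHINE_WATTS / BRAIN_WATTS:,.0f}x")
# ~25,000x -- several orders of magnitude more power in, and orders of
# magnitude more raw calculations out, which is the trade-off described above.
[/code]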

Perhaps the definition of "Superior" needs to be better defined. If superior is to equate to speed, then yes, it is possible to build superior-intelligence. But from the attitudes of many posts, it has been the perceived intent that "superior" meant "Beyond human cognition" or "beyond human capacity for understanding" which is in my opinion, totally bunk.
I challenge any human on this planet, anywhere, to come up with a concept that is or would be beyond my comprehension. Obviously if anyone could accomplish such a feat, that would imply that they were able to comprehend it, which in turn would imply that I am able to comprehend it. Furthermore, if that person were able to create an artificial system that could comprehend it, a "Greater-than-human intelligence" would still not have been achieved, as the machine would merely equal its creator in understanding while surpassing it in speed.

Perhaps it is the search for improvement that drives humans to use such lofty terms as "Greater than", "superior", etc.
Something instead expressed as "Greater than we are now", or perhaps "To a better advantage", or even "With greater efficiency", might do better at conveying the ideas behind what we hope to define as "Objective Reality."
In the end, semantics trip us up over many things, creating obstacles and barriers, and worse: assumptions. These tend to confuse and gray out many areas where precision and exactitude are of great importance when dealing with objective discernment and useful results.

Did I miss something here?

#28 Gordon Worley

  • Guest
  • 2 posts
  • 0
  • Location:Orlando, FL

Posted 20 December 2002 - 04:45 PM

While I don't know, I suspect that Omnido is making the mistake of thinking that humans represent some grand culmination of intelligence. Humans are evolved organisms and so are their brains. As such, their brains did not evolve to be generally intelligent, capable of solving any solvable problem in finite time. Instead, brains evolved 'psychological phenotypes' that, over time, caused some humans to reproduce more effectively than others. The end result is humans who can tell when people are cheating but can't find p and ~q when not in the context of social games. Human minds have enough generality for our environment. Given a more rapidly changing environment, we would possibly be more generally intelligent. If you program an AI to be completely generally intelligent, it would be able to solve any problem that is solvable in finite time. Of course, the AI is going to need specific modules that are less general to get anything done in a reasonable amount of time, but that's an engineering issue.
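
For anyone unfamiliar with the "p and ~q" result, it refers to the Wason selection task. A minimal sketch of the underlying logic (the card framing and names below are assumptions for illustration, not anything from the post):

[code]
# Wason selection task: test the rule "if p then q" by turning over only the
# cards whose hidden side could falsify it. Those are the cards showing p
# (q might be false underneath) and the cards showing not-q (p might be true
# underneath). People typically pick p and q instead -- unless the same rule
# is dressed up as a social contract, which is the point about evolved,
# domain-specific reasoning.

def cards_to_turn(faces):
    """Return the visible faces that must be checked to test 'if p then q'."""
    return [face for face in faces if face in ("p", "~q")]

print(cards_to_turn(["p", "~p", "q", "~q"]))  # ['p', '~q']
[/code]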

#29 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 21 December 2002 - 01:47 AM

OmniDo wrote:
In my previous post, my intent was to outline that creation of something greater than the sum of its parts is a logical contradiction and warrants definition as "Nearly impossible." Perhaps objective impossibility is too strong a word, but nonetheless it is logically valid. Let me attempt to explain...


As Mind and Mangala have pointed out above, humanity has already created many devices which extend and amplify our preexisting methods of information gathering, storage, retrieval, analysis, and so on. Evolution has consistently produced better biological designs and features throughout history, slowly adding on layers of specialized cognitive and anatomical machinery, to produce the wealth and diversity of life we see today; all this, and at the beginning our solar system was nothing more than an interstellar dust cloud. Are you sure you don't mean something else here?

On the contrary, a computer cannot do that which we cannot; it can merely do what we can at greater speed. Speed is the issue here.


How about future computers designed to integrate organically with the human brain? How about cerebral implants with information processing structures as complex and effective as neural tissue, allowing human beings to intuitively understand a wider range of patterns, retain more memory, immediately notice salient details in a huge project or society, quickly power their way through long chains of sophisticated reasoning, etc.?

It can be tempting to hastily classify human intelligence as having crossed some sacred threshold which will forever allow us to be on par with even the greatest of future superintelligences, but this is due to the fact that humans don't focus on or demand the execution of superintelligently-challenging tasks of each other or themselves. We can look at a really smart human and say "hey, she's impressive", but that's because the adaptations responsible for our subjective psychological reaction are specifically tuned to living in a human society. All of human society is built around humanity's characteristic level of intelligence, and it's such a self-reinforcing memeplex that it's hard to see the profound qualities that all humans lack. The difference in brilliance between an idiot and Einstein can be isolated to microscopic neurological and genetic differences, and investigating the causes that produce these differences and attempting to exploit them will likely create a fascinating new branch of research just prior to the Singularity.

The components that make up our "intelligence" are just a jumble of evolved, content-rich neurological adaptations with a dose of self-rewiring plasticity thrown in (also an adaptation), and there's nothing preventing us from eventually adding in additional functionality to create individuals who think qualitatively better than even the most intelligent humans. Such transhuman individuals could potentially walk into a broad range of laboratories or research institutions, point out the obvious, leave, and repeat indefinitely, contributing more to technological progress in general than a thousand humans ever could. An expert in a field (whose "expertness" can be narrowed down to a few differences in the interneuronal connection map) can solve a problem that a thousand newbies could never solve, and a transhuman expert could think thoughts entirely outside of the human sphere of experience and solve a much wider range of more complex problems immediately, in the same sense that homo sapiens thinks thoughts outside of homo habilis's sphere of experience. The difference is not simply in speed, but qualitatively better observation skills, analysis, innovation, creativity - whatever human skill it is, there are neural processes responsible for it, and these processes will be analyzed, enhanced, and run as computer code, opening the door for further cycles of enhancement.

While indeed humans fashioned a computer that defeated the world's greatest ... didn't "Beat" him at all. The computer merely "out-thought" him in terms of speed.


True; since chess is a game that emerged specifically in human culture, and requires the broad range of content-rich human cognitive and perceptive mechanisms to do well in, humans still excel at chess over software programs. But Deep Blue and friends aren't AIs, just glorified search trees. The algorithmic complexity of the human brain far exceeds that of these chess-playing programs, so that should be factored into our metric of superiority as well. Projecting and analyzing combinatorially explosive games requires lots of specialized cognitive machinery, and we don't know quite enough about ours yet to build a machine that thinks at the same smartness level as we do. But when we do, the impact will be quite huge...
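
To make "glorified search trees" concrete, here is a minimal sketch of the brute-force minimax look-ahead such programs are built on. A toy stone-taking game stands in for chess; this is an illustration under those assumptions, not Deep Blue's actual code, which adds enormous depth, pruning, and a hand-tuned evaluation function.

[code]
# Plain minimax over a toy game: players alternately take 1 or 2 stones, and
# whoever takes the last stone wins. The search exhaustively looks ahead and
# reports whether the player to move can force a win -- look-ahead speed, not
# anything resembling general understanding.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, -1 otherwise."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

print(minimax(3, True))   # -1: three stones is a lost position for the player to move
print(minimax(4, True))   # +1: take one stone and hand the opponent the losing 3-pile
[/code]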

There is no evidence that any machine could ever acquire "Superior-than-human intelligence", insofar as the human being could not endow themselves with the same degree of intelligence as a machine. Granted, it would take far longer, but if we are talking about efficiency and "superiority", then the human has the machine beat, hands down.


What's our definition of "intelligence", anyway? It truly does have something to do with speed - responding quickly to wider ranges of threats is one of the main reasons that intelligence evolved in the first place. You can point at a human's brain, looking very closely at all the machinery, and say: "Why is this so impressive? This organism simply uses the same biochemical and physical laws as a common slug; it's just using a lot of them in one place." The human brain is based on fundamental evolutionary principles that originally evolved many millions of years ago, but the continued layering of heuristics and plasticity eventually produced beings that can ponder the universe in a qualitatively different way than software programs or chimps. This isn't a popularity contest between software programs and human beings, but a matter of fundamental facts about the way minds work.

Stop for a moment and think about how much energy it takes to sustain a human, indeed to cultivate and educate a human, versus a machine. Machines at present require orders of magnitude more power to operate than a human mind, but they also yield orders of magnitude greater numbers of calculations.


But right now their calculations are very simple and of limited use. This is mostly because computer programming is an activity completely outside of the usual activities that humans are ancestrally familiar with and specialize in, so we aren't very good at programming anything besides relatively simple code structures. (Especially relative to a hypothetical human with evolved modules specialized for programming computers.) Evolution, having a load of time to work with, has done better so far. But instead of comparing present-day computers to present-day humans, why can't we compare humans in general (the complexity and intelligence of which is effectively static and doesn't improve much within the life of the individual) with a serious artificial intelligence or upload (with complete self-understanding and self-access, the ability to make arbitrary mental revisions and improvements, instantaneous access to all the information on the Internet, automated cognitive processes for compiling and supercompiling, the ability to split ver consciousness into multiple processing streams, the ability to copy verself at will, freedom from distraction or rationalization, and so on, and so on, and so on...)? If you define intelligence as "the capability to solve complex problems in a complex environment", then these future entities will far outscore humans in terms of being able to handle societal, emotional, cognitive, and environmental complexity, among others.

Once humans begin to augment themselves with artificial systems, organic bio-chips or whatever sci-fi equivalent, we too will become faster than our former selves, with increased abilities and capacities. Still, all we will have done is make ourselves faster, not "Superior".


So you're saying that humans, a random consequence of a blind design process, will always be just as intelligent as anything carefully engineered and optimized, or designed by better blind processes, or adapted to a wider range of thinking styles, or endowed with augmentative thinking processes, or any other massive insights that anyone might come up with. I don't buy it. If you were plopped in the middle of a transhuman society as a new social agent, you could easily feel very incompetent at every task they considered important, being completely incapable of understanding or appreciating their art, science, culture, or whatever analogous pursuits these transhumans engage in. If they experienced distaste towards you due to this, and their method for doing so was something recognizable to you, then you might attract a lot of social ridicule living in such a community. (Although it's unlikely that real transhumans would operate socially in the same way we do, such as ridiculing less competent individuals, or other stuff like that which screams "evolved!")

Perhaps the definition of "Superior" needs to be better defined. If superior is to equate to speed, then yes, it is possible to build superior-intelligence. But from the attitudes of many posts, it has been the perceived intent that "superior" meant "Beyond human cognition" or "beyond human capacity for understanding" which is in my opinion, totally bunk.


Could you understand the aesthetic meaning of a 50-dimensional alien art exhibit, intuit the cumulative behavioral patterns of an animal with quadrillions of moving parts, or participate suavely in a "party" with augmented humans who are enhanced such that they can execute and comprehend a whole new range of body language and facial expression previously unavailable to baseline humans?


#30 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 21 December 2002 - 02:09 AM

Mangala wrote:
In our society engineers are paid to do their work; they don't just do it because they enjoy the work that they do (except for the retired ones). Every dime that goes into the building of a seed AI will have to come from somewhere, and that somewhere is either the government or the private sector. If no senators, CEOs, or cults want to build the AI, no one will. We alone cannot do the job, and that is why we've hardly done anything other than sit here and map out what others could do.


True; I was exaggerating a bit before. I meant that all this other stuff won't serve as a fundamental barrier, but rather as beatable obstacles.

This reminds me of a great movie I saw the other day called "Contact." It really is a great movie starring Jodie Foster and is very relevant in terms of governments and companies around the world participating in the building of something that could be the best thing that humanity has ever built, or the worst. Actually, now that I think about it, it strangely has a whole lot to do with this subject; please rent Contact.


I've seen it... there are other stories out there that use this same plot device, too.

Plus, I highly doubt you could even come close to buying the hardware needed to build a seed AI with $50 million. Not to mention the software is totally out of our league in this day and age. But believe me, I'd like to see the SI in 28 years or less as well; it just seems highly unlikely that anybody will take the idea of seriously building an AI and turn it into reality.


What makes you think the software is *so* far out, especially with hyperexponentially accelerating technological progress? Also, collaborative tools will let software engineers cooperate more effectively, allowing us to get more done in a shorter period of time. Rudimentary brain-computer interfaces or more intuitive programming tools could also improve the situation substantially within a very short amount of time. Most of the human brain's functionality wouldn't need to be duplicated in an AI, either, because it's specialized for "physical" rather than "virtual" entities. And last but not least, a subhuman AI could serve as a very valuable assistant for further programming.

I have enough trouble explaining this thing to people without them waving it off as nothing but conjecture coming out of the mouth of someone "who's seen too many sci-fi movies."


Depends on who you're explaining it to. You can always talk about neurological enhancement rather than AI, because that's slightly less foreign and scary.



