  LongeCity
              Advocacy & Research for Unlimited Lifespans





Predictions


44 replies to this topic

Poll: First human-level sentient AI (104 member(s) have cast votes)

before 2030?

  1. Most likely yes: 31 votes (29.81%)

  2. Most likely no: 41 votes (39.42%)

  3. Maybe: 30 votes (28.85%)

  4. No opinion: 2 votes (1.92%)


#1 Nihilated

  • Guest
  • 87 posts
  • 0

Posted 11 May 2008 - 10:26 PM


Based on the progress we are making in both software and hardware, do you think the first (general) human-level AI will be developed during the 2020s? How about mind-uploading?

I believe mind-uploading will be achieved shortly before or after AI, because the same technology is required to create an AI in the first place. But this is just the opinion of a non-professional.

Edited by Nihilated, 11 May 2008 - 10:27 PM.


#2

  • Lurker
  • 0

Posted 11 May 2008 - 11:05 PM

Based on the progress we are making in both software and hardware, do you think the first (general) human-level AI will be developed during the 2020s? How about mind-uploading?

I believe mind-uploading will be achieved shortly before or after AI, because the same technology is required to create an AI in the first place. But this is just the opinion of a non-professional.


Human level AI by the 2020s is slightly less plausible than the Second Coming.

#3 digfarenough

  • Guest
  • 26 posts
  • 0
  • Location:Boston

Posted 11 May 2008 - 11:10 PM

Why would mind uploading technology be needed to make systems with the same learning capabilities as humans?

I vote "maybe". Human-equivalence will require significant improvements in theory as well as in computational substrates. The latter can be predicted to follow Moore's law (at least, we hope it can!), but theoretical developments seem to be more fits-and-starts than smooth improvement (that is not based on any particular evidence, just on my sense of things).

#4 Nihilated

  • Topic Starter
  • Guest
  • 87 posts
  • 0

Posted 11 May 2008 - 11:18 PM

Human level AI by the 2020s is slightly less plausible than the Second Coming.


Why do you think that?

#5 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 12 May 2008 - 02:17 AM

Very likely.

and...

I believe that mind-uploading will be achieved shortly after or before the AI because the technology is required to create an AI in the first place

What is your reasoning behind this? I completely disagree.

#6 Nihilated

  • Topic Starter
  • Guest
  • 87 posts
  • 0

Posted 12 May 2008 - 03:02 AM

What is your reasoning behind this? I completely disagree.


Nvm that then. Do you think mind-uploading will be around soon after AI?

#7 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 May 2008 - 03:37 AM

I voted "most likely no". I just don't see it that near; call it gut instinct. I think there is still far too much work to be done on both the hardware and the software of a human-level AI for it to come that soon. My guess? Probably around 2040-2060.

As for mind uploading, I think that, like virtually everything else, it will become much easier once we reach human-level AI and, only one small step and very little time later, strong AI. Mind uploading should be possible no more than two decades after we reach strong AI: no later than 2080, and perhaps as soon as 2060.

These predictions rest on a few premises. First, that Moore's law and the law of accelerating returns don't break down. Second, that no major disaster happens. And third, that the software required to build an AI doesn't run into unforeseen difficulties that take too long to overcome (which could delay it by centuries, and I wouldn't be alive to see it, and that would suck).

#8 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 12 May 2008 - 06:39 AM

It is the number 1 Long Bet that “By 2029 no computer - or "machine intelligence" - will have passed the Turing Test.”:
http://www.longbets.org/1

Kurzweil, of course, is betting against that proposition (he thinks a machine will pass).

#9 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 12 May 2008 - 09:43 PM

Nvm that then. Do you think mind-uploading will be around soon after AI?

I think soon after... soon after an AGI.

I think my prediction range would be centered around 2035, where 2030 would represent (likely), and 2040 would represent (any time now...)

#10 Nihilated

  • Topic Starter
  • Guest
  • 87 posts
  • 0

Posted 12 May 2008 - 10:48 PM

These predictions rest on a few premises. First, that Moore's law and the law of accelerating returns don't break down. Second, that no major disaster happens. And third, that the software required to build an AI doesn't run into unforeseen difficulties that take too long to overcome (which could delay it by centuries, and I wouldn't be alive to see it, and that would suck).


But if neuroscience can map out all of the interneuronal connections in the brain, couldn't the software be implemented the "hard" way, by simply copying everything that is in the brain? Surely that could happen by 2030.

#11 digfarenough

  • Guest
  • 26 posts
  • 0
  • Location:Boston

Posted 12 May 2008 - 10:56 PM

But if neuroscience can map out all of the interneuronal connections in the brain, couldn't the software be implemented the "hard" way, by simply copying everything that is in the brain? Surely that could happen by 2030.


Far more than just the connectivity of neurons is important for mind uploading. For instance, information processing is influenced by the geometry of a neuron's dendrites (though that, of course, is mappable in the same way). More importantly, a neuron's activity is strongly influenced by the concentration and location of various ion channels along its dendrites, soma, and axon. Additionally, the strength of a synapse is not necessarily clear just from its ultrastructure, so the receptors found at synapses (and elsewhere too, I think, though at lower densities) are also important. Identifying channels and receptors seems to be a harder problem.
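The channel-density point can be made concrete with a toy simulation. The sketch below uses a deliberately minimal leaky integrate-and-fire model, nowhere near the biophysical detail the post describes, and all parameter values are illustrative: two neurons with identical "wiring" and identical input behave completely differently once one conductance parameter changes.

```python
def simulate_lif(g_leak, i_input=1.5, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-70.0, c_m=1.0, dt=0.1, t_max=200.0):
    """Leaky integrate-and-fire neuron (Euler integration); returns spike count.

    g_leak stands in for the aggregate leak-channel conductance: exactly the
    kind of parameter that is invisible in a pure wiring diagram.
    """
    v = v_rest
    spikes = 0
    for _ in range(int(t_max / dt)):
        dv = (-g_leak * (v - v_rest) + i_input) / c_m
        v += dv * dt
        if v >= v_thresh:   # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes

# Same connectivity, same input current; only the channel density differs.
print("low leak conductance:", simulate_lif(g_leak=0.05), "spikes")
print("high leak conductance:", simulate_lif(g_leak=0.2), "spikes")
```

With the low conductance the membrane reaches threshold and fires repeatedly; with the higher one it saturates below threshold and stays silent, despite identical "connectivity".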

But I don't think this would be considered "AI" because it is not really artificial.

#12 Nihilated

  • Topic Starter
  • Guest
  • 87 posts
  • 0

Posted 13 May 2008 - 12:31 AM

I was talking about AI in the previous post. But anyway, I think this is what Ray Kurzweil wants as the first AI: just copy everything, particle by particle.

#13 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 25 May 2008 - 10:04 PM

Wouldn't an AGI be developed from behavioural properties (top-down, of a sort), whereas mind uploading, if it is possible at all, must be based on interpreting and copying physical structures, our knowledge of which, in detail and/or abstraction, will never be 100% complete (i.e. bottom-up)?

Edited by brainbox, 25 May 2008 - 10:06 PM.


#14 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 25 May 2008 - 10:13 PM

Based on the progress we are making in both software and hardware, do you think the first (general) human-level AI will be developed during the 2020s? How about mind-uploading?

I believe mind-uploading will be achieved shortly before or after AI, because the same technology is required to create an AI in the first place. But this is just the opinion of a non-professional.


Up until 2006 we thought we already had enough teraflops sitting in computer labs worldwide to run a functional simulation of a complete human brain at the neuron level. Then we discovered that glial cells communicate with each other and may process data too, not just neurons. There are several varieties of glial cells, and they outnumber neurons ten to one in the first place (we used to think they were just "padding" for the neurons).

So now all predictions are off: not only can we still not make assumptions about the exact level of detail needed to simulate a neuron satisfactorily, we don't yet know how glial cells contribute to the mind, or whether they do at all.

That being said, reproducing the human brain in digital form is not the only way to make an artificial consciousness, as you seem to believe. Ultimately, the mind is a processor of abstractions, and as such it can be described in many different ways. I suggest you look into "expert systems", for instance. You can also do some mathematical modelling of the mind with software such as Simulink.
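For readers curious what an "expert system" looks like in miniature: the classic architecture is a set of if-then rules plus a forward-chaining engine that fires them until no new facts appear. The rules and facts below are invented purely for illustration.

```python
# Minimal forward-chaining rule engine in the spirit of classic expert systems.
# Each rule is (set of premises, conclusion). All facts and rules are made up.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "recommend_doctor"),
]

def forward_chain(initial_facts, rules):
    """Fire every rule whose premises all hold, until a fixed point is reached."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
print(sorted(derived))
```

Note the chaining: the second rule can only fire because the first one derived "possible_flu" along the way.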

My prediction is that software sentience will probably appear first in video games. With every new game that comes out, those darn AIs kick ass more efficiently. In Doom, the AIs didn't know to duck when I fired at them... in STALKER, they know how to sneak up on me. In Call of Duty, they don't just take cover, they flush YOU out with grenades if you try to hide. Ten years from now, they'll ninja your ass out of your pants if you so much as look at them the wrong way :-D

Anyway... regarding mind uploading, I believe it's doable, but we'll need very specific technology to achieve it. I'm thinking nanomachine colonies that would analyze all your synapses in vivo. We aren't there yet, and I have no idea when this might exist; it could happen before software sentience, or after.

Nefastor

#15 DukeNukem

  • Guest
  • 2,008 posts
  • 141
  • Location:Dallas, Texas

Posted 31 May 2008 - 11:28 PM

Voted no. Maybe 2050. I also don't believe most of us will live extraordinarily long lives, except the youngest among us. If Bill Gates, Paul Allen, or some other moneybags were 100% behind SENS, I might be more optimistic. But, IMO, we're moving at far too slow a pace currently.

#16

  • Lurker
  • 0

Posted 10 June 2008 - 01:25 AM

I voted yes. Considering that ten years ago the Pentium II came out at a blazing 233 MHz, while later this year or in early 2009 Intel will be selling 8-core CPUs, probably in the 3 GHz range (24 GHz in aggregate, about 103 times the Pentium II), I wouldn't underestimate the computer industry. At that rate, by 2030 PCs will be 10,000 times what they are now, and the military will no doubt have supercomputers 100,000 times better than that. There are even "CPUs" being made out of actual brain cells in some lab somewhere; if you count those, AI is already here, and it could easily advance way past human intelligence in short order. Also, people don't exactly use their brains to the full extent. Computers have long been able to do things that people could not. People aren't special; most people don't even have "AI"... "I"... whatever...
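The clock-rate arithmetic in that post can be checked in a couple of lines. Aggregate MHz is a crude proxy for performance (real throughput depends on far more than clock speed), so treat this purely as a reconstruction of the post's own reasoning:

```python
p2_mhz = 233                # Pentium II launch clock, circa 1997-98
multicore_mhz = 8 * 3000    # an 8-core part at 3 GHz: the post's "24 GHz total"

speedup_decade = multicore_mhz / p2_mhz   # the post's "about 103 times"
two_decades = speedup_decade ** 2         # same rate sustained 20 more years

print(f"ten-year speedup: {speedup_decade:.0f}x")
print(f"two-decade speedup: {two_decades:.0f}x")  # roughly the "10,000 times"
```

So the post's figures are internally consistent: a 103x decade, squared, gives about 10,600x, which is where the "10,000 times by 2030" comes from.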

#17 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 10 June 2008 - 05:10 PM

I voted yes. Considering that ten years ago the Pentium II came out at a blazing 233 MHz, while later this year or in early 2009 Intel will be selling 8-core CPUs, probably in the 3 GHz range (24 GHz in aggregate, about 103 times the Pentium II), I wouldn't underestimate the computer industry. At that rate, by 2030 PCs will be 10,000 times what they are now, and the military will no doubt have supercomputers 100,000 times better than that. There are even "CPUs" being made out of actual brain cells in some lab somewhere; if you count those, AI is already here, and it could easily advance way past human intelligence in short order. Also, people don't exactly use their brains to the full extent. Computers have long been able to do things that people could not. People aren't special; most people don't even have "AI"... "I"... whatever...



Yes, the hardware is not disappointing, but what about the software? The best I've seen out there is Blue Brain. All I hear is people talking about how we know more and more about the brain, but I ask myself: is that enough? Maybe we aren't advancing fast enough on the software side to have an AI by 2030.

#18

  • Lurker
  • 0

Posted 10 June 2008 - 08:08 PM

I voted yes. Considering that ten years ago the Pentium II came out at a blazing 233 MHz, while later this year or in early 2009 Intel will be selling 8-core CPUs, probably in the 3 GHz range (24 GHz in aggregate, about 103 times the Pentium II), I wouldn't underestimate the computer industry. At that rate, by 2030 PCs will be 10,000 times what they are now, and the military will no doubt have supercomputers 100,000 times better than that. There are even "CPUs" being made out of actual brain cells in some lab somewhere; if you count those, AI is already here, and it could easily advance way past human intelligence in short order. Also, people don't exactly use their brains to the full extent. Computers have long been able to do things that people could not. People aren't special; most people don't even have "AI"... "I"... whatever...



Yes, the hardware is not disappointing, but what about the software? The best I've seen out there is Blue Brain. All I hear is people talking about how we know more and more about the brain, but I ask myself: is that enough? Maybe we aren't advancing fast enough on the software side to have an AI by 2030.


OK then, the first thing to do from a programming standpoint is to have a well-defined goal. What, exactly, would qualify as AI?

http://www.sptimes.c...ehind_the.shtml Is that good enough?
How hard would it be to make a computer model of those brain cells in a dish? That was done four years ago... Off the top of my head, I would define AI as:

1. The computer/robot is given orders (as if people are any different).
2. The computer generates a "warmer/colder" scale of some kind, relative to achieving the goal.
3. The computer makes random attempts until it achieves some degree of success.
4. The computer keeps making random attempts, but increases the frequency of previously successful actions.
5. You get the idea. Eventually the goal might be achieved.

Basically, the idea is to find a rigid formula that can solve any problem, and then program that rigid formula.
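The recipe sketched in that post is, in modern terms, stochastic hill-climbing: make random moves and keep whatever the "warmer/colder" scale rates as an improvement. A toy sketch, where the goal and scoring function are invented purely for illustration:

```python
import random

def warmer_colder(score, start, step=0.5, iters=1000, seed=42):
    """Stochastic hill-climbing, following the post's steps: make random
    attempts, keep any attempt the 'warmer/colder' scale rates as warmer."""
    rng = random.Random(seed)
    best, best_score = start, score(start)
    for _ in range(iters):
        candidate = best + rng.uniform(-step, step)  # random attempt
        s = score(candidate)
        if s > best_score:                           # "warmer": keep it
            best, best_score = candidate, s
    return best

# Toy goal: land as close to 7.0 as possible ("warmer" = closer to 7).
found = warmer_colder(lambda x: -abs(x - 7.0), start=0.0)
print(round(found, 3))
```

Such blind search works fine on this one-dimensional toy but scales poorly; the part the post glosses over is defining a usable "warmer/colder" scale for open-ended goals.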

#19 John_Ventureville

  • Guest
  • 279 posts
  • 6
  • Location:Planet Earth

Posted 15 August 2008 - 12:31 AM

I believe Ray Kurzweil currently puts the full-blown Singularity at around 2045. So sometime in the preceding decade, circa 2035, human-level AI would probably materialize (and, through its supposed capacity for exponential self-improvement, bring the Singularity into being).

John Grigg

#20 Nova

  • Guest
  • 79 posts
  • 2
  • Location:Russia

Posted 20 September 2008 - 02:43 PM

The new person would need access to the source of the former person's mind. For that, the former person's memory would have to be downloaded onto a computer, so that the data could be consulted as questions arise for the new person.

As for connecting neurons: it would be better to connect the brain to a biomolecular computer. That way the memories could be preserved.



#21 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 04 October 2008 - 02:01 PM

SEE: http://www.novamente.net/bruce/?p=54

#22 Dmitri

  • Guest
  • 841 posts
  • 33
  • Location:Houston and Chicago

Posted 14 October 2008 - 05:48 AM

I voted most likely no. However, I do think that by the year 2030 we will most likely have robots/androids that will serve us much like the majority of robots in the film I, Robot, but I don’t think they’ll have the same capacity as humans or what singularity people are anticipating.

#23 suspire

  • Guest
  • 583 posts
  • 10

Posted 10 November 2008 - 12:34 AM

I voted yes. Considering that ten years ago the Pentium II came out at a blazing 233 MHz, while later this year or in early 2009 Intel will be selling 8-core CPUs, probably in the 3 GHz range (24 GHz in aggregate, about 103 times the Pentium II), I wouldn't underestimate the computer industry. At that rate, by 2030 PCs will be 10,000 times what they are now, and the military will no doubt have supercomputers 100,000 times better than that. There are even "CPUs" being made out of actual brain cells in some lab somewhere; if you count those, AI is already here, and it could easily advance way past human intelligence in short order. Also, people don't exactly use their brains to the full extent. Computers have long been able to do things that people could not. People aren't special; most people don't even have "AI"... "I"... whatever...



I voted: No.

I mean, maybe there's an outside chance we'd get the hardware to do this, though even that is highly unlikely. The software for it seems even more improbable.

I mean, consider checkers: we've only "weakly solved" it, meaning the program can always force at least a draw from the opening position, backed by a database of every position with ten or fewer pieces on the board. A "strong" solution, working out every possible variation from the start of the game, is still beyond reach. And it took 18 years of computation just to get that far: http://www.taipeitim...2003370642/wiki

Chess? Forget chess; we're nowhere near solving it. So how we'll create a human-level AI in 10 to 20 years, I don't know. I'd say even 2050 is overly optimistic, short of something revolutionary happening in the field in that time. At our current rate, I think 2100 is a much more likely date for the first human-level AI.
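For scale, "weakly solving" a game means proving its value under perfect play from the opening position. Tic-tac-toe is small enough to solve this way in a few lines of exhaustive minimax; checkers needed 18 years and enormous endgame databases, but this is the same idea at toy scale:

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None. Board is 9 chars."""
    for i, j, k in WIN_LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def solve(board, player):
    """Minimax value for X under perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    values = [solve(board[:i] + player + board[i + 1:], nxt)
              for i, c in enumerate(board) if c == " "]
    return max(values) if player == "X" else min(values)

print(solve(" " * 9, "X"))  # 0: perfect play draws
```

The exhaustive search visits every reachable game, which is feasible for tic-tac-toe's few hundred thousand lines of play and utterly infeasible, without massive pruning and databases, for checkers or chess.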

Edited by suspire, 10 November 2008 - 12:36 AM.


#24 luv2increase

  • Guest
  • 2,529 posts
  • 37
  • Location:Ohio

Posted 11 November 2008 - 03:28 PM

I put "most likely no."

I say this because of how long it took nature, devoid of any intelligence, to "create" the first human being: billions of years. How can we expect, even with intelligence, to create the same thing in a matter of measly years? I know the AI won't need to be as advanced as we are, even though mother nature, albeit devoid of any intelligence, created us, but I just don't think we primitive lifeforms will be able to pull off such a feat. I bet that if we let mother nature take her course, she'd create AI in another few billion years. It would take her more time, but she sure did a great job with us :)

Think about it: mother nature is smarter than us, her creation, isn't she?

Edited by luv2increase, 11 November 2008 - 03:31 PM.


#25 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 11 November 2008 - 03:48 PM

I put "most likely no."

I say this because of how long it took nature, devoid of any intelligence, to "create" the first human being: billions of years. How can we expect, even with intelligence, to create the same thing in a matter of measly years? I know the AI won't need to be as advanced as we are, even though mother nature, albeit devoid of any intelligence, created us, but I just don't think we primitive lifeforms will be able to pull off such a feat. I bet that if we let mother nature take her course, she'd create AI in another few billion years. It would take her more time, but she sure did a great job with us :)

Think about it: mother nature is smarter than us, her creation, isn't she?



She's infinitely dumber than we are. Look how long it took her to create us, whereas we will be able to create an intelligence much greater than our own after only 250,000 years of existence. Nature would take an eternity to do the same.

Even if "mother nature took her course" and managed to create superintelligent beings, we would be long gone by then, so I don't really care. I want to live for as long as I want (but mother nature doesn't want me to live very long, so fuck her).

#26 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 11 November 2008 - 07:41 PM

Mother nature does not exist @@...
Nature is just a description of physical phenomena happening without intelligent interference, so yes, we can do better.

#27 mpe

  • Guest, F@H
  • 275 posts
  • 182
  • Location:Australia

Posted 12 November 2008 - 04:25 AM

Weak AI should do the job: not only would the software be much easier, it would still leave humans in charge.

I'm very uncomfortable with strong AI, particularly if it decides that we are the problem.

And weak AI could probably be achieved within the next 10 years.

#28 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 November 2008 - 06:01 AM

Weak AI should do the job: not only would the software be much easier, it would still leave humans in charge.

I'm very uncomfortable with strong AI, particularly if it decides that we are the problem.

And weak AI could probably be achieved within the next 10 years.



The benefits of having strong AI definitely outweigh the risks. I don't know how anyone could not want strong AI to be developed; that attitude is just based on senseless fear of the unknown.

#29 Reno

  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 12 November 2008 - 07:43 PM

The new person would need access to the source of the former person's mind. For that, the former person's memory would have to be downloaded onto a computer, so that the data could be consulted as questions arise for the new person.

As for connecting neurons: it would be better to connect the brain to a biomolecular computer. That way the memories could be preserved.


Most predictions about future computing power run about 10-15 years off. Back when NASA was working on Apollo, it was unimaginable that ordinary people would own computers; it just didn't seem possible given what was available at the time. Yet it only took about 10-15 years for the personal computer to become mainstream.

With what we know today, AI doesn't look like it will appear for another 25-30 years. BUT when you look at all the advances being made in bottom-up assembly and nanotech assemblers, it becomes easy to predict a revolution in manufacturing in the mid-2020s. That suggests AI will follow shortly afterwards, in the late 2020s or early 2030s. They're already forming artificial neurons; read this article posted a few months back in the nanotech forum: Artificial Neurons

Edited by bobscrachy, 12 November 2008 - 07:44 PM.



#30 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 13 November 2008 - 01:26 PM

I don't know how anyone could not want strong AI to be developed; that attitude is just based on senseless fear of the unknown.

Not at all.

Strong AI is one of the most dangerous technologies imaginable. It is extremely important that it gets done correctly the first time.

Check out the Singularity Institute.



