  LongeCity
              Advocacy & Research for Unlimited Lifespans





Predictions


44 replies to this topic

Poll: First human-level sentient AI (104 member(s) have cast votes)

before 2030?

  1. Most likely yes: 31 votes (29.81%)
  2. Most likely no: 41 votes (39.42%)
  3. Maybe: 30 votes (28.85%)
  4. No opinion: 2 votes (1.92%)


#31 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 13 November 2008 - 03:57 PM

I don't know how anyone can't want strong AI to be developed; it's just based on senseless fear of the unknown.

Not at all.

Strong AI is one of the most dangerous technologies imaginable. It is extremely important that it gets done correctly the first time.

Check out the Singularity Institute.



It does have its dangers, but the possible benefits far outweigh the possible setbacks, IMO.

#32 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 19 November 2008 - 09:26 PM

I don't know how anyone can't want strong AI to be developed; it's just based on senseless fear of the unknown.

Not at all.

Strong AI is one of the most dangerous technologies imaginable. It is extremely important that it gets done correctly the first time.

Check out the Singularity Institute.



It does have its dangers, but the possible benefits far outweigh the possible setbacks, IMO.

That doesn't really make sense.


#33 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 25 November 2008 - 04:19 AM

Someone correct me if I'm wrong, but it's my impression that AI is a software problem, and computer speed is not really a factor. So all arguments along the lines of "look how much faster hardware is today compared to X years ago..." don't really seem applicable, IMHO. If all we needed was hardware speed, then we would already have smart machines, but they would just "talk slow". I suspect that the breakthroughs need to come in the areas of knowledge representation and algorithms, and these technologies don't move in Moore's law-like fashion.

#34 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 25 November 2008 - 04:31 AM

Someone correct me if I'm wrong, but it's my impression that AI is a software problem, and computer speed is not really a factor. So all arguments along the lines of "look how much faster hardware is today compared to X years ago..." don't really seem applicable, IMHO. If all we needed was hardware speed, then we would already have smart machines, but they would just "talk slow". I suspect that the breakthroughs need to come in the areas of knowledge representation and algorithms, and these technologies don't move in Moore's law-like fashion.

I agree completely.

Eliezer Yudkowsky argues this point as well:

"I would rather have an additional 5 IQ points than an order of magnitude more computer power any day"


However, the Singularity is not entirely about AI. There is also the possibility of "Whole Brain Emulation", where computer power, and thus Moore's Law, is much more relevant.
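If hardware really is the bottleneck for whole brain emulation, the waiting time falls out of simple arithmetic. Here is a toy extrapolation in Python; the two-year doubling period and the millionfold gap are illustrative assumptions, not figures from this thread:

```python
import math

# Toy Moore's-law extrapolation: assuming compute doubles every
# `doubling_years`, how long until it grows by a given factor?
def years_to_scale(factor, doubling_years=2.0):
    return doubling_years * math.log2(factor)

# Example: closing a hypothetical millionfold gap between today's
# hardware and a brain-emulation requirement takes ~40 years.
print(round(years_to_scale(1e6), 1))
```

The point of the exercise is that under exponential growth even a millionfold shortfall is only a few decades away, which is why hardware-bound paths like emulation invite timeline predictions in a way that algorithm-bound paths do not.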

#35 lightowl

  • Guest, F@H
  • 767 posts
  • 5
  • Location:Copenhagen, Denmark

Posted 29 November 2008 - 11:06 AM

I suspect that the breakthroughs need to come in the areas of knowledge representation and algorithms, and these technologies don't move in Moore's law-like fashion.

They could if we manage to simulate some form of natural evolution in a virtual substrate. Then it could just be a matter of compute cycles before an intelligence emerges from that simulation.
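The "simulated evolution" idea above can be sketched with a toy genetic algorithm. This only illustrates the mechanism (selection, crossover, mutation) on a trivial bitstring fitness landscape; the target, population size, and mutation rate are arbitrary choices, and whether anything intelligent could emerge this way is exactly the open question:

```python
import random

random.seed(42)

TARGET = [1] * 20  # toy fitness landscape: evolve an all-ones bitstring

def fitness(genome):
    # Number of bits matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=200, mutation_rate=0.02):
    # Random initial population
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # perfect genome found
        # Keep the fitter half, refill by crossover + mutation
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]
            # Flip each bit with probability mutation_rate
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return pop[0]

best = evolve()
print("best fitness:", fitness(best))
```

Even this 20-bit problem takes many generations of evaluating every candidate, which hints at why evolving open-ended intelligence in silico would be dominated by raw cycle count.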

#36 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 30 November 2008 - 06:15 AM

I don't know how anyone can't want strong AI to be developed; it's just based on senseless fear of the unknown.

Not at all.

Strong AI is one of the most dangerous technologies imaginable. It is extremely important that it gets done correctly the first time.

Check out the Singularity Institute.



It does have its dangers, but the possible benefits far outweigh the possible setbacks, IMO.

That doesn't really make sense.



Why doesn't it? From my POV, if friendly strong AI doesn't show up in my lifetime, I'm going to die anyway, be it from aging or from a hostile takeover of the world by the machines.

To me, there's nothing to lose and everything to gain by developing strong AI as fast as possible. Of course, basic precautions that don't delay the onset of strong AI by much can and should be taken.

#37 Reno

  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 30 November 2008 - 05:30 PM

Someone correct me if I'm wrong, but it's my impression that AI is a software problem, and computer speed is not really a factor. So all arguments along the lines of "look how much faster hardware is today compared to X years ago..." don't really seem applicable, IMHO. If all we needed was hardware speed, then we would already have smart machines, but they would just "talk slow". I suspect that the breakthroughs need to come in the areas of knowledge representation and algorithms, and these technologies don't move in Moore's law-like fashion.


Simulating all the elements of a biological brain requires massive amounts of computing power. Researchers were only able to get about half a mouse brain running on BlueGene L; the article says that was about 8 million neurons with roughly 6,300 synapses each.

http://news.bbc.co.u...ogy/6600965.stm
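A rough back-of-envelope calculation shows why those numbers stress hardware. The neuron and synapse counts below come from the post above; the firing rate and the cost per synaptic event are assumed round numbers for illustration, not figures from the article:

```python
# Back-of-envelope cost of simulating half a mouse brain in real time.
neurons = 8_000_000           # from the post above
synapses_per_neuron = 6_300   # from the post above
firing_rate_hz = 10           # assumption: average spike rate
ops_per_synaptic_event = 10   # assumption: work per synapse update

synapses = neurons * synapses_per_neuron        # total synapses
events_per_sec = synapses * firing_rate_hz      # synaptic events per second
ops_needed = events_per_sec * ops_per_synaptic_event

print(f"{synapses:.2e} synapses, ~{ops_needed:.2e} ops/s for real time")
```

Under these assumptions, half a mouse brain already demands on the order of trillions of operations per second, and a human brain is roughly a thousand times larger, which is why the researchers ran the simulation slower than real time.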

#38 Solve

  • Guest
  • 41 posts
  • -6

Posted 16 July 2009 - 05:10 PM

Maybe one should not be too concerned about the full mind-uploading treatment.
As long as your brain can be scanned, the technology to put that brain data on a 'computer substrate' can come many years later, when it becomes available!
Just keep those fingers crossed that the scanning technology becomes available.
The only problem is: how will they know the scan will 'work' unless they prove it will on the computer substrate?

Solve ;)

#39 n25philly

  • Guest
  • 88 posts
  • 11
  • Location:Holland, PA

Posted 30 July 2009 - 01:44 PM

If by human-level sentient AI you mean machines that can think and learn, I say definitely yes. Maybe it won't be up to our level by then, but it will definitely exist. The hardware already exists, although it's nowhere near ready for prime time yet. There needs to be about five more years of research, and then another five of development, before we start to see prototypes. I think somewhere in that second five years we will start to see robot brains: machines designed to work just like the human brain but not really connected to anything, so we can use them to study our own brains as well as develop the AI properly over time, so we don't end up living in a bad horror movie.

#40 n25philly

  • Guest
  • 88 posts
  • 11
  • Location:Holland, PA

Posted 30 July 2009 - 01:49 PM

Someone correct me if I'm wrong, but it's my impression that AI is a software problem, and computer speed is not really a factor. So all arguments along the lines of "look how much faster hardware is today compared to X years ago..." don't really seem applicable, IMHO. If all we needed was hardware speed, then we would already have smart machines, but they would just "talk slow". I suspect that the breakthroughs need to come in the areas of knowledge representation and algorithms, and these technologies don't move in Moore's law-like fashion.


Simulating all the elements of a biological brain requires massive amounts of computing power. Researchers were only able to get about half a mouse brain running on BlueGene L; the article says that was about 8 million neurons with roughly 6,300 synapses each.

http://news.bbc.co.u...ogy/6600965.stm


If you are going to depend on today's technology, then yeah, it's likely never going to happen, for just that reason: too many circuits needed, no matter how much you shrink them. Now, something like memristors, where one component can replace a number of circuits and act just like a human synapse, might just be the answer. I hope they develop quickly.

#41 Singularity

  • Guest
  • 138 posts
  • -1

Posted 23 November 2009 - 12:02 AM

After studying AI for a while (although I don't know everything) my hunch is that Strong AI exists NOW and is being used to gain some sort of competitive advantage in either business, government, or defense.

When the spark of real intelligence ignites, it will be TIGHTLY controlled so that it does not escape and the competition does not become aware.

#42 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 23 November 2009 - 12:16 AM

After studying AI for a while (although I don't know everything) my hunch is that Strong AI exists NOW and is being used to gain some sort of competitive advantage in either business, government, or defense.

When the spark of real intelligence ignites, it will be TIGHTLY controlled so that it does not escape and the competition does not become aware.



Since you say you've been studying AI for a while, I assume you completely understand what the term strong AI means. So do you think we already have the technology to create it? How could that be?

Creating strong AI would need a whole set of supporting technologies and knowledge, most of which aren't developed enough yet. And developing them secretly at a faster rate than the rate at which the whole world is currently developing them would require more resources than any single organization or government has. So it is impossible.

#43 Singularity

  • Guest
  • 138 posts
  • -1

Posted 23 November 2009 - 05:35 PM

Since you say you've been studying AI for a while, I assume you completely understand what the term strong AI means. So do you think we already have the technology to create it? How could that be?

Creating strong AI would need a whole set of supporting technologies and knowledge, most of which aren't developed enough yet. And developing them secretly at a faster rate than the rate at which the whole world is currently developing them would require more resources than any single organization or government has. So it is impossible.


So it is impossible? That's a pretty strong statement. How can you be so sure? I think your last paragraph assumes too much. Large secret government operations have been quite successful in the past, very successful as a matter of fact: the Manhattan Project, ARPANET, stealth technology, cryptography; the list goes on, and all of these were leading-edge and somewhat isolated from the rest of the world.

Money is power. With a large enough budget, you can do anything.

My hunch is just a hunch. I am amazed when I realize that some of the most cutting-edge algorithms are actually very old; these problems have been worked on since before the 1960s. All that's required now is more processing power and a little engineering luck. And strong AI will bring massive wealth and power to whoever wields it (I challenge anyone to call that an understatement), so you know the motivation is there.

#44 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 23 November 2009 - 06:28 PM

Since you say you've been studying AI for a while, I assume you completely understand what the term strong AI means. So do you think we already have the technology to create it? How could that be?

Creating strong AI would need a whole set of supporting technologies and knowledge, most of which aren't developed enough yet. And developing them secretly at a faster rate than the rate at which the whole world is currently developing them would require more resources than any single organization or government has. So it is impossible.


So it is impossible? That's a pretty strong statement. How can you be so sure? I think your last paragraph assumes too much. Large secret government operations have been quite successful in the past, very successful as a matter of fact: the Manhattan Project, ARPANET, stealth technology, cryptography; the list goes on, and all of these were leading-edge and somewhat isolated from the rest of the world.


I think you're the one making strong statements.

For starters, what exactly do you mean by strong AI? We must not be on the same page here...


Money is power. With a large enough budget, you can do anything.


Not really... If society doesn't have the technology or knowledge base to achieve something, then all the money in the world won't help us.


My hunch is just a hunch. I am amazed when I realize that some of the most cutting-edge algorithms are actually very old; these problems have been worked on since before the 1960s. All that's required now is more processing power and a little engineering luck. And strong AI will bring massive wealth and power to whoever wields it (I challenge anyone to call that an understatement), so you know the motivation is there.


Strong AI will indeed bring massive wealth and power to whoever wields it, but is that justification enough to assume someone has already managed to create it? There's motivation to create a lot of extraordinary things, but that doesn't mean it's possible to create them now.


#45 Singularity

  • Guest
  • 138 posts
  • -1

Posted 24 November 2009 - 03:11 AM

Since you say you've been studying AI for a while, I assume you completely understand what the term strong AI means. So do you think we already have the technology to create it? How could that be?

Creating strong AI would need a whole set of supporting technologies and knowledge, most of which aren't developed enough yet. And developing them secretly at a faster rate than the rate at which the whole world is currently developing them would require more resources than any single organization or government has. So it is impossible.


So it is impossible? That's a pretty strong statement. How can you be so sure? I think your last paragraph assumes too much. Large secret government operations have been quite successful in the past, very successful as a matter of fact: the Manhattan Project, ARPANET, stealth technology, cryptography; the list goes on, and all of these were leading-edge and somewhat isolated from the rest of the world.


I think you're the one making strong statements.

For starters, what exactly do you mean by strong AI? We must not be on the same page here...


Money is power. With a large enough budget, you can do anything.


Not really... If society doesn't have the technology or knowledge base to achieve something, then all the money in the world won't help us.


My hunch is just a hunch. I am amazed when I realize that some of the most cutting-edge algorithms are actually very old; these problems have been worked on since before the 1960s. All that's required now is more processing power and a little engineering luck. And strong AI will bring massive wealth and power to whoever wields it (I challenge anyone to call that an understatement), so you know the motivation is there.


Strong AI will indeed bring massive wealth and power to whoever wields it, but is that justification enough to assume someone has already managed to create it? There's motivation to create a lot of extraordinary things, but that doesn't mean it's possible to create them now.


I don't understand what you don't like about my statements. Are you implying that I am not being realistic? Is there a realistic way to extrapolate about the future? If so, the world would like to know, and you yourself will become very rich.

I can't remember an exact definition, but I consider strong AI to be somewhere between where we think we are now and the Singularity. Strong AI, to me, is mostly achieved once a certain level of processing and memory requirements is met and an autonomous AGI that can easily communicate using spoken language is realized. Alternatively, a non-autonomous program that could predict any particular financial instrument, publicly traded or not, would be what I'd consider the beginning of strong AI. Sentience would come at the later stages of strong AI, which is not part of my hunch.

If strong AI isn't here now, then I think it's imminent.

That's where I'm coming from...



