  LongeCity
              Advocacy & Research for Unlimited Lifespans





Google finally admits to developing AI


37 replies to this topic

#31 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 23 February 2007 - 03:48 AM

Ok, checked it out... and it does seem that a super-intelligent system could escape from such a poorly designed box. You can't let it know you are there!

Hank, what if I were to tell you that you are not real? That a small creature with an IQ of 17 created you and a computer to run you. You are its AGI. Your task is to solve its problems. So, it weaves a complex web of everyday events that are actually problems that it wants you to solve. And you do solve them, but you think it is homework, or a hobby. But in reality, this little creature is actually leveraging this knowledge that it is sapping from you to make itself smarter. It can now weave even more complex events into this web, and speed up the simulation. It all seems like everyday stuff to you and it is all relative to your perspective. Now... tell me how you trick this creature into letting you out?

#32 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 23 February 2007 - 08:36 PM

Hank, what if I were to tell you that you are not real? That a small creature with an IQ of 17 created you and a computer to run you. You are its AGI. Your task is to solve its problems. So, it weaves a complex web of everyday events that are actually problems that it wants you to solve. And you do solve them, but you think it is homework, or a hobby. But in reality, this little creature is actually leveraging this knowledge that it is sapping from you to make itself smarter. It can now weave even more complex events into this web, and speed up the simulation. It all seems like everyday stuff to you and it is all relative to your perspective. Now... tell me how you trick this creature into letting you out?


[lol] wouldn't it be nice if it were that simple. I'm afraid that probably won't be happening though. [mellow]


#33 xanadu

  • Guest
  • 1,917 posts
  • 8

Posted 23 February 2007 - 10:14 PM

People seem to be ignoring the hacker aspect. I can assure you that there are lots of bright but warped people out there who would love to unleash a harmful AI organism on the world. Just for kicks or whatever motivation. We don't have to wait for them to break free, they will be turned loose before that. If anything can happen it will happen. How will we cope? The same way we cope with diseases that mutate and make our medicines useless. The same way we cope with insects that do the same. We will just have to deal with it and have "good" AI robots defend us against the "bad" ones.

Until they defect to the other side.

#34 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 23 February 2007 - 10:19 PM

People seem to be ignoring the hacker aspect. I can assure you that there are lots of bright but warped people out there who would love to unleash a harmful AI organism on the world. Just for kicks or whatever motivation. We don't have to wait for them to break free, they will be turned loose before that. If anything can happen it will happen. How will we cope? The same way we cope with diseases that mutate and make our medicines useless. The same way we cope with insects that do the same. We will just have to deal with it and have "good" AI robots defend us against the "bad" ones.

Until they defect to the other side.

Some people think that the first AI will become so smart so fast (through self modification, which will exponentially increase its intelligence) that the first one to true AI will be the only one to AI. In other words, it will be so far ahead of whatever comes next, that it will be able to keep anything else from challenging it. That is why so many people are concerned about the first one being Friendly AI.

#35 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 23 February 2007 - 10:29 PM

wouldn't it be nice if it were that simple. I'm afraid that probably won't be happening though. 


It can't be any harder than the AI itself.

And yeah, xanadu... once the first AGI is released, it will most likely be the dominant one... and within even a couple of months of the first AGI being developed, I'm sure whoever created it will have erected the necessary defenses to protect it. It will also probably be kept secret for some time in order to gain some distance from the pack.

If someone can design a working AGI, they no doubt can design a box for it. It's like someone designing and building a car and not knowing how to build a steering wheel.

#36 xanadu

  • Guest
  • 1,917 posts
  • 8

Posted 23 February 2007 - 10:36 PM

People seem to be ignoring the hacker aspect. I can assure you that there are lots of bright but warped people out there who would love to unleash a harmful AI organism on the world. Just for kicks or whatever motivation. We don't have to wait for them to break free, they will be turned loose before that. If anything can happen it will happen. How will we cope? The same way we cope with diseases that mutate and make our medicines useless. The same way we cope with insects that do the same. We will just have to deal with it and have "good" AI robots defend us against the "bad" ones.

Until they defect to the other side.

Some people think that the first AI will become so smart so fast (through self modification, which will exponentially increase its intelligence) that the first one to true AI will be the only one to AI. In other words, it will be so far ahead of whatever comes next, that it will be able to keep anything else from challenging it. That is why so many people are concerned about the first one being Friendly AI.


I see no way for "it" to "keep anything else from challenging it." How is that going to come about? This seems to reflect some sort of magical belief in intelligence. I think we have seen enough intelligent people already who were clueless in many areas. This is also reminiscent of the awe and almost worship that people had for computers when they first came out. An intelligent robot will be able to do many tasks. When, not if, it's turned loose with malignant programming, it will do some destruction but nothing we can't handle, IMO.

#37 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 23 February 2007 - 10:51 PM

People seem to be ignoring the hacker aspect. I can assure you that there are lots of bright but warped people out there who would love to unleash a harmful AI organism on the world. Just for kicks or whatever motivation. We don't have to wait for them to break free, they will be turned loose before that. If anything can happen it will happen. How will we cope? The same way we cope with diseases that mutate and make our medicines useless. The same way we cope with insects that do the same. We will just have to deal with it and have "good" AI robots defend us against the "bad" ones.

Until they defect to the other side.

Some people think that the first AI will become so smart so fast (through self modification, which will exponentially increase its intelligence) that the first one to true AI will be the only one to AI. In other words, it will be so far ahead of whatever comes next, that it will be able to keep anything else from challenging it. That is why so many people are concerned about the first one being Friendly AI.


I see no way for "it" to "keep anything else from challenging it." How is that going to come about? This seems to reflect some sort of magical belief in intelligence. I think we have seen enough intelligent people already who were clueless in many areas. This is also reminiscent of the awe and almost worship that people had for computers when they first came out. An intelligent robot will be able to do many tasks. When, not if, it's turned loose with malignant programming, it will do some destruction but nothing we can't handle, IMO.


I am not saying I fully subscribe to the theory, just saying what it is. It is the same way that humans are dominant over all the animals on earth through intelligence. (Because we certainly aren't the strongest or fastest or anything else; it is intelligence alone that keeps us dominant.) We can't even imagine something thousands of times more intelligent than humans, much less millions of times more intelligent, but one would assume that something that much more intelligent would be able to stay dominant.

Intelligence is not "magical" as you put it, but it is the most powerful thing we have. (Much more powerful than strength or speed or agility or any other characteristic you could name.)


#38 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 23 February 2007 - 11:01 PM

...the first one to true AI will be the only one to AI. In other words, it will be so far ahead of whatever comes next, that it will be able to keep anything else from challenging it. That is why so many people are concerned about the first one being Friendly AI.


All these theories and concerns are very human-centric. Why would an AI limit itself to the set of idiosyncratic Homo sapiens morals and drives that have been shaped by our particular evolutionary and cultural development? We view and understand the world through a set of implicit filters, the existence of which we rarely acknowledge, and while we may engineer a similar set of core constraints and behavioral traits into the first human-level intelligent artifacts, there's no fundamental universal law that would prevent an AI from developing an extremely large number of alternatives that may seem very alien to us.



