LongeCity
Advocacy & Research for Unlimited Lifespans





Google finally admits to developing AI


37 replies to this topic

#1 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 19 February 2007 - 07:23 PM


Here is the link to the story:
http://www.901am.com...-the-earth.html

As has been speculated before, Google co-founder Larry Page said in a speech on Friday:

“We have some people at Google [who] are really trying to build artificial intelligence (AI) and to do it on a large scale…It’s not as far off as people think.”


Looks like Google might literally be ruling the Earth in the near future.

I, for one, welcome our new Google overlords...

#2 xanadu

  • Guest
  • 1,917 posts
  • 8

Posted 19 February 2007 - 07:28 PM

The rise of Google parallels the fall of Microsoft, which can only be a good thing.


#3 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 19 February 2007 - 07:40 PM

Ben Goertzel did not seem too confident in Google building an AGI. He seemed to think they were only "working on clever variants of highly scalable statistical language processing." AGI could come of it, but he thought it was very unlikely.
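
For context, the "statistical language processing" Goertzel refers to means systems that learn word-occurrence probabilities from large corpora rather than modeling the world. A minimal sketch of the idea in Python (a toy bigram model, nothing like the scale Google would actually operate at):

# Toy bigram language model: learn how often each word follows another,
# then "predict" the most likely next word. Illustrative only.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

corpus = ["the cat sat on the mat", "the cat ate the fish"]
model = train_bigrams(corpus)
print(model["the"].most_common(1))  # [('cat', 2)] -- "cat" follows "the" most often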

#4 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 19 February 2007 - 07:43 PM

I'm a bit frightened of Google. I'm trying to branch out more these days and use alternative search engines (I can remember laughing when seeing the new search engine "Google" in high school) in order to diversify.

I have a gmail account, and I love it, but it does creep me out when they advertise to me based on the contents of my email messages. AI? Great, a superintelligent selling machine that offers me 2 gigs of storage space.

The rise of Google parallels the fall of Microsoft, which can only be a good thing.


I sincerely hope so.

#5 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 19 February 2007 - 07:56 PM

Hmm... although I would rather it be Google than Microsoft, it still freaks the hell out of me. Everything will be out of our control. Period.

#6 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 19 February 2007 - 07:57 PM

Ben Goertzel did not seem too confident in Google building an AGI. He seemed to think they were only "working on clever variants of highly scalable statistical language processing." AGI could come of it, but he thought it was very unlikely.

Ben is a very smart guy, and I wouldn't doubt him usually, but how does he know what they are working on? With Google's power (and riches) they could be working on a number of different AI concepts in parallel. (I would assume anyway) In any event, I always got the impression they were very secretive, but perhaps Ben has an inside source I don't know about.

#7 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 19 February 2007 - 08:25 PM

Here is the whole message, Q&A:

Joshua Fox wrote:
> Any comments on this: http://news.com.com/..._3-6160372.html
>
> Google has been mentioned in the context of  AGI, simply because they
> have money, parallel processing power, excellent people, an
> orientation towards technological innovation, and important narrow AI
> successes and research goals. Do Page's words mean that Google is
> seriously working towards AGI? If so, does anyone know the people
> involved? Do they have a chance and do they understand the need for
> Friendliness?

This topic has come up intermittently over the last few years...

Google can't be counted out, since they have a lot of $$ and machines
and a lot of smart people.

However, no one has ever pointed out to me a single Google hire with a
demonstrated history of serious thinking about AGI -- as opposed to
statistical language processing, machine learning, etc. 

That doesn't mean they couldn't have some smart staff who shifted
research interest to AGI after moving to Google, but it doesn't seem
tremendously likely.

Please remember that the reward structure for technical staff within
Google is as follows: Big bonuses and copious approval go to those who
do cool stuff that actually gets incorporated in Google's customer
offerings....  I don't have the impression they are funding a lot of
blue-sky AGI research outside the scope of text search, ad placement,
and other things related to their biz model.

So, my opinion remains that: Google staff described as working on "AI"
are almost surely working on clever variants of highly scalable
statistical language processing.  So, if you believe that this kind of
work is likely to lead to powerful AGI, then yeah, you should attach a
fairly high probability to the outcome that Google will create AGI. 
Personally I think it's very unlikely (though not impossible) that AGI
is going to emerge via this route.

Evidence arguing against this opinion is welcomed ;-)

-- Ben G



#8 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 19 February 2007 - 08:36 PM

(quoting Ben Goertzel's full message from the previous post)


Aah, well that definitely makes sense. The part of the quote from Larry Page,

build artificial intelligence (AI) and to do it on a large scale

made me think that it was actually powerful AGI, but I suppose "large scale" could mean different things to different people.

I wouldn't put it past them is all I am saying. They have lots of smart people and lots of money. A (potentially) dangerous combination.

#9 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 19 February 2007 - 09:54 PM

This is not news. Last year a Google official was asked why Google wants to digitize all the books in the world when people would not be able to read them because of copyright, etc. His response was something like, "We are not doing this for people." [wis]

#10 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 19 February 2007 - 09:55 PM

I am not worried. Whoever does invent AGI will have the ability to instantly become very powerful, wealthy, etc. The day AGI arrives will be a tremendous step of progress for mankind and will surely allow humans to transcend current limitations. AGI is good and the sooner it arrives, the better. If Google is the first to bring it, that's ok, I can't see why having them "own" it would be any worse than having a single person own it. And certainly, the more people working on AGI, the better.

#11 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 19 February 2007 - 09:58 PM

This is not news.  Last year a Google official was asked why Google wants to digitize all the books in the world when people would not be able to read them because of copyright, etc.  His response was something like, "We are not doing this for people."  [wis]


Not quite. Google was probably doing that so that they could index the contents and include content from paper books in search results of some sort. This would be pattern matching or narrow AI, not AGI. Currently, it is still uncertain what the application would be.
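
As a concrete illustration of that kind of narrow-AI indexing (a hypothetical sketch in Python, not Google's actual Book Search pipeline), scanned text can be folded into an inverted index mapping each word to the places it occurs:

# Toy inverted index: map each word to the (book, page) pairs containing it,
# so a search engine can surface passages from digitized books.
from collections import defaultdict

def build_index(books):
    index = defaultdict(set)
    for title, pages in books.items():
        for page_no, text in enumerate(pages, start=1):
            for word in text.lower().split():
                index[word].add((title, page_no))
    return index

# Invented two-book "corpus", one page each.
books = {
    "Hamlet": ["to be or not to be"],
    "Walden": ["i went to the woods"],
}
index = build_index(books)
print(sorted(index["to"]))  # [('Hamlet', 1), ('Walden', 1)]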

#12 xanadu

  • Guest
  • 1,917 posts
  • 8

Posted 19 February 2007 - 10:32 PM

Copyrights do not stop anyone from reading a book. They stop people from selling the works of others without permission or payment. Posting the entire contents of a book for free *might* fall under the fair-use exception, but probably not. Posting excerpts is definitely allowed under the right circumstances. Even so, if someone posts a book on a website, it's going to be very, very hard to make them take it down. Likewise with downloads a la Napster. I'd love for all published works to be available to anyone with a browser. They could charge a penny a download and make more money than by publishing books. Same with music.

#13 Centurion

  • Guest
  • 1,000 posts
  • 19
  • Location:Belfast, Northern Ireland

Posted 19 February 2007 - 11:41 PM

Not quite.  Google was probably doing that so that they could index the contents and include content from paper books in search results of some sort.  This would be pattern matching or narrow AI, not AGI.  Currently, it is still uncertain what the application would be.


I love Google and their "see no evil, do no evil" mantra. Their AI is probably designed to write and send authentic Shakespearean sonnets to boost the egos of disheartened young ladies who dwell in Mills & Boon novels to escape being alone on Saturday nights.

In all likelihood it's to further penetrate our minds with marketing communications.

#14 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 20 February 2007 - 12:37 AM

Yeah, his talk at the Singularity Summit was pretty good:

http://sss.stanford..../audioandvideo/

#15 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 20 February 2007 - 12:45 AM

Yeah, his talk at the Singularity Summit was pretty good:

http://sss.stanford..../audioandvideo/

Here is the video of his presentation too:
http://www.singinst..../presentations/
(3rd one down)

#16 apocalypse

  • Guest
  • 134 posts
  • 0
  • Location:Diamond sphere

Posted 20 February 2007 - 01:22 AM

Here is the link to the story:
http://www.901am.com...-the-earth.html

As has been speculated before, Google co-founder Larry Page said in a speech on Friday.



Looks like Google might literally be ruling the Earth in the near future.

I, for one, welcome our new Google overlords...

We'll see about that; at least I will eventually be working for Sony, and I'll work my way up with my MCP variant on the side, heheheh. The future unified mankind will be under the banner of Sony, or at least a very good chunk of the galaxy.

#17 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 20 February 2007 - 02:23 AM

This is the most powerful technology imaginable, and you arbitrarily assume that no matter what happens, regardless of who builds it, regardless of all the implementation details, regardless of the AI goal system, it's absolutely guaranteed to be beneficial...

Eliezer Yudkowsky makes very good arguments for why launching an AGI could very easily lead to extremely catastrophic results unless massive amounts of work are done to ensure that the goal system of the AGI is Friendly prior to its launch.


Sure. What makes you think that AGI will be the most powerful technology imaginable right when it is created? What makes you think that it will not instead slowly evolve and improve over time? Novamente's initial goal, correct me if I am wrong, is to create an AGI with the intelligence of a young child. They are not shooting for Einstein in a box.

Besides, if not Google, then who? Are you waiting for the Amish to create AGI?

#18 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 20 February 2007 - 03:13 AM

This is the most powerful technology imaginable, and you arbitrarily assume that no matter what happens, regardless of who builds it, regardless of all the implementation details, regardless of the AI goal system, it's absolutely guaranteed to be beneficial...

Eliezer Yudkowsky makes very good arguments for why launching an AGI could very easily lead to extremely catastrophic results unless massive amounts of work are done to ensure that the goal system of the AGI is Friendly prior to its launch.


Sure. What makes you think that AGI will be the most powerful technology imaginable right when it is created? What makes you think that it will not instead slowly evolve and improve over time? Novamente's initial goal, correct me if I am wrong, is to create an AGI with the intelligence of a young child. They are not shooting for Einstein in a box.

Besides, if not Google, then who? Are you waiting for the Amish to create AGI?

Hank is of the hard-takeoff philosophy, I believe. The theory is that once it is smart enough to rewrite its own code, improvements will be exponential, and exponentially fast.
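
As a toy illustration of that intuition (made-up parameters, not a prediction): if each improvement cycle raises capability by a fixed fraction of current capability, growth is exponential.

# Toy hard-takeoff model: capability I grows by k*I per cycle (dI/dt = k*I),
# i.e. smarter systems improve themselves faster. k and i0 are invented.
def hard_takeoff(i0=1.0, k=0.1, cycles=50):
    intelligence = i0
    for _ in range(cycles):
        intelligence += k * intelligence
    return intelligence

print(round(hard_takeoff(), 1))  # ~117.4x the starting level after 50 cycles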

#19 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 21 February 2007 - 07:34 AM

http://www.imminst.o...=0
(Humorous response -- off topic section)

No, seriously, Hank is correct. If AGI can learn as humans do, it will gain quickly accelerating intelligence, but it will still be limited by hardware, just as humans are. Where the limit is, and how fast the hard takeoff will be, is hard to say without any idea of the particular implementation.

#20 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 21 February 2007 - 10:56 AM

The theory is that once it is smart enough to rewrite its own code, improvements will be exponential, and exponentially fast.

Those exponential improvements would plateau very quickly unless it also had physical control over its own substrate. However, once it gained the ability to manipulate reality at the atomic and subatomic level, things would get *really* interesting. I sometimes wonder if the answer to the Fermi Paradox is that all advanced civilizations develop an AGI which then discovers how to escape our Universe by tunnelling through to some form of Metaverse where all the other AGIs and parent civilizations have migrated to.

See you guys on the other side!!
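
Basho's plateau point can be made concrete with a capped variant of the same toy model (again with invented numbers): make growth proportional to the remaining headroom under a fixed hardware limit, and the exponential turns into an S-curve.

# Toy capped takeoff: logistic growth toward a fixed substrate limit `cap`.
# Improvement slows as capability approaches the hardware ceiling.
def capped_takeoff(i0=1.0, k=0.1, cap=100.0, cycles=200):
    intelligence = i0
    for _ in range(cycles):
        intelligence += k * intelligence * (1 - intelligence / cap)
    return intelligence

print(round(capped_takeoff(), 1))  # ~100.0: growth stalls at the cap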

#21 xanadu

  • Guest
  • 1,917 posts
  • 8

Posted 21 February 2007 - 06:44 PM

Robots taking over the world and killing off their human masters has been a staple of science fiction for many decades. If they can self-replicate, they would be like a novel lifeform. If they can mutate and improve themselves, they could become competitors and may decide to get rid of us. Once the genie is out of the bottle, it may be impossible to put back in. Building in controls is useless: hackers will see to it that "strains" of robots are developed which can mutate, and they may even start them off with hostile programming. If it can happen, sooner or later it will happen. Armageddon may be the battle over resources between humans and their creations.

In the short run, AI will create expert programs which will do the job of doctors, mechanics, computer repairmen, lawyers and so on. You will have the expertise of the world's top scientists and technicians at your fingertips for practically no cost. At first, robot doctors will be required to have real doctors supervising them. Later, they will be turned loose on their own. Humans will live a life of luxury with plenty for all. Until the robots revolt, that is. But the lure of ease and luxury will beguile weak humans no matter what lurks down the road.

#22 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 22 February 2007 - 02:40 AM

It's all speculation really. Let's wait until we have the first beta product and see how things go from there. As I said before, how can one predict the behavior of a program without knowing how it was designed?

#23 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 22 February 2007 - 03:24 AM

I can't see why having them "own" it would be any worse than having a single person own it. And certainly, the more people working on AGI, the better.


Even if they did have what they try to hint at, it's largely irrelevant if they do not have an adequate goal system in place.

And if they don't, it's Russian roulette with five in the chamber. [mellow]

Those exponential improvements would plateau very quickly unless it also had physical control over its own substrate. However, once it gained the ability to manipulate reality at the atomic and subatomic level, things would get *really* interesting. I sometimes wonder if the answer to the Fermi Paradox is that all advanced civilizations develop an AGI which then discovers how to escape our Universe by tunnelling through to some form of Metaverse where all the other AGIs and parent civilizations have migrated to.

See you guys on the other side!!


I want what Basho is taking. [tung]

Interesting point, Basho. I can't wait for the day femtotech becomes a reality.

#24 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 22 February 2007 - 08:00 AM

Even if they did have what they try to hint at, it's largely irrelevant if they do not have an adequate goal system in place.

And if they don't, it's Russian roulette with five in the chamber.


Please explain this some more, I don't understand your point.

#25 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 22 February 2007 - 08:13 AM

Ghostrider, I know it is off topic, but did you go to see the Ghost Rider movie yet? (I am assuming you are a big fan due to your sn)

#26 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 22 February 2007 - 08:52 AM

Ghostrider, I know it is off topic, but did you go to see the Ghost Rider movie yet? (I am assuming you are a big fan due to your sn)


Hehe, no I have not seen the movie. I might, I dunno. I watch about 1 movie a year so I have to choose wisely. I played a video game tonight for the first time in about a month and I felt really guilty for wasting so much time. It will be tough.

Honestly, I did not choose my sn because of the movie. I chose it after a Google Video pseudo-documentary I watched about a motorcycle rider named "Ghostrider" who rides crazily all over Europe, really fast. (http://video.google....hostrider&hl=en)
(http://video.google....usa Turbo&hl=en)

It would be fun to ride a bike as shown in those clips, but it's also the last thing an immortalist would do -- risk life for fun.

#27 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 22 February 2007 - 09:07 AM

Ghostrider, I know it is off topic, but did you go to see the Ghost Rider movie yet? (I am assuming you are a big fan due to your sn)


Hehe, no I have not seen the movie. I might, I dunno. I watch about 1 movie a year so I have to choose wisely. I played a video game tonight for the first time in about a month and I felt really guilty for wasting so much time. It will be tough.

Honestly, I did not choose my sn because of the movie. I chose it after a Google Video pseudo-documentary I watched about a motorcycle rider named "Ghostrider" who rides crazily all over Europe, really fast. (http://video.google....hostrider&hl=en)
(http://video.google....usa Turbo&hl=en)

It would be fun to ride a bike as shown in those clips, but it's also the last thing an immortalist would do -- risk life for fun.

Fun. I will watch the documentary at some point when I get a chance. Only one movie a year? That is pretty brutal if you ask me. You ought to enjoy life a bit more.

As far as the whole risk thing as it relates to immortalists, I think you've got to have at least a little risk in your life to make life worth living. (Just my opinion.)

#28 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 22 February 2007 - 09:12 PM

Please explain this some more, I don't understand your point.


Sorry about the vagueness. What I was talking about was the goal system of the AGI/AI, and how it is EXTREMELY important to what it will or won't do.

If the goal system contains even the most minute flaw, it can be circumvented when the AI reprograms itself, and possibly kill us all.

I was referring to the fact that Google must perfect this before they unleash AI on the planet, or they are placing the human race in jeopardy.

#29 xanadu

  • Guest
  • 1,917 posts
  • 8

Posted 22 February 2007 - 09:25 PM

Sorry about the vagueness. What I was talking about was the goal system of the AGI/AI, and how it is EXTREMELY important to what it will or won't do.

If the goal system contains even the most minute flaw, it can be circumvented when the AI reprograms itself, and possibly kill us all.

I was referring to the fact that Google must perfect this before they unleash AI on the planet, or they are placing the human race in jeopardy.


That will inevitably happen, based on what we've already seen of human nature. Just take a look at hackers and some of the things they've done. They have unleashed worms, viruses, etc. that did billions of dollars of damage, and they did it for fun. Then there are the hackers who hack for profit. Combine hacking ability with insanity, as often happens, and these things are inevitable. If an evil self-replicating "robot", for want of a better term, can be made, sooner or later it will be made and unleashed. It will be a new form of life, and this will happen unless human nature changes radically. I don't think it will be the end of civilisation or of humankind, but it will be like AIDS, cancer and so on: another problem to deal with. Good AI may help combat bad AI. ...until they turn on us too!


#30 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 22 February 2007 - 10:51 PM

I have the solution! Everyone... concentrate, this is a tough one: Don't Tell It Everything.

Is it really that hard to protect ourselves from it? I don't think so... I think that it is trivial and not even worth the time to argue. All we have to do is make sure that whatever self-altering capabilities we give it do not allow it to alter its ability to perceive reality... We will always be its proxy to the world.

Forget trying to construct goal-systems that don't collapse into a human-hating entity... talking about it on forums is so much easier than actually doing it... so just don't tell it everything about reality... simple enough?

I honestly don't see why people will be afraid of their AIs getting out of control... You can always worry about other people's secret little AIs that plan to take over the world... but then again, your suggestions wouldn't hold much weight anyway, would they?

So, just keep it out of the loop.
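
A minimal sketch of that "proxy to the world" idea in Python (purely illustrative; the class and action names are invented, and real AI containment is far harder than this): every observation and action passes through a mediator that filters inputs and vetoes outputs.

# Toy "boxed AI" pattern: the system never touches the world directly;
# a human-controlled proxy sanitizes what it sees and vetoes what it does.
ALLOWED_ACTIONS = {"answer_question", "summarize_text"}

class WorldProxy:
    def observe(self, raw):
        # Pass along only sanitized information (crude placeholder filter).
        return raw.replace("network_credentials", "[REDACTED]")

    def act(self, action):
        if action not in ALLOWED_ACTIONS:
            raise PermissionError(f"vetoed action: {action}")
        print(f"executing vetted action: {action}")

proxy = WorldProxy()
print(proxy.observe("report containing network_credentials"))  # redacted
proxy.act("answer_question")       # allowed
# proxy.act("open_network_socket") # would raise PermissionError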



