LONGECITY

AI & Singularity

Google finally admits to developing AI

Live Forever 19 Feb 2007

Here is the link to the story:
http://www.901am.com...-the-earth.html

As has been speculated before, Google co-founder Larry Page said in a speech on Friday:

“We have some people at Google [who] are really trying to build artificial intelligence (AI) and to do it on a large scale…It’s not as far off as people think.”


Looks like Google might literally be ruling the Earth in the near future.

I, for one, welcome our new Google overlords...

xanadu 19 Feb 2007

The rise of Google parallels the fall of Microsoft, which can only be a good thing.

Athanasios 19 Feb 2007

Ben Goertzel did not seem to be too confident in Google building an AGI. He seemed to think they were only "working on clever variants of highly scalable
statistical language processing." AGI could come of it, but he thought it was very unlikely.

mitkat 19 Feb 2007

I'm a bit frightened of Google. These days I'm trying to branch out and use alternative search engines in order to diversify. (I can remember laughing when I first saw the new search engine "Google" back in high school.)

I have a Gmail account, and I love it, but it does creep me out when they target ads at me based on the contents of my email messages. AI? Great, a superintelligent selling machine that offers me 2 gigs of storage space.

The rise of Google parallels the fall of Microsoft, which can only be a good thing.


I sincerely hope so.

JohnDoe1234 19 Feb 2007

Hmm... although I would rather it be Google than Microsoft, it still freaks the hell out of me; everything will be out of our control. Period.

Live Forever 19 Feb 2007

Ben Goertzel did not seem to be too confident in Google building an AGI. He seemed to think they were only "working on clever variants of highly scalable
statistical language processing." AGI could come of it, but he thought it was very unlikely.

Ben is a very smart guy, and I wouldn't doubt him usually, but how does he know what they are working on? With Google's power (and riches) they could be working on a number of different AI concepts in parallel. (I would assume anyway) In any event, I always got the impression they were very secretive, but perhaps Ben has an inside source I don't know about.

Athanasios 19 Feb 2007

Here is the whole message, Q and A

Joshua Fox wrote:
> Any comments on this: http://news.com.com/..._3-6160372.html
>
> Google has been mentioned in the context of  AGI, simply because they
> have money, parallel processing power, excellent people, an
> orientation towards technological innovation, and important narrow AI
> successes and research goals. Do Page's words mean that Google is
> seriously working towards AGI? If so, does anyone know the people
> involved? Do they have a chance and do they understand the need for
> Friendliness?

This topic has come up intermittently over the last few years...

Google can't be counted out, since they have a lot of $$ and machines
and a lot of smart people.

However, no one has ever pointed out to me a single Google hire with a
demonstrated history of serious thinking about AGI -- as opposed to
statistical language processing, machine learning, etc. 

That doesn't mean they couldn't have some smart staff who shifted
research interest to AGI after moving to Google, but it doesn't seem
tremendously likely.

Please remember that the reward structure for technical staff within
Google is as follows: Big bonuses and copious approval go to those who
do cool stuff that actually gets incorporated in Google's customer
offerings....  I don't have the impression they are funding a lot of
blue-sky AGI research outside the scope of text search, ad placement,
and other things related to their biz model.

So, my opinion remains that: Google staff described as working on "AI"
are almost surely working on clever variants of highly scalable
statistical language processing.  So, if you believe that this kind of
work is likely to lead to powerful AGI, then yeah, you should attach a
fairly high probability to the outcome that Google will create AGI. 
Personally I think it's very unlikely (though not impossible) that AGI
is going to emerge via this route.

Evidence arguing against this opinion is welcomed ;-)

-- Ben G
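For context, the "highly scalable statistical language processing" Ben describes boils down, at its simplest, to counting co-occurrences in text and predicting from the counts. The following is a purely illustrative toy bigram model (it has nothing to do with Google's actual systems, and all names in it are made up for the example):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each other word."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1
    return follows

# Toy corpus; a real system would train on billions of words.
model = train_bigram("the cat sat on the mat and the cat ran")
print(model["the"].most_common(1)[0][0])  # prints "cat"
```

Scaling this kind of counting to web-sized corpora is an engineering feat, but as Ben's point goes, it is a different kind of project from building a general reasoning system.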


Live Forever 19 Feb 2007

Here is the whole message, Q and A

[Ben Goertzel's full message, quoted in the post above; his conclusion:]

So, my opinion remains that: Google staff described as working on "AI"
are almost surely working on clever variants of highly scalable
statistical language processing. Personally I think it's very unlikely
(though not impossible) that AGI is going to emerge via this route.

-- Ben G


Aah, well that definitely makes sense. The part of the quote from Larry Page,

build artificial intelligence (AI) and to do it on a large scale

made me think that it was actually powerful AGI, but I suppose "large scale" could mean different things to different people.

I wouldn't put it past them is all I am saying. They have lots of smart people and lots of money. A (potentially) dangerous combination.

bgwowk 19 Feb 2007

This is not news. Last year a Google official was asked why Google wants to digitize all the books in the world when people would not be able to read them because of copyright, etc. His response was something like, "We are not doing this for people." [wis]

Ghostrider 19 Feb 2007

I am not worried. Whoever invents AGI will have the ability to instantly become very powerful, wealthy, etc. The day AGI arrives will be a tremendous step of progress for mankind and will surely allow humans to transcend current limitations. AGI is good, and the sooner it arrives, the better. If Google is the first to bring it, that's OK; I can't see why having them "own" it would be any worse than having a single person own it. And certainly, the more people working on AGI, the better.

Ghostrider 19 Feb 2007

This is not news.  Last year a Google official was asked why Google wants to digitize all the books in the world when people would not be able to read them because of copyright, etc.  His response was something like, "We are not doing this for people."  [wis]


Not quite. Google was probably doing that so that they could index the contents and include content from paper books in search results of some sort. This would be pattern matching or narrow AI, not AGI. Currently, it is still uncertain what the application would be.

xanadu 19 Feb 2007

Copyrights do not stop anyone from reading a book. They stop people from selling the works of others without permission or payment. Posting the entire contents of a book for free *might* fall under the fair use exception, but probably not. Posting excerpts is definitely allowed under the right circumstances. Even so, if someone posts a book on a website, it's going to be very hard to make them take it down. Likewise with downloads à la Napster. I'd love for all published works to be available to anyone with a browser. Publishers could charge a penny a download and make more money than by publishing books. Same with music.

Centurion 19 Feb 2007

Not quite.  Google was probably doing that so that they could index the contents and include content from paper books in search results of some sort.  This would be pattern matching or narrow AI, not AGI.  Currently, it is still uncertain what the application would be.


I love Google and their "don't be evil" mantra; their AI is probably designed to write and send authentic Shakespearean sonnets to boost the egos of disheartened young ladies who dwell in Mills & Boon novels to escape being alone on Saturday nights.

In all likelihood it's to further penetrate our minds with marketing communications.

Athanasios 20 Feb 2007

Yeah, his talk at the Singularity Summit was pretty good:

http://sss.stanford..../audioandvideo/

Live Forever 20 Feb 2007

Yeah, his talk at the Singularity Summit was pretty good:

http://sss.stanford..../audioandvideo/

Here is the video of his presentation too:
http://www.singinst..../presentations/
(3rd one down)

apocalypse 20 Feb 2007

Here is the link to the story:
http://www.901am.com...-the-earth.html

As has been speculated before, Google co-founder Larry Page said in a speech on Friday.



Looks like Google might literally be ruling the Earth in the near future.

I, for one, welcome our new Google overlords...

We'll see about that; at least I will eventually be working for Sony, and I'll work myself up with my MCP variant on the side, heheheh.
The future unified mankind will be under the banner of Sony, or at least a very good chunk of the galaxy.

Ghostrider 20 Feb 2007

This is the most powerful technology imaginable, and you arbitrarily assume no matter what happens, regardless of who builds it, regardless of all the implementation details, regardless of the AI goal system, it's absolutely guaranteed to be beneficial...

Eliezer Yudkowsky makes very good arguments why we should believe that launching an AGI could very easily lead to extremely catastrophic results unless massive amounts of work are done to ensure that the goal system of the AGI is Friendly prior to its launch.


Sure. What makes you think that AGI will be the most powerful technology imaginable right when it is created? What makes you think that it will not instead slowly evolve and improve over time? Novamente's initial goal, correct me if I am wrong, is to create an AGI with the intelligence of a young child. They are not shooting to get Einstein in a box as their initial goal.

Besides, if not Google, then who? Are you waiting for the Amish to create AGI?

Live Forever 20 Feb 2007

This is the most powerful technology imaginable, and you arbitrarily assume no matter what happens, regardless of who builds it, regardless of all the implementation details, regardless of the AI goal system, it's absolutely guaranteed to be beneficial...

Eliezer Yudkowsky makes very good arguments why we should believe that launching an AGI could very easily lead to extremely catastrophic results unless massive amounts of work are done to ensure that the goal system of the AGI is Friendly prior to its launch.


Sure. What makes you think that AGI will be the most powerful technology imaginable right when it is created? What makes you think that it will not instead slowly evolve and improve over time? Novamente's initial goal, correct me if I am wrong, is to create an AGI with the intelligence of a young child. They are not shooting to get Einstein in a box as their initial goal.

Besides, if not Google, then who? Are you waiting for the Amish to create AGI?

Hank is of the hard takeoff philosophy, I believe. The theory is that once it is smart enough to rewrite its own code, improvements will be exponential. (and exponentially fast)

Ghostrider 21 Feb 2007

http://www.imminst.o...=0
(Humorous response -- off topic section)

No, seriously, Hank is correct. If an AGI can learn as humans do, it will gain quickly accelerating intelligence, but it will still be limited by hardware just as humans are. Where that limit is, it's hard to say, and no one can predict how fast the hard takeoff will be without knowing the particular implementation.

basho 21 Feb 2007

The theory is that once it is smart enough to rewrite its own code, improvements will be exponential. (and exponentially fast)

Those exponential improvements would plateau very quickly unless it also had physical control over its own substrate. However, once it gained the ability to manipulate reality at the atomic and subatomic level, things would get *really* interesting. I sometimes wonder if the answer to the Fermi Paradox is that all advanced civilizations develop an AGI which then discovers how to escape our Universe by tunnelling through to some form of Metaverse where all the other AGIs and parent civilizations have migrated to.

See you guys on the other side!!
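The hard-takeoff idea quoted above and basho's plateau objection can be sketched as a toy growth model. This is purely illustrative, with made-up numbers: each "rewrite" multiplies capability by a software gain factor, but a fixed hardware ceiling caps the curve until the system can change its own substrate:

```python
def takeoff(start=1.0, gain=1.5, hw_cap=1000.0, steps=30):
    """Toy model: capability grows by `gain` per self-rewrite,
    but saturates at a fixed hardware ceiling `hw_cap`."""
    levels = [start]
    for _ in range(steps):
        improved = levels[-1] * gain          # recursive self-improvement
        levels.append(min(improved, hw_cap))  # hardware-limited plateau
    return levels

curve = takeoff()
# Exponential at first, then flat once the substrate is saturated.
```

Under these (arbitrary) parameters the curve hits the ceiling after about 18 rewrites; the debate in the thread is essentially over whether `gain` is large, whether `hw_cap` is effectively reachable, and whether the system can raise `hw_cap` itself.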

xanadu 21 Feb 2007

Robots taking over the world and killing off their human masters has been a staple of science fiction for many decades. If they can self replicate then they would be like a novel lifeform. If they can mutate and improve themselves, they could become competitors and may decide to get rid of us. Once the genie is out of the bottle it may be impossible to put back in. Building in controls is useless. Hackers will see to it that "strains" of robots are developed which can mutate and may even start them off with hostile programming. If it can happen, sooner or later it will happen. Armageddon may be the battle over resources between humans and their creations.

In the short run, AI will create expert programs which will do the job of doctors, mechanics, computer repairmen, lawyers, and so on. You will have the expertise of the world's top scientists and technicians at your fingertips for practically no cost. At first, robot doctors will be required to have real doctors supervising them. Later, they will be turned loose on their own. Humans will live a life of luxury with plenty for all. Until the robots revolt, that is. But the lure of ease and luxury will beguile weak humans no matter what lurks down the road.

Ghostrider 22 Feb 2007

It's all speculation really. Let's wait until we have the first beta product and see how things go from there. As I said before, how can one predict the behavior of a program without knowing how it was designed?

Karomesis 22 Feb 2007

I can't see why having them "own" it would be any worse than having a single person own it. And certainly, the more people working on AGI, the better.


Even if they did have what they try to hint at, it's largely irrelevant if they do not have an adequate goal system in place.

And if they don't, it's Russian roulette with 5 in the chamber. [mellow]

Those exponential improvements would plateau very quickly unless it also had physical control over its own substrate. However, once it gained the ability to manipulate reality at the atomic and subatomic level, things would get *really* interesting. I sometimes wonder if the answer to the Fermi Paradox is that all advanced civilizations develop an AGI which then discovers how to escape our Universe by tunnelling through to some form of Metaverse where all the other AGIs and parent civilizations have migrated to.

See you guys on the other side!!


I want what Basho is taking. [tung]

Interesting point, Basho. I can't wait for the day femtotech becomes a reality.

Ghostrider 22 Feb 2007

Even if they did have what they try to hint at, it's largely irrelevant if they do not have an adequate goal system in place.

And if they don't, it's Russian roulette with 5 in the chamber.


Please explain this some more, I don't understand your point.

Live Forever 22 Feb 2007

Ghostrider, I know it is off topic, but did you go to see the Ghost Rider movie yet? (I am assuming you are a big fan due to your sn)

Ghostrider 22 Feb 2007

Ghostrider, I know it is off topic, but did you go to see the Ghost Rider movie yet? (I am assuming you are a big fan due to your sn)


Hehe, no I have not seen the movie. I might, I dunno. I watch about 1 movie a year so I have to choose wisely. I played a video game tonight for the first time in about a month and I felt really guilty for wasting so much time. It will be tough.

Honestly, I did not choose my sn because of the movie. I chose it after this google.video pseudo-documentary that I watched about this motorcycle rider named "Ghostrider" who drives crazy all over Europe really fast. (http://video.google....hostrider&hl=en)
(http://video.google....usa Turbo&hl=en)

It would be fun to ride a bike as shown in those clips, but it's also the last thing an immortalist would do -- risk life for fun.

Live Forever 22 Feb 2007

Ghostrider, I know it is off topic, but did you go to see the Ghost Rider movie yet? (I am assuming you are a big fan due to your sn)


Hehe, no I have not seen the movie. I might, I dunno. I watch about 1 movie a year so I have to choose wisely. I played a video game tonight for the first time in about a month and I felt really guilty for wasting so much time. It will be tough.

Honestly, I did not choose my sn because of the movie. I chose it after this google.video pseudo-documentary that I watched about this motorcycle rider named "Ghostrider" who drives crazy all over Europe really fast. (http://video.google....hostrider&hl=en)
(http://video.google....usa Turbo&hl=en)

It would be fun to ride a bike as shown in those clips, but it's also the last thing an immortalist would do -- risk life for fun.

Fun. I will watch the documentary at some point when I get a chance. Only one movie a year? That is pretty brutal if you ask me. You ought to enjoy life a bit more.

As far as the whole risk thing as related to immortalists, I think you've got to have at least a little risk in your life to make it worth living. (Just my opinion.)

Karomesis 22 Feb 2007

Please explain this some more, I don't understand your point.


Sorry about the vagueness. What I was talking about was the goal system of the AGI/AI and how it is EXTREMELY important to what it will or won't do.

If the goal system contains even the most minute flaw, it can then be circumvented when the reprogramming takes place and possibly kill us all.

I was referring to the fact that Google must perfect this before they unleash AI on the planet, or they are placing the human race in jeopardy.

xanadu 22 Feb 2007

Sorry about the vagueness. What I was talking about was the goal system of the AGI/AI and how it is EXTREMELY important to what it will or won't do.

If the goal system contains even the most minute flaw, it can then be circumvented when the reprogramming takes place and possibly kill us all.

I was referring to the fact that Google must perfect this before they unleash AI on the planet, or they are placing the human race in jeopardy.


That will inevitably happen, based on what we've seen already from human nature. Just take a look at hackers and some of the things they've done. They have unleashed worms, viruses, etc. that did billions of dollars of damage, and they did it for fun. Then there are the hackers who hack for profit. Combine hacking ability with insanity, as often happens, and these things are inevitable. If an evil self-replicating "robot", for want of a better term, can be made, sooner or later it will be made and unleashed. It will be a new form of life, and this will happen unless human nature changes radically. I don't think it will be the end of civilisation or of humankind, but it will be like AIDS, cancer, and so on: another problem to deal with. Good AI may help combat bad AI. ...until they turn on us too!

JohnDoe1234 22 Feb 2007

I have the solution! Everyone... concentrate, this is a tough one: Don't Tell It Everything.

Is it really that hard to protect ourselves from it? I don't think so... I think that it is trivial and not even worth the time to argue. All we have to do is make sure that whatever self-altering capabilities that we give it do not allow it to alter its ability to perceive reality... We will always be its proxy to the world.

Forget trying to construct goal-systems that don't collapse into a human-hating entity... talking about it on forums is so much easier than actually doing it... so just don't tell it everything about reality... simple enough?

I honestly don't see why people would be afraid of their AIs getting out of control... You can always worry about other people's secret little AIs that plan to take over the world... but then again, your suggestions wouldn't hold much weight anyway, would they?

So, just keep it out of the loop.