
Sunday Evening Update, October 26th, 5pm Central (22:00 GMT)



#1 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 23 October 2008 - 08:29 PM


Join me for what should be another fascinating trip into the world of artificial intelligence research with Professor Hugo De Garis.

Check out his home page and the Wikipedia article about De Garis.

He has certainly had an interesting history and has made some dramatic predictions. It will be interesting to get his take on the current state of AI research and development. I would also like to know what it is like working at a university in China, given that most information in China is controlled.

Please, please list any questions you might have for Professor De Garis if you are unable to attend the program live.


#2 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 24 October 2008 - 07:36 PM

This sounds like a guest my son would want to listen to and ask questions of. I'll see if he can attend at the recording time.

#3 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 24 October 2008 - 08:58 PM

I really want to ask him if he'll sponsor me for a paper about this: http://www.imminst.o...mp;#entry225442

#4 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 October 2008 - 04:19 PM

One question I would like to ask is about the potential for AI to arise from the internet and inter-connected computers/software. When I interviewed Eliezer Yudkowsky a few weeks ago, he said this scenario was nearly impossible.

#5 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 25 October 2008 - 05:08 PM

One question I would like to ask is about the potential for AI to arise from the internet and inter-connected computers/software. When I interviewed Eliezer Yudkowsky a few weeks ago, he said this scenario was nearly impossible.


I think that's a great question, Mind.

IMnotsoHO, the internet is certainly a type of AGI that helps us to share data with others. Interface and intelligence in search just got a heck of a boost from semantic web technologies. If Yudkowsky says it's impossible, he really means that it's impossible given his definition of an AI.

Aren't memes arising from the inter-connectedness of computers/wetware?

#6 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 26 October 2008 - 02:37 AM

A recent video by De Garis about the "artilect war".

#7 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 26 October 2008 - 05:16 PM

Hugo De Garis's bio at kurzweilai.net:

Prof. de Garis is also a political visionary, predicting a "gigadeath" war over the issue of "species dominance" as godlike massively intelligent machines he calls "artilects" threaten human species dominance. He intends to organize the planet's first international conference on "Species Dominance" and is author of the manuscript The Artilect War.


I suppose a good question would be how he sees the "Artilect War" now that a couple of years have passed since he wrote the book. Does he see any other outcome besides a "gigadeath"?

#8 Matt

  • Guest
  • 2,862 posts
  • 149
  • Location:United Kingdom

Posted 26 October 2008 - 05:48 PM



A provocative short documentary about an artificial intelligence scientist called Dr. Hugo De Garis.


There was a one-hour program on him; I'm trying to find it now, but it was quite good!

#9 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 07:00 PM

Hugo De Garis is loony. Just FYI.

Edited by Savage, 26 October 2008 - 07:01 PM.


#10 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 07:10 PM

Starglider:

Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary AGIs as soon as possible, and damn the consequences. This is someone who gleefully predicts massively destructive wars between 'terrans' and 'cosmists', and expects humanity to be made extinct by 'artilects', and actually wants to /hasten the arrival of this/. [...] I'd have to characterise this goal system as quite literally insane [...]. His architecture (at least as of 'CAM-brain') is just about as horribly emergent and uncontrollable/unpredictable as it is possible to get. If you accept hard takeoff, and you're using an architecture like that, then it doesn't make a jot of difference what petty political goals your funders might have; they're as irrelevant as everyone else's goals once the hard takeoff kicks in. Fortunately there's no short term prospect of anything like that actually working, but given enough zettaflops of nanotech-supplied compute power it might start to be a serious problem.


Edited by Savage, 26 October 2008 - 07:11 PM.


#11 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 26 October 2008 - 08:30 PM

Do you have a question, Savage?

Do you have a link to the website/person you quoted above?

#12 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 09:12 PM

Do you have a question, Savage?

Do you have a link to the website/person you quoted above?

Probably in the SL4 archives if you use Google.


Ah yes, the AGIRI singularity mailing list created by Ben Goertzel. This is from Michael Wilson, aka Starglider, former Research Associate with the Singularity Institute and Eliezer Yudkowsky.

Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary AGIs as soon as possible, and damn the consequences. This is someone who gleefully predicts massively destructive wars between 'terrans' and 'cosmists', and expects humanity to be made extinct by 'artilects', and actually wants to /hasten the arrival of this/. While I'd have to characterise this goal system as quite literally insane, the decision to accept funding from totalitarian regimes is actually a quite rational consequence. His architecture (at least as of 'CAM-brain') is just about as horribly emergent and uncontrollable/unpredictable as it is possible to get. If you accept hard takeoff, and you're using an architecture like that, then it doesn't make a jot of difference what petty political goals your funders might have; they're as irrelevant as everyone else's goals once the hard takeoff kicks in. Fortunately there's no short term prospect of anything like that actually working, but given enough zettaflops of nanotech-supplied compute power it might start to be a serious problem. I'm guessing that his backers are looking for PR and/or limited commercial spinoffs though.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com





Ask him if he stands by the point of view that Michael Wilson has identified here.

Edited by Mind, 26 October 2008 - 10:00 PM.


#13 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 26 October 2008 - 09:27 PM

Here is the ustream site link:

Let's refrain from name-calling, and just stick to the hard questions if you disagree with someone's view :).

#14 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 09:31 PM

Here is the ustream site link:

Let's refrain from name-calling, and just stick to the hard questions if you disagree with someone's view :).

I'm sorry, I didn't mean to call him a lunatic. I just wanted to point out that he is insane.

Insanity is the primary occupational hazard of artificial intelligence researchers.

#15 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 26 October 2008 - 09:39 PM

Well, you can say that you think he is--that is your right, but I'm sure the term is subjective here as we are not in a position to say if he is insane or not. I'm looking forward to hearing him speak, and am interested in many of his ideas. So far he has not said anything way out of line in the recorded speech that Mind has playing currently.

The chat does not seem to be working yet at the Ustream ImmInst channel. Mind, can you turn it on?

#16 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 09:40 PM

I'm sure the term is subjective here as we are not in a position to say if he is insane or not.

Hm... you must not have fully read or understood Michael Wilson's post...

Here it is again in case you missed it. Some of this terminology is unfamiliar to many people, so if you don't understand it well you might want to spend some time studying this area of thought, or else just defer to someone who understands it. In essence, we are talking about someone who is knowingly (or, if he now denies that he knows, still actively) working toward the goal of annihilating himself and everyone else.

Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary AGIs as soon as possible, and damn the consequences. This is someone who gleefully predicts massively destructive wars between 'terrans' and 'cosmists', and expects humanity to be made extinct by 'artilects', and actually wants to /hasten the arrival of this/. While I'd have to characterise this goal system as quite literally insane, the decision to accept funding from totalitarian regimes is actually a quite rational consequence. His architecture (at least as of 'CAM-brain') is just about as horribly emergent and uncontrollable/unpredictable as it is possible to get. If you accept hard takeoff, and you're using an architecture like that, then it doesn't make a jot of difference what petty political goals your funders might have; they're as irrelevant as everyone else's goals once the hard takeoff kicks in. Fortunately there's no short term prospect of anything like that actually working, but given enough zettaflops of nanotech-supplied compute power it might start to be a serious problem. I'm guessing that his backers are looking for PR and/or limited commercial spinoffs though.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com




Edited by Savage, 26 October 2008 - 09:58 PM.


#17 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 26 October 2008 - 09:58 PM

I did, and I don't think that A.I. will inherently have the goal of making humanity extinct--yet I concede that is a distinct possibility. I agree that we need to continue to work toward A.I., though, and at the same time I feel that advancing to that level of intelligence is what our species needs to survive. I know there are many humans who think humanity has been pretty brutal to its own, and I can understand those who would not care to see humanity stick around the way that it is. I myself would like to see many changes, but I feel that our highest ideals of consciousness and search for truth will extend to our next levels of increased intelligence/awareness. Is that only a belief of mine? Yes, but none of us know what will happen--too many wild cards :)

#18 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 10:01 PM

I don't think that A.I. will inherently have the goal of making humanity extinct

Of course not. AI is not a singular entity. Any AI could have any goal, all depending on what is programmed into it. It turns out that the vast majority of goals and goal systems, ESPECIALLY those that would be generated by the processes Hugo De Garis uses, would almost certainly lead to the extinction of humanity.

Edited by Savage, 26 October 2008 - 10:14 PM.


#19 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 October 2008 - 10:02 PM

I humbly suggest you visit the Singularity Institute's website.

Gotta run now.

#20 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 26 October 2008 - 10:28 PM

Yes, I've long been an SI supporter. I was following up to say that what I think is not necessarily what will be--as no one really knows. There are many proposed goals for AI. de Garis in particular does feel humans will become extinct, but he does not view that as a bad thing.

#21 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 26 October 2008 - 10:44 PM

Well, Mind asked him your question, Savage, and he said he is not for war or the destruction of the human species--he's "not a monster". You can listen to the whole recording at the Ustream channel; it is quite fascinating and he is articulate--does not sound insane to me. He is talking about the actual debate currently going on between those who want to create A.I. and those who do not--will they be beneficial "Gods" or will they be "exterminators"? He thinks that as soon as the upcoming Kurzweil movie comes out, along with some other documentaries on "singularity" subjects, this public debate will intensify.

#22 Shepard

  • Member, Director, Moderator
  • 6,360 posts
  • 932
  • Location:Auburn, AL

Posted 26 October 2008 - 10:47 PM

Those that worship at the altar of Yudkowsky tend to lash out against those that don't.

But, I have to admit that I hear flashes of what Savage is talking about in the interview.

Edited by shepard, 26 October 2008 - 11:08 PM.


#23 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 27 October 2008 - 12:13 AM

Well, I did ask the question about this perception that De Garis wants to see a gazillion people die. He said this is not the case. He identifies as a cosmist (he would like to see the creation of AGI to achieve immortality and other wondrous things), realizes there is risk involved in creating AGI, and is pessimistic/realistic about the outcome. He thinks the odds are high that an artilect war will happen, but he would rather not see it happen.

#24 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 27 October 2008 - 01:30 AM

I had a nice discussion with Avryn (age 9), who watched the interview; we also had the Amazon review of his book up, and the wiki entry about him. Avryn thought a very intelligent A.I. "a trillion times smarter than a human" would just want to leave Earth. I asked if it would not have empathy for its "roots" and try to help out humanity by ending aging, ending inequality, etc.; he said it wouldn't, because the humans would still kill each other. I questioned whether or not they would if many of the current things they fight over were removed. Avryn is quite interested in robotics and A.I., so it was fun to have him listen to de Garis and engage him in many scenarios of advanced A.I. :)

#25 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 03:27 AM

Those that worship at the altar of Yudkowsky tend to lash out against those that don't.

But, I have to admit that I hear flashes of what Savage is talking about in the interview.

...oh thanks a lot ... now I'm worshipping and lashing out lol

You could NOT say that Michael Wilson, and others with the same knowledge, worship Yudkowsky or lash out at anybody.

Edited by Savage, 27 October 2008 - 03:32 AM.


#26 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 03:29 AM

I asked if it would not have empathy for its "roots"

WHOOSH

I swear some things just go right over people's heads.

Pardon me for getting pissy about it.

Edited by Savage, 27 October 2008 - 03:33 AM.


#27 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 03:46 AM

The basic philosophical hurdle here is:

1. AI is not a singular entity. Any AI could have any goal, all depending on what is programmed into it.
2. The vast majority of goals and goal systems in a true AI would almost certainly lead to the extinction of humanity.

That's our lesson for today.

I think the basic point about de Garis being loony is that first of all, he agrees with 2, at least in some sense. He "expects humanity to be made extinct by 'artilects'."

However, he blazes ahead in creating these brain structures of totally unsafe, and even totally arbitrary, goal content and structure--"His architecture (at least as of 'CAM-brain') is just about as horribly emergent and uncontrollable/unpredictable as it is possible to get. If you accept hard takeoff, and you're using an architecture like that, then it doesn't make a jot of difference what petty political goals your funders might have; they're as irrelevant as everyone else's goals once the hard takeoff kicks in."

Flying totally in the face of 1, and even his own stated agreement with 2, "Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary AGIs as soon as possible, and damn the consequences."

Thus, "I'd have to characterise this goal system as quite literally insane".

Edited by Savage, 27 October 2008 - 03:50 AM.


#28 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 03:47 AM

Is anybody NOT on board with me now?

#29 Shepard

  • Member, Director, Moderator
  • 6,360 posts
  • 932
  • Location:Auburn, AL

Posted 27 October 2008 - 03:58 AM

he is articulate--does not sound insane to me.


Outside of the fact that I can't rationalize his ideas, his voice took on a tone of zealousness at one point that makes me question his objectivity and view of reality. While I could be way off base, the absoluteness of his attitude toward AI seemed more disturbing than his approach.

#30 Ben

  • Guest
  • 2,010 posts
  • -2
  • Location:South East

Posted 27 October 2008 - 01:29 PM

Got this from Wikipedia:

De Garis: "Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them."

— as quoted in New York Times Magazine of August 1, 1999, speaking of the 'artilects' of the future.



