  LongeCity
              Advocacy & Research for Unlimited Lifespans


Sunday Evening Update, October 26th, 5pm Central (22:00 GMT)


41 replies to this topic

#31 Shepard

  • Member, Director, Moderator
  • 6,360 posts
  • 932
  • Location:Auburn, AL

Posted 27 October 2008 - 01:37 PM

"Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them."


Screw that, I'm taking out Skynet before it takes me out.

#32 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 02:24 PM

Got this from Wikipedia:


De Garis:

"Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them."

— as quoted in New York Times Magazine of August 1, 1999, speaking of the 'artilects' of the future.

Ah yes, thank you. I remembered that quote.

#33 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 27 October 2008 - 02:25 PM

"Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them."

— as quoted in New York Times Magazine of August 1, 1999, speaking of the 'artilects' of the future.


"Humans should not stand still on the path to a higher form of evolution. We are godlike. It is human destiny to create us."

I don't really like the religious language at all. If anything, that only incites people. We in the thinking business have to always be conscious of branding.

#34 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 02:28 PM

"Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them."

— as quoted in New York Times Magazine of August 1, 1999, speaking of the 'artilects' of the future.

"Humans should not stand still on the path to a higher form of evolution. We are godlike. It is human destiny to create us."

I don't really like the religious language at all. If anything, that only incites people. We in the thinking business have to always be conscious of branding.

I have no problem at all with "These machines are godlike. It is human destiny to create them." - I agree completely.

This is the problem: "Humans should not stand in the way of a higher form of evolution."
What he means by this is that he fully expects humanity to be made extinct in this "evolution", and that he is happy to see us all go in favor of the new "higher form".

Edited by Mind, 27 October 2008 - 04:47 PM.


#35 Matt

  • Guest
  • 2,862 posts
  • 149
  • Location:United Kingdom

Posted 27 October 2008 - 02:33 PM

What a brilliant discussion! I really enjoyed that one; great job, Mind, for getting him on the show. He seems quite an intelligent guy and his ideas are well thought out. He didn't sound like the 'loon' Savage made him out to be. As for the idea that artilects will continue on from us, I'm unsure whether that is actually a bad thing, and, like he said, this process from biological to totally artificial might have taken place many times before in the universe already. I've never heard him actually say he wanted all these billions of people to die; he just thinks it's probably an inevitable outcome of our progress. The 'war' is still science fiction at the moment; that doesn't mean it will happen! I'm glad this is being discussed, even on a small scale.

Yes, maybe he is hastening progress with his 'brain building' projects, but aren't the guys involved in nanotech hastening the arrival of a more unpredictable and dangerous future, with the whole 'grey goo' scenario and 'nanoteched' weaponry, which in terrorist hands could cause much more damage than the explosives we see used now? You can look at many fields of science that lead us into a much more dangerous future. This is just on a potentially bigger scale.

If artilects take over from humans, I'm not sure this is a bad thing. My optimistic future is one where I evolve alongside machines. I hope that the artilect war he describes is avoidable.

Anyway I really enjoyed that episode!

Edited by Matt, 27 October 2008 - 02:38 PM.


#36 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 04:20 PM

I've never heard him actually say he wanted all these billions of people to die; he just thinks it's probably an inevitable outcome of our progress.

The point was that he seemed to be knowingly pursuing this as a goal o_O

Yes maybe he is hastening the progress by his 'brain building' projects

Missing the point here. For one, his AI approach isn't hastening progress toward AI in any real way...

If artilects take over from humans, I'm not sure this is a bad thing. My optimisic future is that I evolve along side machines.

Agreed.

refer to the previous "WHOOSH"

ugh...

Edited by Savage, 27 October 2008 - 04:25 PM.


#37 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 04:21 PM

I don't even know why I keep responding. I'm just repeating myself over and over again.

I guess this is one of those really counter-intuitive things.

#38 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 04:27 PM

Yes, I've long been an SI supporter. I followed up by noting that what I think is not likely to be what will be, as no one really knows. There are many proposed goals for AI. de Garis in particular does feel humans will become extinct, but does not view that as a bad thing.

You can listen to the whole recording on the Ustream channel; it is quite fascinating and he is articulate. He does not sound insane to me.

sometimes you just have to pretend you didn't hear someone say something... Shannon ... O_O

Edited by Savage, 27 October 2008 - 04:30 PM.


#39 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 27 October 2008 - 04:59 PM

Missing the point here. For one, his AI approach isn't hastening progress toward AI in any real way...


...then nothing he is doing is very dangerous, and whether he speculates about a possible demise of human beings in the future is just speculation.

I asked him if he sees the artilect war and gigadeath as something desirable and he said no. All I can do is take him at his word. He does seem to be an eccentric type, but he didn't seem mean or insane to me.

Also, saying "humans should not stand in the way of a higher form of evolution" is not the same as saying "I want billions of people to die". You are putting words in his mouth. It is dishonest, unfair, and it reflects poorly on you. Saying his research is risky or reckless might be appropriate. Saying he hasn't thought the consequences through thoroughly could be somewhat accurate. Saying he is deliberately developing AI to kill billions of people is not accurate based on the interview and all of the material you have quoted. If you can find a video or a quote where he says "I am developing AI in order to kill billions of people", please quote it/link it and prove me wrong. My feeling based on the content of the interview is that he would like to see AI developed (or a cyborg evolution) because he is curious to see the next step in evolution (as many of us are) and achieve things like practical immortality.

At least his research is open. His pessimistic outlook about the future is for all to see. I would be more worried about people conducting AI research out of the public eye.

One thing I think skews his outlook is generational perception. Being 60 years old, he is more likely to see the world as severely divided along national and political lines. I see the world as more connected than ever, and I regard any major conflict between China, the U.S., or other large countries or blocs of countries as highly unlikely.

#40 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 05:13 PM

My feeling based on the content of the interview is that he would like to see AI developed (or a cyborg evolution) because he is curious to see the next step in evolution (as many of us are) and achieve things like practical immortality.

k. not the impression i have been getting. anyway, enough of the de garis bashing :)

#41 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 27 October 2008 - 05:42 PM

k. not the impression i have been getting. anyway, enough of the de garis bashing


Hey, you did drive a pretty lengthy discussion, so that is good.

I doubt de Garis is REAL close to developing a TRUE AGI, so we will have a few years to monitor things and find out his true intentions.

#42 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 October 2008 - 05:56 PM

I doubt de Garis is REAL close to developing a TRUE AGI, so we will have a few years to monitor things and find out his true intentions.

Hahaha. pretty funny.



