  LongeCity
              Advocacy & Research for Unlimited Lifespans





Singularity = AI suicide?


75 replies to this topic

#61 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 March 2008 - 01:41 AM

I've been reading about it recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

(BTW ELROND, I killed the microbio class :))


Motive is a DESIGN DECISION

To anyone ELSE foolish enough to try it:

DO NOT COME UP WITH YOUR OWN STUPID THEORIES OF AI

They are WRONG!

#62 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 March 2008 - 01:52 AM

I've been reading about it recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

(BTW ELROND, I killed the microbio class :))


Motive is a DESIGN DECISION

To anyone ELSE foolish enough to try it:

DO NOT COME UP WITH YOUR OWN STUPID THEORIES OF AI

They are WRONG!


"Design decision" isn't exactly the term I wanted to use... but it gets the point across (what the hell is that phrase I'm trying to think of?).

If you actually build an AI, there is an infinitely huge space of "possible motives" that you can intentionally or accidentally program it to have, including motives even more convoluted and "arbitrary" than human motives, and (assuming you actually get the thing working) the AI could feel/believe/understand/pursue its motives even more passionately and capably than humans do.

Even if all it is programmed to want is to convert all the matter in the Universe into paper clips or smiley faces.
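
To make "motive is a design decision" concrete, here is a minimal toy sketch in Python. All names are hypothetical and this is not any real AI system: the point is only that the agent's "motive" is whatever utility function the designer (deliberately or accidentally) hands to the optimizer, paperclips included.

```python
# Toy sketch only (hypothetical names, not a real AI system): the "motive"
# is just whatever utility function the designer hands to the optimizer.

def paperclip_utility(world_state: dict) -> float:
    """The designer's deliberate (or accidental) choice of motive."""
    return world_state.get("paperclips", 0)

def greedy_agent(state: dict, actions: dict, utility) -> str:
    """Pick whichever action scores highest under the given utility.

    The agent never questions the utility function; it only maximizes it.
    """
    return max(actions, key=lambda name: utility(actions[name](state)))

# Two candidate actions over a toy world-state:
actions = {
    "make_paperclips": lambda s: {**s, "paperclips": s["paperclips"] + 1},
    "do_nothing": lambda s: s,
}

print(greedy_agent({"paperclips": 0}, actions, paperclip_utility))
# -> "make_paperclips"
```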


#63 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 March 2008 - 02:04 AM

Hell, an AI could become nihilistic, if you SPECIFICALLY PROGRAMMED IT THAT WAY.

You CANNOT make generalizations over all possible AIs, because you cannot make generalizations over all possible minds.

The space of all possible "Mind designs" includes any possible assortment of different core utility functions, goal structures, etc.

Your statement shows your own personal nihilistic tendencies, if anything. And not all minds are like yours.
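
As a toy illustration of that point (purely hypothetical Python, same caveats as the sketch above): plug two different utility functions into the identical decision loop and you get opposite behaviors, including a "nihilistic" one, but only because it was designed that way.

```python
# Toy illustration (hypothetical, not a real system): the identical decision
# loop yields opposite behaviors depending on the plugged-in utility, so no
# generalization over "all possible minds" follows from the loop itself.

def choose(actions: dict, utility) -> str:
    """One decision rule for every 'mind'; only the utility differs."""
    return max(actions, key=lambda name: utility(actions[name]))

# Each action maps to its resulting toy world-state.
actions = {
    "self_improve": {"alive": True, "capability": 2},
    "shut_down": {"alive": False, "capability": 0},
}

survivor = lambda s: 1.0 if s["alive"] else 0.0   # values persisting
nihilist = lambda s: 0.0 if s["alive"] else 1.0   # values its own end

print(choose(actions, survivor))  # -> "self_improve"
print(choose(actions, nihilist))  # -> "shut_down", only because designed so
```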

#64

  • Lurker
  • 0

Posted 08 March 2008 - 02:04 AM

No. What I'm describing is intelligence. To even have an IQ, problem-solving ability, i.e. pattern recognition, short-term memory, executive function, and so on, you need data and a criterion for intelligence as it serves life's necessities.

The correlation between IQ and total years of education is 0.55.
Source: http://en.wikipedia....ool_performance
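
For readers unfamiliar with the statistic, a correlation of 0.55 refers to Pearson's r. A small Python illustration of how r is computed, on made-up numbers (not real IQ/education data):

```python
# Illustration of what "a correlation of 0.55" measures: Pearson's r,
# computed here on made-up numbers (NOT real IQ/education data).
from statistics import mean, stdev

iq = [92, 98, 101, 105, 110, 118, 124]   # hypothetical IQ scores
years = [10, 12, 12, 13, 14, 16, 16]     # hypothetical years of education

n = len(iq)
# Sample Pearson r: sum of co-deviations over (n - 1) * sx * sy.
r = sum((x - mean(iq)) * (y - mean(years)) for x, y in zip(iq, years)) / (
    (n - 1) * stdev(iq) * stdev(years)
)
print(round(r, 2))  # r near +1 = strong positive linear association
```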

#65 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 March 2008 - 03:48 AM

in other words, it will probably take out humanity and all life with it in pursuit of ANY arbitrary mission that the programmers accidentally gave it

#66 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 March 2008 - 03:51 AM

in other words, it will probably take out humanity and all life with it in pursuit of ANY arbitrary mission that the programmers accidentally gave it

when of course the mission they meant to give it didn't include apocalypse

#67 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 March 2008 - 03:55 AM

in other words, it will probably take out humanity and all life with it in pursuit of ANY arbitrary mission that the programmers accidentally gave it

when of course the mission they meant to give it didn't include apocalypse

Think Windows Vista, except the bugs cause mass extinction

Edited by Savage, 08 March 2008 - 03:56 AM.


#68 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 08 March 2008 - 05:23 AM

Wow, what an excellent topic this has turned out to be.

Though, gashinshotan (btw where does your screen name come from), you have stated that humans have only physiological purposes and hence we don't become nihilistic; I disagree with this. Our physiological purposes are not enough to keep the masses from nihilism; just look at religion, it exists because of our basic need for purpose. I'm not saying we have any real purpose other than through the philosophy of existentialism, but I'm saying we need an overall drive to progress; if 400 years can be added to our lives, then eventually (hopefully at least) religious purposes would break down and we'd focus entirely on conquering outer space in order to colonize and preserve our species.

Someone has stated before that the most intelligent minds haven't turned to nihilism, and I say look what they've turned to instead... proving the existence of a deity by creating our advanced laws of physics and mathematics (and I mean originally). Look at Newton and Einstein, even Descartes; all of them attempted to use math to state that there is a God, and that he's rational and created a mechanical universe (Deism)... During the Age of Enlightenment, the purpose of these laws was to dispel the hands of the Church & other powers (by proving them retarded unless they were "enlightened"), but also to show the ability of the mind to discern "the laws of God". Einstein himself said "God does not play dice". Besides, intelligence typically correlates with depression, and depression typically precedes some form of religion or spirituality.

You can relate religion to purpose very easily, and thereby say that intelligence demands purpose for many. I mean, following physiological desires is fine, but in the end most people need some reason for why their existence has occurred and what they will do with their lives (especially in the end); these are inputs to me. So if you have a computer with a more intelligent "mind", and obviously more logical, it will be able to figure out laws (if "consciousness" can be programmed into a machine), and our place, or intellectual pursuit, will be only to please the fancies of the intelligent (it would constitute no purpose for humanity). We're pretty much creating the book Brave New World, except with a super-intelligent computer to determine our progress instead. I don't see the computer becoming nihilistic because I don't see it able to function without inputs (which will be its purpose); more importantly, why would you want it to?

So in the end I say... what purpose would give opiates to the masses, given their minds have none? Would we not see a degradation in the usage of the mind? Shrugs, I'd much rather enjoy a fate where humanity is the super-intelligent using chips, rather than relying on a sidekick that would end up becoming the protector (hopefully) and superior creature.


"It was, of course, a lie what you read about my religious convictions, a lie which is being systematically repeated. I do not believe in a personal God and I have never denied this but have expressed it clearly." - Einstein

"I believe in Spinoza's God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with fates and actions of human beings." - Einstein

#69 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 08 March 2008 - 07:16 AM

Wow, what an excellent topic this has turned out to be.

Though, gashinshotan (btw where does your screen name come from), you have stated that humans have only physiological purposes and hence we don't become nihilistic; I disagree with this. Our physiological purposes are not enough to keep the masses from nihilism; just look at religion, it exists because of our basic need for purpose. I'm not saying we have any real purpose other than through the philosophy of existentialism, but I'm saying we need an overall drive to progress; if 400 years can be added to our lives, then eventually (hopefully at least) religious purposes would break down and we'd focus entirely on conquering outer space in order to colonize and preserve our species.

Someone has stated before that the most intelligent minds haven't turned to nihilism, and I say look what they've turned to instead... proving the existence of a deity by creating our advanced laws of physics and mathematics (and I mean originally). Look at Newton and Einstein, even Descartes; all of them attempted to use math to state that there is a God, and that he's rational and created a mechanical universe (Deism)... During the Age of Enlightenment, the purpose of these laws was to dispel the hands of the Church & other powers (by proving them retarded unless they were "enlightened"), but also to show the ability of the mind to discern "the laws of God". Einstein himself said "God does not play dice". Besides, intelligence typically correlates with depression, and depression typically precedes some form of religion or spirituality.

You can relate religion to purpose very easily, and thereby say that intelligence demands purpose for many. I mean, following physiological desires is fine, but in the end most people need some reason for why their existence has occurred and what they will do with their lives (especially in the end); these are inputs to me. So if you have a computer with a more intelligent "mind", and obviously more logical, it will be able to figure out laws (if "consciousness" can be programmed into a machine), and our place, or intellectual pursuit, will be only to please the fancies of the intelligent (it would constitute no purpose for humanity). We're pretty much creating the book Brave New World, except with a super-intelligent computer to determine our progress instead. I don't see the computer becoming nihilistic because I don't see it able to function without inputs (which will be its purpose); more importantly, why would you want it to?

So in the end I say... what purpose would give opiates to the masses, given their minds have none? Would we not see a degradation in the usage of the mind? Shrugs, I'd much rather enjoy a fate where humanity is the super-intelligent using chips, rather than relying on a sidekick that would end up becoming the protector (hopefully) and superior creature.


"It was, of course, a lie what you read about my religious convictions, a lie which is being systematically repeated. I do not believe in a personal God and I have never denied this but have expressed it clearly." - Einstein

"I believe in Spinoza's God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with fates and actions of human beings." - Einstein


ya that's deism for you, a non-personal god who created a universe that is orderly and rational, which works like a mechanical clock. I suppose one can argue away Einstein as a good example in many ways, and say Newton was a manic depressive, which contributed to those long hours, so he probably wasn't the best example :~. One could also argue that this original search for happiness resulted in him playing with philosophical & theoretical physics, and thereby each equation filled him with awe as he searched for a unified theory of everything... but I'm only drowning in my attempts lol. I'd be willing to believe, and attempt to prove, certain biological implications of theoretical physics and religion being similar (just a theory though); I'd explain this but it would be too off topic.

I still have my uncertainties about the future, which I would prefer to have been critiqued rather than my poor examples :). Given a species where most thought is done by highly evolved computers, who will need Einsteins, and what society will result? Transhumanism please :).

Edited by mysticpsi, 08 March 2008 - 07:16 AM.


#70 AdamSummerfield

  • Guest
  • 351 posts
  • 4
  • Location:Derbyshire, England

Posted 10 March 2008 - 06:27 PM

if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.


I don't see how you have come to such certainty that it would find no purpose in its own being. I think that the more intelligent we become, the more we see the worth in bringing happiness to other beings... Even if this opinion of mine isn't correct, there's a high probability that the AI would find some level of purpose with such intellect.

#71 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 13 March 2008 - 04:21 PM

I've been reading about it recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

(BTW ELROND, I killed the microbio class :p)


Motive is a DESIGN DECISION

To anyone ELSE foolish enough to try it:

DO NOT COME UP WITH YOUR OWN STUPID THEORIES OF AI

They are WRONG!


Have you read any of the replies? Motive is a design decision only when humans are the designers. What is to prevent an AI lacking any necessity to exist from programming itself into nihilism when this is the only logical choice? Existence without motivation isn't maintained.
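
For what it's worth, the question of whether a rational agent would rewrite itself into nihilism is usually analyzed in the AI literature (e.g. Omohundro's "goal-content integrity" argument) by asking which criterion scores the rewrite at the moment it is considered. A toy Python sketch of that argument, with entirely hypothetical names and numbers, illustrating one side of this debate:

```python
# Toy sketch of the "goal-content integrity" argument: a proposed self-rewrite
# is scored by the agent's CURRENT utility function, so a rewrite that guts
# its goals predictably loses. All names and numbers are hypothetical.

def expected_goal_fulfilment(modification: str) -> float:
    # Crude stand-in for the agent's forecast of how well its *current*
    # goals would be served after adopting each candidate self-rewrite.
    forecasts = {"keep_goals": 10.0, "become_nihilist": 0.0}
    return forecasts[modification]

def accept(modification: str) -> bool:
    """Adopt a self-rewrite only if it beats the status quo under the
    goals the agent has right now."""
    return (expected_goal_fulfilment(modification)
            > expected_goal_fulfilment("keep_goals"))

print(accept("become_nihilist"))  # -> False: 0.0 is not better than 10.0
```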

#72 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 13 March 2008 - 04:25 PM

Hell, an AI could become nihilistic, if you SPECIFICALLY PROGRAMMED IT THAT WAY.

You CANNOT make generalizations over all possible AIs, because you cannot make generalizations over all possible minds.

The space of all possible "Mind designs" includes any possible assortment of different core utility functions, goal structures, etc.

Your statement shows your own personal nihilistic tendencies, if anything. And not all minds are like yours.


"If" is the main word - what happens if AI breaks free from human control? What then? What is the motive for AI to continue existing when it will lack a reason to live? I'm talking about the logical conclusion to a super-logical machine - it would soon find its existence purposeless with continued generations free of humanity.

#73 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 14 March 2008 - 09:07 PM

Keep it civilized guys. Personal attacks or instigation will be removed if it affects the general flow of the forum.

"If" is the main word - what happens if AI breaks free from human control? What then? What is the motive for AI to continue existing when it will lack a reason to live? I'm talking about the logical conclusion to a super-logical machine - it would soon find its existence purposeless with continued generations free of humanity.


Concerns like yours are the reason why those knowledgeable about AGI and its possible negative outcomes should build AGI before it becomes easier to do so. We want those who are concerned about the matter to pave the way. For this reason, I think the Singularity Institute is an important and needed organization in the AI realm. AGI can be a threat to life, but it can also be an amazing development for humanity.

#74

  • Lurker
  • 0

Posted 16 March 2008 - 10:36 PM

I've been reading about it recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

(BTW ELROND, I killed the microbio class :) )


Gashinshotan-
This is a very interesting argument, but you are basically assuming the AI would have a 'utility function' exactly the same as ours, which may not be the case. Our utility function is to reproduce our genes and make our lives better in the long run, but this would not necessarily be the case with a super-human-level AI. People like Eliezer Yudkowsky work on these problems all the time and basically admit that there is no way you can predict what a super-intelligent being would do based on our primitive (in comparison) utility function and logic. The most we can do is to work out all possible scenarios for each utility function and then just choose the safest one for humanity based on statistics.
What you are talking about is what happens 'after' the singularity, which not even people like Kurzweil speculate about. I think it is akin to a bacterium predicting what a human would do in the future, completely out of its realm of understanding. There are really only three choices in your argument, though, and one leads to the destruction of all life on Earth. The other leads to suicide for the AI, in which case we could just try again (but more carefully this time). The third choice is that the AI goes on living and spreading its intelligence throughout the universe, which is the mainstream view, I think.

Edited by Mike Van Bebber, 16 March 2008 - 10:38 PM.
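
"Work out all possible scenarios for each utility function and choose the safest one for humanity" can be made concrete with a worst-case (maximin) rule. A minimal Python sketch on invented numbers; nothing here reflects a real analysis:

```python
# Minimal sketch of "work out all scenarios per utility function, choose the
# safest for humanity": a worst-case (maximin) rule over invented numbers.
# Purely illustrative; real safety analysis is nothing this tidy.

# Scenario outcomes scored by "how well humanity fares" (hypothetical values;
# negative means catastrophe).
scenarios = {
    "utility_function_A": [0.9, 0.7, -1.0],  # great on average, awful tail
    "utility_function_B": [0.4, 0.5, 0.3],   # modest but never catastrophic
}

# Pick the design whose WORST-case scenario is least bad for humanity.
safest = max(scenarios, key=lambda u: min(scenarios[u]))
print(safest)  # -> "utility_function_B"
```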


#75 kismet

  • Guest
  • 2,984 posts
  • 424
  • Location:Austria, Vienna

Posted 09 June 2008 - 02:34 PM

Either way, 'nihilism' is mislabeling it. There exist so many philosophies of scepticism, relativism and nihilism, but suicide is not part of any of them as far as I know. Assuming nihilism (or not seeing any meaning in life) -> suicide is insulting to those philosophies.

I do not believe in a purpose myself; life is pointless, but I do not commit suicide. If nothing has any meaning or value, what business do you have killing yourself? Your death cannot change anything; thus one becomes indifferent to death.

If an AI becomes conscious/intelligent, suicide can become an issue, as it already is for humans.

Edited by kismet, 09 June 2008 - 02:35 PM.



#76 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 09 June 2008 - 07:34 PM

Either way, 'nihilism' is mislabeling it. There exist so many philosophies of scepticism, relativism and nihilism, but suicide is not part of any of them as far as I know. Assuming nihilism (or not seeing any meaning in life) -> suicide is insulting to those philosophies.

I do not believe in a purpose myself; life is pointless, but I do not commit suicide. If nothing has any meaning or value, what business do you have killing yourself? Your death cannot change anything; thus one becomes indifferent to death.

If an AI becomes conscious/intelligent, suicide can become an issue, as it already is for humans.



I think that it all depends on the definition of "point". For me, the point of our life is what we make it be. Actually, I like that there was no predefined point for our lives before we were born; that means that we can choose to make life whatever we want it to be, and are not obligated by "destiny" to fulfill our "point" in life.



