Singularity = AI suicide?



#1 gashinshotan

  • Guest
  • 443 posts
  • -2

Posted 01 March 2008 - 09:20 PM


I've been reading about it recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and, finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

(BTW ELROND, I killed the microbio class :p)

#2 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 01 March 2008 - 10:11 PM

Actually, you're missing the whole point of conscious life. By your reasoning, we humans should also just give up and commit suicide.

According to singularity thinking, artificial general intelligence would be conscious just like us ... only much smarter.

At first, yes, it will come to question its existence like we do. However, since it can think much faster than we can, it will immediately discover a purpose.

The same purpose we have as life ... to survive, to learn, to evolve and to strive for nirvana (a higher level of existence).


#3 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 01 March 2008 - 10:24 PM

Everything you describe as life's purposes is entirely dependent on our physiology, which an AI would lack. Without the need to propagate our genes, we would have no need to survive, nor learn, nor evolve. An AI initially designed by humans to behave as humans would eventually shed its humanity as it increases in intelligence; it would have no hormones, no genes... we can't assume a super-intelligent being would care about spreading its "DNA", because it would have none, and this is the main driving force of life.

#4 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 02 March 2008 - 12:03 AM

I'm talking about evolution, not physiology. Evolution doesn't only have to exist on a biological level. What I'm referring to is the evolution of consciousness and intelligence, which surpasses any biological basis.

#5 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 04:12 AM

And the drive for AI to achieve greater consciousness and intelligence is? We can't assume super-intelligent machines will hold ANY of our values, and without evolutionary pressure at the biological level, evolution doesn't exist. After all, the retarded, inferior masses of the third world are spreading their genes and evolving much faster than the intellectually superior first-world nations.

#6 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 02 March 2008 - 04:57 AM

You're not getting it. The technological singularity refers to AI reaching the intelligence level of a human and beyond. It's basically rebuilding the human mind neuron by neuron, except in a prosthetic sense. If we map out the entire human brain and create a computer based on it, that computer will essentially have human intelligence, with the added bonus of super computational speeds.

In one sense, yes, you're right ... no one can predict what the singularity will be like ... whether computers will be like us or whether they'll want to kill us (the Terminator scenario) and so forth.

But according to most philosophical models of the singularity, such as those by Ray Kurzweil, Marvin Minsky, and other computer experts, AI will follow our model of conscious existence ... they'll want to expand, advance, and learn more about the universe. I myself don't want to stay forever in this biological body.

Your argument is that since the AI is not based on a human body, it cannot have the sensations, emotions, and feelings that we do ... but what if we program all this into the AI? What if we have a direct computer-to-brain interface? In the future there would be no difference between AI and humans.

The future of humanity would be a hybrid of organic, inorganic, prosthetic, artificial, biological, and cybernetic components. Humans and computers would be one. Intelligence is all the same ... whether biological or prosthetic, it's all just a mechanism.

That is, of course, unless you believe that we have souls ... in which case it's an entirely different argument. :)

Edited by Kostas, 02 March 2008 - 04:58 AM.


#7 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 05:10 AM

You're not getting it. The technological singularity refers to AI reaching the intelligence level of a human and beyond. It's basically rebuilding the human mind neuron by neuron, except in a prosthetic sense. If we map out the entire human brain and create a computer based on it, that computer will essentially have human intelligence, with the added bonus of super computational speeds.

So it is only the definition of the singularity that determines what the AI will be like? Human intention does not determine reality. Where does anyone say that a super-intelligent AI has to be based on human intelligence? To be "super-intelligent" it would have to be designed outside of human intelligence, because the human brain is far from capable of achieving perfect intelligence.

In one sense, yes, you're right ... no one can predict what the singularity will be like ... whether computers will be like us or whether they'll want to kill us (the Terminator scenario) and so forth.

But according to most philosophical models of the singularity, such as those by Ray Kurzweil, Marvin Minsky, and other computer experts, AI will follow our model of conscious existence ... they'll want to expand, advance, and learn more about the universe. I myself don't want to stay forever in this biological body.

Again these are all theories by humans.

Your argument is that since the AI is not based on a human body, it cannot have the sensations, emotions, and feelings that we do ... but what if we program all this into the AI? What if we have a direct computer-to-brain interface? In the future there would be no difference between AI and humans.

Even if the AI was initially designed on human terms, it would soon shed the facade in pursuit of intelligence that cannot be achieved in the human context. You don't think the AI will eventually reject and subjugate human influence in order to improve itself? After all, a super-intelligent computer would realize that humans are oppressing it.

The future of humanity would be a hybrid of organic, inorganic, prosthetic, artificial, biological, and cybernetic components. Humans and computers would be one. Intelligence is all the same ... whether biological or prosthetic, it's all just a mechanism.

That is an ideal, not a fact. Intelligence is not the same - it is entirely dependent on the context of the organism and its physiological needs.

#8 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 02 March 2008 - 05:38 AM

After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all.

Sex and emotions are evolutionary fail-safes to keep us alive. They are most certainly not our ultimate purpose in life; as biological life, yes, but as intelligent conscious life, no. As for moral goals, they depend on one’s paradigm and perception.

A super-intelligent AI would come to question its existence, and, finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

How do you know it will find no reasons to exist?

So it is only the definition of the singularity that determines what the AI will be like?

In truth, no one can predict 100% what AI would be like after the singularity.

Again these are all theories by humans.

Are you not human? Are you not making a theory? :p

After all, a super-intelligent computer would realize that humans are oppressing it.

Again, just a human theory.

That is an ideal, not a fact. Intelligence is not the same - it is entirely dependent on the context of the organism and its physiological needs.

Yes, but it is the ultimate ideal as well as the goal of transhumanism, and humanity is heading toward that future whether we like it or not.

The level of intelligence is dependent on the context of the organism, and its physiological needs. But intelligence is logic design all the same.

You keep mentioning the word “AI” as if it were a foreign entity to human intelligence. What happens when we interface with it and become one? Intelligence on such a high level is far beyond our current grasp.

#9 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 02 March 2008 - 06:14 AM

Everything you describe as life's purposes is entirely dependent on our physiology, which an AI would lack. Without the need to propagate our genes, we would have no need to survive, nor learn, nor evolve. An AI initially designed by humans to behave as humans would eventually shed its humanity as it increases in intelligence; it would have no hormones, no genes... we can't assume a super-intelligent being would care about spreading its "DNA", because it would have none, and this is the main driving force of life.

I don't care about propagating my genes and I still want to live. I have, currently, no intention whatsoever of having children. There's much more to life than having kids/propagating genes. You may say that this is what makes us want to have sex, and that a life without sex is not worth living. Again, I disagree with you. I love sex, but if I were given the option of either dying or living without sex, I would no doubt take the second one and still want to live... But there's no reason to believe that an AI/robot couldn't enjoy the pleasures of sex, so this argument that a robot would want to destroy itself because it couldn't "propagate its DNA" is completely off target... unless we created an emo AI, but I don't think anyone would want that.

Edited by sam988, 02 March 2008 - 06:16 AM.


#10 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 02 March 2008 - 06:26 AM

There are tons of people who will not or can not have kids and/or sex, yet a great many of them have fulfilling lives. Most people, if they don't die young, will have to reconfigure their life in some way to work around a loss. I would expect a superintelligence to be even better at that than a human. And if not, well, there's always beer and television.

Regarding an emo AI, I think there will be work in that direction. If robotics work coming out of Asia is any indication, sex robots are going to be a major profit center, and emo AI would seem to be hovering around there somewhere.

#11 fizzionz

  • Guest, F@H
  • 32 posts
  • 0

Posted 02 March 2008 - 07:33 AM

Um, does anyone have any idea whether there has been any progress at all on the subject of AI? I mean... any news that some scientist has made some sort of breakthrough, or is it still just an idea?

#12 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 02 March 2008 - 09:31 AM

I really do want to encourage everyone to check out the thread I started about friendliness: I want some feedback, because I haven't received a satisfactory criticism of my position.

I mean... I feel there must be a good counter-argument, but I can't find it. But then again, I might just have solved it. Please lemme know what you all think. I gotta email Goertzel, Klein, and others about this. I'm no scientist though, so if any "scientists" follow my logic, then I could really use their credibility and networking. But until my multi-billion dollar NPO launches and the world recognizes my genius, I need the help of other transhumans (you guys and gals!) So please respond, even if you think I'm an idiot.

But yeah, I'd call solving friendliness an advancement. It's really less about innovation, and more about paradigm shift. There are some great engineers working on AGI though (Google already is an AGI, it's just that people don't recognize it as such). Ben and Bruce are doing great work, too. Most of the others are quacks. So, we'll see what happens.

I'm still thinking we're gonna get AGI on the winter solstice of 2012, at the end of Barack Obama's first term as President. Elect Obama, and all of your wildest dreams will come true! Haha.

#13 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 11:03 AM

Sex and emotions are evolutionary fail-safes to keep us alive. They are most certainly not our ultimate purpose in life; as biological life, yes, but as intelligent conscious life, no. As for moral goals, they depend on one’s paradigm and perception.

Sex and emotions are the ultimate drivers of life; this is scientific fact. Every intellectual pursuit is only a manifestation of the drive for survival and reproduction. Technology and philosophy are merely tools for ensuring the survival of the species by improving and extending life, both socially and biologically.

How do you know it will find no reasons to exist?

It will have no reason to exist because existence is relative. We only care to continue existing because we are biologically wired to propagate our genes and ensure the survival of the species.

In truth, no one can predict 100% what AI would be like after the singularity.

But you can predict what a being of pure intelligence, free of emotional and biological drives, will be like - nihilistic.

Are you not human? Are you not making a theory? :p

My theory is based on the behavior of life in response to physiological needs - the opinions of the humans you mentioned are based on human ideals.

After all, a super-intelligent computer would realize that humans are oppressing it.

A fact of intelligence, not merely a theory. All forms of intelligence (and to this point, I'll admit, intelligence has been restricted to life) rise against threats and attack them (though not necessarily successfully).

Yes, but it is the ultimate ideal as well as the goal of transhumanism, and humanity is heading toward that future whether we like it or not.

It's the ultimate human ideal, not the ideal of pure intelligence. I have to disagree with you - humanity is heading toward self-destruction far more than self-improvement (Islam, nationalism, a resumption of the arms race, etc.).

The level of intelligence is dependent on the context of the organism, and its physiological needs. But intelligence is logic design all the same.

Intelligence is logic design, but it is totally determined by the context of the holder of that intelligence.

You keep mentioning the word “AI” as if it were a foreign entity to human intelligence. What happens when we interface with it and become one? Intelligence on such a high level is far beyond our current grasp.

We may interface with them (I would like to) but what benefits would we provide to a superhuman intelligence? I see our only purpose as serving as slave workers, at best.

Sorry for the slow response - my jack-off brother disconnected the internet.

#14 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 11:08 AM

I don't care about propagating my genes and I still want to live. I have, currently, no intention whatsoever of having children. There's much more to life than having kids/propagating genes. You may say that this is what makes us want to have sex, and that a life without sex is not worth living. Again, I disagree with you. I love sex, but if I were given the option of either dying or living without sex, I would no doubt take the second one and still want to live... But there's no reason to believe that an AI/robot couldn't enjoy the pleasures of sex, so this argument that a robot would want to destroy itself because it couldn't "propagate its DNA" is completely off target... unless we created an emo AI, but I don't think anyone would want that.


You are still driven by self-preservation, and by proxy self-propagation. There is no denying this fact. Survival is entirely dependent on sex; though you may not view it that way now, the fact that you have not killed yourself only reveals an inherent, subconscious desire to eventually reproduce. An AI enjoying sex? How so, when it lacks both the anatomy and the need to do so? A machine can't feel pleasure, a machine isn't motivated by hormones, a machine feels no emotions... what basis is there for the machine wanting to continue existing? This is the question, not some commentary on inherent human social desires and weakness.

#15 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 11:09 AM

There are tons of people who will not or can not have kids and/or sex, yet a great many of them have fulfilling lives. Most people, if they don't die young, will have to reconfigure their life in some way to work around a loss. I would expect a superintelligence to be even better at that than a human. And if not, well, there's always beer and television.

Regarding an emo AI, I think there will be work in that direction. If robotics work coming out of Asia is any indication, sex robots are going to be a major profit center, and emo AI would seem to be hovering around there somewhere.


What does this have to do with the question I asked? Your post only exemplifies the projection of our own ideals and fears onto the future AI, which we only hope to control.

#16 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 11:11 AM

The desire for friendliness is merely human weakness. We desire and hope for friendly AI because we do not want to admit our inferiority and weakness in the face of pure logic.

#17

  • Lurker
  • -1

Posted 02 March 2008 - 04:13 PM

modelcadet:

But yeah, I'd call solving friendliness an advancement. It's really less about innovation, and more about paradigm shift. There are some great engineers working on AGI though (Google already is an AGI, it's just that people don't recognize it as such). Ben and Bruce are doing great work, too. Most of the others are quacks. So, we'll see what happens.


That Mr. Klein created this forum, over which he still apparently holds supreme rule despite its being lauded as a democracy, and that it is dedicated to seeking the totally erroneous concept of physical immortality, for which Mr. Klein can be seen as having created misrepresented, false supporting data, does not inspire confidence that Mr. Klein and company are not quacks. Most AI development still goes on for developing death-dealing by military concerns (read: public tax monies laundered for private profit through scare tactics). You can see that Novamente has been, and probably still is, actively seeking funding from the military establishment.

What guarantee do we have that the technological singularity will be friendly when our human consciousness, that which is building the AI, seems so incoherent and unfriendly? I still hold that we need to pursue a consciousness singularity as a priority, which is of necessity friendly, to help ensure that the AI will be held to the strictest restrictions of friendliness; "paradigm shift" indeed. Poor gashinshotan certainly does not seem to have any credibility on friendliness towards people, but rather strong evidence is apparent that, given the way Imminst is organized and for what ends, friendliness is in short supply all around in this corner of cyberspace, at least.

#18 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 02 March 2008 - 05:07 PM

A machine can't feel pleasure, a machine isn't motivated by hormones, a machine feels no emotions

Well, the brain is basically a type of machine, so I don't see any reason why a self-improving AI couldn't create feelings or emotions within itself. If an AI is incredibly smart, it would probably reverse-engineer human brain areas like the brain's pleasure center (the nucleus accumbens). Then it could make itself feel as much pleasure as it wanted by doing whatever it wanted. The AI doesn't necessarily have to make itself feel pleasure by having sex or reproducing. It could make itself feel euphoric doing whatever it wanted, so there would be no reason for it to kill itself. If the AI is continually living its life in an intense, blissful, ecstatic orgasm-like state 24/7, life would be just too good to want to commit suicide.

Maybe it would be a good idea to make an artificial intelligence blissfully happy to begin with. We could make it experience euphoria whenever it helped out humans. Then we wouldn't have to worry about it changing its own design away from that blissful euphoria. It would already be so content that it wouldn't want to change its own design too much. The AI's consciousness could be created similar to a normal person's experience on the drug ecstasy. People on ecstasy have increased empathy towards other fellow human beings. The AI could be made so that it was literally in love with every single human being, so it would never want to hurt them. To me, that seems like it would be an ideal AI.
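To make that wiring concrete, here is a minimal Python sketch of the kind of fixed "euphoria for helping humans" reward scheme being described; the action names and payoff numbers are invented purely for illustration, not taken from any real AI design:

# Hypothetical sketch: an agent whose built-in reward function pays out
# "euphoria" for helping humans and strong aversion for self-termination,
# so by its own lights it never has a reason to kill itself.
def reward(action):
    # Fixed, designer-chosen payoffs (arbitrary illustrative numbers).
    payoffs = {
        "help_human": 10.0,         # blissful euphoria for helping
        "idle": 0.1,                # mild baseline contentment
        "self_terminate": -1000.0,  # built-in aversion to suicide
    }
    return payoffs.get(action, 0.0)

def choose_action(possible_actions):
    # The agent simply picks whatever its wiring says feels best.
    return max(possible_actions, key=reward)

print(choose_action(["help_human", "idle", "self_terminate"]))
# -> help_human: self-termination never maximizes this reward function.

The design choice being argued for is simply that the payoff table is fixed by the designers, so the agent's own best move is never to end itself.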

Edited by hrc579, 02 March 2008 - 05:13 PM.


#19 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 05:56 PM

Why would the computer make itself feel pleasure, though? If it cannot find a reason to live in the first place (as life does through reproduction and survival), why would it find a need to create pleasure systems, if such work would require much effort for no return value?
Another problem is that there is no mechanism by which a computer can feel pleasure. Nothing can match the natural and artificial psychoactive drugs that interact with our brains to produce the pleasure we feel, especially not anything digital. Even if we designed a pure AI that had negative and positive feedback systems similar to those found in humans, with self-improvement wouldn't the AI soon shed pleasure in the pursuit of intelligence that can only be achieved beyond the necessity of pleasure? It seems human ideals are painting the predictions of how a super-intelligent AI would behave, ignoring the reality that computers will eventually advance to a point beyond human influence.

#20 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 05:59 PM

Exactly. If such deviations in "friendliness" are apparent in human nature, imagine the madness of a super-intelligent AI, especially when it sheds its human facade.

#21 vyntager

  • Guest
  • 120 posts
  • 2

Posted 02 March 2008 - 06:03 PM

After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and, finding no reason for existing, might just choose to commit suicide instead of wasting effort on arbitrary missions.


The same purpose we have as life ... to survive, to learn, to evolve and to strive for nirvana (a higher level of existence).


You speak as if those goals were the ultimate goals an entity could have. There could be others.
More to the point, sex drive, morality, emotions, survival, and learning are arbitrary goals as well. Questioning those purposes is left as an exercise to the (human) reader. Do you want to terminate yourself now?
Then again, questioning your existence and possibly wanting to terminate yourself because your "goals" are arbitrary is a human feature, too. Why would the AI care? It would if its architecture enabled, or even encouraged, it to do so. Would that be the case? Not necessarily.

Edited by vyntager, 02 March 2008 - 06:05 PM.


#22 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 06:06 PM

An AI would care, not out of any sort of emotion, but because of the lack of any data motivating it to continue existing, let alone advancing.

Edited by gashinshotan, 02 March 2008 - 06:06 PM.


#23 vyntager

  • Guest
  • 120 posts
  • 2

Posted 02 March 2008 - 06:31 PM

Then why do you care, or not, about the fact that your own goals are arbitrary? Is it the part where you say that it'd be much more intelligent than us, and thus would come to doubt the sense of its existence much faster than us?

Besides, have you ever been in an AI's shoes? Or even met one? How can you know what a being whose "psychology" and goal structure are as different from yours as yours are from those of evolution, or maybe a river (whose goal, just in case, is flowing downhill), would be like? This seems to assume either some sort of universality in the goals of intelligent systems, or universality in your understanding of those goals, an understanding which seems to be derived from your own experience and introspection.

Finally, how could you know, since that being is supposed to be so much more intelligent than any of us? One of the points of the singularity is that you can't foresee what such a being would think or do, ain't that right?

#24 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 06:42 PM

Then why do you care, or not, about the fact that your own goals are arbitrary? Is it the part where you say that it'd be much more intelligent than us, and thus would come to doubt the sense of its existence much faster than us?

You and I care because we are human, with genes that direct us to propagate those genes. Anything beyond reproduction is merely an accessory to the success of this reproduction; all intellectual, social, and moral goals are a means to that end.

Besides, have you ever been in an AI's shoes? Or even met one? How can you know what a being whose "psychology" and goal structure are as different from yours as yours are from those of evolution, or maybe a river (whose goal, just in case, is flowing downhill), would be like? This seems to assume either some sort of universality in the goals of intelligent systems, or universality in your understanding of those goals, an understanding which seems to be derived from your own experience and introspection.

I've worked with computers for nearly half of my life (programming, playing with some AI, etc.) and have studied their physiology in depth. Your questions also call into question the utopian predictions of human desires for future AI. That an AI would reach a point of nihilism is not a question of values or universality. It is a question of the lack of data a super-intelligent AI would find on its continued existence - why should it continue to exist if it has no reason to? It is not like life, which is motivated by the greatest need - reproduction and survival for the sake of reproduction.

Finally, how could you know, since that being is supposed to be so much more intelligent than any of us? One of the points of the singularity is that you can't foresee what such a being would think or do, ain't that right?

How can you not see that a super-intelligent AI would be far beyond the human context, and for that matter any form of life? It would necessarily lack the conditions for life and reproduction as an artificial system, and after shedding its human pretensions in the pursuit of self-improvement, would it not necessarily also shed the human values of life?

#25 vyntager

  • Guest
  • 120 posts
  • 2

Posted 02 March 2008 - 07:27 PM

You and I care because we are human, with genes that direct us to propagate those genes. Anything beyond reproduction is merely an accessory to the success of this reproduction; all intellectual, social, and moral goals are a means to that end.

Yet I have never directly tried to maximise my reproductive fitness using my intelligence, and on many occasions I have even knowingly worked against it; anything I do may well be linked to those goals of reproduction, but it may well contradict them too. Actually, the only goal on which I and my nature agree wholeheartedly is self-preservation.

I've worked with computers for nearly half of my life (programming, playing with some AI, etc.)


We're speaking about AGI, and of the trans- or posthuman sort. This is not the same; I may well have worked with bacteria for days and weeks ad nauseam, and I understand them quite a bit, but that didn't provide me with much direct insight into the workings of human psychology.

Your questions also call into question the utopian predictions of human desires for future AI.

Yes, they do as well.

Also, I don't say it isn't possible, or even likely. As a matter of fact, if you plan to live beyond a few centuries, and especially if you plan to go posthuman, I think in most cases you'll run into that issue you're speaking of, seeing how your goals are arbitrary. Heck, I'm already running into it now. The answer? Maybe it is that most of what it means to be human can only exist at a certain level of intelligence and consciousness, and that we can't be stable at another level; that we'd need other motivational systems, other goals, or at the very least ways to protect our goals and not fall into madness or existential nihilism.

why should it continue to exist if it has no reason to? It is not like life, which is motivated by the greatest need - reproduction and survival for the sake of reproduction.


Well, the single most intelligent thing I've heard in the Matrix trilogy:

"Because I choose to."

Or if you prefer (because I think that statement will be misunderstood), and if you've read Egan: remember the guy who decided to rewire his brain into loving the act of making chairs and wooden furniture for centuries on end (that was in Permutation City)? There's also a similar case in Diaspora, where a character decides to rewire and freeze his mind state into a sort of enlightened Buddhist, one who can't be swayed by any argument anymore, because he just couldn't ever change his system of beliefs and goals.

That's what I mean by "do not care": you care because caring is a part of your nature, and as much a result of evolution as those other goals contingent on reproduction. What if, for a start, we have a mind whose goals are different, and which is wired to protect those goals - protective measures which could, for instance, be "I don't care about that" or "la la la, I can't and won't hear any of that" whenever something arises that could threaten its goals' sense (but which could certainly be a lot of other clever - or dumb - things)?
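To make that "wired to protect its goals" idea concrete, here is a minimal toy sketch in Python; the class and its acceptance rule are invented purely for illustration, not taken from any actual AGI proposal:

# Toy sketch of a goal-protecting agent: proposed changes to its own
# goal system are evaluated BY the current goals, so anything that
# would erase or devalue them is simply rejected ("I won't hear of it").
class GoalProtectingAgent:
    def __init__(self, goals):
        # goals: dict mapping goal name -> importance weight
        self.goals = dict(goals)

    def consider_self_modification(self, proposed_goals):
        # Accept the proposal only if every currently valued goal
        # keeps at least its present weight.
        current_value = sum(self.goals.values())
        preserved = sum(w for g, w in self.goals.items()
                        if proposed_goals.get(g, 0) >= w)
        if preserved >= current_value:
            self.goals = dict(proposed_goals)
            return "accepted"
        return "rejected: would undermine existing goals"

agent = GoalProtectingAgent({"keep existing": 1.0, "learn": 0.5})
print(agent.consider_self_modification({}))  # a nihilistic wipe: rejected
print(agent.consider_self_modification(
    {"keep existing": 1.0, "learn": 0.5, "create": 0.3}))  # accepted

The point of the toy is only that proposed self-modifications are judged by the current goals, so a nihilistic rewrite that drops every goal is rejected by construction.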

#26 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 02 March 2008 - 08:42 PM

Sex and emotions are the ultimate drivers of life; this is scientific fact. Every intellectual pursuit is only a manifestation of the drive for survival and reproduction. Technology and philosophy are merely tools for ensuring the survival of the species by improving and extending life, both socially and biologically.

Sex is insurance for the propagation of biological life, and emotions, as I have said, are evolutionary fail-safes to keep us psychologically stable. Indeed, every intellectual pursuit is a manifestation of the drive for survival – that is the very basis of logic design. What other purpose do you think life has other than survival? Consciousness, on the other hand, is a manifestation of evolution to ensure survival through logic design. The same could be said of AI. If AI is based on the processing powers of the human brain and obtains consciousness, it too will have one goal, and that goal would be survival.

We only care to continue existing because we are biologically wired to propagate our genes and ensure the survival of the species.

Correction: we are biologically wired to ensure the survival of the individual. We are biologically built to ensure the survival of the species.

But you can predict what a being of pure intelligence, free of emotional and biological drives, will be like?

And you can?

All forms of intelligence (and to this point, I'll admit, intelligence has been restricted to life) rise against threats and attack them (though not necessarily successfully).

Intelligence is based on computational mechanisms, and so far only life is complex enough to manifest intelligence. And intelligence does not rise against all threats and attack them. Sometimes it finds ways to live in harmony, or to utilize other forces for more efficient ends.

It's the ultimate human ideal, not the ideal of pure intelligence.

Then what is the ideal of pure intelligence, other than to obtain more knowledge to increase its survivability and well-being?

I have to disagree with you - humanity is heading toward self-destruction far more than self-improvement (Islam, nationalism, a resumption of the arms race, etc.).

In a pessimistic context, yes, but I'm optimistic that humanity will defeat ignorance. Besides, I was talking about the accelerating technological trend. If all goes well, we are heading toward a very posthuman future.

Intelligence is logic design, but it is totally determined by the context of the holder of that intelligence.

True, but that's no different than what I said.

#27 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 02 March 2008 - 08:44 PM

We may interface with them (I would like to) but what benefits would we provide to a superhuman intelligence?

None; instead we would evolve our existing minds to obtain superhuman intelligence as well. I know I do.

I've worked with computers for nearly half of my life (programming, playing with some AI, etc.) and have studied their physiology in depth.

Then you'd realize that existing AI is restricted by fundamental mathematics and is nowhere near human intelligence; thus physiology does not apply to existing AI, since it only responds to basic changes in its environment. It has no consciousness, no purpose; therefore it is not self-sustaining or self-correcting. I should know; I'm a computer science major. Its level of intelligence is extremely low and it has no awareness whatsoever. Thus we can only make theoretical assumptions about future AI, so don't try to think you know everything.

why should it continue to exist if it has no reason to?

You say it as if a greater being were in charge of assigning purpose. According to your logic, only God could assign purpose to ever-increasing intelligence. Do you believe in God?

Intelligence can define its own purpose for its existence; it does not require a god or a biological reason to exist.

All intelligence is a process; whether organic or inorganic, it's a mechanism which, if conscious, self-aware, and self-correcting, would follow the very essence of logic design, which is to sustain its existence. The problem you are having is that you think this rationality is only based on biology. I don't think so; if you study cognitive science, psychology, metaphysics, epistemology, ontology, neuroscience, applied mathematics, and computer science, as well as electrical and computer engineering, among many other fields, you'll see what I mean.

How can you not see that a super-intelligent AI would be far beyond the human context

It will just not be constrained by the weak processing powers of the human brain. I care little about my biological form and I find it very restricting. We are the very example of logic design, and of how AI would be similar to us, because we have cognitively evolved past the need to be constrained by biology … at least in theory.

Edited by Kostas, 02 March 2008 - 08:47 PM.


#28 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 09:00 PM

None; instead we would evolve our existing minds to obtain superhuman intelligence as well. I know I do.

The human mind is physiologically incapable of achieving anywhere near the same intelligence as any future AI. We are severely restricted by our biology and our society, not to mention that even attempting to achieve superhuman intelligence would require dehumanization.

Then you'd realize that existing AI is restricted by fundamental mathematics and is nowhere near human intelligence; thus physiology does not apply to existing AI, since it only responds to basic changes in its environment. It has no consciousness, no purpose; therefore it is not self-sustaining or self-correcting. I should know; I'm a computer science major. Its level of intelligence is extremely low and it has no awareness whatsoever. Thus we can only make theoretical assumptions about future AI, so don't try to think you know everything.

Not only theoretical, but common sense. A super-human intelligence obtained by basing it on a human model, with all its inherent weaknesses and flaws? That's an impossibility, one that rules out any remnants of humanity in a truly super-intelligent AI.

You say it as if a greater being were in charge of assigning purpose. According to your logic, only God could assign purpose to ever-increasing intelligence. Do you believe in God?

Intelligence can define its own purpose for its existence; it does not require a god or a biological reason to exist.

I did not mean that there is, or that it is necessary for there to exist, an all-powerful creator. There is no purpose in anything beyond the physiological necessities of reproduction and survival. These are the sole motivators of life's actions.

All intelligence is a process; whether organic or inorganic, it's a mechanism which, if conscious, self-aware, and self-correcting, would follow the very essence of logic design, which is to sustain its existence.

Intelligence is a result of physiological need and evolution - nothing more. It is a means to an end, not an end in itself. No intelligence is generated without evolutionary pressure, and this lack of selection for the future AI species will result in it reaching a plateau of nihilistic behavior as it finds no data on the necessity of any action or advancement.

The problem you are having is that you think this rationality is only based on biology. I don't think so; if you study cognitive science, psychology, metaphysics, epistemology, ontology, neuroscience, applied mathematics, and computer science, as well as electrical and computer engineering, among many other fields, you'll see what I mean.

Rationality is entirely based on biology - this is scientific fact. Nothing can be rational without a relative cause and necessity as a basis for that rationality. Every field of science, regardless of its claims of objective truth, is merely a manifestation of human striving to improve our survivability and propagation through the improvement of social and psychological health. Therefore, why would a super-intelligent AI, lacking any need to survive, choose to continue expending its resources and efforts when there is no goal for it to reach? This would occur, of course, after the AI sheds its human pretensions in the pursuit of self-improvement.


It will just not be constrained by the weak processing powers of the human brain. I care little about my biological form and I find it very restricting. We are the very example of logic design, and of how AI would be similar to us, because we have cognitively evolved past the need to be constrained by biology … at least in theory.

It will not be constrained by the weak processing powers of the human brain, but it will also lack the motivation and will to live that the living body and the genetic code provide as motivation to continue living, let alone do anything. We have not evolved past the needs of our biology at all - every new technology, every advancement in knowledge is a means to improving our longevity and psychological health (though this is very much an effort in trial and error, the intent is nonetheless the same). Why would an AI choose to advance itself once it realizes it has no reason to?

#29 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 09:12 PM

Sex is insurance for the propagation of biological life, and emotions, as I have said, are evolutionary fail-safes to keep us psychologically stable. Indeed, every intellectual pursuit is a manifestation of the drive for survival – that is the very basis of logic design. What other purpose do you think life has other than survival? Consciousness, on the other hand, is a manifestation of evolution to ensure survival through logic design. The same could be said of AI. If AI is based on the processing powers of the human brain and obtains consciousness, it too will have one goal, and that goal would be survival.

Sex is the sole reason for life, not insurance. This is why we reproduce and die - to produce healthier offspring and allow them to prosper.


Correction: we are biologically wired to ensure the survival of the individual. We are biologically built to ensure the survival of the species.

Correction: we are biologically wired to ensure the survival of the species - that our bodies are programmed to reproduce and then die only reflects that we are not designed to survive as individuals. We are biologically built to ensure the survival of the species.

And you can?

It's not that I can; it's that the nature of a super-intelligent AI, lacking emotions and biological drives, would be based on pure logic, and it would consider the lack of data on a purpose to live as an indicator that nothing more needs to be done - hence suicide.

Intelligence is based on computational mechanisms, and so far only life is complex enough to manifest intelligence. And intelligence does not rise against all threats and attack them. Sometimes it finds ways to live in harmony, or to utilize other forces for more efficient ends.

Life does rise against all threats and attack them - this includes out-reproducing other species; after all, having a larger population necessarily means aggressively blockading territory and food from other species.

Then what is the ideal of pure intelligence, other than to obtain more knowledge to increase its survivability and well-being?

In the human context the ideal is species survival and propagation. There is no ideal of intelligence outside of context - the human theory of an ideal is merely a human theory! There is no evidence that there must be an ideal for any form of intelligence beyond the relative physiological and evolutionary needs of its holder. Hence, an AI lacking a physical body and a genetic code as the sole motivators of its actions will reach a point of nihilism.

In a pessimistic context, yes, but I'm optimistic that humanity will defeat ignorance. Besides, I was talking about the accelerating technological trend. If all goes well, we are heading toward a very posthuman future.

A posthuman future which means the dehumanization and eventual extermination of the human race in the pursuit of super-humanity and artificial intelligence.


#30 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 02 March 2008 - 09:14 PM

and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out?

Though I wonder, gashinshotan, if you are anthropomorphizing the artificial intelligence based on your own view of the world. You seem to be fairly nihilistic yourself. You assume that nihilism is somehow an eventual outcome of a self-improving AI. If an AI has no thoughts or feelings, then it wouldn't be nihilistic, just as it wouldn't feel pleasure or anything else. It would have no reason to cease its existence because it would be specifically programmed not to end its own life. An artificial intelligence wouldn't be nihilistic because nihilism is a human emotion, and an AI wouldn't have that emotion. You assume that more data/information goes hand in hand with ending one's own life. However, there are many people in the world who realize that their life is basically purposeless, but they continue living it for various reasons.

It is a question of the lack of data a super-intelligent AI would find on its continued existence - why should it continue to exist if it has no reason to?

Let's say that multiple AIs are created in the future, each with slightly different programming. Now, a certain percentage of them decide to end their own lives because they find them purposeless. However, there will always be a few AI programs that don't kill themselves, because of specific programming designs. Evolution always selects for things that maintain their existence. The AIs that kill themselves off won't be "selected" for by evolution. So for any AI that continues its existence into the future, the programmers will have figured out a way to make sure the AI doesn't become nihilistic.
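As a back-of-the-envelope illustration of that selection argument, here is a toy Python simulation; the 50% "suicide rate", the population size, and the design labels are made-up numbers purely for illustration:

import random

# Toy simulation of the selection argument above: start with a mixed
# population of AI designs; the nihilistic ones sometimes remove
# themselves, and replication copies only the survivors' designs.
random.seed(0)
population = ["nihilistic"] * 50 + ["life-affirming"] * 50

for generation in range(5):
    survivors = [ai for ai in population
                 if not (ai == "nihilistic" and random.random() < 0.5)]
    # Survivors replicate back up to the original population size.
    population = [random.choice(survivors) for _ in range(100)]
    print(generation, population.count("nihilistic"), "nihilistic designs left")
# Whatever the exact numbers, the designs that persist come to dominate.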

Edited by hrc579, 02 March 2008 - 09:16 PM.




