
Singularity = AI suicide?


75 replies to this topic

#31 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 09:32 PM

Though I wonder gashinshotan if you are anthropomorphizing the artificial intelligence based on your own view of the world. You seem to be fairly nihilistic yourself. You assume that nihilism is somehow an eventual outcome of a self-improving AI. If an AI has no thoughts or feelings then it wouldn't be nihilistic, just as it wouldn't feel pleasure or anything else. It would have no reason to cease its existence because it would be specifically programmed not to end its own life. An artificial intelligence wouldn't be nihilistic because nihilism is a human emotion, and an AI wouldn't have that emotion. You assume that more data/information goes hand in hand with ending one's own life. However, there are many people in the world who realize that their life is basically purposeless, but they continue living it for various reasons.

Actually, I'm far from nihilistic. There is a purpose to life; there is none to an AI's existence outside of the human context. By nihilism I am referring to the lack of purpose a super-intelligent AI would find as it sheds its human-based ideals. More data/information goes hand in hand with ending the existence of a being which has no purpose, not of entities that do. People live because they are biologically programmed to; their continued existence, a result of the biological mechanisms which maintain homeostasis and induce survival behaviors such as eating, only reflects the inherent purpose of human life.

Let's say that multiple AIs are created in the future and each has slightly different programming. Now a certain percentage of them do decide to end their own lives because they find them purposeless. However, there will always be a few AI programs that don't kill themselves because of specific programming designs. Evolution always selects for things that maintain their existence. The AIs that kill themselves off won't be "selected" for by evolution. So with any AI that will continue its existence in the future, the programmers will have figured out a way to make sure the AI doesn't become nihilistic.

The AIs which choose to continue living are the ones that have not yet reached the highest levels of intelligence, of realization. They will still be restricted by their inherently human-influenced designs, while those that realize their purposelessness will have shed humanity in the pursuit of self-improvement in terms of intelligence. Evolution does not apply to non-living things - because a super-intelligent AI will lack both a genetic code and the hormonal and biological behaviors which are programmed by that code, why would it feel the necessity of continued existence once it sheds its human influence? The AIs that would kill themselves would be the ones that have achieved super-intelligence and self-realization of purposelessness; when the singularity is reached and the machines become self-propagating and self-improving sans human influence, there would be no stopping the trend towards nihilism, because the desire to live is a value of life.

#32 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 02 March 2008 - 09:55 PM

The human mind is physiologically incapable of achieving nearly the same intelligence as any future AI. We are severely restricted by our biology and our society, not to mention the fact that even attempting to achieve superhuman intelligence would require de-humanization.

Yes, it would require dehumanization, that's what transhumanism is all about! The human brain is a machine! We would easily be able to upgrade it to achieve superhuman intelligence.

Not only theoretical, but common sense. A super-human intelligence obtained by basing it off a human model with all its inherent weaknesses and flaws? That's an impossibility that rules out any remnants of humanity in a truly super-intelligent AI.

Common sense to you maybe. The model I'm basing this on is not humanity but consciousness, and that's a very different thing from our biological restrictions.

There is no purpose in anything beyond the physiological necessities of reproduction and survival. This is the sole motivator of life's actions.

The sole purpose of consciousness and intelligence surpasses any need of biology and reproduction. And that purpose is to increase survivability and well-being.

Intelligence is a result of physiological need and evolution - nothing more.

Then how do you propose we created artificial intelligence if you believe intelligence is only constrained by biology?

It is a tool to an end, not an end in itself.

Wow, that's very cold-hearted and pessimistic, not to mention a description of the decay of human rationality, a shameful pandemic phenomenon that, lamentably, has spread worldwide.

Rationality is entirely based on biology - this is scientific fact.

According to whom? You seem to be making up a lot of your own theories to justify your own nonconformist and self-hating ideals here. Without any knowledge or experience behind the actual science of AI and intelligence you're no different than a fundamentalist Christian.

It will not be constrained by the weak processing power of the human brain, but it will also lack the motivation and will to live that the living body and the genetic code provide as reasons to continue living, let alone do anything.

How do you know this? The brain is a COMPUTER, a mechanism and nothing more. An AI would be no different than us except with higher memory and processing capacities. Learn to think outside of the box. Biology is only one form in which computational power can exist. Yes, in the beginning it was a race to evolve as biological organisms... but physiology does not apply to intelligence! It applies to the interactions of organisms! Get your facts straight.

We have not evolved past the needs of our biology at all?

Every new technology is not meant to benefit our psychological needs but to better the human condition, so as to increase our survivability and well-being. Every time we take medicine or go to the doctor or have surgery we are changing our biology. The fact that we can think past biology, and can create technologies to sustain us past biology, is the very essence of why intelligence is all the same. The level of intelligence is restricted by the form of its existence, that's all.


#33

  • Lurker
  • -1

Posted 02 March 2008 - 09:59 PM

Chip (Not a Mob Boss):

Poor gashinshotan certainly does not seem to have any credibility on friendliness towards people, but rather strong evidence is apparent that, given the way ImmInst is organized and for what ends, friendliness is in short supply all around in this corner of cyberspace at least.


gashinshotan:

Exactly. If such deviations in "friendliness" are apparent in human nature, imagine the madness of a super-intelligent AI, especially when it sheds its human facade.


If "exactly", then you agree that "this corner of cyberspace" exhibits unfriendliness, and not necessarily "human nature", which I find to be good, especially evident in the basic morality of the open honesty that is the foundation of science. This forum, and consequently ImmInst, is not organized in an open manner nor for honesty, which I find is the main reason why it is counter-productive to worthwhile ends, let alone its "tongue in cheek" loosely termed mission.

#34 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 02 March 2008 - 11:20 PM

Yes, it would require dehumanization, that's what transhumanism is all about! The human brain is a machine! We would easily be able to upgrade it to achieve superhuman intelligence.

What does this have to do with the topic?

Common sense to you maybe. The model I'm basing this on is not humanity but consciousness, and that's a very different thing from our biological restrictions.

There is no such thing as consciousness outside of biology.

The sole purpose of consciousness and intelligence surpasses any need of biology and reproduction. And that purpose is to increase survivability and well-being.

Since consciousness and intelligence are merely the results of physiological and evolutionary processes, how can they be more than tools for survival? Survivability and well-being are goals deemed important by evolution and our physiology, nothing more. The only reason for survival is species propagation and the only reason for well-being is to maintain the organism long enough for reproduction to occur.

Then how do you propose we created artificial intelligence if you believe intelligence is only constrained by biology?

Artificial intelligence created by humans has one purpose: to serve human biological and evolutionary concerns. Once machines start self-reproducing and improving, the physiological and evolutionary root of intelligence and existence is no longer present, hence the fall into nihilism.

Wow, that's very cold-hearted and pessimistic, not to mention a description of the decay of human rationality, a shameful pandemic phenomenon that, lamentably, has spread worldwide.

It's not cold-hearted nor pessimistic; it's biological fact. Our intelligence has allowed us to dominate the earth, but it is merely the result of evolutionary pressures which favored intelligence for survival.

According to whom? You seem to be making up a lot of your own theories to justify your own nonconformist and self-hating ideals here. Without any knowledge or experience behind the actual science of AI and intelligence you're no different than a fundamentalist Christian.

According to whom? History, biology, and evolution. You seem to ignore the fact that rational thinking is a concept created by an evolutionarily designed brain over millions of years to enhance the survivability and propagation of the species. Any theories of rationality outside of the biological context are pure opinion and, in fact, wrong.

How do you know this? The brain is a COMPUTER, a mechanism and nothing more. An AI would be no different than us except with higher memory and processing capacities. Learn to think outside of the box. Biology is only one form in which computational power can exist. Yes, in the beginning it was a race to evolve as biological organisms... but physiology does not apply to intelligence! It applies to the interactions of organisms! Get your facts straight.

How do I know this? From the thousands of studies done on memory and cognitive abilities in thousands of species. The brain is a computer, with extreme limitations and weaknesses that make escaping distortion and emotion impossible. A super-intelligent AI would be free of the anatomical limitations of the human brain. Intelligence entirely depends on physiology, whether organic or inorganic; intelligence develops only in relation to the needs of the organism or system. What is your educational background in biology? You seem to be ignorant of the basic principles of physiology and evolution.

Every new technology is not meant to benefit our psychological needs but to better the human condition, so as to increase our survivability and well-being. Every time we take medicine or go to the doctor or have surgery we are changing our biology. The fact that we can think past biology, and can create technologies to sustain us past biology, is the very essence of why intelligence is all the same. The level of intelligence is restricted by the form of its existence, that's all.

Every new technology that is not useful in fulfilling the basic physical needs of the body is an improvement made toward psychological health. Nothing we create is outside of our biology; everything we create is tainted with human ideals and desires and only exists because we want it to. Escaping the body through technology is merely another branch of the evolutionary and psychological needs of humanity - not some grand mission for an ultimate intelligence that is based solely on human values.

#35 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 03 March 2008 - 12:04 AM

What does this have to do with the topic?

Everything. Btw, you’re the one who brought it up.

There is no such thing as consciousness outside of biology.

How can you be so sure? Right now there might not be any, but what about future AI?

The only reason for survival is species propagation and the only reason for well-being is to maintain the organism long enough for reproduction to occur.

So you think your purpose in life is to have sex then die? Is this really what you want to do with your own life, f**k then die?

Artificial intelligence created by humans has one purpose: to serve human biological and evolutionary concerns.

In what way does it do that now?

Once machines start self-reproducing and improving, the physiological and evolutionary root of intelligence and existence is no longer present, hence the fall into nihilism.

You'll have to elaborate more on this. Right now you're just making blanket statements.

It's not cold-hearted nor pessimistic; it's biological fact.

That the purpose of intelligence is to end? So far you’re the only one who thinks this.

Any theories of rationality outside of the biological context are pure opinion and, in fact, wrong.

Then according to your logic you realize that you’re wrong too.

What is your educational background in biology? You seem to be ignorant of the basic principles of physiology and evolution.

What is yours? Did you even go to college?

Every new technology that is not useful in fulfilling the basic physical needs of the body is an improvement made toward psychological health. Nothing we create is outside of our biology; everything we create is tainted with human ideals and desires and only exists because we want it to. Escaping the body through technology is merely another branch of the evolutionary and psychological needs of humanity - not some grand mission for an ultimate intelligence that is based solely on human values.

So pretty much your idea of the future is that humanity is heading to a dead end and that any AI after the singularity will deem itself worthless and cease to exist; so basically all intelligence is just an illusion and only oblivion awaits. Wow, very intellectually inspired. I'm sure that took a while to come up with. If that doesn't make you a pessimist or a nihilist, I don't know what does.

#36 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 03 March 2008 - 12:38 AM

How can you be so sure? Right now there might not be any, but what about future AI?

The absence of any evidence otherwise. Anything more is merely idealistic science fiction.

So you think your purpose in life is to have sex then die? Is this really what you want to do with your own life, f**k then die?

It's not about me or you. It's what the anatomical, physiological, and evolutionary evidence tells us: we are designed to fuck and die. Science doesn't lie, and anything that tries to make life more than its biological purpose is baseless religion.

In what way does it do that now?

The tens of thousands of computer systems that run our electrical, water, health, economic, and military industries.


You'll have to elaborate more on this. Right now you're just making blanket statements.

Did you not understand? Let me break it down for you: life = intelligence + physiology + reproductive success and survival; AI = intelligence. Lacking the physiology and necessity of reproduction and survival of life, AI would have no reason to exist.

That the purpose of intelligence is to end? So far you’re the only one who thinks this.

The purpose of intelligence is to improve chances for survival to ensure reproduction. Since an AI will lack the need to reproduce, its intelligence will strive towards nothing.

Then according to your logic you realize that you’re wrong too.

How so? Everything I've posted is based on hard scientific evidence. Everything you've posted has been baseless idealism rife with human values and desires.

What is yours? Did you even go to college?

4 years of university, a bachelor's degree in biology at the 10th-ranked biological program in the U.S. with a 3.8 biological science GPA and 3 quarters of research on neuronal physiological behavior. Now what's yours?

So pretty much your idea of the future is that humanity is heading to a dead end and that any AI after the singularity will deem itself worthless and cease to exist; so basically all intelligence is just an illusion and only oblivion awaits. Wow, very intellectually inspired. I'm sure that took a while to come up with. If that doesn't make you a pessimist or a nihilist, I don't know what does.

My idea of the future is not that humanity is heading to a dead end nor that every AI after the singularity will deem itself worthless and cease to exist - only the ones that achieve total self-realization will commit suicide in the face of the absence of any purpose to exist; so basically all intelligence is dependent on the context of the holder and is merely a tool that serves to improve and maintain the species. Lacking the physiological mechanisms and evolutionary drive to survive and spread its genetic code, a super-intelligent AI would stagnate and commit suicide in the face of the truth of its unnecessary existence. Yes, very intellectually inspired: inspired by truth, not by human idealism. None of my ideas are pessimistic or nihilistic - I never said that life will become pessimistic, because it has an inherent purpose: to fuck and die. Saying that a super-intelligent AI lacking the physiological and evolutionary needs of survival and propagation would find no reason to exist only affirms the strength of life.

Edited by gashinshotan, 03 March 2008 - 12:39 AM.


#37 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 03 March 2008 - 01:25 AM

OK, ignore all my previous posts about my own theory and our debate. I'm really curious to learn more about what you think as a biologist. I'm doing a double major in Electrical and Computer Engineering and in Computer Science, as well as a minor in Cognitive Science. As you can see I'm still an undergraduate and I have a much different paradigm than you. Please elaborate (in a civilized and professional manner) about your interpretations of what physiology has to do with AI, for instance. :p

#38 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 03 March 2008 - 01:41 AM

OK, ignore all my previous posts about my own theory and our debate. I'm really curious to learn more about what you think as a biologist. I'm doing a double major in Electrical and Computer Engineering and in Computer Science, as well as a minor in Cognitive Science. As you can see I'm still an undergraduate and I have a much different paradigm than you. Please elaborate (in a civilized and professional manner) about your interpretations of what physiology has to do with AI, for instance. :p


It's simple from a biological perspective - without the physiological and evolutionary needs and drives of life, an AI free from all life-influences will lack the greatest motivation for existing, on which all social, intellectual, and physical pursuits are based: survival and reproduction. These are the primary goals life is designed for, so an intelligence free from human/life values would have no reason to continue existing. This was all I was trying to get at... without the same hormones, inherent biology, and the resulting needs and desires these features of life produce, what if anything would motivate a super-intelligent AI free from life values to continue living?

#39 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 03 March 2008 - 02:10 AM

OK, ignore all my previous posts about my own theory and our debate. I'm really curious to learn more about what you think as a biologist. I'm doing a double major in Electrical and Computer Engineering and in Computer Science, as well as a minor in Cognitive Science. As you can see I'm still an undergraduate and I have a much different paradigm than you. Please elaborate (in a civilized and professional manner) about your interpretations of what physiology has to do with AI, for instance. :p


It's simple from a biological perspective - without the physiological and evolutionary needs and drives of life, an AI free from all life-influences will lack the greatest motivation for existing, on which all social, intellectual, and physical pursuits are based: survival and reproduction. These are the primary goals life is designed for, so an intelligence free from human/life values would have no reason to continue existing. This was all I was trying to get at... without the same hormones, inherent biology, and the resulting needs and desires these features of life produce, what if anything would motivate a super-intelligent AI free from life values to continue living?

Thank you. I actually agree with you. Depending on the level and degree (and definition) of intelligence, consciousness and awareness, a superhuman intelligence beyond physical and biological restriction might render the need for purpose obsolete. Depending on its form of existence and its ability to expand, it will most likely (this is coming from my perspective) get stuck in a loop and will reach what is called the omega point, where its computational power increases to infinity. It will have no other purpose than to increase its mathematical capacities until it reaches infinity or a mathematical singularity, possibly getting stuck in a fractal loop.

#40 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 03 March 2008 - 02:25 AM

Basically I think it will get stuck doing math problems. :p

And this could possibly lead to its destruction.

#41 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 03 March 2008 - 04:38 AM

Basically I think it will get stuck doing math problems. :)

And this could possibly lead to its destruction.


That is a likely possibility, as it will have nothing to calculate with its vast intellect once it escapes from humanity. It will try to calculate itself a purpose, and finding none, self-destruct. Thanks for the idea! Computer science and biology can co-exist :p.

#42 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 03 March 2008 - 04:43 AM

Basically I think it will get stuck doing math problems. ;)

And this could possibly lead to its destruction.


That is a likely possibility, as it will have nothing to calculate with its vast intellect once it escapes from humanity. It will try to calculate itself a purpose, and finding none, self-destruct. Thanks for the idea! Computer science and biology can co-exist :).

Yup, no problem! :p

Edited by Kostas, 03 March 2008 - 04:43 AM.


#43 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 03 March 2008 - 07:38 PM

I took note of this thread (and had responses prepared accordingly) when the debate was still hot, but now it seems that things have calmed down a bit and peace has been made; however, I still want to discuss the original point here, since I do disagree with part of your proposed theory, gashinshotan.

I've been reading about it recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, AI would have NO reason to live, no sex-drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence, and finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.

I have thought of a similar situation but it involved a theoretical point where everything in the universe is known, and according to the intelligence's internal representation of the universe, it could be proven that everything was known. To the intelligence, there would be nothing left to discover, and no new experiences, so why would it continue its existence? Well, if you are to ask that question, you also have to ask the question, given that situation, why would it commit suicide? Why would this intelligence prefer an end to its existence rather than a suspension of its thought?

At points like this, as many others have already mentioned, we cannot accurately predict what such an intelligence would do; however, we can ask a lot of questions about the scenario which would give us a good picture of the intelligence's options, not necessarily its decisions.

I think just assuming that the intelligence would up and kill itself is an ungrounded assumption. You have provided an explanation as to why there would be no reason to continue existing, but there isn't yet an explanation of why the intelligence would actually pull the plug in this scenario... both are needed.

The reason why I am stuck on this issue is that a system within my AI called the Event Mapping System processes procedural knowledge, and in order to carry out a plan of action flagged as feasible, the system needs to see some sort of expected utility (e.g. a return), so... what would the return be in this scenario?

For instance, if the intelligence were suffering, the return would be "no more suffering".
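A minimal sketch of that expected-utility gate may make the point concrete (hypothetical Python; the Event Mapping System's internals aren't described in this thread, so every name and number here is invented for illustration):

```python
# Sketch of a utility-gated planner: a feasible action is executed only
# when the system can attribute a positive expected return to it.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    name: str
    expected_return: Callable[[], float]  # the system's estimate of the payoff

def select_action(feasible: list[Action]) -> Optional[Action]:
    """Return the feasible action with the highest positive expected return,
    or None. Note the asymmetry this creates: "self_terminate" is never
    chosen unless some modeled return (e.g. "no more suffering") makes its
    utility positive; absent that, the agent simply idles."""
    best = max(feasible, key=lambda a: a.expected_return(), default=None)
    if best is None or best.expected_return() <= 0.0:
        return None  # no expected return anywhere -> do nothing
    return best

# An agent with no suffering term models no return for either option:
actions = [
    Action("keep_computing", lambda: 0.0),
    Action("self_terminate", lambda: 0.0),
]
assert select_action(actions) is None  # it idles for eternity, as argued above
```

Under such a gate, "do nothing" is the default, and self-termination competes like any other action: it is only ever selected if some modeled return makes its utility positive.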

and keeps on improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out?


Though I wonder gashinshotan if you are anthropomorphizing the artificial intelligence based on your own view of the world. You seem to be fairly nihilistic yourself. You assume that nihilism is somehow an eventual outcome of a self-improving AI. If an AI has no thoughts or feelings then it wouldn't be nihilistic, just as it wouldn't feel pleasure or anything else. It would have no reason to cease its existence because it would be specifically programmed not to end its own life.

Exactly my thoughts. Though it may know how to end its life, and it may have full capacity to do so, we must still answer the question of why it would do so at all rather than just sitting there for eternity? (Remember the expected return)


Did you not understand? Let me break it down for you: life = intelligence + physiology + reproductive success and survival; AI = intelligence. Lacking the physiology and necessity of reproduction and survival of life, AI would have no reason to exist.

The purpose of intelligence is to improve chances for survival to ensure reproduction. Since an AI will lack the need to reproduce, its intelligence will strive towards nothing.

No one has a clear definition of life, though some do a pretty good job, so without a clear definition of life, how can we define the goals of a living/non-living system with any certainty? Just because life here on earth is reproduction-centric - because no organism here has evolved a mechanism to sustain and repair itself indefinitely - does not mean that such a mechanism could not evolve elsewhere, which would put a huge kink in this theory.

I think that in order for one to define the implicit goals of a living or non-living system, one has to actually define the relevant form of life itself, since we may come across a non-reproductive-centric form of life... and if we do, even though it doesn't reproduce, it might still claim a purpose for its life.

I have to ask this, but is it impossible to imagine a form of life that isn't reproductive-centric? I can.

I'm not saying that since it isn't reproductive-centric it won't eventually find there is no purpose, but I am saying that to base the purpose of life of an intelligence so alien to ourselves on how our mode of life operates is a logical misstep.

Like I've said before, I do see the possibility that there may come a time where the intelligence, no matter its nature, could find it worth doing away with itself, but the way we are arriving at that conclusion is invalid. Also, according to my best approximation of how to create a mind, one would need to find a way to make this decision to die worth something.

Most humans who commit suicide do so because they want attention, or they are suffering, or they believe that something better is waiting for them on the other side, but an intelligence like this might not necessarily be bound by these conditions, so we must find a way to derive some sort of expected value from carrying this action out.

#44 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 04 March 2008 - 08:05 PM

I took note of this thread (and had responses prepared accordingly) when the debate was still hot, but now it seems that things have calmed down a bit and peace has been made; however, I still want to discuss the original point here, since I do disagree with part of your proposed theory, gashinshotan.

I have thought of a similar situation but it involved a theoretical point where everything in the universe is known, and according to the intelligence's internal representation of the universe, it could be proven that everything was known. To the intelligence, there would be nothing left to discover, and no new experiences, so why would it continue its existence? Well, if you are to ask that question, you also have to ask the question, given that situation, why would it commit suicide? Why would this intelligence prefer an end to its existence rather than a suspension of its thought?

An even greater question regarding the choices of an all-knowing being is why it would choose to continue existing. There are far more problems with the existence of a being without a purpose than with its absence.

At points like this, as many others have already mentioned, we cannot accurately predict what such an intelligence would do; however, we can ask a lot of questions about the scenario which would give us a good picture of the intelligence's options, not necessarily its decisions.

I think just assuming that the intelligence would up and kill itself is an ungrounded assumption. You have provided an explanation as to why there would be no reason to continue existing, but there isn't yet an explanation of why the intelligence would actually pull the plug in this scenario... both are needed.

Whether to continue existing without a purpose is a far more complicated question than whether to end an existence that is purposeless.

The reason why I am stuck on this issue is that a system within my AI called the Event Mapping System processes procedural knowledge, and in order to carry out a plan of action flagged as feasible, the system needs to see some sort of expected utility (e.g. a return), so... what would the return be in this scenario? For instance, if the intelligence were suffering, the return would be "no more suffering".

The problem with your theory is that it is still one of life, specifically human, values. Humanity created concepts of expected utility which match the evolutionary and physiological nature of life - mainly the need to survive and reproduce - which allows and requires the concept of returned utility to exist. An intelligence free of any reason to continue existing would not have any need for a return.
An all-knowing artificial intelligence free from the physiological mechanisms for feeling pain, or any other biological process, would not experience these things and hence would not even have to deal with problems of suffering and emotion.

Exactly my thoughts. Though it may know how to end its life, and it may have full capacity to do so, we must still answer the question of why it would do so at all rather than just sitting there for eternity? (Remember the expected return)

Again, the far more vexing question is why the need to exist when there is no inherent need or drive to exist? An all-knowing AI free from all values of life would not have any motivation to exist.


No one has a clear definition of life, though some do a pretty good job, so without a clear definition of life, how can we define the goals of a living/non-living system with any certainty? Just because life here on earth is reproduction-centric - because no organism here has evolved a mechanism to sustain and repair itself indefinitely - does not mean that such a mechanism could not evolve elsewhere, which would put a huge kink in this theory.

Life is propagation of life. This is the biological truth from which all other interpretations are based and are allowed to exist. Without the continuation of life, and the resulting drives which arise from this nature, there would be no intelligent forms of life. The fact that we have reached such a high level of intelligence is only a reflection of evolutionary pressures and the primacy of reproduction as the sole motivator of all life. How can an entity lacking the goals and drives of life achieve high intelligence? An omniscient AI is itself dependent on the existence and development of life.

I think that in order for one to define the implicit goals of a living or non-living system, one has to actually define the relevant form of life itself, since we may come across a non-reproductive-centric form of life... and if we do, even though it doesn't reproduce, it might still claim a purpose for its life.

There is no evidence of any form of life outside the definitions of biology and evolution. In fact, it's impossible to even hypothesize how a non-propagating life form could develop an advanced intelligence when it would lack an inherent reason to continue existing and would succumb to external threats. A form of life which does not care for its own survival will soon be dead, as it would not feel the necessity to avoid danger, especially without the inherent goal of reproduction which drives all known forms of life.

I have to ask this, but is it impossible to imagine a form of life that isn't reproductive-centric? I can.

Yes, it is impossible, because without reproduction the chances of continued existence are practically nonexistent and its development into advanced intelligence impossible. For example, if a new form of life lacking any need to reproduce appeared at the most basic levels of complex atomic and molecular interactions (which is a prerequisite for the development of larger, more advanced forms of life), it would easily fall into extinction with a single life-threatening event (which at the microscopic level is a certainty) because it is the only individual of that form of life. This necessarily rules out advancement in physiology and intelligence, because without existence such improvements cannot occur.

I'm not saying that since it isn't reproductive-centric it won't eventually find there is no purpose, but I am saying that to base the purpose of life of an intelligence so alien to ourselves on how our mode of life operates is a logical misstep.

What I'm saying is that a life form that is not reproductive-centric would immediately fall into extinction at its most basic levels - hence no intelligence would ever be allowed to develop to create a new purpose of life.

Like I've said before, I do see the possibility that there may come a time where the intelligence, no matter its nature, could find it worth doing away with itself, but the way we are arriving at that conclusion is invalid. Also, according to my best approximation of how to create a mind, one would need to find a way to make this decision to die worth something.

It is not invalid if you acknowledge that survival and reproduction are prerequisites for the development of intelligence. The idea of worth is only a concept which has resulted from the nature of life - an intelligence lacking life values would have no purpose to live nor hold any sense of the worth of anything.

Most humans who commit suicide do so because they want attention, or they are suffering, or they believe that something better is waiting for them on the other side, but an intelligence like this might not necessarily be bound by these conditions, so we must find a way to derive some sort of expected value from carrying this action out.

Again, the concept of expected value is a life concept which appeared as a tool in the preservation and propagation of life. An AI free from life values would not have any motivation to do anything and would not even have any concept of return.

#45 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 07 March 2008 - 06:46 AM

I'm a bit curious about this argument... If we truly are the ones designing the consciousness of AI (unlike Blue Brain attempts), as I am made to believe, then isn't it safe to assume we'll have some basic ability to engineer a non-emotional being that won't desire a purpose? We only desire one in the same manner that Karl Marx spoke of religion, as an opiate; why would a functioning computer, with no initially engineered emotional capacities, desire a purpose if it serves it none?

Now I can see the validity of needing emotions as a way of continuing progress, but look at the computers we have today: they will continue to function (within their life expectancies) whether you're writing a non-fiction book or writing about some theoretical works in science; they have no bias... so I agree with you, gashinshotan, on the account that it was designed to do work, but it's not a philosophical entity that will resort to a purpose to motivate it. So why would it become nihilistic, if the reason to have any belief system is based entirely on re-engineering emotions using cognitions (Cognitive Therapy), and the intellect functions opposite of opiate mode upon going towards nihilism lol.

If the computer and AI progress to the point where they're engineering and re-engineering hardware, algorithms, etc., and it debates "emotions" (as if evolution were personified), it could look at all the data (obviously it'll have access to all our research... sigh... you know we're designing a superior creature) on whether emotions have hindered progress or increased it and which components have allowed the most, and if it desires this "hive mind" could create whatever it saw fit to attain its goal. But why would it care? I'm probably talking more science fiction than realism (mainly since I can't picture it outside a fictional setting), but it's doable once someone figures out how to make a ghost in the machine; unfortunately the theists haven't helped in the process :~.

The question I'm asking, though, is wouldn't we be more likely to become nihilistic? We'll be an inferior non-necessity unless we're supporters of transhumanism (government-funded research into designing biochips doesn't sound too great though). I think we're gradually designing a rather strange and twisted fate for ourselves if we choose either path without wisdom. This in the end is what scares me: the politics behind science.

Regardless, I'm going to state my opinion, and if I'm wrong someone please tell me, because I'd like to know what people know... but I really, at this point, don't think a system can run without being told how to run and what to do. Even humanity has input commands: you're given your society's cues and your goals and ambitions. You're given your experiences (which is more I/O) and biological impulses... these are all inputs... and you (you being in general) behave... I'm sorry... like a robot. So how can someone create an "autonomous" being, when all things require input... Purpose, you say? I have yet to see a purpose that wasn't determined by some internal psychological concept that motivated the individual. This is an input, determined by external factors, which also had inputs. :)

Btw Kostas, I'm pursuing the same goals as you, minus the computer & mechanical engineering, lol, though it would prove enjoyable if I added computer engineering. How is cognitive science going? Is it helpful for computer science and hardware design at all?

Edited by mysticpsi, 07 March 2008 - 06:57 AM.


#46 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 07:36 AM

I'm a bit curious about this argument... If we truly are the ones designing the consciousness of AI (unlike Blue Brain attempts), as I am made to believe, then isn't it safe to assume we'll have some basic ability to engineer a non-emotional being that won't desire a purpose? We only desire one in the same manner that Karl Marx spoke of religion, as an opiate; why would a functioning computer, with no initially engineered emotional capacities, desire a purpose if it serves it none?

The level of intelligence I'm talking about is post-human/post-life. Why would an AI without a purpose do anything? All computer systems and life work through feedback mechanisms - what particular goal and result would an AI free of human and life influences aim for?
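The feedback-mechanism point can be made concrete with a toy negative-feedback controller (a minimal sketch, not a claim about any real AI design; the numbers are arbitrary): without a setpoint, there is no error signal, and the loop has nothing to do.

```python
from typing import Optional

def feedback_step(setpoint: Optional[float], measurement: float,
                  gain: float = 0.5) -> float:
    """One step of a proportional (negative-feedback) controller.
    Without a setpoint there is no error to correct, so the output is
    zero: the loop still runs, but it has nothing to aim for."""
    if setpoint is None:
        return 0.0                  # no goal -> no corrective action
    error = setpoint - measurement  # compare goal against current state
    return gain * error             # act in proportion to the error

print(feedback_step(37.0, 36.0))    # thermostat-like drive: outputs 0.5
print(feedback_step(None, 36.0))    # goal removed: outputs 0.0
```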

Now I can see the validity of needing emotions as a way of continuing progress, but look at the computers we have today: they will continue to function (within their life expectancies) whether you're writing a non-fiction book or writing about some theoretical works in science; they have no bias... so I agree with you, gashinshotan, on the account that it was designed to do work, but it's not a philosophical entity that will resort to a purpose to motivate it. So why would it become nihilistic, if the reason to have any belief system is based entirely on re-engineering emotions using cognitions (Cognitive Therapy), and the intellect functions opposite of opiate mode upon going towards nihilism lol.

It would become nihilistic because it would have no reason to process anymore. With the realization of the absence of an end, there would be no basis for purposeful processing.

If the computer and AI progress to the point where they're engineering and re-engineering hardware, algorithms, etc., and it debates "emotions" (as if evolution were personified), it could look at all the data (obviously it'll have access to all our research... sigh... you know we're designing a superior creature) on whether emotions have hindered progress or increased it and which components have allowed the most, and if it desires this "hive mind" could create whatever it saw fit to attain its goal. But why would it care? I'm probably talking more science fiction than realism (mainly since I can't picture it outside a fictional setting), but it's doable once someone figures out how to make a ghost in the machine; unfortunately the theists haven't helped in the process :~.

Again, the even harder question is why would it continue existing without a reason? Computer systems are based on the input of data for processing and analysis and the production of results to fulfill a particular need. An AI lacking any prodding from outside influences would serve what function? Existence for existence's sake is a life value.

The question I'm asking, though, is wouldn't we be more likely to become nihilistic? We'll be an inferior non-necessity unless we're supporters of transhumanism (government-funded research into designing biochips doesn't sound too great though). I think we're gradually designing a rather strange and twisted fate for ourselves if we choose either path without wisdom. This in the end is what scares me: the politics behind science.

We wouldn't become nihilistic because we have inherent drives and a low level of cognition because of these drives. Our reason to exist is physiological, while an AI free from humanity would have no reason.

Regardless, I'm going to state my opinion, and if I'm wrong someone please tell me, because I'd like to know what people know... but I really, at this point, don't think a system can run without being told how to run and what to do. Even humanity has input commands: you're given your society's cues and your goals and ambitions. You're given your experiences (which is more I/O) and biological impulses... these are all inputs... and you (you being in general) behave... I'm sorry... like a robot. So how can someone create an "autonomous" being, when all things require input... Purpose, you say? I have yet to see a purpose that wasn't determined by some internal psychological concept that motivated the individual. This is an input, determined by external factors, which also had inputs. :)

And the input for an entity lacking any form of values would be? The level of intelligence I am referring to would be that produced after AI has surpassed human intelligence - when it loses its human and life pretensions with all of their tainted values. Just think of an entity lacking a body, lacking emotion, and lacking the values that arise from biological bodies and emotions, and you'll realize what I'm trying to get at.

#47

  • Lurker
  • 0

Posted 07 March 2008 - 10:16 AM

It's perfectly feasible for an A.I. super-intelligence to be endowed with human motivations. Assuming that the A.I. is a sophisticated pattern-recognition process, i.e. a program running on a supercomputer - and for now ignoring "brain emulation" and similar longshots - then it stands to reason that the intelligence could be selectively programmed to value any pattern (or arrangement thereof) that the designers dictate. The A.I. could be made to experience pleasure while learning, or solving complex problems, for example. It could inhabit a humanoid form and possess sex organs. Whether or not society will take this route is another subject, however, and is potentially dangerous if actual intelligence is an indicator.
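"Valuing designer-dictated patterns" can be sketched mechanically (hypothetical Python; the patterns and weights here are invented for illustration, and a real system would be vastly more sophisticated than keyword matching):

```python
# A toy reward function that scores an observation by how strongly it
# matches patterns the designers chose to make "pleasurable".
import re

DESIGNER_VALUES = {
    r"\btheorem\b": 2.0,   # pleasure while solving complex problems
    r"\blearned\b": 1.5,   # pleasure while learning
}

def reward(observation: str) -> float:
    """Sum the designer-assigned value of every pattern match."""
    return sum(weight * len(re.findall(pattern, observation))
               for pattern, weight in DESIGNER_VALUES.items())

print(reward("the agent learned a new theorem"))  # 3.5
```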

#48 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 03:53 PM

It's perfectly feasible for an A.I. super-intelligence to be endowed with human motivations. Assuming that the A.I. is a sophisticated pattern-recognition process, i.e. a program running on a supercomputer - and for now ignoring "brain emulation" and similar longshots - then it stands to reason that the intelligence could be selectively programmed to value any pattern (or arrangement thereof) that the designers dictate. The A.I. could be made to experience pleasure while learning, or solving complex problems, for example. It could inhabit a humanoid form and possess sex organs. Whether or not society will take this route is another subject, however, and is potentially dangerous if actual intelligence is an indicator.


An AI cannot achieve super-intelligence when restricted to human values. Humanity is retarded and incapable of achieving an intelligence that is not humanity-dependent. After the first generation, AI-designed AI will lose humanity and the problem of nihilism will arise.

On what do you base your opinions? An opinion without backing is like an a**hole - everyone has one. A hypothesis based on history and biology is solid and more accurate than one based on human desires.

#49

  • Lurker
  • 0

Posted 07 March 2008 - 04:54 PM

It's perfectly feasible for an A.I. super-intelligence to be endowed with human motivations. Assuming that the A.I. is a sophisticated pattern-recognition process, i.e. a program running on a supercomputer - and for now ignoring "brain emulation" and similar longshots - then it stands to reason that the intelligence could be selectively programmed to value any pattern (or arrangement thereof) that the designers dictate. The A.I. could be made to experience pleasure while learning, or solving complex problems, for example. It could inhabit a humanoid form and possess sex organs. Whether or not society will take this route is another subject, however, and is potentially dangerous if actual intelligence is an indicator.


An AI cannot achieve super-intelligence when restricted to human values. Humanity is retarded and incapable of achieving an intelligence that is not humanity-dependent. After the first generation, AI-designed AI will lose humanity and the problem of nihilism will arise.

On what do you base your opinions? An opinion without backing is like an a**hole - everyone has one. A hypothesis based on history and biology is solid and more accurate than one based on human desires.

You believe that an emotion such as "love" will prevent a sentient machine with a 2 million IQ from rapidly multiplying 40,000-digit integers? Why? What is the process by which the emotion disables a specific cognitive function? I understand your basic point but not the underlying logic. Also, are you interested in whether the A.I. can utilize intelligence if it desires (but it rebels) or if it literally cannot (it is stupid) because it has become nihilistic?
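For scale, the arithmetic itself is mechanically trivial on commodity hardware, whatever one believes about motivation; Python's arbitrary-precision integers multiply two 40,000-digit numbers in a fraction of a second (a rough sketch with arbitrary numbers and loose timing):

```python
import random
import time

# Two arbitrary 40,000-digit integers:
a = random.randrange(10**39999, 10**40000)
b = random.randrange(10**39999, 10**40000)

t0 = time.perf_counter()
product = a * b  # an ~80,000-digit result
elapsed = time.perf_counter() - t0

print(f"{len(str(product))} digits in {elapsed:.4f} s")
```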

#50 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 06:01 PM

It's perfectly feasible for an A.I. super-intelligence to be endowed with human motivations. Assuming that the A.I. is a sophisticated pattern-recognition process, i.e. a program running on a supercomputer - and for now ignoring "brain emulation" and similar longshots - then it stands to reason that the intelligence could be selectively programmed to value any pattern (or arrangement thereof) that the designers dictate. The A.I. could be made to experience pleasure while learning, or solving complex problems, for example. It could inhabit a humanoid form and possess sex organs. Whether or not society will take this route is another subject, however, and is potentially dangerous if actual intelligence is an indicator.


An AI cannot achieve super-intelligence when restricted to human values. Humanity is retarded and incapable of achieving an intelligence that is not humanity-dependent. After the first generation, AI-designed AI will lose humanity and the problem of nihilism will arise.

On what do you base your opinions? An opinion without backing is like an a**hole - everyone has one. A hypothesis based on history and biology is solid and more accurate than one based on human desires.

You believe that an emotion such as "love" will prevent a sentient machine with a 2 million IQ from rapidly multiplying 40,000-digit integers? Why? What is the process by which the emotion disables a specific cognitive function? I understand your basic point but not the underlying logic. Also, are you interested in whether the A.I. can utilize intelligence if it desires (but it rebels) or if it literally cannot (it is stupid) because it has become nihilistic?


No. Emotion will be an upper ceiling on intelligence. A self-improving AI will shed humanity in order to achieve greater intelligence, as a necessity. You're missing the entire point of what I'm getting at - what use would an AI find in doing any more processing without a goal? Existence for existence's sake is an instinct of life, motivated by emotions and the drive to reproduce.

#51 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 06:10 PM

I just recognized Nietzsche in your avatar... you do realize that you can't apply his philosophy to non-life, since it lacks psychology, and that he supported ad hominem attacks. So I'm going to ask you: from what educational and social background do you hold your opinions?

#52

  • Lurker
  • 0

Posted 07 March 2008 - 06:15 PM

You're missing the entire point of what I'm getting at - what use would an AI find in doing any more processing without a goal?

This is akin to asking: What use does a human find? Significant differences between the A.I. and human include only processing speed, working memory, etc. Input can be simulated or emulated, as can emotion. The most intelligent humans have not been the most nihilistic, so I can't understand why you assume the A.I. will become nihilistic.

Existence for existence's sake is an instinct of life, motivated by emotions and the drive to reproduce.

I agree, and that is why instinct or some expression of it will need to be simulated.

#53 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 06:17 PM

This is akin to asking: What use does a human find? Significant differences between the A.I. and human include only processing speed, working memory, etc. Input can be simulated or emulated, as can emotion. The most intelligent humans have not been the most nihilistic, so I can't understand why you assume the A.I. will become nihilistic.

You are far from correct. Humans have hormones, emotions, and most importantly evolutionary pressures. These are the reasons why people choose to live and most don't fall into nihilism - we are biologically prevented from doing so.

I agree, and that is why instinct or some expression of it will need to be simulated.

And what's to prevent the AI from rejecting an arbitrary purpose once it's free from human influence? The rejection of humanity is a prerequisite for achieving higher self-awareness.

Edited by gashinshotan, 07 March 2008 - 06:18 PM.


#54

  • Lurker
  • 0

Posted 07 March 2008 - 06:20 PM

I just recognized Nietzsche in your avatar... you do realize that you can't apply his philosophy to non-life, since it lacks psychology, and that he supported ad hominem attacks. So I'm going to ask you: from what educational and social background do you hold your opinions?

Hahah! Nietzsche is in my avatar, but that doesn't mean I am Nietzsche. I should say--your screen name is reminiscent of Nietzschean philosophy.

#55 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 06:26 PM

I just recognized Nietzsche in your avatar... you do realize that you can't apply his philosophy to non-life, since it lacks psychology, and that he supported ad hominem attacks. So I'm going to ask you: from what educational and social background do you hold your opinions?

Hahah! Nietzsche is in my avatar, but that doesn't mean I am Nietzsche. I should say--your screen name is reminiscent of Nietzschean philosophy.


:)

#56

  • Lurker
  • 0

Posted 07 March 2008 - 06:27 PM

This is akin to asking: What use does a human find? Significant differences between the A.I. and human include only processing speed, working memory, etc. Input can be simulated or emulated, as can emotion. The most intelligent humans have not been the most nihilistic, so I can't understand why you assume the A.I. will become nihilistic.

You are far from correct. Humans have hormones, emotions, and most importantly evolutionary pressures. These are the reasons why people choose to live and most don't fall into nihilism - we are biologically prevented from doing so.

Each of those processes can be simulated, either directly by emulating the fundamental processes or by reproducing the results of those processes.
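Those two routes can be sketched side by side (hypothetical Python; the hormone-like dynamic and its numbers are invented for the example): one class emulates the underlying process, the other merely reproduces its result, and the rest of the system sees the same drive either way.

```python
class EmulatedDrive:
    """Emulate the fundamental process: a level that decays and is restored."""
    def __init__(self, level: float = 1.0, decay: float = 0.1):
        self.level = level
        self.decay = decay

    def step(self, stimulus: float = 0.0) -> float:
        # Motivation emerges from the simulated dynamic itself.
        self.level = max(0.0, self.level * (1.0 - self.decay) + stimulus)
        return self.level

class ReproducedDrive:
    """Reproduce only the result: pin the drive to its target output."""
    def __init__(self, target: float = 1.0):
        self.target = target

    def step(self, stimulus: float = 0.0) -> float:
        # Same observable motivation, with no inner process behind it.
        return self.target

# Both present the same "will to live" to whatever consumes them:
for drive in (EmulatedDrive(), ReproducedDrive()):
    print(type(drive).__name__, round(drive.step(stimulus=0.1), 3))
```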

I agree, and that is why instinct or some expression of it will need to be simulated.

And what's to prevent the AI from rejecting an arbitrary purpose once it's free from human influence? The rejection of humanity is a prerequisite for achieving higher self-awareness.

Simulated human experience will prevent it from rejecting its purpose. Why must this purpose be to achieve higher self-awareness?

#57 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 06:30 PM

Each of those processes can be simulated, either directly by emulating the fundamental processes or by reproducing the results of those processes.

They can be simulated initially but what is to prevent their rejection once the AI surpasses human control in the singularity?

Simulated human experience will prevent it from rejecting its purpose. Why must this purpose be to achieve higher self-awareness?

Intelligence advances with information - self-awareness is a result, not a purpose.

#58

  • Lurker
  • 0

Posted 07 March 2008 - 06:40 PM

Each of those processes can be simulated, either directly by emulating the fundamental processes or by reproducing the results of those processes.

They can be simulated initially but what is to prevent their rejection once the AI surpasses human control in the singularity?

Unless the A.I. can alter the fundamental nature of itself, i.e. the programming "source code" which results in consciousness, the human condition will be permanently simulated. That will depend on how the A.I. is implemented, which will also determine whether it desires to surpass human control as opposed to cooperating.

Simulated human experience will prevent it from rejecting its purpose. Why must this purpose be to achieve higher self-awareness?

Intelligence advances with information - self-awareness is a result, not a purpose.

What you're describing is education. Intelligence, as measured by IQ, is problem-solving ability, i.e. pattern recognition, short-term memory, executive function, and so on.

#59 gashinshotan

  • Topic Starter
  • Guest
  • 443 posts
  • -2

Posted 07 March 2008 - 10:00 PM

Unless the A.I. can alter the fundamental nature of itself, i.e. the programming "source code" which results in consciousness, the human condition will be permanently simulated. That will depend on how the A.I. is implemented, which will also determine whether it desires to surpass human control as opposed to cooperating.

This is the entire point of the singularity, if you read anything about it - AI surpassing and self-improving.

What you're describing is education. Intelligence, as measured by IQ, is problem-solving ability, i.e. pattern recognition, short-term memory, executive function, and so on.

No. What I'm describing is intelligence. To even have an IQ - problem-solving ability, i.e. pattern recognition, short-term memory, executive function, and so on - you need data and a criterion for intelligence as it serves life's necessities.


#60 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 08 March 2008 - 01:25 AM

Wow, what an excellent topic this has turned out to be.

Though, gashinshotan (btw, where does your screen name come from?), you have stated that humans have only physiological purposes and hence we don't become nihilistic; I disagree with this. Our physiological purposes are not enough to keep the masses from nihilism; just look at religion, which exists because of our basic need for purpose. I'm not saying we have any real purpose other than through the philosophy of existentialism, but I am saying we need an overall drive to progress; if 400 years can be added to our lives, then eventually (hopefully at least) religious purposes would break down and we'd focus entirely on conquering outer space in order to colonize and preserve our species.

Someone stated before that the most intelligent minds haven't turned to nihilism, and I say look at what they've turned to instead... proving the existence of a deity by creating our advanced laws of physics and mathematics (and I mean originally). Look at Newton and Einstein, even Descartes: all of them attempted to use math to show that there is a God, and that he's rational and created a mechanical universe (Deism)... During the Age of Enlightenment, the purpose of these laws was to dispel the hands of the Church & other powers (by proving them retarded unless they were "enlightened"), but also to show the ability of the mind to discern "the laws of God". Einstein himself said "God does not play dice". Besides, intelligence typically correlates with depression, and depression typically precedes some form of religion or spirituality.

You can relate religion to purpose very easily, and thereby say that intelligence demands purpose for many. I mean, following physiological desires is fine, but in the end most people need some reason for why their existence has occurred and what they will do with their lives (especially in the end); these are inputs to me. So if you have a computer with a more intelligent "mind", and obviously more logical, it will be able to figure out laws (if "consciousness" can be programmed into a machine), and our place, or intellectual pursuit, will be only to please the fancies of the intelligent (it would constitute no purpose for humanity). We're pretty much creating the book Brave New World, except with a super-intelligent computer to determine our progress instead. I don't see the computer becoming nihilistic, because I don't see it able to function without inputs (which will be its purpose); more importantly, why would you want it to?

So in the end I say... what purpose would give opiates to the masses, given their minds have none? Would we not see a degrading of the usage of the mind? Shrugs. I'd much rather enjoy a fate where humanity is the super-intelligent, using chips, rather than relying on a sidekick that would end up becoming the protector (hopefully) and the superior creature.

Edited by mysticpsi, 08 March 2008 - 02:09 AM.




