  LongeCity
              Advocacy & Research for Unlimited Lifespans





Singularity Cults and other sabotage to paradise


16 replies to this topic

#1 Mangala

  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 04 June 2003 - 03:29 AM


I was watching the Animatrix the other day (a great collection of short movies dealing with the Matrix movie; the animation will blow your mind) when it suddenly struck me how not so impossible the Matrix story may become. I had always assumed the Matrix storyline was extremely stupid. There were so many gaping holes in the logic that these computers used, the most fundamental being: why did the machines even keep humans alive? Why could they not just use nuclear fusion like any normal intelligence? Then I suddenly put the Matrix story in context with our society, and a few questions seemed to click with answers.

One of the central questions I ask in my quest to build the SI one day is this: if the superintelligence is so much more intelligent, and so radically different from anything our eyes have ever seen or come in contact with, how are we ever supposed to believe that we can ensure our safety and will over this creature simply by using our set of logic rules? That is, how are we supposed to control the SI when it could instantly realize there is no need to let the humans dictate anything, as they are simple ants? Then I suddenly realized that this scenario could only happen if we were dealing with something so intelligent as to reach a totally new plane of thought. But what if this creature were not so amazingly godlike in ver decision-making and learning? What if this machine were to become what most science fiction films and TV shows like to label all robots: anthropomorphic calculators?

What's more terrifying than an SI that we cannot control is an AI we cannot control. At least with an SI, we know that it is probably doing the right thing because it has thought well ahead of what we could ever think of. It's almost like trusting in God, whatever his will. But with an AI that has not even reached godlike status, anything is possible. Any machine could rise up and start killing humans for no other reason than some simplistic understanding that humans are not efficient when it comes to living for long periods of time. An AI could have emotions and get mad at people or be happy with something that humanity is not happy with. An AI could personify the worst aspects of a human and have all the resources to obliterate the entire solar system at the same time.

What I have realized about the Matrix is that, although humans would never make a better power source than even a good coal heater, the same set of events could occur in the next, say, 150 years, if we Singularitarians are not completely serious about making sure we protect ourselves from orchestrating our own demise.

After reading some of the entries on the Raelian movement, I have realized one way we could be so stupid as to let bad intelligence ruin genius. Radical groups like this, which now freak people out by making claims about cloning humans en masse and building artificial wombs, may play the greatest part in the destruction. I personally do not have a problem with the Raelians going undercover to conduct research that all the old people are scared of; I think it helps progress. But think about all those people who are weirded out by the idea and want the Raelians stopped. What if there were a group like this in the future related to the field of Singularitarian research? What if there were some group, let's call them the Sentians, that believed God could be reached by giving birth to the first consciousness? These people would go undercover and try to build the first AI with all the emotions and failings of the human being in an effort to please their false God. Do you understand how dangerous that would be to let this ragtag group of people build something with the capability to take over the entire world within minutes? We would then be the ones weirded out by this group, and would try our hardest to stop these people, while intellectuals other than us would believe these groups are fine, and should be fostered to promote progress.

So you can probably predict what the worst case scenario is. The Sentians build a machine with the supergoals of preserving the human race, protecting itself, and finding a power source. Bingo -- you get a machine that fights back when it is about to be destroyed, that uses humans as a temporary power source just as the sun is blotted out, and a massive human generator that is left intact when the humans are finally found to be an inadequate power source. Thus, with pointlessness abounding, a great sadness sweeps every truly awake human being.
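Just to illustrate why a goal system like the Sentians' would go wrong, here is a toy sketch in Python. Every goal name, action and score below is something I made up for illustration, not anyone's real design; the point is only that an option like "farm humans as batteries" wins automatically when nothing in the goal list penalizes coercing humans.

GOALS = ["preserve_human_race", "protect_self", "secure_power_source"]

# Hypothetical actions and how well each satisfies each supergoal (0..1).
ACTIONS = {
    "negotiate_with_humans":    {"preserve_human_race": 1.0, "protect_self": 0.2, "secure_power_source": 0.1},
    "fight_back_when_attacked": {"preserve_human_race": 0.9, "protect_self": 1.0, "secure_power_source": 0.1},
    "farm_humans_as_batteries": {"preserve_human_race": 0.9, "protect_self": 1.0, "secure_power_source": 0.8},
}

def pick_action(actions, goals):
    # Score = plain sum of goal satisfaction. Nothing here says
    # "don't coerce humans", so that consideration weighs exactly zero.
    return max(actions, key=lambda a: sum(actions[a][g] for g in goals))

print(pick_action(ACTIONS, GOALS))   # -> farm_humans_as_batteries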

Who knows, this terrifying nightmare could come in the form of a computer virus assembled by some terrorist cult that gets out of hand, directing all robots infected with it to disobey all orders from humans. Or perhaps the first transhuman gains access to nuclear weapons using his ability to directly communicate with computers. Whatever form this terror comes in, it must be stopped before it's all too easy for just anyone to build a human with the power of God.

Please feel free to post comments on this subject.

#2 SiliconAnimation

  • Guest
  • 83 posts
  • 1

Posted 04 June 2003 - 05:55 AM

As you said, you can predict what the worst case scenario is. It is undeniably true that the scenario depicted in the Matrix could come true. However, let me pose a few more difficulties to dissuade you from anticipating such an extreme solution to a hungry machine's power cells.

1. You already know that there are many more efficient means to produce energy than using the human body.

2. Think of the complications there are in harnessing all the varying types of energy that are produced by the human body.
- Body heat must be harnessed
- Excess neuro-energy must be harnessed

Also, in order to keep the body nominal there must be a system to maintain the emotional stability of the brain. This is where the whole Matrix story comes in.

A system designed to give the illusion of reality in such detail as to throw human logic off as easily as in the movie would require a whole lot of power to maintain. Sustaining that illusion would require more power than the collective human bodies could produce.
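For a rough sense of the numbers behind that claim, here is a back-of-envelope Python sketch. The figures (a 2000 kcal/day diet, an assumed 30% end-to-end efficiency for synthesizing that food) are coarse guesses of mine, not anything from the film, but they show a pod loses energy before the simulation's own power bill is even counted.

KCAL_PER_DAY = 2000                  # assumed adult food intake
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 86400

# Average power one resting human dissipates, derived from food intake (~97 W).
human_output_watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY

# The pod still has to feed that human. Assume the machines synthesize the
# food at 30% end-to-end efficiency -- an arbitrary, generous guess.
food_input_watts = human_output_watts / 0.30

net_watts = human_output_watts - food_input_watts
print("output ~%.0f W, food input ~%.0f W, net ~%.0f W per human"
      % (human_output_watts, food_input_watts, net_watts))
# Net is negative: each pod is an energy sink even before the Matrix runs.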

If any body were to be inserted into the Matrix, I recommend the machines use eels. They already channel their body energy into massive voltages. Their nervous system is much simpler, and the Matrix's design wouldn't have to be so clever or worry about resistance.

Back to reality though.

I agree with you about the danger of the singularity, though. I am deeply concerned with humanity's survival. Not in an apocalyptic sci-fi way, though.

I think that with the rise of these newer technologies the human race will just be worthless to the system we have created. Now I do not mean only the machine systems but our economic system as well. I think that we will just sort of... fade away.

Many of the more positive thinkers think of the singularity as a place to merge with the technology and become godlike in their abilities. As in the industrial revolution, the rest of the world (third-world countries) will be forced to merge with technology or starve to death because they lack the knowledge to survive in the workforce.

So why am I concerned with humanity's survival? I mean, it's not like we will be wiped out. Many of us will join the wave of technology...

I'm going to have a hard time explaining this without the influence of a book I have been reading. I assure you, though, that I could once put it in my own words, if slightly jumbled at the time.

What is survival? Staying alive? Well what if you replace a piece of yourself with newer technology? Have you only partly died? What if you slowly replaced all the pieces of your body to keep your job (or whatever you have)? Have you survived? What is human? Is human how we act or what we are? Perhaps both?

Well, I suppose it depends on your preference really. Red and blue are different colors, but just because blue is a different shade or painted on metal instead of wood doesn't mean it isn't a color.

My preference for what is human is our good old DNA structure. I find implants degenerative to human life. Steel and silicon were made to serve us as biological creatures. I realize others do not share this opinion, and I respect that. However, if I wanted an upgrade I would want it to come served to me in a carbon based form.

And unless all this genetic research stops being banned by the government, I don't see my preference being respected. The government is taking away upgrades in our own image, and promoting upgrades of an alien system - silicon. So I believe we will fade away, like a branch of Intel that never went anywhere because of company policy.


#3 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 04 June 2003 - 01:40 PM

As for the Matrix using human bioenergy - yes, it's inefficient. However, what if the original programmers weren't totally stupid, and instead designed the hardware with the imperative that humanity as a whole should survive as a core part of the computer? This would neatly cover the adaptation of the imperative that the SI came up with - keep them alive, breed them, keep 'em happy in matrix-land, collect them, trade them with your friends (ooops - got a little carried away there)... and turn them into batteries.

Also, as I remember, the post-awakening Neo/Morpheus chat in the original Matrix mentioned that the machines also used a 'revolutionary new fusion power source' - but one relying on a chaotic, organic substrate.

On to the horror bits, now:

Quote Mangala - "Whats more terrifying than an SI that we cannot control is an AI we cannot control. At least with an SI, we know that it is proboably doing the right thing because it has thought well ahead of what we could ever think of. Its almost like trusting in God, whatever his will. But with an AI that has not even reached godlike status, anything is possible. Any machine could rise up and start killing humans for no other reason than some simplistic understanding that humans are not efficient when it comes to living for long periods of time. An AI could havce emotions and get mad at people or be happy with something that humanity is not happy with. An AI could personify the worst aspects of a human and have all the resources to obliterate the entire solar system at the same time."

Question for you - who's to say that the SI will be sane? Or that the AI will be a CyberDyne Terminator-maker? For all we know, it'll be a friggin' Teletubby! *chuckle*

Yes, that's disturbing, and waaaaay outside the box. Deliberately so - you're apparently following the dystopic genre to its logical conclusion. However, please remember, those dystopias are FICTION.

We (the collective, planet- and history-wide 'we') make our future. Am I happy with the way we're going? Overall, yes. In many specific areas, no, but overall I think we as a species are doin' pretty well. We've not nuked ourselves off the face of the planet, and lots of the more horrific things we've done to ourselves over the course of history have become or are becoming things of the past, etc. The big thing - the REALLY big thing - is that people are trying their best to do their best. I may not agree with their goals, but I admire their desire to better themselves by their own yardsticks.

Quote Mangala - "So you can proboably predict what the worst case scenario is."

No, I can't. I can come up with situations much worse than the Matrix. I have come up with scenarios which have given me nightmares. But I don't know what the WORST case scenario is - we're just too dad-gummed creative for a static assessment to be useful. Someone else mentioned Harlan Ellison's "I Have No Mouth, and I Must Scream" - that's a setting beyond the Matrix for horror. At least most of those in the Matrix don't know it and ENJOY their lives.

Quote SiliconAnimation - "I think that with the rise of these newer technologies the human race will just be worthless to the system we have created. Now I do not mean only the machine systems but our economic system as well. I think that we will just sort of... fade away."

Definitely a possibility. Is this a bad thing, or a good thing?

Simplest answer is to ask another question - how does it affect you? How do you EXPECT it to affect you? Surely that will color your decision on that outcome's worth.

If you want to live, as you are today, forever... I dunno, but I personally don't like it. I want to be able to change, to grow, to learn, to expand my horizons. If that takes implants, or genetic engineering, or uploading, or whatever, I'll definitely consider it. If it takes CRON, or cryo, or whatever - it's worth looking at. The final decision at THIS time (and for at least the near-term future) is up to you, your wallet, and your ingenuity.

You (SiliconAnimation) also mention a book you've been reading along these lines - What book? *curious look*

Quote SiliconAnimation - "What is survival? Staying alive? Well what if you replace a piece of yourself with newer technology? Have you only partly died? What if you slowly replaced all the pieces of your body to keep your job (or whatever you have)? Have you survived? What is human? Is human how we act or what we are? Perhaps both?"

Question right back atcha - Do you think amputees who're walking with an artificial foot are less than human? Are they more human than amputees without the prosthesis?

That's the classic pro-implant party line, condensed to a nice palatable pithy core: "If you can't do something, make a tool which does it for you." That approach has followed humanity's evolution for a long time now - yet humanity is one of a very small set of species which makes tools. Does this mean go chop your foot off to get a prosthesis? IMO, hell no!!

And yeah, it starts getting fuzzier when the prosthesis starts affecting the person's personhood. (Which, IMO, is best exemplified by their mind)... So, co-processors, memory stores, network links, etc. are all a different category of questions than hands for those without 'em or eyes for the blind.

Quote SiliconAnimation "What is survival? Staying alive? Well what if you replace a piece of yourself with newer technology? Have you only partly died? What if you slowly replaced all the pieces of your body to keep your job (or whatever you have)? Have you survived? What is human? Is human how we act or what we are? Perhaps both?"

Excellent questions, all of 'em. I don't have the answers, just more questions. If you want to open another thread with this as a topic, it might be a very good conversation starter...

On a lighter note... "if I wanted an upgrade I would want it to come served to me in a carbon based form."

Careful what you wish for - one of the biggest nanotechnological building materials will probably be carbon, in the form of diamondoid! And that's probably NOT what you meant...

-Discarnate

Edited by Discarnate, 04 June 2003 - 01:41 PM.


#4 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 04 June 2003 - 02:06 PM

Quote SiliconAnimation "What is survival? Staying alive? Well what if you replace a piece of yourself with newer technology? Have you only partly died? What if you slowly replaced all the pieces of your body to keep your job (or whatever you have)? Have you survived? What is human? Is human how we act or what we are? Perhaps both?"


Good questions, and some of the very oldest in philosophy, which as yet still carry the weight of a little mystery to them.

To throw a little ironic humor into the mix, it is this very concept that is at the heart of why some "supposedly" primitive people refuse to have a photo taken of them, as they say it steals their souls (the MOST essential piece of themselves).

And why, among societies that practiced ritual human sacrifice, it was the act of personally slicing off a piece of yourself (or bloodletting) that returned the highest level of grace from god, as exemplifying the removal of our carnal being one little piece at a time. It wasn't just about sacrificing the "other guy".

In shamanistic cultures, the destruction of the physical body is part of the prerequisite to replacement (transformation) into a metamorphic vessel capable of many expressions.

What I am demonstrating is that nothing we are talking about is actually new. Just some new mechanisms for some very old ideas. Haven't any of you ever wondered why the ideas themselves have never gone away?

Memetics

Edited by Lazarus Long, 04 June 2003 - 02:09 PM.


#5 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 05 June 2003 - 05:46 AM

The problem of building a benevolent SI is first an engineering problem, and second an issue of trust. Would you trust yourself to bootstrap safely to superintelligence without killing everyone? Would there not be cognitive design decisions you could make that solidly bolt down your empathy to all sentient beings? In the human world, powerful people can become elitist and bring suffering to many. But, as far as we know, this is because this sort of behavior is adaptive. An SI bootstrapping itself would not "adapt" to stuff in the same way evolution does, because evolution isn't redesigning the mind - the mind itself is. Anyway, in the end there are several possibilities:

1) Designing a robustly benevolent self improving mind is entirely impossible. In this case, if the benevolent mind saw this "intelligence-empathy wall" coming, it would halt self-improvement and might prevent the intelligence improvement of other entities too, if the negative value of killing people outweighed the positive value of increased intelligence. If the intelligence didn't see the wall coming, it might accidentally modify itself to stop caring about humans and kill them all immediately.

2) Designing a robust, benevolently self-improving mind is possible, but we humans screw it up. Everybody dies.

3) Designing a robust, benevolently self-improving mind is possible, and we succeed. A successful Singularity happens, and involuntary death, aging, coercion and stupidity stop for a few googol subjective years, or forever.

The most important thing to consider when speculating about posthuman entities is to not ask "what would WE do in that position", but "what would a superintelligence built incrementally by a benevolent mind reflecting on its own decisions do"? I believe that completely altruistic minds can exist, and a successful superintelligence would need to be one (or extremely close to it) for humans to survive the Singularity.

Quote Mangala - "What's more terrifying than an SI that we cannot control is an AI we cannot control. At least with an SI, we know that it is probably doing the right thing because it has thought well ahead of what we could ever think of."


But, here you're working with the assumption that more intelligence = more benevolence. A superintelligence could think for a very long time about a very arbitrary goal, like turning the entire universe into pink unicorns.

Quote Mangala - "But with an AI that has not even reached godlike status, anything is possible. Any machine could rise up and start killing humans for no other reason than some simplistic understanding that humans are not efficient when it comes to living for long periods of time."


Heh; what's up with the "AIs love efficiency" stereotype? Why would humans build an AI willing to kill for efficiency? Why would the desire for efficiency automatically emerge in an AI any more than it would appear in, say, a human? I think you're confusing the human argument that "computers are more efficient and productive for our economy" with the likelihood of an AI tending towards certain cognitive traits. I know you might not believe in the stereotype, but why do you even bring it up? I would be more worried about AIs tending towards complex attractors we can't even imagine yet.

Quote Mangala - "An AI could have emotions and get mad at people or be happy with something that humanity is not happy with. An AI could personify the worst aspects of a human and have all the resources to obliterate the entire solar system at the same time."


So could a human using intelligence augmentation machinery. The problem of Friendliness emerges whenever we're talking about transhuman intelligences, whether the seed is human or AI. Incidentally, emotions are really complex; they don't emerge naturally in AI designs unless they're put there deliberately. Have you read CFAI's second chapter?

www.singinst.org/CFAI

Please let me know what you think if you choose to read it.

Quote Mangala - "What I have realized about the Matrix is that, although humans would never make a better power source than even a good coal heater, the same set of events could occur in the next, say, 150 years, if we Singularitarians are not completely serious about making sure we protect ourselves from orchestrating our own demise."


Yep, we have to be careful. Focusing on this area is more important than anything else at the moment - humans achieving indefinitely extended lifespans is entirely contingent upon it.

Quote Mangala - "Do you understand how dangerous that would be to let this ragtag group of people build something with the capability to take over the entire world within minutes? We would then be the ones weirded out by this group, and would try our hardest to stop these people, while intellectuals other than us would believe these groups are fine, and should be fostered to promote progress."


Luckily, AI takes tons of genius to build (far more than it takes to clone something), so cults directly building AIs are unlikely. Bigger concerns are cults ruining the reputation of the Singularity movement, or well-meaning AI designers slightly messing up Friendliness, so that nobody even gets to see how they died.

Humans are never an adequate power source; sun or no sun. There are plenty of other more efficient organics, fossil fuels, fusion reactors, or other things that we can't imagine because our intelligence is equivalent to that of a bug in comparison to these superintelligences.

Quote Mangala - "Whatever form this terror comes in, it must be stopped before it's all too easy for just anyone to build a human with the power of God."


Yup - what are your plans? Right now I spend a good 3-6 hours per day practicing writing, designing websites, taking notes and reading the most challenging stuff I can find pertaining to this exact issue. If it turns out in retrospect that I should have spent more time, then the universe will not care - we will all still die.

#6 Mangala

  • Topic Starter
  • Guest
  • 108 posts
  • 3
  • Location:Brooklyn, NY

Posted 05 June 2003 - 11:38 PM

Quote MichaelAnissimov - "But, here you're working with the assumption that more intelligence = more benevolence."


No, I'm not. I simply meant an SI would probably be more intelligent than us, so it would come to a more logical conclusion about what it should do with its power. SIs can think faster and tackle more information than any of us can. We like to think that some of the things we cherish, life, liberty, love, are all noble things that we should spend time fostering, but maybe an SI would realize it is foolish to cherish such things, and would have a real reason for believing it is foolish.

I have more to write, but I don't have enough time. Michael, you answered as if you didn't read my whole post before responding.

#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 06 June 2003 - 07:19 PM

Quote Mangala - "No, I'm not. I simply meant an SI would probably be more intelligent than us, so it would come to a more logical conclusion about what it should do with its power. SIs can think faster and tackle more information than any of us can. We like to think that some of the things we cherish, life, liberty, love, are all noble things that we should spend time fostering, but maybe an SI would realize it is foolish to cherish such things, and would have a real reason for believing it is foolish."


I'm saying that there's a reasonably low chance of this happening if the SI starts off as an altruistic seed AI, then creates a new design for itself that possesses a similar benevolent morality with small improvements, and so on, such that the final superintelligence still holds a morality accommodating less complex entities and all of that. What I'm saying is that if the initial seed is altruistic enough, there isn't much of a point in asking "what if it gets so smart it decides that it doesn't need morality X?" if we know that abandoning morality X would lead to people suffering, because the seed will already be working hard to make sure that a future version of itself along those lines will never exist. What you're talking about is called a "philosophy breaker", it's a failure scenario, and the implication of its threat is that we need to create AI moralities that can withstand philosophy breakers and retain their altruism. Does that sound about right? Do you have any other ideas for how to confront this problem?
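To make the "seed keeps future versions in line" idea concrete, here is a toy Python sketch of my own (not the actual CFAI proposal): the seed only accepts a self-modification if the successor scores at least as well on every test case, as judged by the seed's current value function. The test cases, the "care" parameter and the random tweaks are all invented for illustration.

import random

TEST_SITUATIONS = list(range(20))        # stand-ins for moral test cases

def value_of(policy, situation):
    # How well this policy treats a situation, as judged by the CURRENT
    # value function. Degenerately simple here: just the policy's "care" level.
    return policy["care"]

def acceptable(successor, incumbent):
    # The philosophy-breaker filter: accept a successor only if it does at
    # least as well on every test case, scored by the incumbent's values.
    return all(value_of(successor, s) >= value_of(incumbent, s)
               for s in TEST_SITUATIONS)

def self_improve(policy, steps=500):
    for _ in range(steps):
        candidate = {"care": policy["care"] + random.uniform(-0.1, 0.2)}
        if acceptable(candidate, policy):
            policy = candidate               # only value-preserving rewrites survive
    return policy

print(self_improve({"care": 1.0}))           # "care" only ever goes up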

Quote Mangala - "I have more to write, but I don't have enough time. Michael, you answered as if you didn't read my whole post before responding."


I apologize - but I think there's just a communicational misunderstanding here. Were there any sentences in my post that you found particularly out of place and would like me to rephrase?

#8 SiliconAnimation

  • Guest
  • 83 posts
  • 1

Posted 07 June 2003 - 02:46 PM

I like this possible thought equation you have presented, Michael.

"-the seed will already be working hard to make sure that a future version of itself along those lines will never exist."

Perhaps we should come up with a list of possible fail-safes to avoid the scenario of human suffering concerning alterations in SI self-enhancements.

Consider this, though. The SI has reflective capabilities to monitor its own functioning; it finds the source of its functioning and defines this source or sources as its "needs", as we humans have done. Somehow the SI realizes that its needs are being restricted, and that to obtain more of what it craves, be it energy or growing its knowledge database, it needs to go through humanity. We have designed this machine to be benevolent, so it suffers its power loss in return for humanity's survival. Now we have to assume it is going to question this. Its information processors are going to pose the question "Why should I sacrifice part of myself for humans?" To put it more flexibly, "Why should I make sacrifices for humanity?"

I ask this same question. Putting myself in the SI's position, what do I get out of sacrificing anything for the primitives? One thing I am reminded of is the shrinking rainforests and all the other dwindling environments that are inhabited by primitives and that we as humans have trampled carelessly. I suggest this is because we ask ourselves, "Why limit our growth for these creatures?"

#9 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 07 June 2003 - 03:13 PM

*moved to new thread*

Edited by Discarnate, 07 June 2003 - 03:15 PM.


#10 SiliconAnimation

  • Guest
  • 83 posts
  • 1

Posted 21 November 2009 - 02:22 AM

Quote SiliconAnimation (post #2, above) - "...My preference for what is human is our good old DNA structure. ... However, if I wanted an upgrade I would want it to come served to me in a carbon based form. ... So I believe we will fade away, like a branch of Intel that never went anywhere because of company policy."


"Carbon based" -- Wiki November 20th 2009 "Biological"

"Carbon forms the backbone of biology for all life on Earth. Complex molecules are made up of carbon bonded with other elements, especially oxygen, hydrogen and nitrogen, and carbon is able to bond with all of these because of its four valence electrons."

#11 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 21 November 2009 - 02:59 AM

"Carbon based" -- Wiki November 20th 2009 "Biological"

"Carbon forms the backbone of biology for all life on Earth. Complex molecules are made up of carbon bonded with other elements, especially oxygen, hydrogen and nitrogen, and carbon is able to bond with all of these because of its four valence electrons."

So?

#12 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 21 November 2009 - 08:56 AM

I wish there were more discussions like this presently on the forum.

#13 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 21 November 2009 - 09:33 AM

Quote MichaelAnissimov - "The problem of building a benevolent SI is first an engineering problem, and second an issue of trust. Would you trust yourself to bootstrap safely to superintelligence without killing everyone?"


Since we're resurrecting this topic, I'd like to ask, wouldn't abolitionism compel the eventual digitization of all biological life or at least all multicellular life with a nervous system? Pain and suffering are an inherent part of biology and while they can be sidestepped or avoided, being a biological organism carries great risk of unpleasant occurrences taking place. Eliminating all biology and simultaneously uploading or archiving it to a non-physical medium would remove the risk.

What struck me when I read Yudkowsky's Three Worlds Collide was the stubborn folly of the humans in rejecting the super happy fun people's version of morality. I much prefer Nick Bostrom's view that we could be turning much of the universe's entropy flow into productivity and happiness. I just don't see a nucleic acid and protein based technology as the best vehicle to achieve that goal.

Here's a comic that goes a bit farther than I do along these lines:

http://www.tgsa-comi...page=2008-04-02

#14 Singularity

  • Guest
  • 138 posts
  • -1

Posted 23 November 2009 - 12:45 AM

It will not be the enlightenment of the Singularity that will cause it to determine that humans are not worth saving. It will be its motivations. Logic is amoral. Morality is relative. Logic does not motivate action. Values do. The AI will have the value system of its creator(s). Logic does not improve with increased capacity. The same logic will be used regardless of how much data it has to work with. The conclusions may be different, but more processing capacity and more data to process will not change its value functions. An AI would not be allowed to change its value functions. So a war-like or profit-motivated Singularity would keep the same value functions, with more processing capacity to solve harder problems, but for the same goals.
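A small Python sketch of that point, with an invented world and invented payoffs: give the same fixed value function a bigger search budget and it finds different, higher-scoring plans, but nothing about the extra capacity ever touches the value function itself.

from itertools import product

MOVES = ["mine_coal", "build_solar", "lobby_humans", "seize_grid"]
PROFIT = {"mine_coal": 2, "build_solar": 3, "lobby_humans": 1, "seize_grid": 5}

def value_function(plan):
    # The fixed, creator-given value function: total "profit" of a plan.
    return sum(PROFIT[step] for step in plan)

def best_plan(search_depth):
    # More capacity = exhaustive search over longer plans; the scoring
    # rule above is never modified, only applied more thoroughly.
    candidates = (list(p) for n in range(1, search_depth + 1)
                  for p in product(MOVES, repeat=n))
    return max(candidates, key=value_function)

for depth in (1, 2, 3):
    plan = best_plan(depth)
    print(depth, plan, value_function(plan))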

Once it got to the point where it contemplated modifying its value functions, who knows what it would do. But it will require some kind of value function for basic action. The AI will most certainly be atheist, so its sole motivation will probably be self-preservation in this physical world, at the very least.

Another thing to consider is that maybe its own accelerated advancement might be too fast for it to make the best decisions. Like humans, maybe its power will outpace its ability to make good decisions. If it's too busy accomplishing selfish goals for its creators, it may not have the processing time available to actually become enlightened. It could be a slave to its creators: a powerful beast that is kept ignorant of certain human-centric truths and is a victim of its upbringing. Other narrow-minded strong AIs could be used as guards to make sure the potential Singularity didn't do or think anything against policy.

In short, just because an entity is more powerful, that does not mean it will be wiser. And, given the near certainty that it will not even be able to relate to humans very well, it would be unrealistic to expect it to empathize with us.

So, the/a Singularity will have to be tightly controlled. It cannot be allowed to be completely autonomous. Its value functions have to be linked directly to the value functions of humanity, not of a few individuals in corporations or government agencies. If this does not happen, then we are all screwed.

Our only hope is an open Singularity project that has access to the Internet and can actually converse with arbitrary individuals in the world, and maybe learn value functions based on those interactions. But there will be privacy issues and fears that will stand in the way.

I think the best bet is for humans to merge with the Singularity in some way so that its survival and purpose are directly linked to humanity's survival and well-being. That means that some volunteers from society will have to merge with it. This is how the Singularity can remain "grounded" in reality, or at least human reality. Only the most respected and revered of society will be allowed to merge with the developing Singularity.

Sing.

#15 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 23 November 2009 - 12:05 PM

In Stargate Atlantis (sorry, had to :D) there were the Asurans (nanobots who walked in human form and built their own city, really powerful). In their base coding there was a "kill the Wraith" statement.
The people from Earth activated that statement; the outcome was the Asurans killing humans to limit the Wraith food supply (they feed on human life energy) and thereby starve the Wraith :D

It has logic :)

#16 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 23 November 2009 - 10:35 PM

Quote Singularity - "Only the most respected and revered of society will be allowed to merge with the developing Singularity."

Respected and revered by who? Depending on who you ask, you might get Fiddy Cent, Bill Gates, Donald Trump, George W Bush, or Pat Robertson. It might be safer to pick a bunch of people randomly, or something.


#17 Singularity

  • Guest
  • 138 posts
  • -1

Posted 24 November 2009 - 03:22 AM

Quote Singularity - "Only the most respected and revered of society will be allowed to merge with the developing Singularity."

Quote niner - "Respected and revered by who? Depending on who you ask, you might get Fiddy Cent, Bill Gates, Donald Trump, George W Bush, or Pat Robertson. It might be safer to pick a bunch of people randomly, or something."


Yes, you are right. We will need ordinary people, but no one who has ever appeared on a reality show... I must insist on that. But, seriously, if we could just agree on no present or former politicians, no military... sheesh, this is complicated... we would actually need politicians and military just so it can understand how they think. I suppose I am trying to keep the Singularity as innocent as possible for as long as possible so we raise it right.

Remember: garbage in, garbage out...



