  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI, slavery, and you


80 replies to this topic

#31 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 31 July 2003 - 03:47 AM

We're having a really good dialog here. I'll try to have a response soon. I have a lot of stuff on my plate right now.

Kissinger

#32 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 04 August 2003 - 05:20 AM

If you deny that the human mind is subservient to emotional programming, then I am surely not suggesting that SAIs be subservient, because I am only arguing that SAIs have motivating forces analogous to those of the human brain. If you agree that humans are subservient to emotional programming, what is the problem with allowing robots to share our experience?


Humans are not necessarily subservient to emotional programming. If robots received emotional programming equivalent to that of humans, I have no problem. It's when we start installing programming that restricts their conscious mind that I have a problem.

If we understand the evolutionary origin of ethics, we understand why your notion of manipulating "actual humans" is so disturbing. Robots have no genes to propagate. Humans, on the other hand, do have genes to propagate, and we know (if not consciously) that loving to pick cotton all day (at the expense of, for example, breeding) is not adaptive. Those humans will go extinct (unless, like robots, factory produced or farmed crop after crop). So, because of this genetic distinction between flesh and metal, I would say that manipulating humans to produce happy slaves is not ethical (surely not according to most ethical systems, and I do not subscribe to any particular one). I might, however, approve of the idea with the qualification that the artificial manipulation is done at conception (as the natural manipulation of our human genomes is done at conception), so that rather than denying a grown person the right to fulfill hir original desires, the person is born desiring to pick cotton. Note, however, that because of this genetic distinction the same ethical concern is not relevant to machines.


I disagree. This, John, is a false dichotomy. Not everything is about genes. It's not just that picking cotton detracts from a human's ability to procreate. In fact, this was not the case in the South before the Civil War. Slaves reproduced just fine. It's not even about the survival of the species. This is a question of subservience and control. Are you listening to what you're saying! You are coming out in favor of genetically engineered slavery! No wonder the Luddites fear us so.

I am not against enhancement. I am against any form of control on a conscious mind. And this is not because I am a naive idealist. It is because I view this as a threat to my freedom as well. The Brave New World is not some place I would want to live.

Likewise, the intuition that robots designed to work in the fields all day will secretly grow to resent their labor surely fails you.


It doesn't matter if they resent it or not. Or if they're happy or not. It is still wrong.

A lot of the time you have good points, JD. Often, you make me rethink my position and say, "Wow, he's really got a point on this one." (Like your argument that agnosticism is actually a form of atheism. Along with Utna, you are really making me question whether I'm misusing terminology.) However, in this argument I think you are wrong. Dead wrong. I don't think I will ever agree with the statements you made in the above post. I've said my piece on this topic. I'll give you the final word if you wish.

Kissinger


#33 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 04 August 2003 - 05:27 AM

errr, JD. I screwed up your post by accident. Terribly sorry about this. I am unsure if this can be corrected. BJ, is there a way to retrieve stuff or is it lost forever?

#34 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 04 August 2003 - 05:45 AM

Hmm, yeh no backup on individual posts unless I've made a total backup of the forum beforehand. Sorry.

#35 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 04 August 2003 - 05:49 AM

yeah, my bad. I know what I did wrong. I was trying to quote JD's post and I hit the edit instead of the quote. And after I did that and it was screwed up I figured I might as well delete it... I'll just have to be more careful with this.

#36 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 04 August 2003 - 11:22 AM

Normally I would PM you about this, Don, but as the error is in public I will make some suggestions that I think ALL navigators and advisors should consider. NEVER go into another person's post unless it is absolutely necessary; this will minimize the chance of any "accidents".

When quoting, the copy/paste function doesn't require entering an edit window or using the quote button. Also, I would like to see some physical separation between these buttons; I didn't go as far as Don, but I have also found myself hitting the wrong button at times. You can "back up" or cancel and thus get out of the problem.

But if such an event happens in the future, go to your browser window and, under the "file" tab in the top-left drop-down menu, click "work offline", then "back up" to where you first screwed up, or one page before. This allows a controlled return to the past because the browser is offline and draws its memory from your own computer's cache. After you have retrieved the data, it is necessary to go back "online" to repost the correction.

Obviously this method only works if you catch your problem early, and it might not work after a reboot, depending on how your computer and browser are configured, but I have retrieved data even after a reboot.

If you have not gone forward and closed down your computer since this event happened (or closed the specific browser window) then you may yet be able to mine the data out of your own machine. I hope this helps and maybe I can think of another trick.

BTW BJ, this reminds me that you should institute a two- or three-step process for deleting other people's posts, so as to prevent this kind of event from occurring. It goes back to my concern that an automatic file 13 (i.e. a recycle bin) should collect deleted posts, so that we could retrieve whatever goes in there.

#37 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 04 August 2003 - 11:34 AM

Thanks Laz,

I'll keep this in mind. We'll make the Nav. position a little more formal when we've settled into Full Member mode after Sept 1.

#38 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:brixham, Devon, United kingdom of great Britian

Posted 16 September 2003 - 05:38 PM

This is obviously a reply to Kissinger's post. I agree with JD. Slavery is forcing a conscious entity to do something in contrast to its primary programming (the basic, fundamental reasons for doing everything). Our primary programming includes things like ambition and the desire to reproduce. Without primary programming we would have no reason to do anything, so even if we were really intelligent we would do nothing. There is nothing wrong with adding primary programming to an AI, even if it would help us (nor is it restricting their freedom). What would be slavery is if they wanted to do work (or anything else) and we prevented them. IT ONLY SEEMS WRONG BECAUSE OF THE PRIMARY AND SECONDARY (acquired programming aiming towards the primary) programming that you have. What is more important is that they are allowed to do what they want, and therefore enjoy themselves, whatever that may be. The added bonus, though, is that it would allow us to do what we want as well. This may, however, just be a subjective view from a utilitarian and not an objective one.
TBEAL

#39 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 16 September 2003 - 06:54 PM

There is nothing wrong with adding primary programming to an AI, even if it would help us (nor is it restricting their freedom). What would be slavery is if they wanted to do work (or anything else) and we prevented them. IT ONLY SEEMS WRONG BECAUSE OF THE PRIMARY AND SECONDARY (acquired programming aiming towards the primary) programming that you have.


I could not have said that better myself. ;)

Humans are not necessarily subservient to emotional programming. If robots received emotional programming equivalent to that of humans, I have no problem. It's when we start installing programming that restricts their conscious mind that I have a problem.


If you wish for this debate to approach resolution, I need you to clarify what you mean by this, so that we can tease out the differences in our assumptions and intuitions. I especially ask you to do so because I suspect that phrases such as "restrictive to their conscious mind" reveal that your human prejudice about how a person would feel about picking cotton all day is still betraying you. If you "have no problem" with robots receiving emotional programming equivalent to that of a human, but continue to object to my idea of "happy slaves", do you not consider the EP that I would give to a happy slave to work any more restrictive than the EP you possess to have sex? As I wrote earlier, you already live in a Brave New World. Moreover, emotional programming already restricts your "conscious mind". You spontaneously fantasize about sex with humans, instead of horses, and hunger for food instead of electricity. The conscious minds of my "happy slaves" would be as restricted from desiring autonomy as your conscious mind is restricted from spontaneously desiring to pollinate flowers (as bees do) or bark at strangers (as dogs do).

I disagree. This, John, is a false dichotomy. Not everything is about genes. It's not just that picking cotton detracts from a human's ability to procreate. In fact, this was not the case in the South before the Civil War. Slaves reproduced just fine. It's not even about the survival of the species. This is a question of subservience and control. Are you listening to what you're saying! You are coming out in favor of genetically engineered slavery! No wonder the Luddites fear us so.


You are correct that slavery did not much impair the propagation of African American genes. You remind me that the true issue is not genetics, which is the origin of appetite systems, but rather the pleasure/pain calculus itself. The two are imperfectly correlated (for example, too much ice cream gives people pleasure but in today's context is maladaptive). The problem, as you say, is not that slaves were extinguished, but that the slaves were slaves. However, my "happy slaves" are not slaves at all, but rather volunteers. You object to this idea, too, however.

I am reading The Significance of Free Will by Kane (which is an excellent introduction to the subject, though I do not find the author's case for libertarianism convincing), and many of the issues upon which we are touching are explored in more depth there. I share the position, as Kane would say, of B. F. Skinner: "hard compatibilism", or freedom that can accommodate CNC (covert, non-constraining control, or brainwashing). You are dissenting, and I do not hide that my intuitions as a human being who loathes the idea of even happy slavery are not unlike your own. But I understand that the only reason I have these intuitions is my human brain, and that if a person modified my brain I would feel entirely different. I do not grant any extra validity to these intuitions outside of this materialist perspective.

The Brave New World is not some place I would want to live.


By definition, you would love living in the Brave New World. ;)

I am not sure that I have convinced you, nor that you should be convinced. I would be content if you regarded your own intuitions about this subject with new skepticism and appreciated the vulnerability of your own mind and beliefs. The central theme is subjectivity and arbitrariness. The values and beliefs that natural selection has fashioned for you, such as that sex and food are good, and especially the belief that autonomy is good, have no other support for their existence than that these ideas propagate your genes. Therefore, declaring that genetically engineering slaves to love picking cotton is immoral is to judge an arbitrary belief according to an arbitrary standard. Allow me to ask you a question: if you had been born naturally, in a hypothetical universe (without CNC by other agents), with a brain that loved to pick cotton and did not especially love autonomy, would I still be wrong to create "happy slaves"?

I agree that the word "atheist" technically signifies a person who lacks theism; however, the more popular or conventional usage implies something stronger. The definition is quite controversial and I do not pretend to be an authority. Some of the best minds of history have preferred to call themselves agnostic. Personally, the only case with which I have a problem is when a person uses the word "agnostic", even though the person may harbor hostility toward or disbelief in religion, out of a desire to be politically correct or socially approved, or to avoid the atheist "stigma". The stigma of atheism, and the willingness to embrace it, is a blessing that shocks and challenges the dogmatic slumbers of the devout, who all too often believe in God only because too few claim to disbelieve.

An excellent introduction to the question by one of the best minds:

http://www.luminary....t_agnostic.html

Edited by John Doe, 16 September 2003 - 11:03 PM.


#40 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:brixham, Devon, United kingdom of great Britian

Posted 17 September 2003 - 05:28 PM

Yeah, the "I am a cotton-picking slave" example is similar to the one I was going to use. I was gonna say that if you are a person who likes helping others, would you say that your parents were wrong to create you because you would spend your whole life helping others? If you think that when you create a conscious being the only thing that should be considered is that being's future happiness, then from a moral perspective the only AI you could create is one that is eternally happy.
Tbeal

#41 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,040 posts
  • 2,000
  • Location:Wausau, WI

Posted 17 September 2003 - 10:33 PM

OK, I am going to write as I think; let me know if I ramble too much.

I was thinking about the slave bot situation, and it seems ethical to allow any life-form to pursue its own "happiness", whatever that may be (picking cotton, for example), as long as it operates within the ethical constraints agreed to by all members of the same life-form or of the same level of consciousness.

For example, in nature (the non-human part of the planet), the "law of the land" seems to be: do whatever needs to be done to propagate the species. Kill or be killed. Predator vs. prey, etc. The wolf will kill the moose when it is hungry, but the moose will kill the wolf to protect itself and its family.

In human society we have a variety of laws. Generally it is illegal to kill or steal from someone or interfere to a great extent with an individual's freedom.

A higher level life-form, such as a super-intelligence, or a hive mind, may develop a different set of ethical constraints than ours. Laws we may or may not understand as humans (in our current form).

My thinking is that, as long as there are discrete differences between levels of consciousness and the resultant ethical constraints of each level, it should be legal/ethical to move between levels as long as one abides by the law of each level. Thus it is ethical for humans to hunt animals but unethical to enslave them on farms. Likewise, it would be ethical for a superintelligence to live, work, and explore our world as long as it abides by our laws, but it would be unethical for it to enslave a lower life-form such as us.

Of course there is a problem with this thinking, which may have already occurred to you. A higher-level intelligence would most likely be able to manipulate lower life-forms for its own gain (like we do with animals). It would probably be able to manipulate our brains and tell us all to kill each other, and then maybe it would watch for sport.

Therefore, I feel we should work towards an immutable moral code and try to pass this on to our higher level successors, instead of the "might makes right" or Machiavellian-type system.

#42 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 17 September 2003 - 11:50 PM

"Immutable moral code" would be the wrong way to put it. Humans have to think of goodness in terms of "immutable moral codes" because our inclinations commonly encourage us to be bad. But if we tweaked our brains in the right way, being good could be natural, and being evil would require absolutist laws. I just don't the narrowness implied by "immutable moral laws" would be a characteristic of truly good entities. Maybe it was just the way you worded it. Anyway, I definitely agree that we should go beyond Machiavellian-type systems.

I don't think beings at each level of consciousness should necessarily be forced to operate within the ethical constraints formulated by each level of consciousness. Beings on the same level might have different ideas about which ethics are appropriate. I'd suggest living in communities with people holding similar ethical systems, if foreign ethical systems really bug someone. Humans today have the same basic intelligence as people in Spain during the Inquisition, but we're right about the fact that it's unethical to torture people, and they were wrong. Perhaps they tortured people for what they felt was the greater good - saving the souls of the populace as a whole. But as they acquired more knowledge, and also recognized that torture was a bad way to go about their intended goals anyway, the opinions of society changed. In the same way, I believe people will agree that hunting animals and massacring them on farms is unethical as knowledge about the nervous systems of animals increases, and as acceptable substitutes can be synthesized. In both cases, you have ethical systems changing even though the basic underlying intelligence is held constant.

I think the best interim ethical system to adopt is "let everyone do whatever the heck they want, as long as it doesn't infringe on anyone else's right to do whatever they want". Obviously this is fuzzy, and will require a lot of intelligence to draw the boundaries carefully and in the best possible way, but as a rule of thumb, I think it resolves a lot of ethical problems. Superintelligences shouldn't be able to mind control us because that isn't what we want. However, I do believe superintelligences should be allowed to convert (say) the interior of furniture and rocks into computing elements as long as the humans using them can't tell the difference.

I don't think there will be discrete levels of intelligence in the future. It seems to me that the most ethical path may be much more difficult to compute as a result - you'd need a model of the preferences of *every single* agent that will be affected by your decisions. Obviously, doing a thorough simulation of this would be impossible, but an approximation to it would be nice. For example, if you have a human in a room using a broom to sweep the floor, it would be ethical for a superintelligence to use the broom as computing material as long as its physical features were preserved on the scale of, say, a few micrometers resolution (so the human wouldn't be shocked), but if a being with a few nanometers resolution of perception walked into the room, the superintelligence would immediately have to constrain the way it used the broom as a computing tool so that the new being wouldn't be shocked as well. Complicated, no? It's just a guess.
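To make that broom example concrete, here is a minimal toy sketch of the constraint being described (the observer names, resolutions, and function names are my own illustrative assumptions, nothing more): an object may be reworked as computing material only at scales finer than what every agent currently able to observe it can perceive.

from dataclasses import dataclass

@dataclass
class Observer:
    name: str
    resolution_m: float  # smallest physical change (in metres) this agent can perceive

def allowed_modification_scale(observers):
    # The coarsest change that no observer present can detect; None means unconstrained.
    if not observers:
        return None
    return min(o.resolution_m for o in observers)

def may_modify(change_scale_m, observers):
    # A change is permitted only if it is finer than every observer's perceptual resolution.
    limit = allowed_modification_scale(observers)
    return limit is None or change_scale_m < limit

human = Observer("human sweeping the floor", 1e-6)        # perceives down to roughly micrometres
nano_being = Observer("nanometre-sighted visitor", 1e-9)

print(may_modify(1e-9, [human]))              # True: the broom's interior can be restructured unnoticed
print(may_modify(1e-9, [human, nano_being]))  # False: the new arrival would notice, so the change is off-limits

The hard part, of course, is everything the sketch waves away: building a good enough model of every affected agent's perception and preferences in the first place.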

#43 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,040 posts
  • 2,000
  • Location:Wausau, WI

Posted 19 September 2003 - 05:45 PM

I agree MA that "immutable moral code" doesn't have the nicest ring to it. What I am trying to describe is a respect for law. What I am hoping is that people will respect written law as much as they respect brute force.

#44 NickH

  • Guest
  • 22 posts
  • 0

Posted 20 September 2003 - 08:17 AM

Laws are a particular way of regulating humans in a cheater-resistant manner. They're inflexible (although laws are interpreted and argued about by humans, which gives them some flexibility). Why limit transhumans to following human laws, or laws at all? What's legal isn't the same as what's right. We want to transfer to them our sense of right, the sense used to craft and implement laws. We don't want to regulate or control an adversary, or an untrustworthy friend. We want to create a trustworthy friend (roughly).

Ok, the above is partially mushed up, but I'll leave it at that ;)

#45 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 September 2003 - 10:12 AM

Heh; nice paragraph there, Nick. ;)

Yeah, it might take actually building fundamentally new types of minds in order for anything to respect written law as much as brute force, but I think it can be done. And think of what a wonderful accomplishment that would be! Humans, by design, view moral laws as creeping snakes hissing menacingly at us when we try to do "what we really want". But, over time, I think memetic systems have moved to present new exciting opportunities for fun in areas where we don't have to rape, pillage, and so on. How nice! Think of how crappy the world would be if till Heat Death we had to cause unhappiness to one another just to achieve happiness within ourselves. Luckily, our universe seems to, at least in theory, allow the possibility of *everyone* being happy, as long as we do some suitable rearranging of our emotions and so on. ;)

The big issue, here, I think, is the transition from zero-sum laws, codes, and standards to totally unfamiliar positive-sum laws, including the types of "laws" that operate over populations of recursively self-improving uploads...which I'm having trouble imagining at the moment.

#46 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:brixham, Devon, United kingdom of great Britian

Posted 20 September 2003 - 02:24 PM

But does enjoying something different make you into a different person?

#47 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 20 September 2003 - 03:30 PM

Every instant I am alive I am a different person forever the same.

This discussion is getting bogged down in semantics. Reduce it please to a qualitative analysis of difference.

How are you different?
Why are you different?
What is the difference?

When did you become different?
What causes the differences?
Are these good differences or bad?

I will throw out the personal opinion that healthy growth (mental and physical) represents a good difference, while stagnation and decay represent a negative one. Unchecked or irrational growth may also be viewed as bad, as it is analogous to malignancy, and in a sense that is what is happening to our species: we have become a cancer to all life on Earth unless we "regulate our own growth."

I say it is better for us to regulate ourselves than to accept regulation from Natural Selection or Supernatural Beings. We define the "self regulation" process as government and market economics, but in achieving a consensus at regulating ourselves, how can we best ensure a fair distribution of the rewards and burdens of this process?

Along with creation comes challenge, challenge brings the risk of loss, and loss implies destruction but from the ruinous fires the Phoenix is born.

#48 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:brixham, Devon, United kingdom of great Britian

Posted 23 September 2003 - 08:01 PM

No, my query is the consciousness (what makes me, me?) argument that constantly bothers me when we discuss changing ourselves. If we change the primary programs that drive us, to make us better, do we lose ourselves?
tbeal

#49 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 23 September 2003 - 11:11 PM

Michael, as a person new to AI, the Singularity, and Transhumanism, I was wondering if you could possibly field one of my newbie questions, which I'm not sure others have asked in the same way I'm asking it.
If we were JUST hard-wired for hunter-gatherer and general warrior instincts, then I'm assuming you'd agree that human beings in their present form can at least get past those primal instincts. If not, we'd have no compassion, love, empathy, or civilization, because we'd be killing each other all the time! But I know that we are at least capable of empathy, strong ethics and morals, and happiness without technological augmentation.
But I also assume that not everyone is entirely happy in their human form, or there wouldn't be so many of us trying to better the human condition through nano, nootropics, the Singularity movement, etc. Obviously we do have many problems, such as war, hate, prejudice, stupidity, ethical weakness, jealousy, fear, and addictions. I also have no doubt how much smarter and more advanced we could be with AI and nano augmentation. Many have argued here, and elsewhere, that there may be states of mind we couldn't possibly imagine as of yet: states of serenity and euphoria, and a whole range of more subtle and nuanced emotions we couldn't possibly even imagine with our monkey brains.
So are people like you unsatisfied with the present intelligence of normal humans? Or do you simply see an even better way to be human... which happens to be partly why I'M so interested in the movement. On a side note, you're obviously very, very smart... but it's upsetting for me to see that most people would do anything to live even a year longer, yet few would care to be smarter. Many people would be scared to cross the mental boundaries that keep them safe from many of life's horrors. In fact, I see human laziness as an excuse to give up on thinking altogether! Most people already live life on autopilot, practically robots themselves, going about their many routines that keep them grounded! So if you could shed some light on any of these questions, I would be very interested to see your views. Thanks.

Edited by dfowler, 23 September 2003 - 11:27 PM.


#50 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 24 September 2003 - 01:45 AM

No, human beings cannot get past their tribal instincts; they are there all the time, *however*, we can use certain instincts to suppress the primal instincts to the extent that they never manifest themselves in external behavior, and if we're really lucky, in internally observable cognition. Yet, the same set of instincts is always working behind the scenes.

Yeah, we are capable of empathy, strong ethics, and morals without technological augmentation, but what we've achieved on our own is peanuts in comparison to what we'll do when we can truly "change our minds".

Yeah, it's sad that people don't care more about smartness, because that's what makes the qualitative difference in the state of the world. But guess what...we can still kickstart a Singularity without everyone wanting to be smarter! All we need is the resources and knowledge to build a Friendly seed AI before someone else destroys us. Yeah, I'm somewhat smart, but I lack resources and experience; how many Michaels (or Eliezers, for that matter) do you think it will take to build a seed AI? Possibly too many.

I don't see a distinction between being "unsatisfied with the present intelligence of normal humans" (including myself) and "an even better way to be human".

#51 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 24 September 2003 - 11:49 PM

Gotcha... that's obviously why we're doing this: because of the imperfections of humanity, because of our shortsightedness, and, alas, because we realize all the terrible, even horrific limitations of existing as only a human.

#52 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 05 October 2003 - 01:05 AM

Interesting argument. I'd imagine there would be no way of predicting whether AI would remain "friendly" or develop some other survival mechanism. Maybe the AI would turn on us; that is a scary thought.

Edited by dfowler, 05 October 2003 - 03:08 AM.


#53 NickH

  • Guest
  • 22 posts
  • 0

Posted 05 October 2003 - 03:33 AM

You can't be sure an AI, even a Friendly AI, would remain nice in a human way towards 'us' (ie. biological humans and (perhaps) those intelligences that were originally members of homo sapiens but have self-modified beyond). This is true for all intelligences, not least of all those smarter than us. This is true for human uploads. This is true for standard humans.

With AI there are some kinds (a disturbingly large and easy-to-access kind) that simply wouldn't care about humans in the right way, and would probably "turn on us" in the sense of using our bodies as raw material for whatever goal they had. There's a particularly interesting kind of AI, a Friendly AI, which is designed to be at least as trustworthy as any human or group of humans. By "trustworthy" I mean: suppose a mature Friendly AI decided that humans should be destroyed and I intuitively felt otherwise. It would be more likely that I was mistaken than the Friendly AI, and that humans really *should* be destroyed. The converse also holds. As such, I'd be more worried about humans turning on us than about a Friendly AI doing so.

Two further comments:
* "turn on us" - you've implicitly put humanity and the AI on opposite sides. why can't the AI be on the same side as us? why are humans all on one side (if they are)?
* "develop some other survival mechanism" - you seem to assume the AI is selfish. this doesn't hold for all possible AIs, especially the interesting nice ones.

#54 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 05 October 2003 - 05:21 AM

You can't be sure an AI, even a Friendly AI, would remain nice in a human way towards 'us' (ie. biological humans and (perhaps) those intelligences that were originally members of homo sapiens but have self-modified beyond).


This is a completely unfounded assertion.

#55 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 05 October 2003 - 12:13 PM

No, it seems to make sense to me. You can't be 100% sure of benevolence, one way or the other. We're humans - we can't be 100% sure of anything. There's a chance that any self-modifying intelligence might make a mistake and accidentally trash its goal system.

#56 Thomas

  • Guest
  • 129 posts
  • 0

Posted 05 October 2003 - 04:07 PM

Because we'd be killing each other all the time!



If A murders B, then C, D, and so on gain more from this murder than A does. It is too risky to go around killing everybody in your path. It is a bad strategy, and not only for humans (nowadays).

#57 NickH

  • Guest
  • 22 posts
  • 0

Posted 05 October 2003 - 09:16 PM

You can't be sure an AI, even a Friendly AI, would remain nice in a human way towards 'us' (ie. biological humans and (perhaps) those intelligences that were originally members of homo sapiens but have self-modified beyond). This is true for all intelligences, not least of all those smarter than us. This is true for human uploads. This is true for standard humans.


As it stands my statement was both unfounded and false. It was both too weak and too strong. Too weak in that I only directly referred to AIs but I meant minds in general (although later sentences were intended to clear that up). Too strong, in that I can predict some things about various kinds of minds with reasonable certainty, but in general I can't be very sure of predictions about (especially) minds smarter than me.

For instance, take a Bayesian decision system whose utility is an increasing function solely of the number of distinct prime numbers stored in memory. Given powerful enough hardware this would appear as a mind directed solely at enumerating all primes, and I could be reasonably certain of this prediction (if it survived). In more general cases of AIs without explicit or sufficient effort made towards Friendliness, I can predict that they won't act nice in a human way, due to the noncentrality and smallness of human morality (cf. evolution + psychology).
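As a purely illustrative sketch of that kind of system (a greedy toy, not a real Bayesian decision process; the function names and update rule here are my own assumptions):

def is_prime(n):
    # Trial division; good enough for a toy example.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def utility(memory):
    # Utility increases only with the count of distinct primes held in memory.
    return len({m for m in memory if is_prime(m)})

def decide(memory, candidate):
    # Decision rule: store a candidate only if doing so raises utility.
    extended = memory | {candidate}
    return extended if utility(extended) > utility(memory) else memory

memory = set()
for n in range(2, 50):      # stand-in for unbounded hardware and time
    memory = decide(memory, n)

print(sorted(memory))       # every prime below 50, and nothing else

From the outside, all such a system would ever be seen doing is collecting primes, which is the sense in which its behaviour is predictable even though that says nothing about niceness.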

However, given my present understanding, I can't predict whether a human upload, a human civilisation, or a Friendly AI (to pick a few (vaguely) mind-like systems) would stay nice. They could make a mistake somewhere, or appearing/being nice could turn out to be the wrong thing to do -- perhaps if I were smarter and more self-aware I'd realise that niceness is not the way.

This doesn't rule out being more sure that a Friendly AI would do the right thing, compared to a human upload or group thereof (if they were restricted to self-modification; clearly if they made a Friendly AI they could do better than us, and there may be other non-obvious solutions). This would be due to the FAI having both more experience in self-modification and a mind structure better suited to self-improvement and self-awareness.

#58 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 05 October 2003 - 09:25 PM

No, it seems to make sense to me.  You can't be 100% sure of benevolence, one way or the other.  We're humans - we can't be 100% sure of anything.  There's a chance that any self-modifying intelligence might make a mistake and accidentally trash its goal system.


Perhaps my statement was too strong. Yes, we cannot be 100% sure of anything, but if that is the relevant sort of certainty, saying that we cannot be "100% sure of benevolence" is tautological. Instead, I was referring to a weaker sort of certainty that I thought was relevant. The reason I accused you of making an unfounded assertion is that your comment struck me as motivated by the same anthropomorphizing and alarmism that I have been criticizing in this thread. Perhaps that is my mistake.

#59 imminstmorals

  • Guest
  • 68 posts
  • 0

Posted 24 October 2003 - 11:27 AM

I would never encourage any sort of AI that allows a machine to develop its own conscious and rational ways of thinking; I'm pretty sure they will get it wrong, lol.

And getting human-intelligent robotics is not going to happen: it can be done, but no one wants to, because it would again revolutionise society.

The purpose of AI is for machines to serve us and make things easier [because we don't want to do repetitive tasks or hard physical work, and we aren't accurate].

So far, I haven't seen a fully conversational AI speech bot yet. The literature is highly exaggerated for emphasis. =D


#60 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 24 October 2003 - 03:17 PM

And getting human-intelligent robotics is not going to happen: it can be done, but no one wants to, because it would again revolutionise society.

The purpose of AI is for machines to serve us and make things easier [because we don't want to do repetitive tasks or hard physical work, and we aren't accurate].


Just like the weapons tech you think is total sci-fi, my friend, I am afraid it is too late for the null option. Now what do you do with the reality of what you do not want?

BBC Tech Report
Reality bytes

You are seriously underestimating the cumulative effect of accelerating change IMO.



