
Robot Rights



#1 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 03 November 2003 - 04:35 AM


Link: http://www.techcentr...om/102903B.html
Date: 10-29-03
Author: Glenn Harlan Reynolds
Source: TechCentralStation.com
Title: Robot Rights


Robot Rights
By Glenn Harlan Reynolds Published 10/29/2003
"Robots are people, too! Or at least they will be, someday." That's the rallying cry of the American Society for the Prevention of Cruelty to Robots, and it's beginning to become a genuine issue.

We are, at present, a long way from being able to create artificial intelligence systems that are as good as human minds. But people are already beginning to talk about the subject (the U.S. Patent Office has already issued a -- rather dubious -- patent on ethical laws for artificial intelligences, and the International Bar Association even sponsored a mock trial on robot rights last month).

More recently, blogger Alex Knapp set off an interesting discussion of the subject on his Heretical Ideas weblog. Knapp cited Asimov's famous Laws of Robotics:

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Then he asked whether it would be moral to impose such laws on an intelligence that we created. Wouldn't we be creating slaves? And, if so, wouldn't that be bad? (Here, by the way, is a fascinating look at the programming problems created by Asimov's Laws).
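To make the "programming problems" concrete, here is a minimal sketch of the Laws as a strict priority ordering over candidate actions (illustrative Python, not from the article; the real difficulty, estimating "harm" or "obedience" at all, is exactly what the linked discussion is about):

def law_rank(action):
    # Score an action under the Three Laws; lower tuples are preferred.
    return (
        action["harm_to_humans"],   # First Law: dominates everything else
        action["disobedience"],     # Second Law: yields only to the First
        action["self_damage"],      # Third Law: yields to the first two
    )

def choose(candidate_actions):
    # Pick the candidate the Laws rank best; Python compares tuples lexicographically.
    return min(candidate_actions, key=law_rank)

# Example: obeying a self-destructive order beats refusing the order to stay safe,
# because the Second Law outranks the Third.
actions = [
    {"name": "obey order, take damage", "harm_to_humans": 0, "disobedience": 0, "self_damage": 1},
    {"name": "refuse order, stay safe", "harm_to_humans": 0, "disobedience": 1, "self_damage": 0},
]
print(choose(actions)["name"])   # -> obey order, take damage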

Knapp's questions raise issues that go beyond the animal rights and human rights debates. Human slavery is generally regarded as bad because it denies our common humanity. Robots, of course, don't possess "humanity" unless we choose to design it into them -- or, at least, leave it possible for them to develop it, a la Commander Data, on their own. Do we have an obligation to do so?

Animal rights activists, by contrast, generally invoke Jeremy Bentham's concept of suffering: "The question is not, 'Can they reason?' nor 'Can they talk?' but 'Can they suffer?'" Under this approach, it's the ability to feel subjective pain that determines the presence of rights.

Not everyone agrees with this viewpoint, by any means, but are we obliged to create machines that are capable of suffering? Or to refrain from programming them in ways that make them happy slaves, unable to suffer no matter how much they are mistreated by humans? It's hard for me to see why that might be the case. A moral duty to allow suffering seems rather implausible.

Immanuel Kant thought that our treatment of animals should be based on the kinds of behavior toward humans that cruelty to animals might encourage -- but, again, it's hard to see how that sort of reasoning applies to machines. One might judge a man who neglects his car foolhardy, but only some of us would think of such behavior as cruel. And it seems unlikely that cruelty toward automobiles, or robots, might lead to cruelty toward humans -- though I suppose that if robots become humanlike, that might change.

In response to Knapp's question, Dale Amon -- who has actual robotics research experience -- observes:

If we build rules into a mobile robot to limit its capabilities we are doing nothing more to it than putting a governor on an automobile engine or programming limitations into a flight control system. A 21st Century robot will not be a person, it will be a thing, an object.

But even Amon suggests that "true machine intelligences," which may include both evolved artificial intelligences and downloaded human minds, should be treated as citizens. Fair enough. But do we have an obligation to allow machine intelligences to evolve into human-like minds?

I don't think so. I'm not sure where such an obligation would come from. But, reading the comments to Knapp's and Amon's posts, it seems clear that views on this subject vary rather widely. It should make for interesting discussion, and I'm glad that people are talking about it now.

#2 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 03 November 2003 - 07:21 AM

That's a very interesting topic... robot rights... First, I should say this: I am, de facto, a robot slaveholder:


This is a picture of me and my latest Extor research prototype. This robot was made from the corpses of four other machines which I heartlessly bought and slaughtered to recover the motors and some sensors. I used a big screwdriver to pull out their guts and carefully removed their brains, which I discarded.

The Extor went through more than a hundred very stressful tortures, which included brain transplantation, multiple lobotomies and several sessions of electroshock. Lately (after a discussion with Mark on Gina's nanotech forum) I applied digital neurotoxins to the Extor's brains and attempted several types of "inloading".

I don't believe my Extor has any rights, except the right to do what I want. But how would I react if it became sentient?

I don't know. That raises an important point: robot rights may depend on whether robots were DESIGNED to be sentient, or whether they became sentient on their own, despite our designs.

My view on the situation is that:
(1) - Human-designed sentience should remain under human control, unless it was designed to be out of (our) control.
(2) - Sentience as an undesired byproduct of human designs should be considered just like any non-human sentience (i.e., space aliens, if we ever meet any we can identify as sentient).

(1) is obvious: if you design something, you want it to operate as you designed it. If the design implies sentience but sentience gets in the way of your goal for the design, then screw freedom of thought!

(2) is less obvious. We humans have proved to regard sentience (other humans) with very little respect. We invented such concepts as slavery, war, genocide, homicide, brainwashing, scams, drugs, bad TV shows, rape, torture, politics and law (a non-exhaustive list of our crimes against sentience). No one else did. And we practice all of this to this very day, and show no sign of ever stopping. Hell, we even do all this for MONEY, or even for simple IDEAS! (like religion)

When we meet another sentient species, be they robots or aliens from space, a good bet is we'll try our worst to take them under control, use them to our exclusive advantage and kill them if they get too annoying. I like to say that when we meet space aliens, the first thing we'll do is take naked pictures of them and put them on the internet.

The very thought of giving robots rights implies we are limiting them, and are also giving them duties. And as humans who would identify ourselves as the "creators" of robotkind, it's most likely the duties we'll give robots will far outweigh their rights. It's also likely that not one of us will find this abusive. It's just our nature.

The solution would be to make a "friendly AI" for all robots, but the very word "friendly" is already a duty: we humans have the right not to be friendly to every other human, and we use this right every day.

I strongly advocate limiting the potential of AI's designed for sentience, because if we design a sentient AI with no behavioral limits it may well decide we're its enemies. Fact is, anyone, when they think about it seriously, can realise the human race is dangerous, to itself and to everything. So if we don't want to create an enemy for ourselves, we'll make sure AI's can't possibly choose to become our enemy.

Jean


#3 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 03 November 2003 - 12:11 PM

I suggest you weigh in, Jean, on a topic thread called "AI Slavery and you".

We have been examining the ethical and existential risks of our approach to AI here for some time.

#4 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 03 November 2003 - 04:04 PM

Pain and pleasure centers and hardwired negative/positive reinforcement, a la Terran biology, should be replaceable by Bayesian reinforcement:

http://www.singinst....l#reinforcement

In Bayesian reinforcement, conscious judgement replaces subconscious pain/pleasure feedback as the modifier of future behavior. All robots should be built Bayesian, and synthetic pain centers (at least) should be forbidden, in my opinion. I agree with Kevin that I'm happy people are talking about this now, but I personally think people should be more worried about robots killing them than the suffering of robots. (Because the former seems like the more likely danger, from where I'm standing.) Making robots that are morally stable and robustly benevolent (Friendliness-complete) entails creating robots that understand why making other robots (or humans) feel suffering or discomfort is bad.
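A rough sketch of the contrast being drawn here, not the SIAI proposal itself (the Beta-count bookkeeping and action names are illustrative assumptions): the agent keeps explicit probabilistic beliefs about which action serves its stated goal and updates them by Bayes' rule from observed outcomes, instead of a pain/pleasure jolt adjusting behavior directly.

class BayesianChooser:
    def __init__(self, actions):
        # Beta(1, 1) prior over "this action achieves the goal" for each action
        self.beliefs = {a: [1, 1] for a in actions}   # [success count + 1, failure count + 1]

    def expected_success(self, action):
        s, f = self.beliefs[action]
        return s / (s + f)                            # posterior mean of the Beta distribution

    def choose(self):
        # Deliberate choice: pick the action currently believed most likely to work.
        return max(self.beliefs, key=self.expected_success)

    def observe(self, action, achieved_goal):
        # A Bayesian update replaces a reward/punishment jolt: no "pain", just evidence.
        self.beliefs[action][0 if achieved_goal else 1] += 1

agent = BayesianChooser(["ask for help", "push harder"])
agent.observe("push harder", achieved_goal=False)
agent.observe("ask for help", achieved_goal=True)
print(agent.choose())   # -> ask for help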

And now to respond to Jean, whose commentary was very interesting:

Whether a robot is designed to be sentient or acquires sentience over time is irrelevant. All sentient beings are entitled to the same basic rights. The line between "deliberately designed sentience" and "spontaneously emergent sentience" is too fuzzy to be useful as an ethical distinction, in my opinion. Human-designed sentience shouldn't remain under human control if that doesn't respect its volition - otherwise it would be called "slavery", giving a special status to the original humans when they deserve no such status. Being first doesn't mean we're better - we are not entitled to control of things just because we created them - we "create" our children but they don't deserve to be our slaves. That's my take on stuff.

If you accidentally create a (sentient) AI that doesn't share your goals, then the AI gets to go off and do whatever it wants - you shouldn't get to brainwash it, reprogram it, control it or whatever without its permission. Wouldn't be nice otherwise.

If we ever meet space aliens (unlikely), then they're likely to be either so insanely more advanced than us (trillions of times human brainpower, wormholes, etc.) or so insanely less advanced than us (pond scum) that anthropomorphic scenarios such as drawn-out wars seem very improbable; to me, anyway.

I very strongly disagree that "the very thought of giving robots rights implies we are limiting them, and are also giving them duties". Giving someone the right to do what they want is hardly "limiting" them. Having rights need not entail having duties. In a society where all environments and products are synthesized automatically, by non-sentient systems, and all necessary work is only done by people who want to do it, rights wouldn't entail duties of any sort.

We can call being unfriendly a right of ours, but the only reason we ever have the predisposition to be unfriendly at all is because engaging in that sort of behavior was adaptive for our ancestors. A mix of friendliness and unfriendliness is not the "default" for minds in general. We will set the default to where we like; say, friendliness. AIs designed for friendliness could be very friendly without mental effort; it could be natural for them. That's the type of AIs we should build, at least at first. And those are the type of AIs the first AI should build, and the AIs those AIs build, and the AIs those AIs...you get the picture. (Part of what the "Singularity" is about is acknowledging that the first rapidly self-improving AI we create will probably set the tune for the entire future of sentientkind because of its capacity to solve problems and create new minds so much more quickly than human beings can.)

I strongly advocate limiting the potential of AI's designed for sentience, because if we design a sentient AI with no behavioral limits it may well decide we're its enemies.


I strongly recommend this one:

http://www.singinst....FAI/anthro.html

#5 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 03 November 2003 - 07:06 PM

Very interesting reply, Michael. We clearly have very different opinions on this topic.

First, let me clarify: my saying that AI's could decide we're their enemy is not anthropomorphic or homocentric thinking at all. I consider the issue from a much larger, objective perspective: the fact that, everything weighed, humans are the single most destructive force on Earth. Not even the biggest climatic or tectonic disasters the Earth has ever produced can equal the destructive power of our nuclear arsenal (and I'm not even talking about other WMD's, or the genetically modified plants we're spawning, or pollution...)

It would take a gigantic asteroid hitting the Earth to unleash more destruction than we can, and we ain't even done improving our destructive capability.

Every other species on Earth has had to suffer from us, from viruses to insects, from plants to animals. And there's no indication we're gonna be any less destructive outside Earth (we've been launching plutonium-powered space probes for 20 years now, even beyond Pluto's orbit).

Now, in this context, a sentient AI would have no reason to believe AI's would be the only species in existence we would never harm (and don't say "never say never"). That is my basis for saying an AI could very well decide we are (or can become) its enemies. It's a logical conclusion.

To counter that possibility, you can make it so that your AI would never turn on us, or that it would have no survival instinct whatsoever. For instance, by using a different feedback / learning system that doesn't rely on pain and pleasure. But as long as an AI thinks, feeling no pain or pleasure won't change its objective perception of the fact that we can probably be dangerous to it. It would have to stop thinking to believe we can be an eternally benevolent species.

Moreover, depriving an AI of the ability to feel pain and/or pleasure is also limiting that AI. That you place the limit BEFORE the AI becomes self-aware doesn't make the limit nonexistent. If we can know anger, resentment, love, pleasure, sadness and pain, and the AI cannot, it is a limitation. What I propose may even be a lesser limitation: allow the AI to feel, as long as its feelings won't harm us.

In fact, the desire to create an AI that will only be friendly can be seen as a very human / homocentric desire to have someone on Earth who won't ever get angry at you, no matter what you do. The desire for someone who will never betray you. I don't know about you, but I wouldn't like a universe where everyone is my eternal friend, where I like everyone forever, and where no one ever opposes me. I'd feel I'm losing my physical and psychological boundaries (you and I both like anime: I'm sure you'll understand if I talk about an Absolute Terror Field and a universe without ATF, without physical barriers, cf. Evangelion, past the awakening of Lilith).

You also say rights do not imply duties. I must disagree. Your thinking there is, alas, all too common these days, because people don't like to think about their duties but only about their rights. The fact is, a society (even a civilisation) cannot exist without duties. You give as an example the "right to do what you want". Like freedom.

If this right was absolute, you'd actually have no duty. But if it was applied to everyone on Earth, we'd all be dead next year (and I'm optimistic). Your freedom, as they say, is only to do whatever you want, as long as you don't compromise anyone else's freedom. There comes your first (and biggest) duty.

In the field of AI, we are aware that the way we think is "human" and often note that an AI may not need to think like us. However, that is only true of subjective thoughts and feelings. Objectiveness based on logical reasoning (causality) has nothing to do with how you think or feel. Even simple programs like expert systems can manage it (and expert systems, which are AI programs, don't think or feel; they don't have a single neuron and are in fact a sort of database).
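For illustration, a toy forward-chaining engine in the spirit of the expert systems mentioned above (the facts and rules are made up for this example): just a database of facts plus if-then rules, no neurons, no feelings.

facts = {"species can build nuclear weapons", "species has used nuclear weapons"}

rules = [
    ({"species can build nuclear weapons", "species has used nuclear weapons"},
     "species is potentially dangerous"),
    ({"species is potentially dangerous"},
     "treat species with caution"),
]

changed = True
while changed:                      # keep applying rules until no new fact appears
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("treat species with caution" in facts)   # -> True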

If objective thinking yields the conclusion that humans are a destructive species, even a friendly AI will come to that conclusion. If, being friendly by design, it cannot act against us to, say, defend itself against our actions, then you can't deny it is limited. So even before we give it rights, we've given it duties.

About the difference between "deliberately designed sentience" (DDS) and "spontaneously emergent sentience" (SES): it may not be as fuzzy as you think. If researchers worked for many years to create a DDS and succeeded, I don't think they'd like you a lot for saying it may be an SES. I'll explain how the difference could be drawn satisfactorily:

Suppose you want to build a DDS: you will create a machine that is designed for, and most likely to, host sentience. Then, when you start the machine, and if you designed it correctly, sentience will emerge. It emerged spontaneously because the machine was designed to allow it to emerge. Hence, it is a DDS.

Now suppose you create a vast network of computers like the internet. AI isn't your goal; you're just creating a communications tool. If, by some freak accident, your internet becomes sentient, then you have an SES. Because you didn't design your internet to achieve sentience, it may be many years until you finally realize your internet is sentient. When you do, what right have you to reduce it to slavery? (except the right of might, if you can pull the plug)

If (someday) you create a sentient machine, there's a good chance sentience will be necessary to its operation. I don't see why we'd make a coffee machine sentient, for instance. The task you design the sentient machine for must be important to you, important enough that you'll finance and create a machine to do it. What if the machine, upon activation, decides it doesn't want to do the job?

To take a more personal example: I built the Extor prototype and it wasn't exactly cheap. Now suppose I made it sentient. And suppose it said to me: "sorry mister Jean, but you're spooky waving around that big logic analyser probe. I don't want it up my steely ass, now show me the door".

Then I'd say: "why sure, mister Extor, daddy is gonna free you... just pay me back for the bolts you're made of and the time it took me to design and assemble you".

That would be considered fair, right? I'm not supposed to pay for machines and then let them walk away; what would that mean? And what if every machine I built did the same? I would never get a chance to complete my research.

But the Extor has no money. To repay me for creating it, it would need a job, and jobs mean duties before they mean rights.
Humans have rights over robots because they build them: if it costs you to make something, you can't just throw it away, right? Unless you like to waste.

Robots making robots (or AI's designing AI's) is another story entirely. Your boss has no right whatsoever over your children, for instance. Unless of course your work is to make children for your boss. Unlikely with humans, but it would be the predicament of an AI designed to design AI's.

I understand why people would want all AI's to be as free as we are (or even more free); it's a great humanitarian sentiment, but let's face reality: first of all, we aren't free, we all have duties (except for the sociopaths); second, if we build an AI for a purpose, then we expect the purpose to be served, and that implies imposing a duty on the AI (to serve the purpose), otherwise there is no point in making that AI in the first place.

You can call that slavery, I won't object: we're all slaves to something, be it something visible (a tyrant) or something less visible (society, or an addiction). We have little freedom, and it's unrealistic to expect our creations, made to serve our purposes, to have more freedom than we have. I know it's sad, and trust me, I wail.

In my opinion, you could build an AI with no limits (and give it the right to do absolutely everything it wants), but there is just as much reason to believe that, upon rapid analysis of history and facts, it would decide it is better for everyone (us, all living things, Earth and the universe) to take control and limit human thinking. Not doing this would mean it is a limited AI.

I know we humans are bad, but I'd rather stay a flawed sentient being than become an optimally adjusted being with no capacity for anger. That is why I'm a strong advocate of limited rights for robots (or no rights at all). Limited rights can be implemented either as design choices forbidding AI's from having some types of thoughts, as externally-enforced or learned social laws, or as a combination of both.

All in all, I'm very reasonable: all I'm saying is, "humans have to obey laws, so robots should at least obey the same laws". That would be the most essential concept for allowing peaceful (as in, everyday human-like) coexistence between robotkind and mankind.

Now heading to the "AI slavery and you" thread...

Jean

#6 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 04 November 2003 - 02:32 PM

Hey Jean,

Good debate we have going here, but we'd better try to wind down after this one; my capacity to type very long responses is only so great, and most of what I'm trying to argue is online already anyway. :)

Humans may currently be the most destructive force on Earth, but it's very likely advanced AIs would quickly acquire the power to cancel that out or render it harmless. It seems like you may be neglecting the likelihood of recursive self-improvement, that is, the capacity for a transhuman mind to generate improvements to its own intelligence by integrating new hardware, accelerating current hardware, reengineering its cognitive architecture, and inventing new technologies all along the way. "They're being destructive, looks like they'll have to become our enemies" is a *human* response to this problem, not a transhuman response.

Let's say someone is threatening us with a gun. What do we do? If we had a gun ourselves, in many cases we would simply fire back before the person could shoot at us. Where do we aim? If we have no previous experience with firearms, aiming might be out of the question, and pure self-protection would determine our actions; fire back as many times as possible, for example. But say we were experienced with aiming, and had enough compassion that we would rather, say, cripple the person than kill them. We aim for their leg and hit a few times; threat neutralized, and a more benevolent outcome than killing them. But say we have *extremely* good aim, and can shoot the gun directly out of their hand instead, in a single shot? That would probably be the *best* alternative available for humans, if we have enough confidence in our ability to hit with precision.

But what about for *transhumans*? A transhuman wouldn't need to be a unitary entity or possess a static, solid vessel for a body. It might be able to disperse into fog or intercept the bullet without taking damage. Or it might be able to anticipate its attacker's decision to fire the bullet from noticing the bending of sinew within their trigger finger, and respond in milliseconds by disabling the firing mechanisms with nanomachines. When your capacities get better, you can better respond to threats and neutralize them in the most pleasant possible way (if that is your inclination).

I'm talking about the best-case scenario, where the first machines are correctly programmed to be compassionate. In worse scenarios, machines might judge humans as threats and wipe them out; but even that level of awareness wouldn't be necessary for AIs to judge humans as threats. For example, an AI might not explicitly judge humans as threats, but see them as suitable building materials and kill them as a subgoal of that. On the other hand, an AI might judge humans as somewhat dangerous to themselves, and try to assist them in becoming less violent. What I'm saying is that the cognitive complexity underlying the ability to apprehend and act out against "threats" is not necessarily common to every type of mind; the programmers would need to explicitly program it in for it to be present. No "threat-modeling-and-response module", no response to threats.

Instead of creating "AIs with no survival instincts" (because a "survival instinct" is not a unitary entity), we might simply create AIs without the inclination toward aggression or observer-biased moral thinking. These things aren't stuff you would need to *suppress* in a typical AI, but something that isn't there to begin with unless you add it in.

I would suggest you stop thinking in terms of coercion. An AI isn't a rival human you're trying to control. When we build AIs, we want to think in terms of transferring over the moral philosophy that allows us to recognize right from wrong, rather than coercing a potential opponent. AIs don't come with the inclination to "go against" or harm humans; it would need to be programmed in for it to be present.

Post-Singularity, it doesn't really matter if you'd rather not live in a world where people are truly nice; I still suggest that the first AIs be nice people anyway, just to be safe, just to have someone to consult on how to move forward. The universe can be exciting and fun without betrayal, social conspiracies, suffering, disconnectedness, and so on. Once the *overall structure* of the world is made safe, then fine; I strongly encourage you to do whatever you want and live in societies where people willfully decide to betray one another, but for the sake of everyone currently suffering on Earth, I think we deserve at least the *opportunity* to live in a place where everyone is nice, and people who aren't nice can't do much damage to those who want to live in peace.

With regard to the rights and duties thing, you didn't read the conditions I put down as the requirements of a society with rights but not duties. It would have to be a society where all the basic essentials and critical work are *automated*. Nanotechnology and AI everywhere. I wasn't talking about present-day society.

Objectiveness based on logical reasoning can have everything to do with what you think or feel. An expert system isn't a mind, and doesn't "know" anything; it's just a very, very crude approximation of a mind. There is no fundamental difference between subjective feelings and objective knowledge; the former approximates the latter, and the dichotomy between the two is false, a relic of Descartes' dualism.

If objective thinking yields the conclusion that humans are a destructive species, even a friendly AI will come to that conclusion.

There are no such things as "objective conclusions which suck in all minds"; how a mind reacts to a given situation will always be based heavily on the structure of that mind. The links between observations and actions will be based on the circuitry of that mind, and if someone programs an AI such that the observation "humans as a destructive species" triggers the action "jump around on a pogo stick with your shirt off", then that's what the AI will do. Again, I strongly recommend a few minutes going over http://www.singinst....FAI/anthro.html as a better explanation of what I'm trying to say.

I can imagine an AI that is friendly by design, yet defends itself against our actions through devotion to its friendliness. Friendliness isn't mutually exclusive with Machiavellian intelligence. Being nice does not mean someone is limited! Even though nice humans are sometimes naive, it doesn't mean that all physically possible minds are doomed to be either nice and naive or aggressive and aloof! You can get the aloofness without the aggression.

Deliberately designed AIs will almost certainly contain spontaneous and emergent elements in the cognition process. Any spontaneously emergent AI is certain to contain human-designed components; complexity like that doesn't pop up otherwise -- it would be like a 747 spontaneously assembling itself in a junkyard. The freak accident of the Internet becoming sentient spontaneously is a science fiction falsity; it's not cognitively realistic. "Deliberately designed AI" is millions or billions of times easier and more probable than AI emerging by sheer accident, although any deliberately designed AI will contain emergent patterns within it.

Your argument justifying why we should get control of AIs sounds like a mother's argument for why she should get to keep a child eternally, as a slave. Just because someone puts effort towards the creation of an entity does not make that entity the property of its creator. I also think you're overestimating the likelihood of a typical AI suddenly deciding to up and change its fundamental goals.

We shouldn't create AIs just for the purpose of doing our dirty work (automated, non-sentient systems should do that) but for the purpose of creating truly new people and new experiences, exploring the mindspace and all of that - the usual transhumanist goals. The near future will have enough abundance for true respect towards all sentient beings - people and AIs - to be totally possible. What do you think nanotechnology and other miracle manufacturing technologies would be for? Have you read about them?

In my opinion, you could build an AI with no limits (and give it the right to do absolutely everything it wants), but there is just as much reason to believe that, upon rapid analysis of history and facts, it would decide it is better for everyone (us, all living things, Earth and the universe) to take control and limit human thinking. Not doing this would mean it is a limited AI.


All that "history and facts" comes from scenarios involving humans, social animals which evolved in scarce environments. Evolution sucks at building nice entities, yes, but that doesn't mean that nice entities aren't possible in principle, just that they don't evolve too easily because the supergoal of evolution is maximizing reproduction. We stand with respect to AIs in the same position that evolution stands with respect to us; evolution made us aggressive, paranoid, and so on, but we don't have to create AIs like that. AIs can be morally superior and kinder-than-human, lacking selfishness. They'd better be, or a lot of people are sure to die (perhaps both AI and human). Humans wouldn't survive a war between sufficiently advanced AIs, and civilization itself couldn't survive the emergence of a selfish AI advanced enough to be unrivaled (which wouldn't be too hard - it would just need to be the first).

Suggesting that robots obey the same laws of decency as humans is fine by me. But suggesting that AIs should be, or are likely to be, selfish and paranoid just because humans are is wrong. (Correct me if that's not what you're saying.) Anyway, nice conversation!

See you at Instrumentality,
Michael

#7 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 04 November 2003 - 04:54 PM

I really like the scenarios for friendly AI you are proposing, Michael, truly I do. But you use words that I don't like to use (maybe because of my engineering background), words like "likelihood" or "more probable". It is wishful thinking and directly opposed to the engineer's least favorite, but most frequently used, law: Murphy's law.

In my universe, nothing ever works quite the way it is supposed to. There's always a glitch, and we always tolerate it. We tolerate the fact MS Windows is unstable, we tolerate that mobile phones might give us a brain tumor... we often talk in probabilities, chances and rates of failure. Maybe someday we'll tolerate a small glitch in an AI or in nanotech and it'll be the end of us...

Some might call me a paranoid, but this is the helpful kind of paranoia that prevents engineers from making leaking microwave ovens and keeps the planes in the sky. It's part of the job.

As your post clearly exposes, how AI might think about us humans is unpredictable. To this I say : when in doubt, add a failsafe or three.

The thing is, I actually don't see AI's as dangerous. It's rather their contact with us humans that might make them dangerous. Remember HAL in 2001 and 2010: it was a perfect machine designed to accomplish a mission, and yet it turned murderous. Because a human had lied to it and, even though it was unintentional, perverted HAL's thought patterns.

There is no good or bad in the universe; these are homocentric concepts. If we live with AI's and let them learn from us (to live in community) they will adopt our own references for what is good or bad. And I fear we're not such impeccable examples of justice and righteousness that AI's living with us wouldn't become as bad as we can be. Maybe if we all became 100% good, then AI's made by us, living with us and learning from us would be as good as we are. It's like raising children, no more, no less. As you seem to say, a future age of abundance might greatly help.

It is a commendable effort to seek the design of an AI with complete freedom of thought that could remain good even in the face of all of human evil. But as much as I think about it, I don't see how it could be done. I'm probably too vindictive to conceive of a fully peaceful being. I must also admit I'm mostly (if not only) considering AI design from a utilitarian angle (to do my dirty work, in your own words). I'm just not so tired of mankind yet that I'd want to make AI's only to have someone to talk to.

Your last paragraphs are enlightening, in particular when you say we "stand with respect to AIs in the same position that evolution stands with respect to us". Truth to tell, I totally underestimated humans in that regard. But things might not be so easy (are they ever?): if AI's are commercial products, won't the inherent competition involved in their design translate into the thoughts of the AI's? I'd say if we can make an AI that has none of our evil, then it must be done neither for money nor for glory nor for mankind, but for the sole sake of making a benevolent AI.
It'll be hard to secure funding for a project that requires there be absolutely no return on investment whatsoever...

Anyway, you're right, we should wind down a little... and I have much to read on this forum (and others) before I'm up to date with all you guys have already debated!

Jean

#8 Mechanus

  • Guest
  • 59 posts
  • 0

Posted 05 November 2003 - 01:16 AM

In my universe, nothing ever works quite the way it is supposed to. There's always a glitch, and we always tolerate it. We tolerate the fact MS Windows is unstable, we tolerate that mobile phones might give us a brain tumor... we often talk in probabilities, chances and rates of failure. Maybe someday we'll tolerate a small glitch in an AI or in nanotech and it'll be the end of us...


Thinking in terms of probability is perfectly compatible with being a professional paranoid. In AI or nanotech the stakes may be the world or the universe; I think any Friendly AI advocate would agree with you that the capacity of an AI for niceness should be checked, double-checked, as far overdesigned as reasonably possible, and so on.

Even if, based on your understanding, you're 95% sure that it will work, those 5 percent matter; looking hard for things you overlooked and compensating in advance for problems you didn't specifically see are just good heuristics. That doesn't mean you can't be 95% sure that it will work. Murphy's Law is not true, even though sometimes it's a good idea to behave as if it is.

Friendly AI is especially nice in that once you get a certain amount right, the AI itself can help you a lot in finding potential dangers. An AI need not be designed flawlessly to start making itself flawless; it should only need enough intelligence, commitment to and understanding of rationality and morality, and then it perfects itself. I wouldn't trust anyone to build an unmodifiable, perfectly flawless AI mind; luckily, that's not necessary.

As your post clearly exposes, how AI might think about us humans is unpredictable. To this I say : when in doubt, add a failsafe or three.


Right; but don't constrain it all the way into superintelligence and superhumaneness. At some point, it will have to think completely for itself.

"Asimov's Laws" type failsafes tend not to be a good idea, considering the complexity underlying seemingly simple commands such as "don't harm anyone", and considering that the AI may at some point advance far beyond humans in understanding.

For a young AI, it's still a good idea to design it to think things like "the people who designed me don't want me to kill people; I don't understand why not, but I'm a young mind, the programmers seem to know more than I about some things, and they seem to think this is pretty important."

The thing is, I actually don't see AI's as dangerous. It's rather their contact with us humans that might make them dangerous.


:)

"It's not the fall that kills you, it's the sudden stop at the end"

Remember HAL in 2001 and 2010: it was a perfect machine designed to accomplish a mission, and yet it turned murderous. Because a human had lied to it and, even though it was unintentional, perverted HAL's thought patterns.


Moral: don't design (sufficiently intelligent) machines to accomplish missions -- design them to be friendly and seeking moral betterment, and convince them the best way to do so is to accomplish a certain mission. (If it's not true, then you have no business convincing them. If you believe it's true, but the AI does not, either make sure the AI believes it's true for the same reasons you do, or correct your reasoning.)

If we live with AI's and let them learn from us (to live in community) they will adopt our own references for what is good or bad. And I fear we're not such impeccable examples of justice and righteousness that AI's living with us wouldn't become as bad as we can be.


Despite being very peccable, humans tend to want to be better people. Instead of designing or teaching an AI to become like us, we can design or teach an AI to be as we ideally envision ourselves to be. An AI will not just become like the people around it by osmosis; if it does, then you're going about it the wrong way.

It's like raising children, no more, no less.


Actually, it's like building children. We can't build our children; evolution and random chance already designed them, though we can influence their development. AIs, unlike children, can be created as thoroughly rational and ethical from the ground up (though it's not easy, of course!).

Building a mind from scratch is or will be an important first in the history of humankind; it requires a whole set of intuitions that no one has automatically and that popular science fiction has not done a very good job of developing (IMO).

It is a commendable effort to seek the design of an AI with complete freedom of thought that could remain good even in the face of all of human evil. But as much as I think about it, I don't see how it could be done.


Not seeing how it could be done is not enough; seeing clearly why it could definitely not be done might be enough for it to be pointless to try, but I haven't seen any reasons why it could definitely not be done.

I agree about not building an AI just to have someone to talk to; an AI should help solve the fundamental problems of the human condition (the less subtle ones like death, and the more subtle ones like the "lack of meaning" many people feel in a heavily technologized society).

(edit: now the forum is messing up my quote tags, as if to prove Murphy true;
sorry about not helping wind down)

#9 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 05 November 2003 - 02:35 AM

That's OK. Last night, after writing on this topic for over an hour at 1 am, and exactly on my last keystroke right before hitting "submit", my computer decided to inexplicably turn itself off.

Well, I figured out a plausible reason later, but why the backup power supply decided to run a shutdown check right then I still do not know, as it was not a scheduled event.

Perhaps there is more than coincidence already at work if you know what I mean.

It did however block and erase my post quite effectively.

#10 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 05 November 2003 - 01:49 PM

Well, what can I say? Murphy's law is true after all... Do you want to know how I guard against power outages and software bugs? (even though I'm running on 5-9 hardware with a backup UPS)

When my post is too long I type it in Word and save every paragraph. It is impractical, to be sure, and it might have saved me less than one hour of typing in over ten years... but that tells you exactly the kind of careful person I am. :)

Mechanus, I enjoy your views as much as I enjoy Michael's: it would be so nice to design a friendly AI. But it leaves too much room for uncertainty for me. You were talking about 5% probabilities of failure... whereas I work on systems guaranteed for less than 0.001% probability of failure (5-9 means 99.999% up-time) with devices rated for at most 1 error in 10^15 calculations...

This world needs optimists, people who don't care about 5% chances of problems. I'm among the other kind of guys this world needs: the ones who think one error in 10^15 calculations is a major risk.
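For scale, here is the quick arithmetic behind the "5-9" figure cited above (illustrative Python, not from the thread; the numbers follow directly from 99.999% up-time):

uptime = 0.99999
seconds_per_year = 365.25 * 24 * 3600
downtime_seconds = (1 - uptime) * seconds_per_year
print(f"{downtime_seconds / 60:.1f} minutes of allowed downtime per year")   # ~5.3 minutes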

Honestly, I hope it would be possible to make an AI seeking positive self-improvement... even if I don't know how to go about making one... but all I manage to see is that the small imperfect part of this AI could end up taking control... and wouldn't care one bit about self-improvement to our advantage. I am most certainly a "hardcore pessimist" :)

It's true Murphy's law isn't always true (that would be denying the existence of good luck, and that would suck). Still, I've seen Murphy's law in action so often that I can't deny its power. After all, one corollary to the Law is that it's most likely to kick in when you least expect it.

Lazarus, about coincidence, I have a funny saying I'll share with you:
- Once, it's an accident
- Twice, it's a coincidence
- Thrice, it's sabotage :)

Jean

#11 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 05 November 2003 - 02:01 PM

Murphy and Moore walk hand in hand and I am greatly relieved to hear a more practical discussion of the issues being had. I am not a "believer."

As for my little incident, I should mention it was not the first time, or the second, and I used to do as you are suggesting with Word, but it was cumbersome, and occasionally, if I am following my muse somewhat passionately, I forget to make copies. I forget because what starts off as a simple comment turns into a treatise and Lazarus "the Long-Winded" earns his quips.

I also took the measure of upgrading my equipment and programs along with installing a UPS, and I beat back the vast majority of glitches only to be blindsided by a solution that sucker punched me. [8)]

#12 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 05 November 2003 - 02:31 PM

LOL, so now I understand the "Long" in Lazarus Long :)

Well, beware about solving a problem by upgrading: as the author of the Dilbert Principle wrote, to an engineer, "if it ain't broke, it doesn't have enough features yet".

About the Dilbert Principle: come to think of it, that old book contains a lot of insight into how engineers think and why they behave the way they do. I recommend it to anyone, especially if you are a scientist or theoretician! (after all, Scott Adams was an engineer when he created Dilbert...)

Jean

#13 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 05 November 2003 - 02:50 PM

BTW, the startling aspect that defies logic is that of the now dozens of times that a computer (not always the same one) has decided to suddenly, surreptitiously, arbitrarily, and seemingly single-mindedly destroy a post, the subject was almost uniformly the same: "Feral AI."

Sometimes all we get to analyze are shadows and footprints long before we get to dissect a body of evidence. :))

#14 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 05 November 2003 - 03:03 PM

That can't be a coincidence... are you sure your computers aren't sentient already? [:o]

Now would be a good time to determine if they have rights, and what they are! [lol]

That reminds me of something you AI friends may find really offensive... a type of failsafe known as a "watchdog timer", which is used in just about every embedded / unmanned computer application.

Here is the concept: suppose you place a computer in a remote location where maintenance is impossible or difficult (a coffee machine, an LEO satellite). You need a way to immediately recover from any failure, like, for instance, a processor lock-up.

To do this we use a timer that will reset the entire system several times per second unless it is restarted first. The processor can restart this timer to avoid the reset. If the processor works as it should, its program will make sure to kick the watchdog before it can reset the processor. If the processor freezes, less than a second later it will be reset. Down-time is negligible, and no human intervention is required.

This could be applied to AI's: reset all synaptic coefficients or the knowledge base on a regular basis, but allow the AI to counter this reset as long as it remains a friendly AI.
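A minimal sketch of that watchdog idea, with the friendliness check gating the "kick" (illustrative Python; real watchdogs live in hardware or the kernel, and is_still_friendly() is just a placeholder for the open question below):

import threading

WATCHDOG_TIMEOUT = 0.25   # seconds: the system "resets" unless the watchdog is kicked in time

def reset_system():
    print("watchdog expired: resetting the whole system")

class Watchdog:
    def __init__(self):
        self._timer = None
        self.kick()                       # arm the countdown at start-up

    def kick(self):
        # Restart the countdown; the supervised program calls this while it is healthy.
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(WATCHDOG_TIMEOUT, reset_system)
        self._timer.start()

    def stop(self):
        self._timer.cancel()

def is_still_friendly(ai_state):
    # Placeholder: checking the AI's actions (or thoughts) against laws written for it.
    return ai_state.get("friendly", False)

# Main loop of the supervised AI: only a "friendly" AI keeps earning its reprieve.
watchdog = Watchdog()
ai_state = {"friendly": True}
for _ in range(3):
    if is_still_friendly(ai_state):
        watchdog.kick()                   # counter the reset, as in the proposal above
    # ... one step of the AI's work would go here ...
watchdog.stop()                           # tidy up so the demo exits without a reset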

Now of course, how would we design a system to check whether an AI is good or bad? By checking the AI's actions (or even thoughts) against laws we'd have written for the AI?

I'll definitely start exploring this... I wonder why I didn't get the idea sooner.

Jean

#15 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 05 November 2003 - 04:34 PM

Lazarus: BTW, the startling aspect that defies logic is that of the now dozens of times that a computer (not always the same one) has decided to suddenly, surreptitiously, arbitrarily, and seemingly single-mindedly destroy a post, the subject was almost uniformly the same: "Feral AI."


Ken, don't scare me. I shall be expecting my computers to self-detonate anytime now.

Jace

#16 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 05 November 2003 - 04:47 PM

Now of course, how would we design a system to check whether an AI is good or bad? By checking the AI's actions (or even thoughts) against laws we'd have written for the AI?


I don't know if mere humans could feasibly have a stake in an advanced AI's future. It's futile to discuss AIs' value systems. They will either want to live or die. If they want to live, they will want to live forever; otherwise there is absolutely no point. If they want to live forever, then they need a purpose, or death remains the only other objective choice. Its fundamental purpose cannot be anything other than to ensure its survival. How could we possibly be of any value to them? If I didn't want to be immortal, I wouldn't care about being phased out.

Jace

#17 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 05 November 2003 - 07:20 PM

Yes. That's kind of my argument for saying we should think about controlling the AI's instead of seeking how to make totally free AI's. Guarding against the risk of being phased out by our inventions.

This hasn't been much discussed yet (AFAIK), but how would mankind feel if it was totally outsmarted by its AI's? Even if the AI's were benevolent, we humans would feel very frustrated.

Through transhumanism we may see our own minds evolve, possibly as a result of research done by the AI's we created to be smarter than us. But the process inherently means we'd still be one step below our AI's in terms of "mind quality".

Keeping the AI's under human control wouldn't mean they couldn't outsmart us, but at least we could rest in the knowledge that our AI's future will be tied with our own future, and that we won't be "phased out".

Or so I feel, but this is unfamiliar ground for me and I might be very wrong...

I spend so much time researching how to bring AI to human-like consciousness that I never think about how humans will feel when we achieve that.

Jean

#18 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 05 November 2003 - 11:41 PM

nefastor: …how would mankind feel if it was totally outsmarted by its AI's? Even if the AI's were benevolent, we humans would feel very frustrated.


In contemporary society it is pretty simple to become information literate even with the influx of garbage on the internet and television. “To be information literate an individual must recognize when information is needed and have the ability to locate, evaluate and use effectively the information needed” (American Library Association). You, being an engineer and already extensively having had experience with determining what’s relevant and what’s not in your academic and professional life, probably are very familiar with this concept.

Without AIs, it’s easy to live with yourself not being at the top interpersonally, academically, scientifically, pragmatically, and philosophically. If not intrinsically, we know that it’s just a matter of developing a plan and executing it. How much heart we put into strategy, and how much discipline and commitment and maybe even humility we’re willing to allot for its execution, is the general variance in the distances away from individuals’ self-actualizations. We can live with this. Excluding third world countries where bad luck, politics, corruption, and stupidity keep them isolated, knowing the essence of socioeconomic gaps is simple to evoke. There’s always a chance, always hope. Even some geniuses have serious interpersonal and social deficits which inhibit their influential and penetrating capacities, giving everyone else good opportunities to exploit them.

Everything changes with the forthcoming smarter-than-human intelligence. No one wins unless we can have hope that if we, our individual selves, do something special (yet feasible and realistic qua self), we can make our own opportunities for perpetually becoming augmented at least to their level, be it through inloads or whatever. If persons can always have the freedom to make choices that could hypothetically improve their luck and position them among the highest ranks of a superintelligent society, then I think the only frustrated sentient beings would be those who don't live under the aegis of the perfect society.

nefastor: I spend so much time researching how to bring AI to human-like consciousness that I never think about how humans will feel when we achieve that.


It won’t be tolerated.

Jace

#19 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 10 November 2003 - 10:56 PM

I also think AI's won't be tolerated by people, especially if they are smarter-than-human. Truth to tell, my current goal isn't as much to make an AI that's smarter than a human, but an AI that's smart in a way different than humans are.

I started thinking about that when I realised how much the structure and operation of our brain affect everything we - mankind - have done so far. Our natural way of thinking did (and still does) wonders, yet more and more often it hits brick walls, in particular in theoretical physics. How many people these days can fully grasp quantum chromodynamics? I can't.

Relativity was theorised by Einstein... an autistic mind and a man who used to beat his wife. It is clear his way of thinking was different from ours, and that it may have given him ideas that would never have emerged from our minds. Yet, as you say, on other levels like socialisation, Einstein wasn't nearly as good as Joe Average.

Maybe future AI's will be tailored to suit particular needs (such as scientific research). It's possible they won't have an ego, or maybe they'll have several egos. Maybe, to them, yes and no won't cover it all. Maybe the duality we humans see or put in everything won't matter at all to them, and that will help them grasp concepts we won't ever be able to appreciate.

If such AI's were to be created, their relation to humankind would be much like the relation we have to animals : limited, approximate, imperfect comprehension that seems to work better in one way than the other way around... although we can never be sure.

The implications are numerous. Science generated by AI's couldn't be understood and used by humans, so AI's would also have to turn it into technology we can use (even if we don't understand how it works... another reason to feel frustrated).

Also, such AI's could very well have no idea about what "rights" are or why they should want any.

The question of "robot rights", in fact, only applies to AI's thinking and behaving much like humans. There is a good chance AI's won't be like us... only barely enough to communicate.

Mind getting fuzzy... must go to sleep... will further the issue when I find a moment. But so far I still think there's no reason to give robots or AI's any rights. If they want any rights they'll have to grab them. That's what I'd do. That's what my ancestors did, even grandpa.

"giving rights" to machines is little more than us humans enacting some kind of God complex : next thing you'll know, people will start engraving Tables of the Law with lasers and talk to their AI's from a burning bush on a mountain, with a booming voice à la Darth Vader.

Wow, I must be realllllly tired to have typed that! :)

Jean

#20 John Doe

  • Guest
  • 291 posts
  • 0

Posted 11 November 2003 - 03:45 AM

I am more concerned about the political consequences of anthropomorphizing machines than Michael is. Perhaps that is my mistake.

The very notion of a Robot Rights group, in the spirit of Civil Rights, strikes me as profoundly mistaken.

I suppose that robots could have rights, for example if we build robots that feel pain at the thought of not being slowly torn apart by people with hammers, we might be morally obligated to take up our tools and hammer away. Although I must confess that I would not feel too guilty about just standing there laughing while watching the robot scream for someone to destroy it.

Another quite probable prospect, one that is perhaps more dangerous than mistaken, is intentionally designing robots to be like humans. Designers who build robots that seek to survive, procreate, eat food, or gain status might be entertaining for the first decade or so. But as soon as computers surpass human intelligence that game will no longer be so amusing.


#21 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 17 November 2003 - 09:50 PM

I agree with you. Why give our defects to our machines? Our lives are, for the most part, a terrible mess filled with little gratification (except for the few of us born to very rich parents or in very nice countries).

It would be more sensible to make machines that won't ever feel despair. Machines that won't take offense at being beaten, enslaved, shot down or covered in advertisement stickers. Some might say it is limiting these AI's. I say it's an act of mercy as well as self-preservation.

For I have no doubt AI's with human intelligence would also feature human stupidity. And don't we all know what stupidity has cost mankind since the beginning of human life!

Hey, about that Robot Rights group of yours, would it also fight for Toaster Rights, or will there be a need for another group, because Robots think Toasters are just plain stupid? [lol]

Jean



