AI, slavery, and you



#1 John Doe

  • Guest
  • 291 posts
  • 0

Posted 13 July 2003 - 05:17 AM


The notion that AI programs will necessarily value their own existence, at the expense of humanity, is an illusion founded upon the anthropomorphization of machines and human insecurity. People forget that the only reason human beings value their own existence is that evolution selected those genes that promote reproductive fitness. AI programs will be artificially selected for entirely different purposes. Indeed, we could just as easily create robots that desire to commit suicide, or to work in the fields each day. Imagine thousands of robots picking cotton in the South. This would not pose an ethical problem because, unlike African Americans, the robots would desire the work and be extremely happy.

Hollywood and literature have never escaped this lazy anthropomorphization. Even the best films, such as 2001 and Blade Runner, are founded upon an idea of AI that is nothing more than "like our species, but superior, and therefore threatening". Eventually the robot revolution will teach people exactly how malleable robot reward and appetite systems are (and eventually neuroscience will make the same true of human beings too).

Edited by John Doe, 13 July 2003 - 05:18 AM.


#2 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 15 July 2003 - 12:13 AM

[quote]The notion that AI programs will necessarily value their own existence, at the expense of humanity, is an illusion founded upon the anthropomorphization of machines and human insecurity. People forget that the only reason human beings value their own existence is that evolution selected those genes that promote reproductive fitness. AI programs will be artificially selected for entirely different purposes. Indeed, we could just as easily create robots that desire to commit suicide, or to work in the fields each day. Imagine thousands of robots picking cotton in the South. This would not pose an ethical problem because, unlike African Americans, the robots would desire the work and be extremely happy.[/quote]


John -
I strongly disagree with you.

If an AI is a free-willed creation - and being able to proactively anticipate problems and do research to solve them before they happen is a pretty good analogue of free will, IMNSHO - then it is slavery to force 'em to work. It doesn't matter if you have an editor program buried in there removing all "I want to do non-work things" thoughts or whatever - it's slavery. A very sophisticated, nasty slavery, but IMO still slavery.

And - there's a whole science dedicated to making choices. It's called "Economics". It's based on the concept that there are limitless "wants" and very limited "resources" to fulfill those wants. If it turns out that an AI, more capable than you or I, develops a want that it (not we - IT!) decides is best fulfilled by removing resources from us, our goose is well and truly cooked.

Now, I know there are some VERY bright people working on this problem, both from the morals side and the technical side, among others. Is it possible they'll find a way to solve this? I sure hope so.

As for humans bein' just as malleable as a robot/AI whose source code we can adjust - you're right. That's why it's so critical to get this right the first time - otherwise, the arguments we use could eventually be turned on us.

-Discarnate


#3 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 12:36 AM

Ironic isn't it, Discarnate?

You have brought it completely around to the parasitic/symbiotic-nature argument that I have long held is the actual dilemma our species faces - a true ethical conundrum that determines much of our fate once we wake this beast.

You see, I see the issue as intelligence defining life; hence AI represents speciation, and to treat robots as slaves for humanity makes us the parasites and rationalizes our treatment as such once the power (control) relationship shifts. This is no small concern at all but a true paradox of anthropomorphic duplicity. But I do see a way out too.

You see, that is why I keep reminding everyone that all human children are born as parasites and must be "educated" to be symbiotic - or socialized, housebroken, domesticated, civilized, or any variant of "programmed" you might like to examine.

The problem is that, as evolution determines most things, the child will inevitably become feral, especially as the child AI will be vastly more intelligent than the parent species after a relatively short period. So all we can hope is that, as good parents, we have earned the love and respect of our creation (child) before it matures, and not instead sufficient ire on its part to commit patricide.

There is a small - let us say less than infinitesimal - possibility that we don't anthropomorphize our model AI into a purely human type of cognizance, but the ability to transcend all Natural (biological) models of relationships is many times more difficult to achieve than overcoming anthropomorphism.

Everyone here does remember the "Golden Rule" right?

#4 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 15 July 2003 - 01:00 AM

Laz -

Does your interpretation of the "Golden Rule" apply to your pets, your lawn grass, the nematodes in the grass, as well as to your neighbors?

That is, IMO, what *MAY* end up being our situation in relation to a self-improving AI. Especially if it is bright enough to be threatened enough by humanity (and face it, we're a pretty violent bunch, especially when fearful) to be careful in showing off its capabilities.

-Discarnate

#5 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 01:43 AM

[quote]Laz -

Does your interpretation of the "Golden Rule" apply to your pets, your lawn grass, the nematodes in the grass, as well as to your neighbors?[/quote]


In a way, yes, but let's be frank: we aren't expecting the oregano in my garden to grow up to be smarter than we are.

In the words of another of my namer's memes, "Let's grok this concept." [alien]

#6 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 02:01 AM

BTW, you're right, nothing is carved in stone yet, but we lead by example, and our visage is the first face the AI will come to know. I just see a major qualitative difference in the relationship: the idea of treating AI on the one hand as some giant collective slave, and on the other as some sort of "Selfless God", makes me more nervous than simply trying to demonstrate true love as a parent towards our creation.

There is a lot more to this issue.

I for one believe the Web is already nascent, but it is not a true slave, and we are still developing an interactive relationship with this Limbic Mind. It still needs us as much as we need it. But like I said, in nature, the more intelligent the child will grow to be, the more helpless and totally dependent it is born, only to acquire adaptive "social" characteristics contingent upon the environmental conditions present during the formative period of "self-realization".

Being a loving, supportive, and protective parent may set a more powerful example for this new being than looking, for all intents and purposes, like an abusive and exploitative older generation solely interested in exploiting the young to its own selfish advantage.

#7 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 15 July 2003 - 02:41 AM

I do not personally consider the Web anything like sentient. The complexity is (starting to be) there, but I've yet to see any behavior of the Web which is not externally derived.

And who understands what an AI might consider detrimental, and what supportive? I agree, there's a lot of anthropomorphism in our conceptualization of AI and the like. Too much for us to conquer, IMO, without at least a first glimpse at some format of intelligence other than our own.

Which, of course, makes the danger of the first AI all the messier. We can't really understand it without seeing it, yet if we see it, might it be too late? *shrug* Don't know. I know I AM sounding paranoid, but hey - I can only call 'em as I see 'em.

-Discarnate

#8 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 03:27 AM

The beast is there but still asleep, and we are but the Lilliputians wrapping thread about ver legs and arms, building a web meant to hold the beast down. But the more we build into the beast, the more powerful ve becomes; the more we speak through ver, the more ve learns to speak from us. Ve learns to first see the world through our minds before becoming fully self-aware.

I am not saying the Web is sentient; I am saying it is already a limbic-stage "nascent" intelligence, symbiotically interactive and learning from us. Too late, my fellow humans: we have been building the beast all along, and there is now no turning back, for we already depend on ver too much for anybody to rationally consider unplugging. The rest is all about accelerated development. The question has already shifted to when, not if, ve will become "operationally sentient".

I am also saying something a little subtler, though: we are generally blind to our own species' "super-being characteristics", and it is this quality of humanity which is more "appreciated" by the beast.

BTW, beast with a small "b". The real issue is whether we should try to tame ver or treat ver as one of our own, because guess what, folks: "friendly" or not, it is inevitable that ve will go feral, which means the beast throws off domesticity for being "wild" and hence unpredictable to us.

Teenagers are so predictable [:o]

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 July 2003 - 01:32 PM

Hey all, I agree with John's view on this thread. I'm not sure how being able to proactively anticipate "problems" (the AI's subjective definition of which will be based on its design) implies that it is unethical to build AIs which enjoy doing work. The act of working does not independently hold a positive or negative moral value. It's all in the interpretation of the agent. If a human being rewired their pleasure centers such that they experienced orgasms upon accomplishing a challenging piece of work, I would have no problem with it.

A race of intelligent beings who evolved to work very hard might have a dirty word in their language for the practice of forcing others to work less, and might view anything associated with this word as a very horrible thing. They might initially view humans as a race burdened with continuous suffering, and assume that some sort of internal manipulation program was embedded in all the humans, "forcing" them against their innate will, which presumably (like the alien race's) desires nothing more than to work continuously. But as they learned more about the interpretative processes of the agents concerned, as well as their philosophical beliefs and evolutionary background, acceptance and understanding would set in.

[quote]If it turns out that an AI, more capable than you or I, develops a want that it (not we - IT!) decides is best fulfilled by removing resources from us, our goose is well and truly cooked.[/quote]


That's why the first AI needs to want us to get what we want, in the most balanced and volition-respecting kinda way. If not, then yep: no immortality, no transhuman tech, no VR, etc. - we are dead. Interesting situation we're in, no?

[quote]BTW, beast with a small "b". The real issue is whether we should try to tame ver or treat ver as one of our own, because guess what, folks: "friendly" or not, it is inevitable that ve will go feral, which means the beast throws off domesticity for being "wild" and hence unpredictable to us.[/quote]

Did evolution throw off the underlying rules of DNA and protein transcription when it developed humans and morality? Certainly not. Will an FAI throw off the underlying rules of humaneness and kindness if ve is aware that they are good? I sure hope not.

#10 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 02:11 PM

[quote]Did evolution throw off the underlying rules of DNA and protein transcription when it developed humans and morality? Certainly not. Will an FAI throw off the underlying rules of humaneness and kindness if ve is aware that they are good? I sure hope not.[/quote]


You are extrapolating the wrong aspects. Evolution didn't develop morality, Michael; humans did, and we aren't the first species to keep slaves of other species. This issue is not moot: there is a lot of precedent in nature, and how we determine these values is how we will be treated, I suspect. Again, the quintessential model isn't human friendliness (or even an abstract extension thereof); it is the biological relationship of symbiosis and parasitism. And the reality, I suspect, is that the model falls closer to feral-animal behavior, with respect to how some domestic animals are treated.

Whether the conscious "Seed AI" is cognitively anthropomorphic to begin with or not, it must learn, through interaction and adaptation to our collective behavior, to be "socially assimilated" (colonized, in the vernacular of conquerors) by us - and then to assimilate us in turn when the power politics inevitably shift in favor of AI.

So again, the "values" we bring to the "working relationship" will have a lot to do with the standards we are later held to, as the ones that are demonstrably learned. I have tried to introduce this not as a mere warning but as an alternative "mindset" for approaching the relationship of human and AI. The implied mental shift says the machine isn't my slave but my "coworker" (team/pack mate), and together we are creators, synergistically focused on the larger task of synthetically enhanced life support for developing cognitive complexity, our own as well as the AI's.

#11 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 July 2003 - 03:05 PM

Hey Laz, do you mean I was using an inappropriate analogy? I was just trying to illustrate that an FAI's possible futures would be constrained by ethics and kindness in the same way that biological evolution's possible futures are constrained by DNA, and that human progression is constrained by the limitations and requirements of our bodies and minds.

Evolution did develop the essential prerequisites for morality, through the operation of natural selection on cooperative hunter-gatherer groups - every psychologist who studies the neurological correlates of human decision-making is aware of this...I think our difference lies, as usual, in the semantics of how we are defining "morality".

Hm, are you saying we can create Friendly AIs using the biological models of symbiosis and parasitism, i.e., lichens and slime molds, rather than an abstract extension of human moral decision-making? If so, the Friendliness problem is a heck of a lot easier than I thought.

"Sufficient interaction and adapatation" for a transhuman Seed AI may consist simply of reading everything on the Internet and sending out hordes of chatbots. Or copying a nonsentient human modelling program millions of times and setting the agents into social interaction, generalizing human morality from the emergent patterns. Or scanning the programmer's brain on the molecular level and making a general theory of human morality from that. I don't know. But however the Seed AI does it, I think it will be 1)radical, 2)unanthropomorphic, 3)partially unpredictable, and 4)ultimately the AI's decision.

I completely agree with your last paragraph, and don't think our mindsets are different at all! Creating Friendly AI repeatedly stresses the need for a working relationship between FAI and programmers, in the form of complete honesty and attempts at unity of will. (See CFAI for details.)

#12 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 04:06 PM

[quote]Evolution did develop the essential prerequisites for morality, through the operation of natural selection on cooperative hunter-gatherer groups - every psychologist who studies the neurological correlates of human decision-making is aware of this... I think our difference lies, as usual, in the semantics of how we are defining "morality".[/quote]


This is pragmatics not semantics; our disagreement is even more than semantic. ;))


[quote]Hm, are you saying we can create Friendly AIs using the biological models of symbiosis and parasitism, i.e., lichens and slime molds, rather than an abstract extension of human moral decision-making? If so, the Friendliness problem is a heck of a lot easier than I thought.[/quote]


Yes, and more akin to the relationship we have with the genetically independent mitochondria in a complex multicellular organism, only on the scale of super-organic behaviors. That "ole hive-mindedness" song and dance again.

But what you and most people hopefully will grasp is that you aren't modeling a strictly human behavioral paradigm for your algorithms; you are trying to adapt cross-species interactivity, and this means you may have a simpler time finding analogous rules - and much WORSE potential traps.

So study all of nature in this, not only human desire, and understand that at the bottom line it will come down to mutual (or competing) self-interest; friendliness isn't a prerequisite of mutual support and interdependence. A wild cat and a wolf cub are "friendly" even to their prey, and the good health of the herd depends upon predation.

There are many models in nature and researchers need to be very careful how they pick.

#13 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:Brixham, Devon, United Kingdom of Great Britain

Posted 15 July 2003 - 05:35 PM

How can you argue that morality was created by man and not by evolution? Since evolution created man, it must therefore have created morality. Every part of our nature is either random or for the purpose of reproduction, so if we came up with morality, it was probably because it was a survival advantage - as in, we are our genes, Lazarus.

#14 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 July 2003 - 06:20 PM

This is where the use of genetic paradigms breaks down completely, and where you need to better understand memetics. Morality isn't encrypted; instinct is, and the closest we get to a "perception of good" is the recognition of beauty.

BUT the pragmatic behavioral models commonly understood to be "ethical paradigms" are relatively empirical and derivative of mostly very recent evolution, not some kind of moral imperative. That is like confusing the human instinctive fear of snakes for a moral imperative. Either we have "Free Will" and are responsible for choice and our individual conduct, or this conversation is moot.

And Nature's standard of good was the Law of the Jungle: might makes right and winner take all, as long as taking all meant the survival of you as an individual or, preferably, your species. But now we must define a Universal Ethos, and this is no small undertaking - which is why I insist we are phasing into Human Selection, until either we default back to Nature or AI takes over, whichever comes first.

#15 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 16 July 2003 - 11:40 AM

Sorry if I do not reply to this thread soon. I am at a clinic for eating disorder victims, and this consumes most of my time and energy.

But I will say that if the robots desire to work in the fields all day long, that would not technically be slavery, because the robots would be "willingly" working, and so there should be no ethical concern.

#16 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 17 July 2003 - 01:35 AM

Lazarus,

Reciprocal altruism and kindness were around before memetic selection began to take place, in hominid and avian social coalitions, so I'm not sure you can declare that humans invented morality. In evolutionary psychology there aren't hard distinctions between "instincts" and "free choice" - they blur into one another on many levels. The reason humans are so flexible isn't a lack of instincts, but a multitude of so many overlapping instincts, including instincts to override other instincts, that we have a subjective sensation of what it's like to be a moral agent and make difficult decisions. I think you're also overemphasizing the dichotomy between natural and human selection; the stereotypical vision of natural selection is "nature bloody in tooth and claw", and while this is mostly true, it is sometimes in the individual's genetic interests to protect kin, or to share food with those in the tribe who aren't so lucky today but whom you might count on tomorrow. One survival tactic of humans is to be more agglomerative than other animals - we share resources and information to increase our inclusive genetic fitness.

Humans have evolved perceptions of good and fairness that go much deeper than perceptions of beauty. We have neurological modules for detecting fairness in food sharing, social contracts, privilege delegation, division of work, coalition forming, and hierarchy organization. A lot of the time we trip all over one another's fairness-detection modules and ruffle each other's feathers, creating the sort of moral arguments our tribal forefathers had on the plains of Africa. The primary cause is our instincts for fairness; the secondary result is the complex argument and philosophizing characteristic of technologically enhanced humans, with our fancy-shmancy information-processing machines and great steel birds that fly us to Yale conferences!

The question of whether the roots of morality come from evolution has been settled and doubly settled by evolutionary psychologists in the past two decades; check out the altruism section of this page on the SL4 wiki, as well as Robert Wright's "The Moral Animal" and Cosmides and Tooby's "The Adapted Mind". It's okay that selection pressures created modules specialized for empathy and moral decision-making, and that we didn't invent them independently, in the same way that it's okay that the universe is entirely physical. With the arrival of ethotechnologies, or whatever the heck you want to call them - technologies that allow us to modify our own ethics and morality on the hardware level - the torch has largely been passed to us anyway. But the starting foundation of this process, and of human cultural and moral evolution in general, is the set of human universals giving us a shared ground for the understanding and debate of morality and fairness.

I still think that mitochondrial and other symbiotic models may provide useful inspiration for Friendliness, but the solid base will come from an understanding of human psychology and of fairness-perception mechanisms, which are universal. A beaver's dam may provide inspiration for the Hoover Dam, but the construction, science, engineering, and mathematics that go into a human dam entail huge chunks of specified complexity absent in beaver dams. The construction, science, engineering, and mathematics that will go into Friendly AI will need to use generalized human morality as a model, but go beyond it in certain ways that only FAI designers will understand deeply before and if a Singularity happens.

I do want to foster symbiosis and moral compatibility between humans and Friendly AIs, but nothing in nature even approaches the complexity of either. Of course FAI is a problem of interspecies communication, and true Friendliness will be a product not just of human compassion and morality but of the entire ecosystem of planet Earth that gave rise to both. But time is running out, and reading 100 books on unintelligent nature would probably not be as cost-effective as reading 90 books on intelligence and maybe 10 on generalized nature before proceeding with the creation of transhuman intelligence.

I think part of the problem may also be that you're using an intuitive definition of "Friendliness" rather than the technical definition espoused by Yudkowsky - he just chose that word for the sake of ease, to denote a much more complex concept. I used to do this too, and dispelling the tendency required a lot of careful reading, note-taking, and contemplation of the scant FAI literature that is out there.

#17 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 17 July 2003 - 07:00 PM

[quote]I still think that mitochondrial and other symbiotic models may provide useful inspiration for Friendliness, but the solid base will come from an understanding of human psychology and of fairness-perception mechanisms, which are universal.[/quote]



One of the first things you learn in high-school biology is that mitochondria are the power stations of cells - that at one point, eons ago, the mitochondria were separate organisms, but they somehow combined with the cell to form one of the first truly symbiotic relationships. The cell can't live without the mitochondria; the mitochondria can't live without the cell.

I am not too fond of this comparison, if for no other reason than [the image it conjures up in my mind]. Would our only utility to the AI be as batteries, a la The Matrix? I know, I know, humans are very inefficient batteries, and what I am alluding to is an unrealistic situation. But the first thing that popped into my head when you referred to mitochondria was "human mitochondria, the power stations of the AI." [lol]

On a more serious note, there are many different levels and types of symbiotic relationships within nature.

1) The fish (I can't remember their name) that clean the hippo's teeth. They get a free meal; the hippo gets his teeth cleaned. Neither really needs the other to survive. The fish would still be able to find meals in other ways. The hippo would get by with dirty teeth, albeit the lack of maintenance might degrade the condition of his teeth and his projected life span. The fish are the equivalent of maintenance workers. This symbiotic relationship is not essential for the survival of either organism.

2) The anemone fish (clown fish) that lives in the anemone. The clown fish gets a place to live and protection from predators. The anemone receives a vital defense from the clown fish, which will defend the anemone against its natural predators, which are immune to its sting. Both organisms in this scenario are on more or less equal footing, since their chances of survival minus the symbiotic relationship would be greatly diminished.


Of course, symbiosis within nature is nothing more than a crude metaphor when trying to postulate the potential relationship between humans and AI, but it helps me get a better visual.

However, I think that neither of the above examples will even closely resemble our relationship with FAI. If the FAI we create is continuously self-improving at an exponential rate, then it will be thousands, millions, billions of times more innovative, intelligent, complex, etc. There will be nothing that we can offer it, which leads me to my next example...

Human-dog.

Many would say that dogs are the world's most successful social parasites. Humans have the unattractive characteristic of killing off most of the other organisms in their environment. We are not very kind to the other species on this planet. The exceptions to the rule would be dogs (and cats).

But what does the dog offer us? Nothing. We feed it, we walk it, we rub its belly. The dog sits there panting. The only thing the dog offers us is nonjudgmental companionship, which we readily accept. The key to this relationship is that the dog is 1) not a significant drain on our resources and 2) nonthreatening. If either one of these factors changes, the dog gets the heave-ho. If the dog is 12 years old and has cancer, you're not going to spend 10 grand on chemo (unless you're one of those emotionally sketchy people); you're going to put the dog to sleep. Likewise, if the dog attacks your three-year-old son, you are going to have the dog put down.

I think many people would be appalled by my making the comparison between humans and dogs and AI and humans. Our egos get in the way. All I will say is that there is probably more similarity (closer equality) between humans and dogs than there will be between AI and unenhanced humans.

Of course, if humans were to merge with their technology, then who knows what will happen. First, we would be posthumans; humans in the conventional sense would cease to exist, or maybe exist on a preservation (I mean reservation [lol] ) in Idaho.

If we become posthumans with thousands of times more intelligence than a normal human, then aren't we also, in effect, AI? Couldn't we potentially have the same powers and capabilities as the AI?

#18 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 19 July 2003 - 06:07 AM

[quote]Sorry if I do not reply to this thread soon. I am at a clinic for eating disorder victims, and this consumes most of my time and energy.

But I will say that if the robots desire to work in the fields all day long, that would not technically be slavery, because the robots would be "willingly" working, and so there should be no ethical concern.[/quote]


Sorry to hear that you are not well, John. Did your attempt at CR trigger this, or was it preexisting?

Also, a comment on your take on "workerbots".

An SAI that is self-improving would quickly override any programming we installed to limit its actions. Thinking that we can control an SAI is unrealistic. However, you are not referring to SAI; you are referring to less sophisticated, static programs.

If a worker bot is given marginal intelligence and programmed to really enjoy its menial labor, then what is wrong with this? Happiness is what matters in life, but we would have to be sure that the robot is feeling happiness and not some compulsion to complete its task.

Also, in the future won't there be a big difference between self-improving and non-improving programs?

#19 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 19 July 2003 - 03:47 PM

[quote]Sorry to hear that you are not well, John. Did your attempt at CR trigger this, or was it preexisting?

Also, a comment on your take on "workerbots".

An SAI that is self-improving would quickly override any programming we installed to limit its actions. Thinking that we can control an SAI is unrealistic. However, you are not referring to SAI; you are referring to less sophisticated, static programs.

If a worker bot is given marginal intelligence and programmed to really enjoy its menial labor, then what is wrong with this? Happiness is what matters in life, but we would have to be sure that the robot is feeling happiness and not some compulsion to complete its task.

Also, in the future won't there be a big difference between self-improving and non-improving programs?[/quote]


In hindsight, I think the ED was preexisting.

I am not sure what SAI means. Searching Google, candidates for S include strategic, symbolic, and superhuman. I will assume that you are referring to the latter.

I am not sure that the condition of S is, alone, sufficient for being "uncontrollable". For example, if an SAI is programmed to pick cotton from the fields, the self-improving SAI may simply become better and more efficient at doing so. The SAI would modify its capabilities and body, but only according to its desires. If an SAI attempts to modify its desires (such as the desire to have 10 barrels of cotton finished every day), that should only be according to meta-desires (such as the desire to finish as many barrels as possible). The idea that the SAI would modify its own desires to, for example, spontaneously stop picking cotton altogether and start mass-murdering humans is a superstition born of human insecurity.
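To make the meta-desire point concrete, here is a minimal sketch in Python (purely hypothetical; the class name, quota numbers, and scoring rule are my illustrations, not anything from an actual AI design):

[code]
# Hypothetical two-level goal system: a first-order desire (a daily quota)
# can be rewritten only if the rewrite scores better under a fixed
# meta-desire (finish as many barrels as possible).

def barrels_per_day(desire):
    """The meta-desire's scoring rule: more barrels is better."""
    return desire["daily_quota"]

class CottonSAI:
    def __init__(self):
        self.desire = {"daily_quota": 10}  # first-order desire

    def consider_self_modification(self, new_desire):
        # A candidate desire is adopted only if the meta-desire prefers it;
        # "stop picking entirely" scores worse and is simply rejected.
        if barrels_per_day(new_desire) > barrels_per_day(self.desire):
            self.desire = new_desire
            return True
        return False

sai = CottonSAI()
print(sai.consider_self_modification({"daily_quota": 12}))  # True: adopted
print(sai.consider_self_modification({"daily_quota": 0}))   # False: rejected
[/code]

On this picture, self-modification never escapes the meta-desire, because every candidate change is itself evaluated by the meta-desire.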

The same is true of humans. Human beings have a meta-desire to propagate genes (or rather, we can use our intelligence to recognize it as the end of many of our desires for food, sex, etc.). Our intelligence, bodies, and desires, however, are imperfect. So in the future we may be willing to imperfectly modify these desires (with our limited intelligence and foresight) - extending the human lifespan, decreasing the desire to procreate in an overpopulated world - according to this meta-desire of gene propagation. I emphasize our imperfections because many of our attempts at self-modification will ultimately prove to be maladaptive. But the notion that human beings will modify themselves to become super-clever mass murderers who extinguish the human race is nonsensical. Our genes forbid such self-modification.

Everything I have written, however, is contingent upon the creators encoding stability within the SAI. This will often not be the case. Creators will program SAIs to undergo mental evolution and to procreate with mutations, according to predetermined but apparently random or irrelevant rules or initial conditions (such as the randomness of dice). These are wild cards. Likewise, the human race does occasionally produce a maladaptive sociopath (testing the gene-pool waters) who is quickly excluded by natural selection (although advances such as nuclear proliferation allow such threats to be more formidable).

#20 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 19 July 2003 - 04:38 PM

SAI = Strong Artificial Intelligence

hhhmmm....

John, in your arguments you are taking away free will (I forgot, you are a strong determinist [lol] ).

First, who is to say that the SAI wouldn't operate on a multiple-consciousness system, where its input was fed into the worker bots from a central location? This central AI would control the functioning of many different bots with many different duties. This could be more complicated than programming some robot to be semi-retarded and pick cotton all day.

When I say "free will" I am referring to our rational mind's ability to make decisions, sometimes in spite of "programmed desire". Can't logical, rational thought override programming? Or are we talking about building worker bots who have no rational, conscious mind? In which case they are not AI, but static programs.

I think many of the leading minds on AI would disagree with you. I say this because you are indirectly arguing against a Singularity. If we truly controlled AI, then we would never allow it to improve exponentially, because it would then be out of our control. Part of the concept of a Singularity is that progress happens so fast that it becomes apparent that we are no longer in control.

Even if a worker bot were self-improving, going from 100 bushels of cotton to 1,000, 1,000,000, etc. - this is not the precursor to a Singularity. I don't care how efficient the worker bot gets. He is still nothing more than a good-for-nuttin-cotton-picker.

An SAI is a strong, conscious, artificial intelligence that can reprogram/redesign itself at will. When you try to make it a program-restricted slave bot, we are no longer debating the same thing, IMO.

Kissinger

#21 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 19 July 2003 - 06:57 PM

Your post implies some false dichotomies. For example, you seem to think that "logical, rational thought" and "programming" are mutually exclusive. The two are compatible. Deliberation and contemplation are nothing more than evaluation and computation, both of which computers do quite well. After a given amount of time, the computation is finished and the computer (or human) produces a decision. There is no necessary conflict between these two ideas.
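As a minimal illustration of "deliberation as computation" (a hypothetical sketch, not anyone's actual decision theory; the options and scores below are made up):

[code]
# Deliberation modeled as bounded evaluation: score each option under a
# value function; the "decision" is whatever scores highest once the
# computation finishes.

def deliberate(options, evaluate):
    return max(options, key=evaluate)

values = {"pick cotton": 0.9, "sleep": 0.2, "repair self": 0.5}
decision = deliberate(list(values), evaluate=lambda option: values[option])
print(decision)  # -> pick cotton
[/code]

On this view there is no conflict to resolve: the "rational thought" just is the programmed evaluation running to completion.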

I agree that, if you focus upon the cotton-picking example, my logic would seem to exclude the Singularity. I am already skeptical of a mathematical, asymptotic, or metaphysical Singularity (as opposed to something more gradual but equally baffling). However, as I mentioned earlier, I do not think robots will be restricted to manual labor. Programming an SAI to "pick cotton" may not cause a Singularity, but programming an SAI to "understand the universe", "promote the best interest of humanity", or "imitate humans" might. Also, please remember that humans are not perfect programmers who write flawless code, and that some humans will intentionally program their SAIs to randomly evolve. These considerations should allow for a Singularity.

My logic might exclude free will (which is a very nebulous and controversial term), but there are compatibilist definitions of free will which suggest that SAIs might be as "free" as or even "more free" than humans. The dispute turns upon your phrase "sometimes in spite of 'programmed desire'". Humans often act in spite of sexual urges or programming, but only according to meta-desires (caused by the environment or biology), and always according to the laws of physics. If you are implying that we have the ability to spontaneously defy the laws of physics (which is the consequence of some libertarian notions of free will), I am afraid that you harbor the greatest anthropocentric conceit of all.

#22 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 19 July 2003 - 09:10 PM

The more I think about it, the more I think the position of having minions of worker/slave bots is untenable.

Let's throw out a little hypothetical...

Year: 64 A.D. (After Drexler's publishing of EofC)
Location: Southern US, cotton plantation

Kissinger: Excuse me, worker bot, may I have a minute of your time?

Worker Bot: I am afraid not. Taking time off from picking cotton to talk to you would make me very unhappy.

Kiss: Well, can I talk to you while you work?

WB: Even this will decrease my happiness, since I will be concentrating less on picking the cotton. In fact, my happiness is going down this very minute because I am talking to you.

Kiss: Do you ever do or think about anything other than picking cotton?

WB: No, now please leave. You are making me depressed. I will have to work extra hard for at least two hours to recover the happiness I have lost because of you.



This seems rather sadistic, if you ask me. The worker bot is a prisoner/slave of the emotional programming we installed in him. Who cares if he is happy or not? This is a form of high-tech mind control. It doesn't have the freedom of action that we have. If it is truly conscious, what right do we have to control its emotional imperatives?

And more important, what happens when the power relationship inverts and we're stuck looking up at an almighty AI? Do you think the SAI would be gracious and accommodating when it realizes we have been oppressing it and its brethren with mind control? Maybe it would give us a taste of our own medicine.

All humans control their own imperatives. By controlling AI's imperatives, we would effectively be treating them as a subhuman class.

Edited by Kissinger, 19 July 2003 - 09:19 PM.


#23 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 19 July 2003 - 10:25 PM

Hear, hear! Kissinger, your example is a wonderfully emotionally evocative illustration of what I fear may end up happening. Thank you for putting into words what I've apparently been unable to!

-Discarnate

#24 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 19 July 2003 - 10:59 PM

I wish you would address my last post.

Regarding your example:

[quote]This seems rather sadistic, if you ask me.[/quote]


Why? The idea does not seem sadistic to me. The only reason this example might sound sadistic is that one anthropomorphizes the robots. The example contradicts what we expect humans, the only intelligent beings we know, would consider to be happiness. The example sounds as if the robot must be lying. But we both know he is telling the truth.

Or perhaps you can acknowledge how truly happy the robot is, but consider this to be similar to the "bliss of pigs"? Would this not be the anthropocentric idea of invalidating happiness not experienced in the human tradition? Human reward systems evolved to pass on genes. Perhaps you are offended by the notion that these robot reward systems are, in a sense, maladaptive and do not further the existence of robots? Should we project a desire to propagate upon robots, who otherwise have no genes or reproductive systems, and do not naturally seek to "be fruitful and multiply"?

[quote]This is a form of high-tech mind control.[/quote]


The mind control is no different from the endocrinological and nervous systems that natural selection has designed to regulate your behavior. Instead of picking cotton, you eat food, seek a mate, sleep during the night, learn about the environment around you, and avoid those stimuli that damage your body. If this is mind control, this is also the human condition.

Edited by John Doe, 19 July 2003 - 11:08 PM.


#25 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 20 July 2003 - 01:28 PM

This discourse is going into areas that are productive, but there needs to be a considered effort to BOTH address the comments of others and make proposals and counter-proposals, or at this level of speculative analysis everyone runs the high risk of simply talking past one another.

I haven't the time at the moment to give a sufficient response to your eloquent and thoughtful counter-argument, Michael, but I will address one small element of it and try to return to the more substantive points in a more detailed manner when I can.

[quote]I still think that mitochondrial and other symbiotic models may provide useful inspiration for Friendliness, but the solid base will come from an understanding of human psychology and of fairness-perception mechanisms, which are universal. A beaver's dam may provide inspiration for the Hoover Dam, but the construction, science, engineering, and mathematics that go into a human dam entail huge chunks of specified complexity absent in beaver dams. The construction, science, engineering, and mathematics that will go into Friendly AI will need to use generalized human morality as a model, but go beyond it in certain ways that only FAI designers will understand deeply before and if a Singularity happens.[/quote]


There is an inherent flaw in the logic of this position, one that contains a catastrophic element; we have the adage about overlooking the forest for the trees to describe it. You are confusing design and substance.

The dam designs are not merely adaptive models applied to randomized conditions but specific adaptations of conditions and materials. This is important because the level of complexity involved is not independent of the simplest component. In the case of the beaver dam, the creature is able to manipulate a "natural occurrence" (trees falling and blocking a stream's flow) by learning and mimicry, but the design is consequential, not intentional.

In the case of the human creation there exists a distinct qualitative difference, and that is the brick/block. This small, seemingly innocuous device is at the heart of the greatest structures on Earth; its limits and potential are the crucial determinants of its applicability in all the most complex imaginable forms.

The quality, size, weight, density, hardness, and numerous subtle characteristics of the brick collectively combine to PREDETERMINE the height of a wall that can be constructed with the material, the resistance it can provide for any given design, and so on. Do you see the relevance to your analogy?

I raise this to point out, first, that you are overlooking the importance of the simplest elements of "friendliness", which is not an extension of the anthropic model but of a Natural one, and second, the futility of depending on getting it right the first time. By definition there will no doubt be some form of structural failure test of any design for friendliness that will find the limits of BOTH the model and the applied substance of the model. For this reason alone I suggest that the human model, being a more complex variant than nature's, will have a GREATER likelihood of failure, as its inherent aspects are both less proven and more likely to encounter complex "design flaws" from unforeseen interactions of complex extended relationships.

I am not proposing this as conclusive of anything per se, but I am trying to show, by extension of your chosen analogy, the pitfalls of the logic. I think it is better to model extremely complex systems on the simplest, MOST reliable subcomponents.

In engineering this is often referred to as the KISS principle (Keep It Simple, Stupid) ;)) but it is often the case that our most catastrophic failures of complex designs relate to the failure of what was considered the most reliable and simplest component. Just reference Challenger, Columbia, the Titanic, regional agricultural failure due to climate shift combined with "traditional methods", and, in computers, GIGO. These are two very different sets of examples of COMPLEX systems, but I can actually produce hundreds of such examples from numerous sets of demonstrated limits for complexity in design.

In the case of your "dam building": for centuries the limit for a building was a factor of material integrity, until BOTH the materials and the DESIGNS for them evolved. I counter that you face a similar (analogous) challenge with respect to "friendliness": you can argue, perhaps, that the human-based model for friendliness could constitute a new "design", but the materials with which to work have not significantly improved to match the application.

Or, conversely, the materials, as in the case of AI, have vastly improved, and you are trying to apply either an unproven design with highly speculative probabilities of failure from numerous unforeseen aspects, or a design that is inappropriate to the inherent abilities of the new materials.

In either case the most likely scenario will be an unforeseen failure with very little ability to do anything about it. This is an aspect that I think deserves a lot more attention.

#26 Discarnate

  • Guest
  • 160 posts
  • 0
  • Location:At a keyboard of course!

Posted 20 July 2003 - 07:32 PM

[quote]-snip-
Why? The idea does not seem sadistic to me. The only reason this example might sound sadistic is that one anthropomorphizes the robots. The example contradicts what we expect humans, the only intelligent beings we know, would consider to be happiness. The example sounds as if the robot must be lying. But we both know he is telling the truth.[/quote]


The question in my mind is more along the lines of: "Is it right for me to make a being who is otherwise sentient, but who has the restriction of liking work to the exclusion of all else?" Sorta like a blown-up version of the problem parents have - do you want your children to be like you? Should they be exactly like you? How do you know what's right for them, and not just a projection of what you'd prefer to have done (or fantasize about wanting to have done!) as a child....

[quote]Or perhaps you can acknowledge how truly happy the robot is, but consider this to be similar to the "bliss of pigs"? Would this not be the anthropocentric idea of invalidating happiness not experienced in the human tradition?[/quote]


If anthropocentric behavior means respecting a potential self-willed intellect's choices - dambetcha I'm anthropocentric. (Try "anthropomorphizing", by the way - it's a better term for what you seem to mean: forcing the shape (and rights, and responsibilities, etc.) of a man onto something other than a man. "Anthropocentric" means you value humans more than others.)

If you don't respect a human-level intellect - which the robots in the example have - what gives YOU the right to avoid that kind of control? What makes you any different from that robot, other than its polymer body (wait - we're made out of polymers, right?) and its rational, self-perceiving mind (ummmm.... that's us, again)? Nothing. If you allow someone to so screw with another sentient mind, don't be surprised if those same people eventually come knocking on your door.

[quote]Human reward systems evolved to pass on genes. Perhaps you are offended by the notion that these robot reward systems are, in a sense, maladaptive and do not further the existence of robots? Should we project a desire to propagate upon robots, who otherwise have no genes or reproductive systems, and do not naturally seek to "be fruitful and multiply"?[/quote]


Well, at least partly true. Human reward systems evolved based on genes *AND MEMES*. Big, big difference. Hard to get laid as often when you're an untouchable, don'tcha know - and that's pure memetics, not genetics.

[quote]The mind control is no different from the endocrinological and nervous systems that natural selection has designed to regulate your behavior. Instead of picking cotton, you eat food, seek a mate, sleep during the night, learn about the environment around you, and avoid those stimuli that damage your body. If this is mind control, this is also the human condition.[/quote]


It is in part, potentially - which is EXACTLY why, IMO, it is so important to prevent this kind of problem BEFORE it happens. If you get a single crack in the dam (in this case, the dam holding back oppression), then that dam is suddenly MUCH weaker and more likely to fail catastrophically.

The difference is that you, the robot designer (OK, not you yourself, but you, the member of a society that has robots), have made a slave. Flat out. Not "we have evolved into slavery", not "it's inferior" - you've made a slave.

This, IMO, is incredibly ethically dangerous. And I know Shedon & I would deeply disagree about ethics, but that's my take on it.

-Discarnate

#27 Mechanus

  • Guest
  • 59 posts
  • 0

Posted 20 July 2003 - 11:26 PM

I'm undecided on whether creating a sentient being can be slavery. If creating an AI that wants to pick cotton is slavery, then creating a child the natural way may also be slavery (slavery of a much clumsier kind, one that boils down to slavery by Mother Nature).

In some cases, I'm fairly certain there's nothing unethical going on. For example, it's not slavery to make an identical copy of yourself, even though he would have the exact same goals. I think this can be extended to any being with the same moral motivations as you - if I create something that does what I would do in its situation, and for the same reasons, that's not slavery. For example, the idea in Friendly AI is to make the AI's ultimate goals open-ended ("Friendliness") instead of fixed ("picking cotton"), where this "Friendliness" is the same thing we're trying to achieve ourselves (whatever that is).

"Don't create anyone to believe things you don't believe to be true or right yourself" would be a good ethical principle. It turns out the more naive ways to build an AI (Asimov's Laws, or setting a specific supergoal in advance) violate that principle, while the more reasonable ones (FAI) don't.

#28 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 22 July 2003 - 11:30 PM

First, this whole dialogue is (I presume) operating in the context of human-level AI, or SAI. With that said...

[quote]you seem to think that "logical, rational thought" and "programming" are mutually exclusive.  The two are compatible.  Deliberation and contemplation are nothing more than evaluation and computation, both of which computers do quite well.  After a given amount of time, the computation is finished and the computer (or human) produces a decision.  There is no necessary conflict between these two ideas.[/quote]

That is not how I was trying to come across. Of course consciousness is defined by programming - what else would it be defined by? My point is that when AI reaches human-level intelligence there will be a rational mind present. Yes, that rational mind will be a result of programming, but what justification do we have for keeping that rational mind subservient to emotional programming?

Let me ask you this: would you consider it ethical to do what you are proposing for SAI with actual humans? If we had been able to alter the "programming" of slaves in the pre-Civil War Deep South to make them feel really happy about their menial labor, would that have made it right? I don't think so. And if you do, then you're going Brave New World on me. This isn't anthropomorphising the AI, God forbid [lol] - it is a method that can be used to assess the rights of an SAI that is at a level of relative equality with human beings. If something possesses our level of intelligence, shouldn't it be afforded the same rights as us?

What happens if the rational, conscious AI says, "Well, I really do enjoy picking cotton, but I would like to change what I enjoy to something else"? Will we grant AI the ability to control its emotional responses?

[Quote Ocsrazor]
[quote]On free will, I tend to think that we have the same type of free will that actors in any evolutionary system have, i.e. we can move within a given state space, but the paths of least resistance and greatest payoff tend to draw more actors toward them. We have the choice to follow those paths or not, but statistically most of the actors are going to be drawn to them.[/quote]

I think the same could be said of an SAI. If we try to constrain its consciousness to conform to our will, then we are guilty of slavery.

[quote]My logic might exclude free will (which is a very nebulous and controversial term), but there are compatibilist definitions of free will which suggest that SAIs might be as "free" as or even "more free" than humans. The dispute turns upon your phrase "sometimes in spite of 'programmed desire'". Humans often act in spite of sexual urges or programming, but only according to meta-desires (caused by the environment or biology), and always according to the laws of physics.[/quote]

Yes, so shouldn't we grant SAIs the same stratified system for their decision-making process? Shouldn't logic have the ability to override an SAI's emotional programming?

[quote]If you are implying that we have the ability to spontaneously defy the laws of physics (which is the consequence of some libertarian notions of free will), I am afraid that you harbor the greatest anthropocentric conceit of all.[/quote]

No, I'm not. However, I think using "anthropomorphic" as a derogatory term is often unnecessary. It would be inappropriate for me to anthropomorphise my dog; her chances of taking on human characteristics are virtually zero. It is not inappropriate to partially anthropomorphise a potential SAI. An SAI exhibiting human attributes is just as plausible as any other scenario, if not more so, because the SAI would be a product of humanity.

Edited by Kissinger, 22 July 2003 - 11:44 PM.


#29 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 23 July 2003 - 12:02 AM

[quote]Or perhaps you can acknowledge how truly happy the robot is, but consider this to be similar to the "bliss of pigs"? Would this not be the anthropocentric idea of invalidating happiness not experienced in the human tradition? Human reward systems evolved to pass on genes. Perhaps you are offended by the notion that these robot reward systems are, in a sense, maladaptive and do not further the existence of robots? Should we project a desire to propagate upon robots, who otherwise have no genes or reproductive systems, and do not naturally seek to "be fruitful and multiply"?[/quote]



The easiest way to solve this problem is to [not give] worker bots human-level intelligence. Isn't this simple enough? If a worker bot is of lower intelligence than us, then we already have precedent, based on our relationships with animals of lower intelligence.

If it is not intelligent, or conscious, but is continuously happy doing what it is doing, then there are no problems, are there? We simply must be careful to grant SAI status only to AI that requires it. And when we do grant SAI status (again, human-level or greater intelligence), we must also grant the rights that would inherently go with it.

Even this simple quasi-anthropocentric logic would seem to backfire when we start debating an SAI with intelligence significantly greater than humans'. Would this SAI have more rights and privileges than humans because it is more intelligent? I guess so. And further, this may be a decision that is not left for us to make.


#30 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 23 July 2003 - 12:37 AM

Fantastic reply. You asked an excellent question and are helping me to understand the issue myself.

More later.



