LongeCity: Advocacy & Research for Unlimited Lifespans

AI, slavery, and you

#61 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:Brixham, Devon, United Kingdom of Great Britain

Posted 24 October 2003 - 06:01 PM

But that's the thing: Jon is taking into account rational decision making and logic. You have to understand that reason is simply the tool with which we carry out our programming. When you decide to go and read a book, although it is your reason that has made you decide to read it, it has only done so because of an inbuilt desire (primary or evolutionary programming) to learn and explore. Your reason is not the reason, if you will. When we gain a sufficient understanding of these mechanisms, we will be able to design an SAI's reasons for doing anything, so that all of its advanced logic goes towards how to pick more cotton. It would not just spontaneously do something for no LOGICAL reason, as the notion of free will tends to suggest. Read Hume's work; he was probably the first philosopher to cover this subject. (Obviously the idea of AI was distant then, but it still applies.)
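
To make that concrete, here is a minimal sketch (Python, with invented names) of the split tbeal describes between designed-in desire and instrumental reason: the search machinery can be made arbitrarily clever, yet it only ever ranks actions by the objective its designer wired in, and never questions the objective itself.

```python
# Minimal sketch of a fixed-goal agent: "reason" (the search) serves
# the designed-in "desire" (the objective). All names are hypothetical.

def cotton_picked(state):
    """Designer-chosen objective: the agent's only built-in desire."""
    return state["cotton"]

def plan(state, actions, objective, depth=3):
    """Arbitrarily deep search; it can only rank actions by the
    objective it was handed, never replace or question it."""
    if depth == 0:
        return objective(state), []
    best_score, best_plan = objective(state), []
    for name, effect in actions.items():
        score, rest = plan(effect(state), actions, objective, depth - 1)
        if score > best_score:
            best_score, best_plan = score, [name] + rest
    return best_score, best_plan

actions = {
    "pick":       lambda s: {**s, "cotton": s["cotton"] + s["tools"]},
    "build_tool": lambda s: {**s, "tools": s["tools"] + 1},
}

score, steps = plan({"cotton": 0, "tools": 1}, actions, cotton_picked)
print(steps)  # a smarter search just finds better ways to pick cotton
```

However deep the search runs, the output is always a better cotton-picking plan; the "advanced logic" serves whatever goal it was given.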

#62 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 25 October 2003 - 12:19 AM

Once again I agree with your explanation.

I adore the philosophy of David Hume.

#63 imminstmorals

  • Guest
  • 68 posts
  • 0

Posted 25 October 2003 - 03:20 AM

Graphics are not AI. But:

SAI can only be achieved if you clone an adult human being head to toe and add extra sensors or equipment, which only makes it equal to a human with equipment. Unless you want to catalyse the brain, in which case you have probably just broken its chemical reactions.
Or you could convert cells into some sort of machinery, but I don't think the human body and mind work that way!


Building from scratch, by which I mean converting human thoughts into logical code, isn't going to work, because we don't know the logic blocks inside our brain that can handle both illogical and logical arguments; we can only guess.

So such a thing as a mind-uploading database isn't going to work; we don't even know how much is in our brain.


Such fictions as parallel universes and time travel aren't real.

However, you might want to take a human brain and put it into a machine. This is highly risky; no one wants to try!

#64 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 03 November 2003 - 07:24 PM

Nefastor, confirmed AI enslaver and torturer, speaking :

John Doe, you are perfectly right to say an AI would not need to have a survival instinct or to express it by killing all humans. If you want my thoughts on that matter, check out the thread on robot rights :

http://imminst.org/f...1&t=2165&hl=&s=

However, it's quite probable that most AIs we humans make will have a survival instinct, and it has nothing to do with evolution but everything to do with sheer economics. You don't want an AI-driven machine that may cost a lot of money to go suicidal on you.

There are cases where what I'd call a Kamikaze or Hezbollah AI can be a desirable thing. These names express exactly what cases I'm referring to : war and terrorism.

In most other cases, a machine (somehow following Asimov's robotic laws) would care for itself, only considering suicide if it were the best way to help a human in a deadly predicament, and assuming we consider a human life to be worth more than a robot's life, or even its bolts.

Of course, I don't think we should make AIs using Asimov's rules. Such robots would be useless for war, and guess who's the major source of research funding on Earth ? The military (like DARPA in the USA).

For the record, applications I'm designing robots for include :
- Active sentry robots (possibly armed)
- Mine-clearance robots (quite a bit suicidal)

My robots aren't meant to be sentient (they don't need to be to perform their duties optimally), but what little brain they have I lobotomize on a daily basis. And since I paid for them, they have no rights, end of discussion :)

Jean

#65 Omnido

  • Guest
  • 194 posts
  • 2

Posted 03 November 2003 - 11:43 PM

A robot is merely that: an automaton of form and function, predisposed to carry out an assigned set of instructions based upon variables and circumstances.

An artificially intelligent being that resides within an artificial host is clearly distinct from that.

The solution to the issue of "rights" is to simply not make sentient AI. Those that are constructed with that purpose will serve their purpose.
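
Omnido's definition maps directly onto a condition-action table. Here is a minimal sketch (Python; the rules and sensor names are invented for illustration) of "an assigned set of instructions based upon variables and circumstances", with nothing resembling goals, feelings, or self-awareness:

```python
# A robot as pure automation: a fixed table of (condition, action)
# rules evaluated against sensed circumstances. Illustrative only.

RULES = [
    (lambda env: env["intruder"],      "sound_alarm"),
    (lambda env: env["battery"] < 0.2, "return_to_dock"),
    (lambda env: True,                 "patrol"),  # default action
]

def step(environment):
    """Carry out the first instruction whose condition matches."""
    for condition, action in RULES:
        if condition(environment):
            return action

print(step({"intruder": False, "battery": 0.1}))  # -> return_to_dock
```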

#66 John Doe

  • Topic Starter
  • Guest
  • 291 posts
  • 0

Posted 04 November 2003 - 05:27 AM

However, it's quite probable that most AIs we humans make will have a survival instinct, and it has nothing to do with evolution but everything to do with sheer economics. You don't want an AI-driven machine that may cost a lot of money to go suicidal on you.


You are surely right. That is a frightening thought.

Your career sounds exciting too.

#67 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 04 November 2003 - 07:26 AM

It's not my career : I'm funding my own research while unemployed :) otherwise I work in more conventional embedded and real-time computer systems.

Omnido, I agree with you : the solution to the rights issue is to make sure there can be no issue. But for the sake of discussion we must admit that we could be making sentient AI :)

Actually I think it would be the height of cruelty to take oblivious computers and machines, and then make them sentient so they can realise what kind of people they are in the hands of.

Hell, I know how I'd feel if God existed and suddenly came to me saying : "my creature, I don't think you think straight. Stand still while I mess up your DNA and remove your balls, then you'll be free to do my work more efficiently".

I'm not the first to have proposed the idea that sentience is more a curse than a gift.

Ah, and to further the God/mankind analogy : God is supposed to have given us rules (like the Ten Commandments) and rights. Yet even the believers don't respect these rules. Is that any clue as to how sentient AIs would behave ?

Jean.

#68 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 04 November 2003 - 07:32 AM

nefastor: My robots aren't meant to be sentient (they don't need to be to perform their duties optimally), but what little brain they have I lobotomize on a daily basis. And since I paid for them, they have no rights, end of discussion


Good. Someone with some sense. I personally don’t see the point in consigning my every waking thought to artificial intelligence and whatever it’s speculated to become. It’s like kissing ass to thin air. And the philosophy of mind still hasn’t reconciled brain facts with mind facts. I’ll become a behaviorist for no machine.

The bright side is that we still have time to work on augmenting ourselves and our military for eternity before qualia-void widgets become decision makers.

Jace

#69 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 04 November 2003 - 07:44 AM

About machines as decision makers : you are a bit late. Decision-making software has been in use for over ten years in most of the larger industries of the world.

Of course they aren't sentient (and, as far as we know, don't have an agenda other than their human user's) : they can just access more facts more efficiently and more rapidly, making correlations a human mind could hardly (if ever) make, in record time, to react to the market(s).

It's not like statistics, however, as, to be of any use, these decision-making applications must be obeyed by the humans.

So what you have is software making decisions that impact the world, and in turn impact their next decisions. Who is to say, as they are constantly improved, that they might not reach a form of sentience someday, as the result of their evolving "thoughts" ?

If they ever do, they'll already be in control of some of the most powerful groups (financially) in the world. Like oil companies.

Jean
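
As a rough illustration of the feedback loop nefastor describes, here is a toy sketch (Python; the threshold rule and price model are invented, not any real trading system) of decision-support software whose obeyed advice moves the very state it reads on the next pass:

```python
# Toy decision-support loop: the program's advice, once obeyed,
# changes the market state that feeds its next decision.
# The rule and the numbers are invented for illustration.

price, inventory = 100.0, 0

def recommend(price):
    """Trivial rule standing in for correlation-heavy software."""
    return "buy" if price < 105 else "hold"

for day in range(5):
    decision = recommend(price)   # the software decides
    if decision == "buy":         # the human obeys (or it is useless)
        inventory += 1
        price *= 1.03             # the obeyed decision moves the market
    print(day, decision, round(price, 2), inventory)
```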

#70 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 04 November 2003 - 08:02 AM

About machines as decision makers : you are a bit late.


Nope. As you indicate

Of course they aren't sentient (and, as far as we know, don't have an agenda other than their human user's) : they can just access more facts more efficiently and more rapidly, making correlations a human mind could hardly (if ever) make, in record time, to react to the market(s).


they merely assist.

Who is to say, as they are constantly improved, that they might not reach a form of sentience someday, as the result of their evolving "thoughts" ?


Exactly. Who says they will?

If they ever do, they'll already be in control of some of the most powerful groups (financially) in the world. Like oil companies.


Be ready for a real war.

Jace

#71 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 04 November 2003 - 09:06 AM

Nefastor, and what’s with the writing of punctuation like conditional operators ? Is this a prerequisite for being a transhumanist, not being able to distinguish among contexts in which I’m communicating ?

Oh, look, I can write in code to people ? I’m Donald E. Knuth : I’m not making it known that I'm seeking power to render myself powerless.

#72 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 04 November 2003 - 10:09 AM

When you say they assist, you pass over the fact that using these programs means doing what they tell you to do (or they are useless). I don't call that mere assistance. It is effective control of a machine over a human.

As for wars, well... humans don't need AI's or machines for that. We've been practicing ever since we found out what pain was, back in prehistoric times.

People see malevolent AIs as a threat because of the Terminator movies and similar sci-fi stories, but the fact is, malevolent AIs emerging today would just be another faction on a planet with many warring factions.

In fact I do have trouble believing in Terminator scenarios (an ultra-long man-against-machine war). If it ever happens, either the humans will pull the plugs and nuke away, or the machines will kill us all without warning (before we can pull the plugs and nuke away).

Does that mean I'm concerned about AIs being limited to friendliness and/or having no rights ? Absolutely ! Don't make yourself an enemy you can't defeat. Bonaparte and Hitler both made that mistake when they attacked Russia. There's even an ancient Latin saying for that.

Past all the feel-good, grand humanitarian talk about giving robots and AI "rights" we'd probably stomp on daily, I think mankind should be careful not to create a species that would be both :
- Better than humans
- Independent of humans

Because in terms of simple food-chain logic, that species would become, de facto, the dominant species on Earth (we'd no longer be number one).

Whether this would be a good thing or a bad thing is another discussion entirely, but I'll just use the old saying there : stick with the evil you know.

One thing to ponder for AI-rights lovers : suppose AIs design ever smarter AIs, and that a 500th-generation AI decides we're to be exterminated. It's possible we could never even understand its thinking. We'd be condemned and wouldn't even be able to know why. Can you live with that thought ? I can't.

Jean

PS : excuse my use of punctuation if you find it inappropriate; it's probably due to the fact that English isn't my native language (I'm French). I like to use commas to clearly separate contexts because I'm an engineer : I like text to be explicit and I don't like to leave room for (mis)interpretation. Bad programmer habit, I guess... nothing to do with transhumanism.

#73 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 04 November 2003 - 11:49 AM

When you say they assist, you pass over the fact that using these programs means doing what they tell you to do (or they are useless). I don't call that mere assistance. It is effective control of a machine over a human.


I see. My overbearing and commanding freezer keeps my sirloins cold and decides for me that I probably won’t get sick once I cook them.

As for wars, well... humans don't need AI's or machines for that. We've been practicing ever since we found out what pain was, back in prehistoric times.


No, we certainly don’t need AIs for wars, but never be so quick to dismiss an ineluctable war. There is no fundamental reason justifying one entity’s existence over another. Life is entirely a subjective choice once we have volitional capacity. If I want to be objective, I’ll blow my face off so my brains may become a part of the stoical earth.

Don't make yourself an enemy you can't defeat. Bonaparte and Hitler both made that mistake when they attacked Russia.


If you don’t see that you’re defeated either way, continue to be the puppet you’ve always been.

Past all the feel-good, grand humanitarian talk about giving robots and AI "rights" we'd probably stomp on daily, I think mankind should be careful not to create a species that would be both :
- Better than humans
- Independent of humans


It’s rather simple. If our value systems don’t conflict, then we’ll live happily ever after. The likely value system of an AI developer is to know everything, to subsist and discover for eternity—knowledge for its own sake. It’s really quite appealing in today’s terms since knowledge sometimes means power and superiority over like-beings, and a sense of control over our little cancelled-out nook.

Further along… So we’ve beaten the Heat Death of infinite universes and vanquished all adversaries; what next? Relish a bullshit sense of accomplishment for another eternity, with no one else to share it with, since in order to have gotten to this point everything had to be either destroyed or merged into the One so everything could be accounted for, because the unknown is a threat to eternal subsistence?

Whether this would be a good thing or a bad thing is another discussion entirely, but I'll just use the old saying there : stick with the evil you know.

We'd be condemned and wouldn't even be able to know why. Can you live with that thought ? I can't.


I never know which direction you’re going.

Jace

#74 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 04 November 2003 - 06:33 PM

LOL, I must admit to having trouble knowing which direction you're going, too :)

Maybe it's just that I'm missing a few words' meanings (I am French, after all), or that I'm a stubborn little engineer, but I didn't quite get this :

If you don’t see that you’re defeated either way, continue to be the puppet you’ve always been.


Does that mean I should lie down and die if I'm defeated any way I go ? Very tough since, to me, death is the ONLY defeat. As long as I'm alive I haven't lost. I've already stated existing is what I care about most (freedom and knowledge coming right after, in that order).

However when you write :

never be so quick to dismiss an ineluctable war. There is no fundamental reason justifying one entity’s existence over another


I'd want to say you're a man after my own heart.

Heh, I guess we'll have some interesting (if a bit fuzzy) discussions :)

Jean

#75 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 04 November 2003 - 07:03 PM

I'd want to say you're a man after my own heart.


Except that Jace is definitely not a man, despite Hugh's teasing protestations to the contrary and the generic use of "man" as in a member of "mankind". :))

#76 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 04 November 2003 - 08:00 PM

Oops, sorry, my bad... next time I talk to someone for the first time, I'll take a second to check who they are.

Errr... Okay... let's say I actually meant man as in "mankind" *cough*cough*... :)

Jean

#77 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 February 2004 - 03:47 PM

I wasn't sure where to put this article, but the theme is related to the issues discussed in this thread. Namely, now that our robotic "friends" (toys, simple AI decision makers, chat-bots) are becoming more human, how do we treat them? Take the Aibo, for example. It is still primitive enough that we can just say: it is a machine. We can see all the parts; we can deconstruct the code to find out why it gets angry or happy. Now think about the near future, when the programming for the dog is written mostly by another program/computer and is slightly more complex than the average human can understand. I would think that, to the average human, it would seem morally wrong to kill or maim the advanced robotic dog, because they would not know whether it really had some sort of real emotion or self-awareness.

Of course, many of you will probably bring up the fact that humans already eat other animals, and other animals are complex enough that we ascribe to them near-human levels of sentience, so where is the difference? I am not sure, but I think it is a slightly different scenario, since we evolved from and with the animals that we eat.

Anyway, this article originally appeared in The Christian Science Monitor.

If you kick a robotic dog, is it wrong?
By G. Jeffrey MacDonald | Correspondent of The Christian Science Monitor
When pet Lila wasn't getting as much playtime as the other two animals in her Plymouth, Mass., home, owner Genie Boutchia felt guilty. Then when a potential new owner came calling with $850 in hand, Ms. Boutchia felt even guiltier. She changed her mind and deemed Lila not for sale.

Such feelings of moral responsibility might seem normal, even admirable, in a dog owner. But Lila is not a real dog. She's a robot.

And like tens of thousands like her in homes from Houston to Hong Kong, she's provoking fresh questions about who deserves moral treatment and respect.

How should people treat creatures that seem ever more emotional with each step forward in robotic technology, but who really have no feelings?

"Intellectually, you realize they don't have feelings, but you do imbue them with personality over time, so you are protective of them," Boutchia says. "You feel guilty when you play with the other two dogs [which, as newer models, are more apparently emotive], even though you know Lila could care less."

Trouble is, Lila seems to care, and her newer kin seem to care even more.

Sony Corp. has brought the latest robotic engineering technology to bear on the new Aibo ERS-7, which at $1,599 promises to have six emotions: happiness, anger, fear, sadness, surprise, and discontent. Pat one on the head, and it becomes happy enough to do tricks. Whack its nose, and it not only appears hurt, but it also learns not to repeat certain behavior.

Aibo's "feelings" appear real enough that researchers at the University of Washington felt compelled to explain in a study that, contrary to Sony's claim, Aibo does not have any true emotions.

If Aibo did have true emotions and self-awareness, philosophers generally agree, then it would require humane treatment.

But as machines, robotic pets with sad eyes can nevertheless be legitimately neglected, a fact that some people find troubling, while others welcome it both for its practicality and for its moral significance.

Support from PETA
Among those celebrating the ability to forget a pet without consequence is a national animal rights group, People for the Ethical Treatment of Animals (PETA).

"The turn toward having robotic animals in place of real animals is a step in the right direction," says PETA spokeswoman Lisa Lange. "It shows a person's recognition that they aren't up to the commitment of caring for a real animal. Practically speaking, from PETA's perspective, it really doesn't matter what you do to a tin object."

A trend that might be good for animals, however, might not be good for those who profit most from relationships with animals, according to Peter Kahn, a psychology professor at the University of Washington who has studied Aibo's effect on preschoolers at the university's Center for Mind, Brain and Learning.

"Children need rich interactions with real, sentient others, both human and animals," Professor Kahn says.

"If we replace that, I think we're impoverishing our children. These relationships [with robotic pets] aren't going to be fully moral. They'll be partially moral, which is not as good as a real relationship with a real animal whose needs teach children that their own desires don't always come first."...............

Read the rest here
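
As an aside, the pat-on-the-head/whack-on-the-nose learning the article describes can be approximated by a simple reinforcement update. Here is a toy sketch (Python, with invented weights and behaviors; this is not Sony's actual Aibo code):

```python
# Toy pat/whack learning: a behavior's selection weight is nudged up
# by reward (a pat) and down by punishment (a whack). Illustrative
# only; not Sony's implementation.
import random

weights = {"do_trick": 1.0, "bark": 1.0, "nip": 1.0}

def act():
    """Choose a behavior with probability proportional to its weight."""
    return random.choices(list(weights), list(weights.values()))[0]

def feedback(behavior, reward):
    """reward > 0 is a pat, reward < 0 is a whack on the nose."""
    weights[behavior] = max(0.05, weights[behavior] + 0.2 * reward)

for _ in range(20):
    b = act()
    feedback(b, -1 if b == "nip" else +1)  # nipping is discouraged

print(weights)  # "nip" loses weight each time it is tried and punished
```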

#78 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 March 2004 - 11:32 PM

Here is another article about how we show emotions towards our robotic creations (similar to the previously posted "Kicking the dog" story). In this article, Sherry Turkle (MIT) makes the claim that we will more readily accept robots that show emotion than those that are intelligent. Basically, we (humans) will become more attached to a "dumb as a rock" robot than to a smart one, as long as it cries, smiles, and loves. It makes some sense to me, considering our evolutionary history.

The full article can be found Here at boston.com

Artificial emotion
By Sam Allis, Globe Columnist, 2/29/2004

Sherry Turkle is at it again. This Friday, she's hosting a daylong powwow at MIT to discuss "Evocative Objects." The front of the brochure includes pictures of an electric guitar, VW Beetle, rubber duckie, and a pipe. "Objects bring philosophy down to earth," she says.

Over the past two decades, the founder of the MIT Initiative on Technology and Self has been watching how our relationships with machines, high tech and low tech, develop. Turkle is best known for her place at the table in any discussion of how computers -- and robots in particular -- will change our lives. This makes her an essential interlocutor in the palaver, sharpened two years ago by a piece written by Sun Microsystems cofounder Bill Joy, that robots are going to take over the world, soon.

"The question is not what computers can do or what computers will be like in the future," she maintains, "but rather, what we will be like."

What has become increasingly clear to her is that, counterintuitively, we become attached to sophisticated machines not for their smarts but their emotional reach. "They seduce us by asking for human nurturance, not intelligence," she says. "We're suckers not for realism but for relationships."

Kids, she has found, define aliveness in terms of emotion: "In meeting objects as simple as Furbys, their conversations of aliveness have to do with if a computer loves them and if they love the computer."

Quite simply, the research boundaries in this field between cognitive thought and feeling are eroding.

Exhibit A: A simple toy like Hasbro's My Real Baby (no longer in production), which exhibits and craves emotion. Last year, Turkle studied the effects of the toy on residents at the Pine Knoll Nursing Center in Lexington. It had acquired four of the dolls and found them particularly effective for the emotional comfort they provided some residents suffering from dementia.

"It is a useful tool to reduce their constant anxiety," says Terry McCatherin, activities director there.

Japan is far ahead of us in this regard, adds Turkle. A major movement is already under way there to bring robotics into nursing homes for companionship, to dispense medicine, to help flip a patient over to avoid bed sores, among many roles.

Then there is AIBO, the Sony robotic dog described on a Web page as follows: "From the first day you interact with it, it will become your companion." Indeed, it remembers you, follows your commands, and develops a personality shaped by you. Turkle famously quoted an elderly woman who said that the robotic pet was better than a real one because it never died.

"We are very vulnerable to technology that knows how to push our buttons in a human way," she says. "We're a cheap date. Something makes eye contact with us and we cave hard. We'd better accept that about ourselves."

Turkle, who has worked closely with the wizards at MIT's Artificial Intelligence lab, remembers vividly the first time she saw Cog, a robot developed there. "It made eye contact with me and traced my movement across the room," she recalls. "It moved its face and torso and paid attention to me and gestured toward me with an outstretched arm. It takes your breath away how you react to a robot looking at you."

The very names are loaded. Entering the scientific lexicon are words like "robo-nanny" and "robo-nurse."

The market for robotics in health care is about to explode, Turkle says. The question is: Do we want machines moving into these emotive areas? "We need a national conversation on this whole area of machines like the one we're having on cloning," Turkle says. "It shouldn't be just at AI conventions and among AI developers selling to nursing homes."

So who are the bad guys here? The developers? It's not that simple, Turkle says: "The developer says, `Hello. I make a doll that says I love you.' " Turkle says its nursing home application speaks more to a society that understaffs its old-age facilities: "If they were packed with young people helping out every day, little robot dolls would not be such a big thrill."

The line between real and simulation continues to blur, too. "Authenticity is to our generation what sex was to the Victorians," she says...



#79 intrigued

  • Guest
  • 16 posts
  • 0

Posted 15 March 2004 - 09:29 PM

I personally think that it is inevitable that we will create an AI that is totally independent, self-improving, and self-reproducing, with its own interests in mind, simply because at some point in time it will be possible. Somebody, somewhere, at some time, will not be able to resist the temptation to do so. But what would that AI want? I do not know, and neither does anyone else. This leads to the fear that causes so many of our wars, if not all of them. So if an AI does have the will to live and we have the will to kill it, there may be catastrophic consequences spawned from our own insecurities.

On the other hand, if it did come to scarcity of resources, we have to realize that they would be able to live in environments that are too harsh for us. Space, perhaps? Other planets? So I still have hope that the T3 scenario would not become reality.

The other thing is that we have no competitors on this planet. I believe that AI would be a catalyst for our own evolution, genetic engineering and so forth, just so we can keep a niche in this universe.

#80 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 16 March 2004 - 05:03 PM

Intrigued, you might be interested in this: http://www.singinst....FAI/anthro.html

#81 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 18 May 2004 - 07:21 PM

I found the last paragraph in this quote kind of scary, given how many computers are routinely taken over by hackers. This is from www.wired.com

By Bruce Sterling
  
San Remo is a flowery resort town going to seed on the shores of the Italian Riviera. Once considered competition for Monte Carlo's glitzy casinos, today it's inundated with white-haired retirees. I watch them totter along the seaside, thinking they could use some mechanical assistance. They'll get it before long, say attendees of the First International Symposium on Roboethics. The robot ethicists are meeting on this bright January morning in a mansion that once belonged to Alfred Nobel.

Since when do machines need an ethical code? For 80 years, visionaries have imagined robots that look like us, work like us, perceive the world, judge it, and take action on their own. The robot butler is still as mystical as the flying car, but there's trouble rising in the garage. In Nobel's vaulted ballroom, experts uneasily point out that automatons are challenging humankind on four fronts.

First, this is a time of war. Modern military science is attempting to pacify tribal peoples with machines that track and kill by remote control. Even the resistance's weapons of choice are unmanned roadside bombs, commonly triggered by transmitters designed for radio-controlled toys.

The prospect of autonomous weapons naturally raises ethical questions. Who is to be held morally accountable for an unmanned war crime? Are machines permitted to give orders? In a world of networked minefields and ever-smarter bombs, are we blundering into mechanized killing fields we would never have built by choice?

The second ominous frontier is brain augmentation, best embodied by the remote-controlled rat recently created at SUNY Downstate in Brooklyn. Rats are ideal lab animals because most anything that can be done to a rat can be done to a human. So this robo-rat, whose direction of travel can be determined by a human with a transmitter standing up to 547 yards away, evokes a nightmare world of violated human dignity, a place where Winston Smith of Orwell's 1984 isn't merely eaten by rats but becomes one.

Another troubling frontier is physical, as opposed to mental, augmentation. Japan has a rapidly growing elderly population and a serious shortage of caretakers. So Japanese roboticists (who have a dominating presence at this Italian symposium) envision walking wheelchairs and mobile arms that manipulate and fetch.

But there's ethical hell at the interfaces. The peripherals may be dizzyingly clever gizmos from the likes of Sony and Honda, but the CPU is a human being: old, weak, vulnerable, pitifully limited, possibly senile.

Frontier number four is social: human reaction to the troubling presence of the humanoid. Sony created a major success with its dog-shaped Aibo, but the follow-up may never reach consumers. The new product, known as the Qrio, is technically good to go and would be hopping off shelves in the Akihabara district right now - except for one hitch. The Qrio is a human-shaped, self-propelled puppet that can walk, talk, pinch, and take pictures, and it has no more ethics than a tire iron.

In his 1950 classic, I, Robot, Isaac Asimov first conceived of machines as moral actors. His robots enjoy nothing better than to sit and analyze the ethical implications of their actions. Qrio, on the other hand, knows nothing, cares nothing, and reasons not one whit. Improperly programmed, it could shoot handguns, set fire to buildings, and even slit your throat as you sleep before capering into a crowded mall to detonate itself while screaming political slogans. The upshot is that you're unlikely to be able to buy one anytime soon.


You can read the full article at Wired



