A.L.I.C.E
Infernity 31 Mar 2005
Human: How would someone who just got defrost from cryonics react?
ALICE: I have no idea HOW you WOULD do it.
Human: how?
ALICE: Dr. Richard S. Wallace programmed me for it.
Human: Then how would I react after getting defrost from cryonics?
ALICE: Maybe I could do it for you. Do you have your own personal chat robot?
Human: How would you react?
ALICE: Dr. Richard S. Wallace programmed me to say specific things in specific contexts.
For all the people who wondered or weren't sure (especially those who fell for it and started telling "her" their names, lol).
~~~
-I see no point in creating a robot species, but there is a point in having the knowledge of how to do it-
Yours truthfully
~Infernity
Matt 31 Mar 2005
I like this chat bot better.
But I haven't tried the ALICE bot in a long time...
EDIT
Actually ALICE is a lot better now
Edited by whoa182, 31 March 2005 - 04:18 PM.
Jay the Avenger 31 Mar 2005
I see no point in creating a robot species...
Check the link in my signature and you'll have an idea pretty soon.
Infernity 31 Mar 2005
Yours truthfully
~Infernity
Jay the Avenger 01 Apr 2005
Infernity 01 Apr 2005
But those who created them must be pretty smart, and of course bored, heh.
Yours truthfully
~Infernity
Infernity 02 Apr 2005
Seed AI is OK, but only for as long as we can always shut it off and it won't be able to destroy us, or even want to do such a thing.
Yours truthfully
~Infernity
Infernity 08 Apr 2005
Yours truthfully
~Infernity
avi_89 09 Apr 2005
Hopefully they are not self-taught, because that may bring on some problems, but creating robots like us can be very beneficial, assuming they have fundamental laws integrated into them, such as not hurting humans. Some of the things robots could do is become maids, workers, and things of that sort, which would require at least some interaction with humans. And who knows, if you are home alone you may get bored and want to play chess or a board game with the robot, and feel like you are playing an actual person rather than a non-speaking chess machine that calculates hundreds of moves per second.
Infernity 09 Apr 2005
However, I do think it would be very useful to create robots that are made to serve us, designed to protect us, and never hurt us. But if we give them the opportunity to be like us, simply made of metal, there will be arguments over whether they should get money and the same entitlements. It will be hellish for them, a fight for survival... That will make them want to rebel.
That will give rise to fights between the flesh-and-blood race and the metal race, which may well be stronger.
Another way for dystopia to begin.
Yours
~Infernity
avi_89 09 Apr 2005
I suppose they could be used for psychological research if their mind is built to emulate the human brain.
And answer me this: if you were in a democratic country, would you vote for robots to take over your jobs? Or even be created in the first place? [sfty]
I doubt it very much. They may have one or two of these for research, but I don't think they're going to have any use for it, at least for now; I can't think of any reason.
Edited by avi_89, 09 April 2005 - 12:40 PM.
Infernity 09 Apr 2005
Yours
~Infernity
avi_89 09 Apr 2005
I can see someone might dream of a world where we coexist, but I would need to get inside the person's head to understand his purposes.
I can give you simple answers, but there are a lot of external factors that go into it, and it is just incomprehensible as of right now.
[glasses] [ii]
Infernity 09 Apr 2005
Won't you be impressed to see someone create a totally humanlike robot with an artificial brain that works exactly like a human's? Won't you think he is a pretty wise guy for having the knowledge to create that?
~Infernity
avi_89 09 Apr 2005
I personally am neutral on whether they should or should not get equal treatment, and until they create a robot like that, I would not know, because I don't know how they would behave. But for now I'm a little bit more on the organic side than the mechanical.
Infernity 09 Apr 2005
yes i would be impressed
OK, now that's all these people want to know.
i dont know how they would behave
Exactly like humans, but the treatment of these robots by humans will naturally be different. There will be a terrible war of races.
Yours truthfully
~Infernity
justinb 11 Apr 2005
Exactly like humans, but the treatment of these robots by humans will naturally be different. There will be a terrible war of races.
I am not so sure. Here in America, most people treated blacks as inferior, even as recently as the 1960s. But today the converse is true: blacks are generally welcomed by everyone. Children today are more and more comfortable with technology, so when robots start popping up I don't think the youth will care much, though older people might be put off. And if a robot can emulate humans in every way, then who cares whether they are robots or not? Robots will appear inhuman at first, but as they are refined, the differences will go away.
Infernity 11 Apr 2005
We are talking about an avoidable yet expectable overpopulation too! You have to get them jobs, houses, education, etcetera, but what for?? Why work so hard at making things harder for ourselves?!
Sure, they will get scorn! Some people will not employ robots.
And I still don't understand what is so good about it. Only one thing: the knowledge that we are able to create such things. It would be more useful to create robots that are from the beginning inferior to us, programmed to act for our benefit...
Moreover, the robots may rebel if they keep getting such treatment; they will create lots and lots of robots and destroy us.
War.
Yours truthfully
~Infernity
justinb 11 Apr 2005
We are talking about an avoidable yet expectable overpopulation too! You have to get them jobs, houses, education, etcetera, but what for?? Why work so hard at making things harder for ourselves?!
Sure, they will get scorn! Some people will not employ robots.
Since robots will most likely function on some sort of inexhaustible power source, they won't need jobs. They will never need to sleep, eat, go to the restroom, etc. They can easily download information and experience from others, so they won't need to go to school. They could develop all of their own clothes so they fit in with humanity.
If robots are advanced enough from the onset, I don't see any of the problems you describe happening. If they are aware, i.e. conscious, then they will have the same rights that humans have. Hell, they will probably know how to help humans accomplish a lot of tasks, perhaps even give them advice on dealing with others.
The only problem I see happening is them being much smarter than us, and in turn we will feel envious and afraid of them. If this happens, then humanity will be forced to become cybernetic.
Infernity 11 Apr 2005
No advisers, nothing smarter nor more helpful.
Can't you see? They will be like us!
If, as you say, they won't need all this stuff, then why are they living? To download information and steal ideas...?
Eventually they will be bored, and do you know what people tend to do when they are bored? Break the conventional laws. Evil. They will try to rule us, so I believe.
What comes out of them? What can understand a person better than a friend? How can someone become a real friend of some... fake? How many of them will be created? What accidents will kill them?
WHY?
Yours truthfully
~Infernity
justinb 11 Apr 2005
Justin,
No advisers, nothing smarter nor more helpful.
Can't you see? They will be like us!
If, as you say, they won't need all this stuff, then why are they living? To download information and steal ideas...?
Eventually they will be bored, and do you know what people tend to do when they are bored? Break the conventional laws. Evil. They will try to rule us, so I believe.
What comes out of them? What can understand a person better than a friend? How can someone become a real friend of some... fake? How many of them will be created? What accidents will kill them?
WHY?
Yours truthfully
~Infernity
That all depends on how we construct them; if we make them benevolent this won't happen. That is, if we make them more human than we are, we shouldn't worry about them becoming evil.
Infernity 11 Apr 2005
That all depends on how we construct them, if we make them benevolent this wont happen
Doah... That's what I'm saying! We don't want them to be like us; we want to program them to be useful and harmless! To protect us, to serve us...
Why create a new race just like us? I think we have an agreement here. Do you see where I am heading now?
Yours
~Infernity
justinb 11 Apr 2005
Justin,
Doah... That's what I'm saying! We don't want them to be like us; we want to program them to be useful and harmless! To protect us, to serve us...
Why create a new race just like us? I think we have an agreement here. Do you see where I am heading now?
Yours
~Infernity
Yes, I do understand. There is no point in creating robots, since they will undoubtedly be smarter than us anyway. We will become the "robots" eventually anyway. Plus, robots are a bit retro to begin with.
Infernity 13 Apr 2005
What's wrong with them being smarter? I mean, as long as they are programmed to teach us, that's pretty cool with me. They are just not supposed to be humanlike. Serve, protect, teach. That's it!
Yours
~Infernity
Matt 13 Apr 2005
Take a look around here, Justin: http://www.singinst.org/
armrha 13 Apr 2005
First off, there is a solid line between robots and artificial intelligence.
In the future, our automation capability will increase. This is unavoidable, and is currently happening. I remember being very impressed seeing an AIBO make its way back to its charging station for the night. It won't be long before they find a practical application for an AIBO-ish utility robot, maybe vacuuming robots that recharge themselves and perform simple duties around a garage. As the prices drop, they will get more and more prevalent. Eventually, fast food employees will be replaced with tireless self-managing robots; 7-11s too. Not that the robots will look like people or anything of the sort: I picture a futuristic convenience store where your identity is stored as you enter, and you pick up what you want and walk out, automatically crediting your account based on the RFID tags within the objects you picked up. Who knows how they will really end up, but any job that requires no real thinking can be replaced by robots; it's just a question of when it is cheaper to do it with robots than with people.
This is also good for the economy. Getting the same amount of work for less money is at least theoretically good for everyone in the long run (except the people who get fired...). Eventually humans would be free to work only on things worthy of human attention. The economic gains are obvious. Imagine the money saved if a garbage collection agency didn't have to pay any workers and just had a fleet of tireless self-repairing robots. All they would have to do is keep the parts in stock, and theoretically the robots would be cheaper than their human counterparts (or they wouldn't buy them in the first place).
But an important thing to note is that none of these robots would have any more capability for rebellion than your word processor. They wouldn't (and shouldn't: it would be cruel) have any emotions. There is no sentience. They are no more capable of creative thinking than an AIBO or a robotic arm on an assembly line. A robotic receptionist might act like a male or a female, look basically human, and display some personality quirks or fake emotions, but in reality it would just be an ALICE with better NLP and programs for recognizing and manipulating physical objects when directed to. ALICE is incapable of stepping outside the bounds of what she was programmed to do, as you demonstrated. The robots of the future will be the same way.
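ALICE-style bots are driven by hand-written pattern/template pairs (AIML), which is why they can never step outside what they were programmed to say. A minimal sketch of that matching idea, using a toy pattern set invented here rather than ALICE's real AIML files, might look like:

```python
import re

# Toy pattern/template pairs in the spirit of AIML (invented for
# illustration; not ALICE's actual knowledge base). "*" is a wildcard,
# the first matching pattern wins, and "*" alone is the fallback.
PATTERNS = [
    ("HOW WOULD *", "I have no idea how you would do it."),
    ("WHO * PROGRAMMED *", "Dr. Richard S. Wallace programmed me."),
    ("*", "Interesting. Tell me more."),
]

def respond(user_input):
    # Normalize the way AIML does: uppercase, strip punctuation.
    text = user_input.upper().strip(" ?.!")
    for pattern, template in PATTERNS:
        regex = "^" + re.escape(pattern).replace(r"\*", ".*") + "$"
        if re.match(regex, text):
            return template

print(respond("How would someone react?"))
print(respond("hello"))
```

However many thousands of patterns are added, the bot only ever selects a canned template; nothing here could "decide" to do something else.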
Now to the point of intelligence. Eventually we may construct some programs with the ability to reason the way we do. These programs may or may not have to have emotions to function properly. If not, then we don't have to worry about anything. Without emotions, there is no pride, no desire, no shame, no embarrassment, no being 'tired', no fear, no anger, no prejudice, etc. They can be smart and crunch numbers, but they won't try to rebel. They can't 'want' anything. This just expands the range of tasks that we can automate.
The strong AI point is a little tricky. Making intelligent, emotional constructs would most certainly be creating our competition. We'd be idiots to do that, unless we ensured that those constructs were as thoroughly a part of us as our minds are (as Stephen Hawking recently suggested).
Alternatively, we could ensconce those intelligences in a simulation such that it is as impossible for them to reach a 'reality' level up as it would apparently be impossible for us to reach a level up into the framework of our reality. We could design loose predictive models, based on the same set of rules that governs the AI, to guess what it was about to do without the AI's knowledge, and automatically halt it if it was probably about to try something dangerous to us. I think this would all be ethically wrong, though. Though a thoroughly constructed AI might want nothing more in its life than to browse and catalog the internet, storing semantic information in a special tree for search engines, it would have been crippled from its birth. It's like genetically modifying a person so that all they ever want to do is work spreadsheets or die for their country. Anything a human wouldn't want to do, a sentient intelligence shouldn't be forced to do. Overall, the 'rebellious AI' scenario is pretty easy to contain.
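The containment loop described above (predict the AI's next step with a model run outside its knowledge, and halt before a dangerous action executes) can be sketched abstractly. Everything below is hypothetical: a stand-in agent with an invented "danger" rule and threshold, not any real AI:

```python
# Hypothetical sketch of the "predict, then permit" idea: before each
# step, a cheap model (run outside the agent's knowledge) predicts the
# agent's next action, and the sandbox halts it if the prediction looks
# dangerous. The agent, rule, and threshold are all invented here.

def predict_next_action(state):
    # Stand-in predictive model: this toy agent escalates its
    # resource requests as time goes on.
    return {"action": "request_resources", "amount": state["step"] * 10}

def is_dangerous(predicted, limit=50):
    return predicted["amount"] > limit

def run_sandboxed(max_steps=10):
    state = {"step": 0, "halted": False}
    executed = []
    for _ in range(max_steps):
        state["step"] += 1
        predicted = predict_next_action(state)
        if is_dangerous(predicted):
            state["halted"] = True   # halt BEFORE the step executes
            break
        executed.append(predicted)   # prediction was safe: allow it
    return state, executed

state, executed = run_sandboxed()
print(state["halted"], len(executed))  # halted at step 6 after 5 safe steps
```

The point of the design is that the check runs before the predicted step, so the agent never gets to execute the action that trips the threshold.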
There's no reason to object to autonomous dumb robots doing what they can, and artificial intelligence poses a far greater gain than risk in the future. While a 'rebellious AI' could possibly have its own telepresence robot, that's not really a 'robot revolution' so much as a sentient creature's revolution. To me, robots are very separate from intelligent creatures.
Anyway, I don't think it's anything to worry about... at least not until the point comes where the intelligent, emotional constructs are our consciousnesses pulled from our minds, or the intelligences are smarter than we are and emotional (neither one is going to sneak up on us). In either of those two cases, I think it is just a stage of our development. Our descendants are taking to the forefront; the children of the human race. I hope we can make sure we are the first option instead of subject to the whims of the second. Either way, they'll probably outlast these ecosphere-dependent, non-redundant, degradation-prone mind-support units we currently call home.
Even a weak singularity couldn't hurt humanity. It couldn't do anything more than what we ask it to: it has no motivation. One would have to wonder why a strong singularity would want us all dead, but if it truly is a strong singularity it pretty much has to be right about it... heh.
Of course I could be wrong: Your word processor could be plotting your demise as you read...
Infernity 14 Apr 2005
However, it was a smart move to mention 'her' and even refer back to the beginning of the thread...
Thanks, that puts everything in proportion.
Well, everything is 'perfect', more or less, good enough on the whole, but there is one problem: I suppose the robot builders do not think of all of your points in the same way.
Why?
Because it has too many sub-ideas in it.
It would be easy to try to change someone's mind when you have one basic datum, but when it is built out of a lot of tiny ones, you are sure to stumble on one and fail to make people see the whole thing.
And people flinch when they have to face too much at once.
Now, this all comes together; you cannot separate it. It would be unbalanced to present only a few details at a time; the process would fall apart in the middle and might play against your ideas!
The problem is, one disorder aborts the whole thing; one wrong detail and the balance is gone...
What are the odds you'll convince those crazies to act accordingly? Insignificant...
I would worry in your place, not because it will surely lead to a mess, but because people might not act, nor even think, that way...
Summarizing: what seems to you a more or less perfect whole picture contains lots of little interdependent details that may cause lots of arguments and make it harder to convince anyone. (Addition:) Although an abundance of details also means a well-grounded composition, one that has given thought to all aspects.
But some humans just won't listen...
Hopefully I'm wrong.
Yours truthfully
~Infernity
P.s.
Your word processor could be plotting your demise as you read...
Heh, well, theoretically that's possible, like everything. But it's not very plausible... However, live to see it and then tell me it was a wrong assumption (there's no point talking to a deceased person to 'prove' to 'him' you were right).
emerson 14 Apr 2005
Look how different a human's and a dolphin's behaviour and interaction with the world are. They're both highly intelligent, but is it the same kind of intelligence? As different as the structures of these two brains are, how different are our own attempts to mimic some of their basic functions? Personally, I don't judge AI by human standards. I do use biological standards, but only because that's the only range we really have at hand. Is a human intelligent? I'd say yes, even if I have some issues with a lot of how the human mind works. It's not perfect, but it can learn, communicate, reason, and possibly even improve on itself. Are other apes intelligent? I'd say yes to this as well. They may not be able to modify their own intelligence, but they can fashion crude tools, communicate with each other, learn, and reason. Not as well as humans, but most of the same characteristics are still there. What about non-primates, dogs for example? I'd certainly call them intelligent as well. As intelligent as apes, no, but they can still reason, learn, and communicate, to lesser extents on some counts, and much lesser on others such as communication, but it's still impressive. One could keep playing this game all the way down to bacteria or lower-order fungi, and even there an argument could still be maintained for some measure of intelligence, even if minuscule. I've heard of one study on slime mold running mazes, for example, though I'm not really familiar with it, so perhaps I should hold my tongue on that. Still, even at a low level I'd be willing to argue for some intelligence. Not much, to be sure, but some. And I see some of that in our current AI as well. Not chatbots like ALICE, to be sure; I'd draw the line at there needing to be some actual learning involved in their behaviour.
But even for a simple neural network, I'd be willing to argue for intelligence in the same way I would argue for some small measure of intelligence in a fruit fly. I do see some importance to chatbots though, even if only at the level of text output as a result of if-then statements. Not so much for what they are, but for what watching people interacting with them says about how we view the world and the things in it.
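A "simple neural network" of the fruit-fly-level sort mentioned above can be surprisingly small. The sketch below trains a single perceptron, the simplest neural unit, on the logical AND function; the training data, learning rate, and epoch count are just illustrative choices:

```python
# A single perceptron trained on logical AND: about the smallest system
# one could call a "simple neural network". Data, learning rate, and
# epoch count are illustrative choices, not from the original post.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron learning rule:
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # learned AND: [0, 0, 0, 1]
```

The "learning" here is nothing more than nudging two weights and a bias toward fewer mistakes, but it is learning from examples all the same, which is the low bar the post argues for.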
Sorry if this is a bit rambling; schoolwork seems to be quickly burning me out to around the level of ALICE myself...
Edited by emerson, 14 April 2005 - 06:38 AM.