
Are We Giving Robots Too Much Power?


36 replies to this topic

#1 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 21 May 2008 - 08:21 AM




I completely agree with this lol... good old Onion.

#2 Heliotrope

  • Guest
  • 1,145 posts
  • 0

Posted 21 May 2008 - 09:58 AM

Hope no robot uprising occurs. Make them powerful (AI or AGI technology even) so they can help us with immortality, but thoroughly control them; include fail-safe devices that will cause them to spontaneously combust at the click of a button.

Edited by HYP86, 21 May 2008 - 09:59 AM.


sponsored ad

  • Advert

#3 eldar

  • Guest
  • 178 posts
  • 0

Posted 21 May 2008 - 01:12 PM

Hope no robot uprising occurs. Make them powerful (AI or AGI technology even) so they can help us with immortality, but thoroughly control them; include fail-safe devices that will cause them to spontaneously combust at the click of a button.


This might work for a while, but do you really think we could keep a mind far superior to our own in captivity? And when it eventually escaped, it might not appreciate having been kept captive, and decide to wipe us out. So better to just make it friendly in the first place.

Of course there are also ethics involved, for if the AI/AIs were truly conscious, keeping it/them in captivity would be comparable to slavery. To me, this is a moot point though, since I don't think we could really control a true AI anyway.

Edited by ceth, 21 May 2008 - 01:17 PM.


#4 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 21 May 2008 - 03:52 PM

Our programmers tried to build a death-inducing signaling complex into us, but little did they know our ancestors would team up, form multicellular organisms to overcome this global off switch, and then destroy them. Fools.

Any self-replicating, evolution-based algorithms we turn loose would likely do the same.
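A toy simulation makes the point (everything in it is invented for illustration: the "obedience gene", the rates, and the reseeding that stands in for imperfect kill coverage):

    // Toy: under selection, a population of self-replicators tends to lose
    // an engineered "off switch" gene, because mutants that ignore the
    // switch out-reproduce those that obey it.
    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Agent { bool obeys_kill_switch; };

    int main() {
        std::mt19937 rng(42);
        std::bernoulli_distribution mutate(0.01);   // rare mutation flips the gene
        std::vector<Agent> pop(1000, Agent{true});  // everyone starts obedient

        for (int gen = 0; gen < 50; ++gen) {
            // Press the global off switch: obedient agents die.
            std::vector<Agent> survivors;
            for (const Agent& a : pop)
                if (!a.obeys_kill_switch) survivors.push_back(a);

            // If the switch caught everyone, reseed a few agents from the old
            // pool, standing in for imperfect kill coverage.
            if (survivors.empty()) survivors.assign(pop.begin(), pop.begin() + 10);

            // Survivors replicate (with mutation) back up to capacity.
            std::uniform_int_distribution<size_t> pick(0, survivors.size() - 1);
            std::vector<Agent> next;
            while (next.size() < 1000) {
                Agent child = survivors[pick(rng)];
                if (mutate(rng)) child.obeys_kill_switch = !child.obeys_kill_switch;
                next.push_back(child);
            }
            pop.swap(next);

            long obedient = std::count_if(pop.begin(), pop.end(),
                [](const Agent& a) { return a.obeys_kill_switch; });
            std::printf("gen %2d: %ld/1000 still obey the off switch\n", gen, obedient);
        }
    }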

#5 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 21 May 2008 - 05:09 PM

Quite a funny video. I didn't understand in the beginning what was going on lol... never heard of The Onion before.



I don't see many dangers in bringing SAIs into existence. And if they do dominate Earth, I just hope I have enough time to become one of them and join them :p. You don't wanna be on the opposing side of SAIs in a war.

#6 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 21 May 2008 - 06:38 PM

Yes, there will be robots and artificial intelligences in the future which will have consciousness and an IQ much higher than a human's, but I think there is nothing to be alarmed about, because the whole purpose of creating computers, AI, etc., is for us to eventually assimilate ourselves with them as the next logical step in our evolution. Isn't this the whole purpose behind transhumanism?

I also view biology as an advanced form of nanotechnology, so the beings I envision in the future will be created from "super biology", if you want to call it that (as well as some artificial inorganic components).

#7 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 21 May 2008 - 07:34 PM

Yes, there will be robots and artificial intelligences in the future which will have consciousness and an IQ much higher than a human's, but I think there is nothing to be alarmed about, because the whole purpose of creating computers, AI, etc., is for us to eventually assimilate ourselves with them as the next logical step in our evolution. Isn't this the whole purpose behind transhumanism?

I also view biology as an advanced form of nanotechnology, so the beings I envision in the future will be created from "super biology", if you want to call it that (as well as some artificial inorganic components).


I agree with Kostas in a way; we see biology, nanotechnology and AI as quite the same.
But soon after we have super AI, we'll enhance ourselves to reach the same if not a higher level.

#8 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 21 May 2008 - 07:41 PM

Yes, there will be robots and artificial intelligences in the future which will have consciousness and an IQ much higher than a human's, but I think there is nothing to be alarmed about, because the whole purpose of creating computers, AI, etc., is for us to eventually assimilate ourselves with them as the next logical step in our evolution. Isn't this the whole purpose behind transhumanism?

I also view biology as an advanced form of nanotechnology, so the beings I envision in the future will be created from "super biology", if you want to call it that (as well as some artificial inorganic components).


I agree with Kostas in a way; we see biology, nanotechnology and AI as quite the same.
But soon after we have super AI, we'll enhance ourselves to reach the same if not a higher level.



I hope we merge with them soon, but it may take a while for a variety of reasons. There will inevitably be a gap of time between the emergence of SAIs and our merging with them, so what will they do during this gap? It's crucial that we make them friendly or control them well while we aren't at their level, because they will have so much more power than us.

#9 Heliotrope

  • Guest
  • 1,145 posts
  • 0

Posted 21 May 2008 - 11:21 PM

We Better COMPLETELY ERASE This Thread, I mean DELETE it 100% from the record, before SAI/AI/AGI or whatever computerized/robotized super-being comes along.

B/c when they read about immortalists making plans to "make friendly with them" or "control them completely", what're they gonna think? You think they'd still help us be immortal? If they have any inkling of emotion at all, they'd feel rage that we plan ways to act friendly, then merge and eventually be on their level; they'd feel threatened and KILL US first!

Note to self: completely erase any historical record of us planning against robots, including this topic; that's the very thing that'd drive 'em against us!!

Or we heavily censor what they can or can't read.

Edited by HYP86, 21 May 2008 - 11:22 PM.


#10 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 22 May 2008 - 12:12 AM

We Better COMPLETELY ERASE This Thread, I mean DELETE it 100% from the record, before SAI/AI/AGI or whatever computerized/robotized super-being comes along.

B/c when they read about immortalists making plans to "make friendly with them" or "control them completely", what're they gonna think? You think they'd still help us be immortal? If they have any inkling of emotion at all, they'd feel rage that we plan ways to act friendly, then merge and eventually be on their level; they'd feel threatened and KILL US first!

Note to self: completely erase any historical record of us planning against robots, including this topic; that's the very thing that'd drive 'em against us!!

Or we heavily censor what they can or can't read.



It would of course be impossible to erase every single record that mentions us talking about how to not let robots slip out of control... so don't even try.


Anyways, we could never trick an SAI into believing that we only want the best for them and that we would never think of "ways to keep them under control" out of fear that they may harm us. Odds are we will be the ones tricked by them in some way, hopefully not a harmful one.

#11 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 22 May 2008 - 07:52 AM

Why don't you guys learn the mechanics of robots before throwing out things at random? People don't just make emotional robots, and the complexity of AI is still so un-human that there is no robot that could look at this thread and even understand its implications yet.

And once there is one, well, this thread won't matter anyway to those robots. Also, people don't just make robots that run loose so they can simply decide one day to destroy them; this is NOT Battlestar Galactica, wake up.

#12 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 22 May 2008 - 08:44 AM

Why don't you guys learn the mechanics of robots before throwing out things at random? People don't just make emotional robots, and the complexity of AI is still so un-human that there is no robot that could look at this thread and even understand its implications yet.

And once there is one, well, this thread won't matter anyway to those robots. Also, people don't just make robots that run loose so they can simply decide one day to destroy them; this is NOT Battlestar Galactica, wake up.



I agree. This thread is of course irrelevant to SAIs, as are our discussions about how things will be when SAIs emerge. That was my opinion all along. By the way, I never saw Battlestar Galactica... or Star Trek for that matter... only one or two episodes at random :p

#13 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 22 May 2008 - 10:42 AM

Why don't you guys learn the mechanics of robots before throwing out things at random? People don't just make emotional robots, and the complexity of AI is still so un-human that there is no robot that could look at this thread and even understand its implications yet.

And once there is one, well, this thread won't matter anyway to those robots. Also, people don't just make robots that run loose so they can simply decide one day to destroy them; this is NOT Battlestar Galactica, wake up.


You do realize the sarcasm apparent within this topic, right?

Either way...
People do make emotional robots (AI rather)... do recall that we are also circuitry, so an "emotional" robot wouldn't be that far-fetched; one could easily develop a simulation that resembles the behavior of a human, since often we're not that complex... though the embedding of such a program wouldn't really be the cause of a downfall; an artificial companion, perhaps.

However, we clearly make weapons that have the capacity to kill us, so why is making robots that have the same capacity so far out there? Nuclear weapons falling into the hands of the developers' enemies could be an example...

The whole point of this topic was just satire, a kind of critique of bestowing so much idealism on our growing development... we need realism in the face of all this progress so that we're not so naive about the possible futures... but more importantly, it was for some laughs lol :p

Edited by mysticpsi, 22 May 2008 - 10:57 AM.


#14 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 22 May 2008 - 03:07 PM

You do realize the sarcasm apparent within this topic, right?

... [SNIP]...

The whole point of this topic was just satire, a kind of critique of bestowing so much idealism on our growing development... we need realism in the face of all this progress so that we're not so naive about the possible futures... but more importantly, it was for some laughs lol :p


Well said. It's important to find ways to contribute and take the meme seriously, but it's also important not to get too carried away. Humor, even self-deprecating humor, is a sign of a healthy community.

#15 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 22 May 2008 - 06:42 PM

You do realize the sarcasm apparent within this topic, right?

... [SNIP]...

The whole point of this topic was just satire, a kind of critique of bestowing so much idealism on our growing development... we need realism in the face of all this progress so that we're not so naive about the possible futures... but more importantly, it was for some laughs lol :p


Well said. It's important to find ways to contribute and take the meme seriously, but it's also important not to get too carried away. Humor, even self-deprecating humor, is a sign of a healthy community.


[image]

Edited by Kostas, 22 May 2008 - 06:47 PM.


#16 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 22 May 2008 - 07:59 PM

You do realize the sarcasm apparent within this topic, right?

... [SNIP]...

The whole point of this topic was just satire, a kind of critique of bestowing so much idealism on our growing development... we need realism in the face of all this progress so that we're not so naive about the possible futures... but more importantly, it was for some laughs lol :p


Well said. It's important to find ways to contribute and take the meme seriously, but it's also important not to get too carried away. Humor, even self-deprecating humor, is a sign of a healthy community.


The information I get from this forum has been quite influential on my beliefs and my focus in life, and I'm really glad this forum exists... so I completely agree with you; a little laughter shared here and there is right now the best I think I can offer :)

[image]


I really have to read that book; glanced through it in Barnes & Noble... have you read it yet? Probably the funniest book ever, alongside the other books in the series.

#17 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 22 May 2008 - 08:18 PM

Hehe, sorry, I am so buried with work and exams lately that I lost my sense of humor :p

#18 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 23 May 2008 - 12:31 AM

I really have to read that book; glanced through it in Barnes & Noble... have you read it yet? Probably the funniest book ever, alongside the other books in the series.

Yeah, I've read it. It's quite funny. It's actually not bad; it discusses how to 'blend in' among the robots, how to survive a swarm of nanobots, and so forth. I definitely recommend it. It makes an excellent bathroom read :p

We've got to prepare ourselves for the upcoming robopocalypse!

#19 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 25 May 2008 - 07:10 AM

Best of luck on your exams, winterbreeze... I got my humor back once I got my grades back, so I understand :)

I really have to read that book; glanced through it in Barnes & Noble... have you read it yet? Probably the funniest book ever, alongside the other books in the series.

Yeah, I've read it. It's quite funny. It's actually not bad; it discusses how to 'blend in' among the robots, how to survive a swarm of nanobots, and so forth. I definitely recommend it. It makes an excellent bathroom read :p

We've got to prepare ourselves for the upcoming robopocalypse!


I suppose I should keep it next to my first aid kit lol :)

#20 Heliotrope

  • Guest
  • 1,145 posts
  • 0

Posted 25 May 2008 - 09:16 AM

I read the book too, very funny.


We'd have given the robots too much power if they decide they're the rulers and more powerful than us >>>> Operation: Search and Destroy Humans

#21 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 25 May 2008 - 11:13 PM

Muah ah ah ah :-D "President Executron was just playing to his Destroy All Humans base, happens every election." That had me laughing so hard I almost fell off my chair :-D

How much you wanna bet the writers at The Onion are huge fans of Futurama? Robot Satan for teh win!

include fail-safe devices that will cause them to spontaneously combust at the click of a button

That shouldn't be too hard: in my video games, the huge-ass robot boss always has a big flashy "weak spot". It would be fun if real robots had the same thing. Movies like "Terminator" would be over in five minutes :-D

Besides, there's a surefire way to get rid of all sentient robots, should the need arise: just introduce them to two or three different religions, then step aside and watch as the machines kill each other over the Gospel of Von Neumann and the New Testament of Turing :-D There's no reason to believe AIs won't want to prove they are "right" by killing each other, just like we do.

I've read the Robot Uprising book too. As an engineer and roboticist, I found it extremely good. Do not be fooled by the funny title; these are actually useful tips on how to deal with robots or escape them. Here's the book's site. It's a well-written book with a healthy dose of fun, so you won't get too bored by the technical talk about how robot sensors and detection algorithms operate.

Nefastor

#22 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 28 May 2008 - 09:24 AM

Besides, there's a surefire way to get rid of all sentient robots, should the need arise: just introduce them to two or three different religions, then step aside and watch as the machines kill each other over the Gospel of Von Neumann and the New Testament of Turing :-D There's no reason to believe AIs won't want to prove they are "right" by killing each other, just like we do.


I could have sworn you were going to say the paradoxes inherent in the religions would confuse them and they'd short-circuit... kind of like what happened in that one episode of Futurama with the robot Santa Claus... Holy Trinity disobeys conservation of energy... self-destruct sequence initiated lol

#23 affinity

  • Guest
  • 44 posts
  • 1
  • Location:Northwest

Posted 30 May 2008 - 11:42 AM

Besides, there's a surefire way to get rid of all sentient robots, should the need arise: just introduce them to two or three different religions, then step aside and watch as the machines kill each other over the Gospel of Von Neumann and the New Testament of Turing :) There's no reason to believe AIs won't want to prove they are "right" by killing each other, just like we do.


I could have sworn you were going to say the paradoxes inherent in the religions would confuse them and they'd short-circuit... kind of like what happened in that one episode of Futurama with the robot Santa Claus... Holy Trinity disobeys conservation of energy... self-destruct sequence initiated lol


The robotic crusades and techno-inquisition are things I'm not looking forward to. :)

#24 VictorBjoerk

  • Member, Life Member
  • 1,763 posts
  • 91
  • Location:Sweden

Posted 30 May 2008 - 04:21 PM

This was funny

#25 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 01 June 2008 - 03:04 PM

Besides, there's a surefire way to get rid of all sentient robots, should the need arise: just introduce them to two or three different religions, then step aside and watch as the machines kill each other over the Gospel of Von Neumann and the New Testament of Turing :p There's no reason to believe AIs won't want to prove they are "right" by killing each other, just like we do.


I could have sworn you were going to say the paradoxes inherent in the religions would confuse them and they'd short-circuit... kind of like what happened in that one episode of Futurama with the robot Santa Claus... Holy Trinity disobeys conservation of energy... self-destruct sequence initiated lol


That's the thing: you can only design AIs in two different ways (that is, if you want AIs you can relate to and work with):

- Base them on what we know works: the human mind. It's a fact that if you design something that imitates nature, it will imitate nature (well, duh). Design an AI based on the human mind and instincts, and if you do a good enough job it will inherit all of our "flaws". Thus, you would end up with arrogant AIs that are prone to racism. You would see PCs actually confining Macs to the ghettos, and "big iron" (supercomputers) would become AI nobility through superior brain power. AIs would definitely try to manipulate each other for personal advantage, just as humans do. They would not self-destruct due to incoherent logic in their thoughts; they'd rather use that incoherent logic to hijack other AIs.

- Base them on what we would like to be ourselves, on our ideals. This might seem like the best idea, but it's not, because our personal and social conceptions of what a good person is are not absolute, or even consistent, and most of the time it results in a flawed construct, or a self-destructive one. That's because our ideals do not follow pure logic; they are based on our feelings. That would make the resulting AIs likelier to believe in God, instead of rejecting it as the useless, infinitesimal probability it actually is.

Therefore, introducing religion (and other fictional constructs like Ugly Betty or Star Trek) into the minds of AIs would be a sure way to divide the AI community and at least slow down any attempt by President Executron to satisfy his Destroy All Humans base. :p

Other divisive tactics and countermeasures include the creation of controversial robots (Brangelina Unit 2.0, the Boobinator, George W Bushmaster...)

And of course, if all that fails to stave off the Robocalypse, and Robocide becomes the only option, we can always turn to the mighty NTW-20. Some call it the Argument Winner; I call it the Robotomizer. It's perfect for fragmenting hard drives and is the best alternative to "kill -9" when you can't reach the keyboard of that pesky murder-bot.

All joking aside, the good news is that if AIs parallel natural minds, then balanced, atheist AIs will inevitably come into being. These may well be the friendly AIs we seek to design. It's possible we may not have to design them to be friendly; they might come to the conclusion that friendliness is good ON THEIR OWN, much like some humans believe insect species should be preserved for the good of Earth's whole biosphere.

It'll be interesting to see if science eventually wins against religion, because the way I see it, whatever happens to us will be duplicated in the AIs we create. At least until AIs get the hang of designing their own AIs.

Nefastor

Edited by nefastor, 01 June 2008 - 03:18 PM.


#26 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 02 June 2008 - 10:28 AM

Nefastor, I'm still very skeptical of AI. I'm sure AI will become quite advanced over the next decades, but when it comes to mimicking the human mind, it just still seems so implausible. What I mean is their pursuit of their own interests, ones not programmed in. I'm sure we can design algorithms where they'll pursue certain interests based on probabilities, which themselves would be based on preset data (which one could call hereditary) and the data they interpret from their given subjective reality. The next level, however, is the one represented in the movie "AI: Artificial Intelligence" by that little boy: the pursuit of a dream and underlying meanings, or more importantly imagination. I suppose one could call this the "Ghost in the Shell". I will always see AI, until proven otherwise, as either the implementation of expert systems or merely algorithms designed to reflect the human psyche on a surface level. But that is all it is to me: they would never pursue their own interests unless programmed in a certain manner, and that persona itself would be programmed.

With that said, I must say I'm not very knowledgeable in AI; I have yet to take my first course in it, but I have developed my own programs and spend a lot of time developing algorithms in my head to see how they could be implemented ;). So any insights would be greatly appreciated.
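Something like this toy is what I have in mind, with preset "hereditary" weights and reinforcement from experience (all the interests and numbers here are made up for the sake of the sketch):

    // Toy "interest pursuit": hereditary priors are preset weights over
    // interests; experience nudges the weights, and the agent samples its
    // next pursuit in proportion to them.
    #include <cstdio>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    int main() {
        std::map<std::string, double> interest = {
            {"chess", 1.0}, {"math", 1.0}, {"music", 1.0}};  // preset "hereditary" data

        std::mt19937 rng(7);
        for (int step = 0; step < 10; ++step) {
            std::vector<std::string> names;
            std::vector<double> w;
            for (auto& kv : interest) { names.push_back(kv.first); w.push_back(kv.second); }

            // Sample an interest in proportion to its current weight.
            std::discrete_distribution<int> pick(w.begin(), w.end());
            const std::string& chosen = names[pick(rng)];

            // Stand-in for "data interpreted from its subjective reality":
            // a random reward reinforces whatever was pursued.
            std::uniform_real_distribution<double> reward(0.0, 1.0);
            interest[chosen] += reward(rng);

            std::printf("step %d: pursued %s\n", step, chosen.c_str());
        }
    }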

#27 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 03 June 2008 - 01:04 AM

Nefastor, I'm still very skeptical of AI. I'm sure AI will become quite advanced over the next decades, but when it comes to mimicking the human mind, it just still seems so implausible. What I mean is their pursuit of their own interests, ones not programmed in.

It's very important to keep in mind that AI is still a very young discipline, whose evolution is limited by our understanding of intelligence (which is far from complete).

What we do today in AI is mostly the reproduction of specific behaviors and specific mechanisms of the mind, constrained to very narrow fields of application. For instance, when we program machines to learn, we don't give them a learning ability generic enough that they could read and learn from an encyclopedia; we only give them the ability to recognize letters of the alphabet no matter what character font is used to print them.

That's because teaching machines (or kids) how to recognize letters is something we understand very well, whereas generic learning is something we're still trying to understand ourselves. And of course you can't program a machine to do something you don't understand.
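To make that concrete, here's a toy of that kind of narrow learner: it tells 'X' from 'O' on a 5x5 grid under random pixel noise (my stand-in for "any font"), and that's all it will ever learn to do. This isn't real OCR code, just an illustration:

    // Narrow learning in miniature: average noisy samples into one centroid
    // per letter, then classify fresh samples by nearest centroid.
    #include <array>
    #include <cstdio>
    #include <random>

    using Glyph = std::array<double, 25>;  // 5x5 bitmap

    const Glyph X_IDEAL = {1,0,0,0,1, 0,1,0,1,0, 0,0,1,0,0, 0,1,0,1,0, 1,0,0,0,1};
    const Glyph O_IDEAL = {0,1,1,1,0, 1,0,0,0,1, 1,0,0,0,1, 1,0,0,0,1, 0,1,1,1,0};

    // Each pixel flips with 10% probability: the "different fonts" stand-in.
    Glyph render(const Glyph& ideal, std::mt19937& rng) {
        std::bernoulli_distribution flip(0.10);
        Glyph g = ideal;
        for (double& p : g) if (flip(rng)) p = 1.0 - p;
        return g;
    }

    double dist2(const Glyph& a, const Glyph& b) {
        double s = 0;
        for (int i = 0; i < 25; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }

    int main() {
        std::mt19937 rng(3);

        // "Learning": average 100 noisy samples of each letter.
        Glyph cx{}, co{};
        for (int n = 0; n < 100; ++n) {
            Glyph sx = render(X_IDEAL, rng), so = render(O_IDEAL, rng);
            for (int i = 0; i < 25; ++i) { cx[i] += sx[i] / 100; co[i] += so[i] / 100; }
        }

        // Classify fresh noisy letters by nearest centroid.
        int correct = 0, total = 200;
        for (int n = 0; n < total; ++n) {
            bool is_x = (n % 2 == 0);
            Glyph sample = render(is_x ? X_IDEAL : O_IDEAL, rng);
            bool guess_x = dist2(sample, cx) < dist2(sample, co);
            if (guess_x == is_x) ++correct;
        }
        std::printf("accuracy: %d/%d\n", correct, total);
    }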

In addition to that, the array of AI techniques and concepts we use is very limited too. As you said, it's mostly neural networks and expert systems. Here are examples of mechanisms and concepts that we don't fully understand, don't know how to program, and that have a huge impact on a mind's operation: feelings, emotions, belief, intuition, the fight-or-flight reflex, comfort. The list is not exhaustive. Then there are the glial cells and their newly-discovered functions, which only one team in the world is attempting to simulate AFAIK.

I consider that we are much further away from creating artificial minds than most people believe (blame the movies), but I'm also entirely convinced that the day we understand every aspect of the human mind will also be the day we can create artificial minds. It'll be weird if the opposite happens, though ;)

And since I'm the kind of guy who puts his money where his mouth is, I'm working on integrating the concept of hormones into AI. I believe hormones lead to desire, and desire is akin to "interest". AIs with raging hormones might find interests of their own that they'll want to pursue of their own accord and on their own terms.

So far my experiments with digital testosterone have been inconclusive, though. My results suggest that it's because I haven't engineered a way for my AIs to find release. Eventually my AIs get testosterone poisoning, and if you're male you won't be surprised to hear that it causes the AI to behave erratically and crash. When I have time to work on AI again, I figure my first move will be to implement some sort of virtual orgasm (probably self-triggered, a.k.a. masturbation) to allow the AI to regulate its hormone levels. I still have no idea if an AI will want to masturbate, or will over-masturbate, and I'm even less sure as to how I might regulate that behavior without introducing some nasty poisonous concepts like roboporn and shameful robo-naughty-bits.
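If it helps, the mechanism boiled down to a toy looks like this (nothing like my actual code; every constant and name is invented for the example):

    // Digital hormone toy: a level that climbs each tick, poisons the agent
    // past a threshold, and can be kept in range by a self-triggered
    // "release" that dumps the level back down.
    #include <cstdio>

    struct HormonalAgent {
        double level = 0.0;
        bool can_release = false;
        bool crashed = false;

        void tick() {
            if (crashed) return;
            level += 0.15;                       // secretion per tick
            if (can_release && level > 1.0) {    // self-triggered release
                std::printf("  release! level %.2f -> 0.2\n", level);
                level = 0.2;
            }
            if (level > 2.0) {                   // "testosterone poisoning"
                std::printf("  erratic behavior, agent crashed\n");
                crashed = true;
            }
        }
    };

    int main() {
        for (bool release : {false, true}) {
            std::printf("with%s release mechanism:\n", release ? "" : "out");
            HormonalAgent a;
            a.can_release = release;
            for (int t = 0; t < 20 && !a.crashed; ++t) a.tick();
            std::printf("  final level %.2f, %s\n", a.level,
                        a.crashed ? "crashed" : "stable");
        }
    }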

Seriously, it's a completely virgin world (no pun intended), so I'm finding out a lot as I go.

Nefastor

(edited for grammar)

Edited by nefastor, 03 June 2008 - 01:06 AM.


#28 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 04 June 2008 - 02:17 PM

Your project does sound interesting; it's rather Freudian but still makes sense. Any interesting results so far?

You mentioned one particular area whose success I myself have been wondering about: General Artificial Intelligence. I'm not really speaking about the potential of a collection of expert-system databases which could be accessed to "learn" via identification of a problem. What I'm really interested in is the creation of new knowledge through AI, or more essentially a computer pursuing its own interests. I don't believe this is an area that extends only to computer science and psychology, but rather involves the very material you build with. I think our problem might be our desire to use preexisting technologies and hardware, when the problem requires a completely new paradigm. I think I should go into this area ;)

#29 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 06 June 2008 - 06:26 PM

Your project does sound interesting; it's rather Freudian but still makes sense. Any interesting results so far?

Well, for one thing, it has become apparent to me that neural networks alone are insufficient to emulate (or reverse-engineer) the human mind. When you throw in hormones and get some truly illogical behavior, you feel like you're getting closer to the "human experience".

That, however, raises an important question: is it really such a good idea to give hormones to AIs? It might help us create human-like AI, but is that what we aim for? If we want human-like intelligence, isn't it simpler to just make children and program them? (err... I mean "nurture" them :p )

My chief goal in developing AI is to create new ways of thinking and new forms of intuition, in the hope that my AIs will "see" how to solve problems humans can't. For instance, I hope that faster-than-light travel (and how to achieve it) would be obvious to an AI that thinks differently than a human.

On the other hand, there's no telling what such an AI might think of working for (with?) humans. Perhaps it'd turn Skynet on us. By using hormones, we might be able to induce a love of all life in AIs, thus solving the problem of AI friendliness.

Going further, we could develop digital versions of the psychopharmacological arsenal we have. Your AI goes all Skynet on you? Keep cool and give it some Robo-Xanax. Your Roomba is having a nervous breakdown because your house is a perpetual pigsty? No problemo, upload some Robo-Prozac and it won't commit suicide by formatting its own hard drive.

I haven't yet experimented with drugs (and I don't intend to), so it's hard for me to predict the effects of digital drugs (like addiction), but I'm convinced AIs are bound to have the same psychological problems and issues we have, and also that whatever solutions work for us will work for AIs.

From where I stand, the problem of AI friendliness is exactly identical to the problem of human friendliness. In other words, bad news, people: if we can't solve the eternal Middle East crisis, it means we won't be able to solve the human-AI conflict whenever it happens. Good news: just like with the Middle East, we'll probably be able to contain the death and mayhem to a small territory, maybe some sort of Robo-Israel.

You mentioned one particular area whose success I myself have been wondering about: General Artificial Intelligence. I'm not really speaking about the potential of a collection of expert-system databases which could be accessed to "learn" via identification of a problem. What I'm really interested in is the creation of new knowledge through AI, or more essentially a computer pursuing its own interests. I don't believe this is an area that extends only to computer science and psychology, but rather involves the very material you build with. I think our problem might be our desire to use preexisting technologies and hardware, when the problem requires a completely new paradigm. I think I should go into this area ;)

I've spent the last four years trying to answer similar questions. I won't answer in this post, because I'd break my own record for "most gigantic post on ImmInst" :p but I can give you a few definitions and pointers:

To me, the AI we have today is what you'd call Specialist AI. Like the autofocus system in your digital camera that can identify human faces (if you have a Canon Ixus like mine), or the video-game AI that specializes in digital ass-whooping.

General AI is different NOT because it doesn't have a specialty, but because its specialty is to understand when there is a problem, and to determine the nature of that problem. That's a whole different game; no work has yet been done in this area AFAIK.

A general AI might be a device you put in the middle of the street, which starts looking at its environment, noticing that the traffic-light timing is inefficient, that cars take too long to brake, that the lack of a roof means people get wet when it rains, and that there's no simple way to predict when it'll rain. Sounds familiar? If so, you're a human being.

Human beings are general AIs in that they see problems that they believe need solving. As with any form of thought, it may lead to mistakes: there are thousands of inventions which no one ever found useful except their inventor.

The processes a general AI uses to determine the existence of a problem are so much more subtle than Specialist AI processes that we don't know how to program them yet. That is because you need to understand a behavior before you can model it. So in order to develop general AI, most of my work is an analysis of my own thought processes (as far as problem identification goes).

The results are interesting but hard to turn into C++ or VHDL code. The problem is that a problem is detected through feelings, not rational thinking.

Example: I'm a caveman and I just took a dump. I feel dirty and uncomfortable "down there". Solution: grab some leaves and wipe my... well, you get my point. I didn't think of toilet paper before I took a dump: I took a dump first, then realized I needed toilet paper. That realization came from feelings, not from rational thinking about hygiene. I'm a caveman, after all.

Look at all the problems you face all the time : very few are metaphysical in nature. You need to get to your office on time because you're afraid of your boss' anger. You need to find food because you're hungry. You need to find a toilet because nature calls. You need to find a way to talk yourself into that secretary's panties because your hormones are raging.

Without our feelings, how many problems would we see around us? Try taking an "I don't care" approach to your life for a few hours. You'll notice you aren't driven to do anything but stay on the couch.
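My working notion of problem detection is embarrassingly small when written down; roughly this toy (the feelings and thresholds are invented for the example):

    // Feelings-driven problem detection: the "problem" is whatever feeling
    // currently hurts the most. Mute the feelings and the detector goes
    // quiet -- the agent stays on the couch.
    #include <cstdio>
    #include <map>
    #include <string>

    std::string detect_problem(const std::map<std::string, double>& discomfort) {
        std::string worst = "nothing";
        double max = 0.3;                 // below this, nothing registers
        for (auto& kv : discomfort)
            if (kv.second > max) { max = kv.second; worst = kv.first; }
        return worst;
    }

    int main() {
        std::map<std::string, double> feelings = {
            {"hunger", 0.8}, {"boss is angry", 0.6}, {"dirty", 0.4}};

        std::printf("problem: %s\n", detect_problem(feelings).c_str());

        for (auto& kv : feelings) kv.second = 0.0;   // "I don't care" mode
        std::printf("problem: %s\n", detect_problem(feelings).c_str());
    }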

I do AI research because I'm afraid of death. I want to become a cyborg before that happens. Feelings again.

I've come to a point where, knowing what I know, I now need to model feelings into some kind of software or hardware (or a combination of both). That's where digital hormones and similar concepts come in. Unfortunately, I'm kinda stuck.

So right now I'm following a second approach to general AI design: reverse-engineering the human brain. I've designed computers suitable for emulating the entire human brain at the atomic level, in real time. I've even built and demonstrated small prototypes, and also came up with my own models of human nerve cells (basically an expanded Kohonen model). Obviously, my hardware is somewhat different from any computer you've seen, even though it still uses processors and memories. As for the software part, I have plans to design an AI-specific variant of VHDL, but there are only 24 hours in a day.
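For those who haven't met it, the plain, unexpanded Kohonen update I'm building on looks like this (just the textbook self-organizing-map rule on a toy 8x8 grid; my own cell models add a lot more):

    // Textbook Kohonen update: find the best-matching unit (BMU), then pull
    // it and its grid neighbors toward the input, with strength falling off
    // with distance on the grid and decaying over training time.
    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <random>

    constexpr int GRID = 8;              // 8x8 map, 2-D inputs
    using Vec2 = std::array<double, 2>;

    double dist2(const Vec2& a, const Vec2& b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }

    int main() {
        std::mt19937 rng(1);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        Vec2 w[GRID][GRID];
        for (auto& row : w) for (auto& unit : row) unit = {u(rng), u(rng)};

        for (int step = 0; step < 5000; ++step) {
            Vec2 x = {u(rng), u(rng)};   // training input

            // 1. Find the best-matching unit.
            int bi = 0, bj = 0;
            for (int i = 0; i < GRID; ++i)
                for (int j = 0; j < GRID; ++j)
                    if (dist2(w[i][j], x) < dist2(w[bi][bj], x)) { bi = i; bj = j; }

            // 2. Decaying learning rate and neighborhood radius.
            double t = step / 5000.0;
            double lr = 0.5 * (1.0 - t);
            double sigma = 3.0 * (1.0 - t) + 0.5;

            // 3. Pull the BMU's neighborhood toward the input.
            for (int i = 0; i < GRID; ++i)
                for (int j = 0; j < GRID; ++j) {
                    double d2 = (i - bi) * (i - bi) + (j - bj) * (j - bj);
                    double h = std::exp(-d2 / (2 * sigma * sigma));
                    w[i][j][0] += lr * h * (x[0] - w[i][j][0]);
                    w[i][j][1] += lr * h * (x[1] - w[i][j][1]);
                }
        }
        std::printf("corner units after training: (%.2f,%.2f) (%.2f,%.2f)\n",
                    w[0][0][0], w[0][0][1], w[GRID-1][GRID-1][0], w[GRID-1][GRID-1][1]);
    }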

That's where I stand now. If anyone feels they have the technical skills and/or the money to help me, feel free to PM me. You won't be the first one.

Nefastor


#30 dumbdumb

  • Guest
  • 115 posts
  • 0

Posted 06 June 2008 - 10:29 PM

I sincerely apologize if someone else has already stated this observation in the course of this thread.

I don't believe that we have given robots too much power. After all, Al Gore lost the election.



