  LongeCity
              Advocacy & Research for Unlimited Lifespans





Why Friendly AI?


54 replies to this topic

#31 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 03 April 2006 - 08:34 PM

Yes, I am only just now right this moment starting to see its importance, thank you for blowing my feeble mind.

#32 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 04 April 2006 - 04:03 PM

The problem of Friendliness is a problem of pre-programming the attributes necessary into a seed system that will evolve as a recursively self-improving intelligence and, in doing so, converge on external actions that follow something like humanity's coherent extrapolated volition. [edit: this is oversimplified]

This is the best articulated explanation of Friendliness that I have ever been able to produce. Eliezer was the one who blew away my feeble mind (over and over again).
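
To make the shape of that definition concrete - purely a toy sketch, with every name below (SeedAI, propose_rewrite, preserves_goal) invented for illustration and not taken from any actual design - the claim is that the goal has to be an invariant carried through self-modification:

class SeedAI:
    """Toy stand-in for a recursively self-improving system."""

    def __init__(self, goal, codebase):
        self.goal = goal          # e.g. some formalization of extrapolated volition
        self.codebase = codebase  # the system's own source code

    def propose_rewrite(self):
        """Search for a more capable version of self.codebase (stubbed)."""
        raise NotImplementedError

    def preserves_goal(self, candidate):
        """Check that the candidate still optimizes self.goal (the hard part)."""
        raise NotImplementedError

    def self_improve(self):
        candidate = self.propose_rewrite()
        # Friendliness as an invariant: a rewrite that cannot be shown to
        # preserve the goal is rejected, no matter how capable it is.
        if self.preserves_goal(candidate):
            self.codebase = candidate

The entire difficulty, of course, lives in preserves_goal and in choosing the initial goal; the loop itself is trivial.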


#33 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 04 April 2006 - 11:10 PM

error error cannot properly compute sarcasm due to religious devotion to theoretical technology

#34 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 05 April 2006 - 03:47 AM

error error cannot properly compute sarcasm due to religious devotion to theoretical technology

[:o]

[huh]

#35 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 05 April 2006 - 07:28 AM

error error cannot properly compute clarification services as services, which are often needed anyway, due to religious devotion to the solution of dignifying oneself through multiple streams of condescension

#36 situationalist

  • Guest
  • 12 posts
  • 0
  • Location:Melbourne, Australia

Posted 05 April 2006 - 01:23 PM

Nate, please excuse Avalon's posts. She is a 13-year-old child who was under the impression that this was a chat forum in which people conversed in 'real time'. I had to laugh at your immediate response, regressing to a 13-year-old yourself! Avalon will endeavour to post comments that are more substantial in the future.

Furthermore, if your username has changed to 'Gay', be sure to update your profile - be proud!!!!

#37 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 05 April 2006 - 07:28 PM

Hmm, you, Harold, and Avalon seem to be related. Okay, sorry. I hope she can participate now without further taunting from overly sensitive troll-o-meters.

#38 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 06 April 2006 - 12:06 AM

error error cannot properly compute clarification services as services, which are often needed anyway, due to religious devotion to the solution of dignifying oneself through multiple streams of condescension



I'm not the condescending one, Nate, if that is indeed what you are implying. Nor do I think you are. I have no religious devotion to fictitious technology or its vast future promises, and I see no solutions to any factual pressing problems being discussed, just more intellectual masturbation. And I certainly don't need to dignify myself on a website called "The Immortality Institute", or to you, or any sort of elitist technophile (not you, right?). I made it very clear I have no real knowledge of AI, but I do not appreciate that translating into "oh, better dumb it down for the poor lad - and another time, he won't get it - and another time (God, he's still asking questions?), this must be so challenging for his poor, unevolved cranium. Of course, the singularity will take care of that, not to mention everything."

I hope that is computed as clarification, ending transmission.

#39 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 06 April 2006 - 01:18 AM

Mitkat, unfortunately anti-elitism is a form of elitism; unfortunately, in the long run, peaceful, modest traditionalists are doomed. We have a knack for having circular, self-defeating opinions, if we have opinions at all, provided they're analyzed carefully enough. That's all I was poking at. See, although you charge others with religious devotion to such and such, you can in turn be charged with religious devotion to charging others with religious devotion to such and such. In this case, I charged you with religious devotion to the perceived need to protect yourself from condescending intellectual masturbators with a religious devotion. :)

#40 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 06 April 2006 - 02:15 AM

And I certainly don't need to dignify myself on a website called "The Immortality Institute"


Despite the unusual name, we've accomplished a heck of a lot. And compared to 99.9% of the forums you see on the Internet, the level of discussion, the education of members, and the lack of pseudoscience can't be beat. I ask you to find a section of The Lycaeum, or any electronic music or psychoactive forum, with the average post quality that we see here. I respect the ImmInst forums greatly because I've had my fair share of the Internet and know how it measures up to what's typical.

#41 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 06 April 2006 - 05:16 AM

And I certainly don't need to dignify myself on a website called "The Immortality Institute"


I'm sorry to inform you, but your opinion is irrelevant.

What is dignity? Can you properly qualify what dignity represents as a human emotion?

I'm all ears.

What does the current populace's mindset accomplish? A belief in fairy tales and murdering children in the name of a worthless bastard of a middle eastern deity?

There are many here who are EXTREMELY intelligent, and are likely to be more accurate at predicting the future than opposing imbeciles. Is a mental retard able to inform you of the Dow in 10 years? One would be wise to listen to what these people have to say before making childish remarks.


My dignity, or lack thereof, is also irrelevant, as it has no bearing or correlation to our future state of affairs.

#42

  • Lurker
  • 1

Posted 06 April 2006 - 08:45 AM

And I certainly don't need to dignify myself on a website called "The Immortality Institute", or to you, or any sort of elitist technophile (not you, right?).


Why the repressed hostility about the Institute?

#43 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 06 April 2006 - 05:24 PM

Hey man.


but I do not appreciate that translating into "oh, better dumb it down for the poor lad - and another time, he won't get it,

Hm, I don't see myself as coming off that way. It certainly isn't my intention; I still consider myself a student of all this.

- and another time (God, he's still asking questions?)

I like answering questions [lol]

this must be so challenging for his poor, unevolved cranium. Of course, the singularity will take care of that, not to mention everything.

People who brush everything aside by saying the "singularity will take care of it" really piss me off.

error error cannot properly compute sarcasm due to religious devotion to theoretical technology

Can't believe anything I said came off this way!
It's not like that!!! AHHHHHHHH

#44 Infernity

  • Guest
  • 3,322 posts
  • 11
  • Location:Israel (originally from Amsterdam, Holland)

Posted 06 April 2006 - 05:32 PM

People who brush everything aside by saying the "singularity will take care of it" really piss me off.

*ahm*Justin*ahmahm*

[lol]


Sorry, had to spit it out.

-Infernity

#45 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 April 2006 - 06:00 PM

I was thinking about Nate's original point, and about Michael A.'s "dumbing down" of the original point, and it made me think of a more easily digestible example.

The problem seems to be in everyone's assumption that AIs won't really have a reason to be unfriendly, or that it will be trivial to make them Friendly.

So imagine that we want to make a Politically Correct AI. That's right, an AI that actually wants to say "physically impaired" instead of "crippled", "gays" or "homosexuals" instead of "faggots", "senior citizens" instead of "old-timers", "obese" or "overweight" instead of "fat", etc.

Now ask yourself, what's easier?:
A) Being politically correct
B) Being politically incorrect
C) Not trying one way or the other.

I'd think, in order of ease, it'd be C, then B, then A.

If it were a simple language chip that hard-codes an internal thought of "fat" into a verbal output of "obese", that'd be easy enough. But really go with the example: it's about more than language, it's about intent and desire. You have to get the AI to care about being politically correct. It's more than just saying, "Here are the rules of how to be politically correct." You have to get the AI to sign on, to realize "Oh yeah, it's in my best interests to be PC, because it will help me get along better with others, which will further my goals, and it will help others feel better, and I care about the feelings of others...", etc. Just telling the AI to be PC isn't going to put that very high on its priority list. If the AI actually understands the reasons for being PC, and it buys into them, it'll be PC of its own accord.
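
A toy contrast, with all names and numbers invented for the illustration: the "language chip" below rewrites words and cares about nothing, while the second agent carries the trait as a weight in its goal system, so the trait shapes which plans get chosen at all.

SUBSTITUTIONS = {"fat": "obese", "crippled": "physically impaired"}

def language_chip(thought: str) -> str:
    """Hard-coded output filter: intent untouched, only the words change."""
    for blunt, polite in SUBSTITUTIONS.items():
        thought = thought.replace(blunt, polite)
    return thought

def caring_agent_choice(plans, task_value, offensiveness, pc_weight=1.0):
    """Pick a plan by total utility; offending people is a real cost.
    With pc_weight near zero the agent has been 'told' the rules but
    does not care - option C above, not trying one way or the other."""
    return max(plans, key=lambda p: task_value[p] - pc_weight * offensiveness[p])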

But now there's the bigger problem of what happens when the AI figures out that the PC movement is just some political bulls*** move by the liberal left, a side effect of the entitlement mentality, meant to derail conservative thinking and force people into modes of thought that are artificial and unnecessary. Well, now the AI is going to question whether it really even wants to be PC. Of course, if the AI is smarter than humans, then I'd trust its judgment to stop being politically correct.

But I wouldn't want the same thing happening if a super-intelligent AI figures out it doesn't need to be Friendly... So that's the dilemma. Can we get an AI to be smarter than humans, and yet still buy into the B.S. political correctness movement, even after the AI figures out that being PC is not in its best interests, or is at least a waste of time and resources? If we can't, I don't think we have a shot at making a Friendly AI either.

#46 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 06 April 2006 - 08:30 PM

Jay, I think that might be just a teensy bit anthropomorphic for an AGI; at least toward the end it seems to be. Being Friendly or unFriendly won't be a matter of being needed versus not being needed from the perspective of the AGI. It's more a matter of carrying on with its business while producing well-accepted results from a lesser-agent perspective versus carrying on with its business while inadvertently spelling disaster from a lesser-agent perspective.

So the upper portion of the post seems to be going in the right direction. The lower half begins to assign qualities that are unlikely to manifest, AFAIB. Unless, of course, this was just all meant to be metaphorical, in which case it's probably only arguable that anthropomorphic metaphors might serve to perpetuate confusion.

#47 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 April 2006 - 08:33 PM

Anthropomorphic or not, the ability of an AGI to be politically correct or not is on par with being Friendly or not, at least in terms of restricting one's behavior. How many people, not used to playing politically correct thought police on themselves, have unwittingly said something that offended others? Had they been more in tune with the goal of being politically correct, the chances that they would have been so unwittingly offensive would have been much smaller.

So too, an AI cannot be "friendly" if it unwittingly does unfriendly things. It must be more in tune. Making a person politically correct is not a trivial task. It's not even as simple as convincing someone that being politically correct is in their best interest. Making an AGI Friendly will be even less trivial, no matter how obviously trivial it seems to some people.

#48 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 06 April 2006 - 08:37 PM

All right. So what you were saying was meant as a metaphor.

#49 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 April 2006 - 08:41 PM

Well, yeah...

#50 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 April 2006 - 08:41 PM

I wouldn't actually want to make a politically correct AGI...

#51 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 06 April 2006 - 08:56 PM

Perhaps you missed it, since I inserted it during or after your response, but what I'm saying is that, if your intent is to make the importance of FAI easier to grasp, it's arguable whether anthropomorphic metaphors should be used. Perhaps replicating memes of anthropomorphic metaphors is not a good idea, because they probably backfire whenever someone internalizes them and begins to see FAI as a competitor.

Anyway, maybe a moot point. I see what you mean, Jay, and I agree.

#52 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 06 April 2006 - 10:54 PM

I wouldn't actually want to make a politically correct AGI...

You know how sometimes people say the Singularity could be worse than death... [tung]

#53 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 06 April 2006 - 11:00 PM

Jay's metaphor is pseudo-anthropomorphic, but what other source material for a metaphor do we have? "Politically correct" is an example of a precise behavior-set that happens to be included within the total human behavior-set. There are other precise sets outside the human realm - but we don't have words to describe the majority of them, and it would be clumsy to delineate them.

Overall, I think the new clarification is great, and an excellent way to convey the two main points that Singularitarians tend to be so focused on conveying. It does get quite anthropomorphic near the end, though. :)

What I take away from this is, for any specific trait, you can either:

1) Display the trait.
2) Don't display the trait.
3) Don't make a point to display it one way or the other.

If you don't have a detailed model of what the trait *is*, then you'll tend towards 3 almost regardless of what type of agent you are.
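
A sketch of that trichotomy, assuming nothing beyond the post itself (all names below are made up for the illustration): without a detailed model of the trait, options 1 and 2 are not even available as actions, so the agent lands on 3 by default, whatever its other goals are.

from enum import Enum

class Stance(Enum):
    DISPLAY = 1      # 1) display the trait
    SUPPRESS = 2     # 2) don't display the trait
    INDIFFERENT = 3  # 3) don't make a point of it either way

def stance_toward(trait_model, utility_of_displaying):
    if trait_model is None:
        # An agent can't aim at (or away from) a trait it can't recognize.
        return Stance.INDIFFERENT
    return Stance.DISPLAY if utility_of_displaying(trait_model) > 0 else Stance.SUPPRESS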

#54 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 07 April 2006 - 02:03 AM

What I take away from this is, for any specific trait, you can either:

1) Display the trait.
2) Don't display the trait.
3) Don't make a point to display it one way or the other.

If you don't have a detailed model of what the trait *is*, then you'll tend towards 3 almost regardless of what type of agent you are.

That's mainly what I was going for, since the idea of making a politically correct AGI is absurd enough that it serves as a good example of where the intuition fails, because people think of friendliness as obvious. It needs to be pointed out that there are a great multitude of traits that an AGI could display, positively or negatively or indifferently, and if it's not at all obvious that any particular trait (such as political correctness) will be displayed the way we want, then we shouldn't assume that friendliness will be displayed the way we want.


#55 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 08 April 2006 - 07:19 PM

Despite the unusual name, we've accomplished a heck of a lot. And compared to 99.9% of the forums you see on the Internet, the level of discussion, the education of members, and the lack of pseudoscience can't be beat. I ask you to find a section of The Lycaeum, or any electronic music or psychoactive forum, with the average post quality that we see here. I respect the ImmInst forums greatly because I've had my fair share of the Internet and know how it measures up to what's typical.


The posts and often carefully thought out discussion on Imminst cannot be beat in contrast to other internet forums, I'm sure. Nor am I debating that at all, or even questioning it. As for electronic music or psychoactive forums, I can't really comment, because I've never really given them a try, and certainly never seriously participated in them. Imminst is the first forum I have ever genuinely taken part in, and it is the quality of the posts and their relevance to the work being done that is the reason I post. I have no doubt of the education of the members, and for the same reason I do not actively participate in other online forums, I don't have a myspace account, etc.

I don't really see what the Lycaeum has to do with me, although I admit I did use to look at it way back in grade 9 and 10. I've never participated in any electronic music forum; it seems like a huge waste of time and fairly counterproductive to be discussing music through text, and like you said, the post quality most likely would not be up to Imminst par, but of course it is so absolutely and essentially different in so many ways from Imminst that I fail to see an actual correlation.

Michael, I sincerely hope you aren't so mistaken as to confuse me with a drug user, because you really couldn't be farther from the truth; that has no validity. I'm pretty fucking insulted, not that a "traditionalist" like me (never ever in my life been called that before, lol) makes a difference or has a purpose in the blinding light of your singularity. I know how upsetting it is when threads get off topic, and I'm sorry for that, and you won't see me asking any more silly questions.



