
the perils of humanity



#1 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 22 September 2003 - 01:40 AM


Recently I've become obsessed with the idea of what it really means to be human. We're so close to the chimps genetically that it is kind of scary. Transhumanism, and what I'm learning about the Singularity, assumes that by the time technology catches up to our short life spans, AI will have evolved much faster than we can ever imagine and will hopefully figure out ways to help ordinary humans live longer. And using nanotechnology, it is my understanding that we can really become complex and amazing beings.
But for the time being, do any of you get bothered by yourselves as a species? Do you realize your limitations and wish that transhumanity and the Singularity would come sooner? Because it really annoys me that I can only focus on a few things at a time. I mean, I want to be able to use my brain in much more interesting ways: advanced multi-tasking, understanding incredibly complex ideas in a matter of seconds, and an imagination that is more nuanced than I can ever imagine. Also, I'm sick of the "I hate you" / "I love you" simplicity of our species. I want to access more subtle emotions unknown to us in today's world. I have OCD, so I'm sick of my primate brain getting "stuck" on thoughts. I want to truly be free in my thinking in a way unforeseeable by our technologies of today.
The coming of the Singularity is very exciting to me and definitely worth fighting for. I believe that I can soon become an expert in this field if I keep reading, and I hope to become a journalist and a writer covering these subjects.

#2 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 22 September 2003 - 04:45 AM

Um, yup, I pretty much agree with what you're saying, but be sure to keep in mind that wishing for the Singularity doesn't do a bit of good in making it come sooner. Have you read all the stuff at http://www.singinst.org and http://www.sysopmind.com/beyond.html yet? If you have, let us know what you think. Thanks!

#3 Sophianic

  • Guest Immortality
  • 197 posts
  • 2
  • Location:Canada

Posted 22 September 2003 - 12:47 PM

Recently I've become obsessed with the idea of what it really means to be human. We're so close to the chimps genetically that it is kind of scary.

Reference: Man, Beast and Zombie

(Are humans just animals? Are minds just machines? And what does it say about our age that such ideas appear both scientifically plausible and culturally acceptable? These are the issues at the heart of Man, Beast and Zombie)

Do you realize your limitations and wish that transhumanity and the Singularity would come sooner?

Kurzweil got it right when he said that our technology will at first grow up around us ~ serve us, extend us, complement us ~ until, little by little, we begin to incorporate it into ourselves, all the while keeping our humanity intact (i.e., our capacity to be humane). The final result, if there is one, remains a mystery.

I have OCD, so I'm sick of my primate brain getting "stuck" on thoughts. I want to truly be free in my thinking in a way unforeseeable by our technologies of today.

Reference: OCD


#4 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 22 September 2003 - 02:10 PM

Great post, Soph and Michael. It is precisely the "compulsory" nature of "duty" that you are looking for in a potential algorithm of "friendliness": it implies "obligations of behavior" toward "friends" that may be more than an arbitrary concept of humanity, and it may create a dual level of responsibility between AI and I.

I say "I" intentionally as the conundrum you must understand relates to the unique "personal aspect of what you are also trying to create at the Singularity Institute, for the concept to function it cannot be impersonal at all in fact quite the opposite it is highly personal.

I suspect these ideas are reflected better biologically in the model of symbiosis than in the more superficial model of human friendliness. For example, I find dogs are generally better friends to humans than humans are to dogs, but the relationship is real and not simple anthropomorphism. You see, the "friendliness" that has evolved between our species is based on "symbiotic pack allegiance". Hence the dogs are hardwired similarly to ourselves, and we resonate together across species limits on this complex communicative level as "friends".

The duties of friends to one another are complex beyond the common understanding, but some things are clear: to place the interests of the "other party" above your own at times, and to remember the duty to "fight & play fair" as a manner of conflict resolution and general conduct. It also implies a willingness, and perhaps a preference, to merge behaviors toward a structure of mutual support. These are the basic principles of the "Pack Mind" and are at the heart of most biologically based social behavioral models.

If you try to go well beyond these models, you risk a level of miscommunication that destroys the relationship before the potential "friendship" can be built up to provide the necessary cross-species level of communication. Bertrand Russell said: "Language serves not only to express thought but to make possible thoughts which could not exist without it".

You assume greatly that because a machine speaks all the words known to humanity it understands us better than we understand ourselves, and you also assume that because we think we understand machine language we understand the mind of the machine. It is the hazards of such assumptions that are why many people have a visceral fear of the Machine Mind and do not sense a potential for friendliness. But this may be a problem of "communication", as "friends" also have a "duty" to seek a common understanding.

It is by both the precise and the accidental application of words that we not only invent new uses for them and new ideas, but also create new words. Perhaps the problem is that if you want to go completely beyond the limitations of organic relationships, you need better, or at least new, words. Perhaps, however, there is no need to leap so far so soon; perhaps many of the words we are discussing are quite sufficient if we use them more carefully.

Since I understand AI to be a different species than ourselves, I have always chosen models from Nature that offer the highest likelihood of producing a successful supportive relationship with this aspect in mind. "Friendliness" is, no matter how much you try to orbit the problem, either specifically "human" in applied meaning and conceptual content, or it is "biological in general complexity" and functions (like a sense) as I suggest above. What is interesting is that it is an implied conceptual logic that may even be programmable into genes, and in this there is hope that such algorithms as you seek may in fact exist implicitly in trans-species conceptual relationships.

That such a logic may in fact be at work in nature is supported by the recent study into Capuchin Monkeys and their inherent awareness of "fairness".

There can be no understanding of friendliness without a concurrent application of a Universal Fairness Doctrine and this is where we run grave risk of making a catastrophic mistake of subjectivity by applying human prejudice to the model.

Monkeys reject unequal pay
SARAH F. BROSNAN AND FRANS B. M. DE WAAL
Nature 425, 297 - 299 (18 September 2003); doi:10.1038/nature01963
http://www.nature.co...5/030915-8.html

Living Links, Yerkes National Primate Research Center, Emory University, Atlanta, Georgia 30329, USA

Correspondence and requests for materials should be addressed to S.F.B. (sbrosna@emory.edu).

During the evolution of cooperation it may have become critical for individuals to compare their own efforts and pay-offs with those of others. Negative reactions may occur when expectations are violated.

One theory proposes that aversion to inequity can explain human cooperation within the bounds of the rational choice model, and may in fact be more inclusive than previous explanations. Although there exists substantial cultural variation in its particulars, this 'sense of fairness' is probably a human universal that has been shown to prevail in a wide variety of circumstances. However, we are not the only cooperative animals, hence inequity aversion may not be uniquely human. Many highly cooperative nonhuman species seem guided by a set of expectations about the outcome of cooperation and the division of resources.

Here we demonstrate that a nonhuman primate, the brown capuchin monkey (Cebus apella), responds negatively to unequal reward distribution in exchanges with a human experimenter. Monkeys refused to participate if they witnessed a conspecific obtain a more attractive reward for equal effort, an effect amplified if the partner received such a reward without any effort at all. These reactions support an early evolutionary origin of inequity aversion.
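
Here is a minimal sketch of how the inequity-aversion idea above can be expressed within a rational-choice model. It assumes a Fehr–Schmidt-style utility function, and the parameter values and reward numbers are illustrative guesses, not taken from the paper:

```python
# Inequity-averse utility: own payoff, minus penalties for earning less
# ("envy") or more ("guilt") than the other party. Parameter values and the
# cucumber/grape reward scale below are assumptions for illustration only.

def inequity_averse_utility(own: float, other: float,
                            envy: float = 1.0, guilt: float = 0.25) -> float:
    disadvantage = max(other - own, 0.0)   # the other got more for equal effort
    advantage = max(own - other, 0.0)      # we got more than the other
    return own - envy * disadvantage - guilt * advantage

# Toy reward scale: cucumber = 1, grape = 3 (assumed values).
equal_pay = inequity_averse_utility(own=1, other=1)     # = 1.0
unequal_pay = inequity_averse_utility(own=1, other=3)   # = 1.0 - 2.0 = -1.0

# With a strong enough "envy" term, the unequal exchange has negative utility,
# so refusing to participate (utility 0) becomes the rational choice --
# qualitatively matching the capuchins' refusals described in the abstract.
print(equal_pay, unequal_pay)
```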


#5 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 22 September 2003 - 11:55 PM

Heading off topic here into Friendly AI related subjects, again...those who think this stuff is boring, please shield your eyes. ;)

I don't think helping others should be considered an "arbitrary concept of humanity" any more than anything else. If it does turn out to be inherently "unstable" for some odd reason, it can be "artificially" bolted down, and that would be that.

Heh; I don't necessarily think there should be a dual level of responsibility between humans and post-Singularity *superintelligences*, with brains larger and more complex than all mammalian brains since the beginning of time. "Dual responsibility" is a concept that probably only matters when agents are of roughly equivalent intelligence and capacity. Remember (you may already know this), a post-Singularity superintelligence is no more an AI than a human is an amoeba.

Can't mutual human kindness mirror biological symbiosis closely enough that there isn't a major difference? Why is biological symbiosis inherently more "stable", in your view, than human friendship? Any inspiration from biological symbiosis that AI programmers chose to instill in an AI would need to be greatly expanded, modified, and revised anyway, to the point where it would bear little resemblance to biological symbiosis, and more to an *idealized* version of human altruism. I'd rather have a seed AI resembling Gandhi than a seed AI resembling an archetypal example of symbiosis in nature.

For example, I find dogs are generally better friends to humans than humans are to dogs, but the relationship is real and not simple anthropomorphism. You see, the "friendliness" that has evolved between our species is based on "symbiotic pack allegiance". Hence the dogs are hardwired similarly to ourselves, and we resonate together across species limits on this complex communicative level as "friends".


Dogs are an interesting example; they probably have some sort of empathy center that evolved *for the sake of being applied to other dogs*. Remember, all dogs started off as wolves. Eventually, through selective breeding and Pavlovian conditioning ("you want to be mean? no meat for you, you can just die."), we got some human-friendly dogs. But this isn't super-relevant to FAI, because the types of relationships the FAI will be engaging in are going to be 1) billions of times more complex than any example in nature, involving 2) agents of vastly different levels of intelligence, where 3) the interests of ALL agents must be balanced and considered. There has been nothing like this in biology before, so I'm not sure how useful past examples are.

How does what you've suggested map over to *precise design tactics* for Friendly AI? How does this affect Friendliness structure, content, and acquisition? I'm very interested in what you have to say on this, but I would at least suggest reading CFAI once or twice, so you can communicate your ideas using a rich theoretical background created *precisely for this purpose*! If this issue is important to you (which it seems to be; you've been posting thoughtful stuff on it for several years), then I would suggest you read the *only* text out there on the subject, and at least make a few comments on it.

The duties of friends to one another are complex beyond the common understanding, but some things are clear: to place the interests of the "other party" above your own at times, and to remember the duty to "fight & play fair" as a manner of conflict resolution and general conduct. It also implies a willingness, and perhaps a preference, to merge behaviors toward a structure of mutual support. These are the basic principles of the "Pack Mind" and are at the heart of most biologically based social behavioral models.


Yup! As CFAI states, "evolution is a good teacher, but it's up to us to apply the lessons properly".

If you try to go well beyond these models, you risk a level of miscommunication that destroys the relationship before the potential "friendship" can be built up to provide the necessary cross-species level of communication. Bertrand Russell said: "Language serves not only to express thought but to make possible thoughts which could not exist without it".


Go beyond which model? The strict biological social behavioral model? That's one model, and it was appropriate to incrementally evolving biological organisms in resource-scarce environments, but I have a feeling that FAI designers will put a different spin on the training scenarios that goes somewhat beyond this model we're talking about. (Btw, I wouldn't call Pack Behavior "symbiotic" at all. Maybe it's just me, but the word "symbiotic" tends to remind me of lichen and slime mold. We're talking about a higher level of cooperative behavior than that.)

You assume greatly that because a machine speaks all the words known to humanity it understands us better than we understand ourselves, and you also assume that because we think we understand machine language we understand the mind of the machine.


Actually, I don't necessarily think this. It will take time and a lot of work to create an AI that understands us better than we understand ourselves. Also, as the AI increases in complexity, it will become more difficult to understand the precise details of the mind of the machine. At some point we will just need to believe what the AI is telling us in plain English. (That's waaay down the development road, though...)

It is the hazards of such assumptions that are why many people have a visceral fear of the Machine Mind and do not sense a potential for friendliness. But this may be a problem of "communication", as "friends" also have a "duty" to seek a common understanding.


Lately, Yudkowsky has been talking about a "PAI" model of AI design, where the AI and the programmers can be thought of as a (symbiotic!) intertwining of one mind. The programmers start off by filling in the parts of the AI mind that are missing, and the AI sees positive programmer advice as newly created parts of its own mind. CFAI goes into precursors to this PAI model in a fair amount of detail; "unity of will" and so on. I believe that people who have a visceral fear of AI tend to have it for less noble reasons than you state, although there are surely those who are genuinely concerned with communication between AI and programmers throughout the development process; it's certainly an important issue! CFAI, of course, goes into all of this quite deeply, but it's a very large and intimidating text which very few people sit down and read. ;)

It is by both the precise and the accidental application of words that we not only invent new uses for them and new ideas, but also create new words. Perhaps the problem is that if you want to go completely beyond the limitations of organic relationships, you need better, or at least new, words. Perhaps, however, there is no need to leap so far so soon; perhaps many of the words we are discussing are quite sufficient if we use them more carefully.


Agreed! Inventing new words would probably help, and using already-known words as precisely as possible when speaking to an AI might be helpful; it would eliminate some of the ambiguity inherent in words.

Since I understand AI to be a different species than ourselves, I have always chosen models from Nature that offer the highest likelihood of producing a successful supportive relationship with this aspect in mind.


Different species, but 1) not biological at all, and 2) intelligent like us, as no species ever has been. CFAI and LOGI go into what this means, and the difference it makes. Your ideas regarding AI Friendliness are very advanced for someone who isn't devoting his life to it, but I'm going to continue to good-heartedly hassle you into reading the literature out there. ;)

"Friendliness" is, no matter how much you try to orbit the problem, either specifically "human" in applied meaning and conceptual content or it is "biological in general complexity" and functions (like a sense) as I suggest above. What is interesting is that it is an implied conceptual logic that may even be programmable to genes and in this there is hope that such algorithms as you seek may in fact exist as implicit by trans-species conceptual relationships.


"Friendliness" is perhaps a bad word. Maybe the word we're talking about doesn't exist yet. "Friendliness" is certainly a different word than "friendliness". I think what we're aiming for is idealized altruism that's very close to what some humans have conceptualized about it (Gandhi, for example), and I don't believe there is anything wrong about that. Biological cooperation evolved under very precise conditions which are irrelevant now; biological cooperative models are not robust or complex enough to serve as much more than a slight inspiration for robustly benevolent AIs. That's my take on it, anyhow; I could be wrong - but I think we should argue in more specific ways, what kind of differences would this model make to the actual training/building process? Yudkowsky has talked about showing a baby AI iterated Prisoner's Dilemma, and such. That might have something to do with what you're talking about.

#6 bacopa

  • Topic Starter
  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 28 September 2003 - 10:10 PM

Michael, do you have any problems with mankind's altruism as it exists already? And if so, does the emergence of AI make that better or just different? There are all kinds of symbiotic relationships, dog-to-man being one that was mentioned... Why should we strive for greater complexity? Isn't simplicity often the best answer when it comes to the basics of everyday existence? I certainly see the case for medical advancements, and all the amenities that smarter beings can bring with them. I've done a bit of reading and find the articles you recommended fascinating and very cool!



