Heading off topic here into Friendly AI related subjects, again...those who think this stuff is boring, please shield your eyes.
I don't think helping others should be considered an "arbitrary concept of humanity" any more than anything else. If it does turn out to be inherently "unstable" for some odd reason, it can be "artificially" bolted down, and that would be that.
Heh; I don't necessarily think there should be a dual level of responsibility between humans and post-Singularity *superintelligences*, with brains larger and more complex than all mammalian brains since the beginning of time. "Dual responsibility" is a concept that probably only matters when agents are of roughly equivalent intelligence and capacity. Remember (you may already know this), a post-Singularity superintelligence is no more an AI than a human is an amoeba.
Can't mutual human kindness mirror biological symbiosis closely enough that there isn't a major difference? Why is biological symbiosis inherently more "stable", in your view, than human friendship? Any inspiration from biological symbiosis that AI programmers chose to instill in an AI would need to be greatly expanded, modified, and revised anyway, to the point where it would bear little resemblance to biological symbiosis, and more to an *idealized* version of human altruism. I'd rather have a seed AI resembling Gandhi than a seed AI resembling an archetypical example of symbiosis in nature.
For example, I find dogs are generally better friends to humans than humans are to dogs, but the relationship is real and not simple anthropomorphism. You see, the "friendliness" that has evolved between our species is based on "symbiotic pack allegiance". Hence the dogs are hardwired similarly to ourselves, and we resonate together across species limits on this complex communicative level as "friends".
Dogs are an interesting example; they probably have some sort of empathy center that evolved *for the sake of being applied to other dogs*. Remember, all dogs started off as wolves. Eventually, through selective breeding and Pavlovian conditioning ("you want to be mean? no meat for you, you can just die."), we got some human-friendly dogs. But this isn't super-relevant to FAI, because the types of relationships the FAI will be engaging in are going to be 1) billions of times more complex than any example in nature, involving 2) agents of vastly different levels of intelligence, where 3) the interests of ALL agents must be balanced and considered. There has been nothing like this in biology before, so I'm not sure past examples are all that useful.
How does what you've suggested map over to *precise design tactics* for Friendly AI? How does this affect Friendliness structure, content, and acquisition? I'm very interested in what you have to say on this, but I would at least suggest reading CFAI once or twice, so you can communicate your ideas using a rich theoretical background created *precisely for this purpose*! If this issue is important to you (which it seems to be; you've been posting thoughtful stuff on it for several years), then I would suggest you read the *only* text out there on the subject, and at least make a few comments on it.
The duties of friends to one another are complex beyond the common understanding, but some things are clear: to place the interests of the "other party" above your own at times, and to remember the duty to "fight & play fair" as a manner of conflict resolution and general conduct. It also implies a willingness, and perhaps a preference, to merge behaviors toward a structure of mutual support. These are the basic principles of the "Pack Mind" and are at the heart of most biologically based social behavioral models.
Yup! As CFAI states, "evolution is a good teacher, but it's up to us to apply the lessons properly".
If you try to go well beyond these models, you risk a level of miscommunication that destroys the relationship before the potential "friendship" can be built up to provide the necessary cross-species level of communication. Bertrand Russell said: "Language serves not only to express thought but to make possible thoughts which could not exist without it".
Go beyond which model? The strict biological social behavioral model? That's one model, and it was appropriate to incrementally evolving biological organisms in resource-scarce environments, but I have a feeling that FAI designers will put a different spin on the training scenarios that goes somewhat beyond this model we're talking about. (Btw, I wouldn't call Pack Behavior "symbiotic" at all. Maybe it's just me, but the word "symbiotic" tends to remind me of lichen and slime mold. We're talking about a higher level of cooperative behavior than that.)
You assume greatly that because a machine speaks all the words known to humanity, it understands us better than we understand ourselves; and you also assume that because we think we understand machine language, we understand the mind of the machine.
Actually, I don't necessarily think this. It will take time and a lot of work to create an AI that understands us better than we understand ourselves. Also, as the AI increases in complexity, it will become more difficult to understand the precise details of the mind of the machine. At some point we will just need to believe what the AI is telling us in plain English. (That's waaay down the development road, though...)
It is the hazards of such assumptions that are why many people have a visceral fear of the Machine Mind and do not sense a potential for friendliness. But this may be a problem of "communication", as friends also have a "duty" to seek a common understanding.
Lately, Yudkowsky has been talking about a "PAI" model of AI design, where the AI and the programmers can be thought of as a (symbiotic!) intertwining of one mind. The programmers start off by filling in the parts of the AI mind that are missing, and the AI sees positive programmer advice as newly created parts of its own mind. CFAI goes into precursors to this PAI model in a fair amount of detail; "unity of will" and so on. I believe that people who have a visceral fear of AI tend to have it for less noble reasons than you state, although there are surely those who are genuinely concerned with communication between AI and programmers throughout the development process; it's certainly an important issue! CFAI, of course, goes into all of this quite deeply, but it's a very large and intimidating text which very few people sit down and read.
It is by both the precise and accidental application of words that we not only invent new uses for them and new ideas, but also create new words. Perhaps the problem is that if you want to go completely beyond the limitations of organic relationships, you need better, or at least new, words. Perhaps, however, there is no need to leap so far so soon; perhaps many of the words we are discussing are quite sufficient if we use them more carefully.
Agreed! Inventing new words would probably help, and using already known words in as precise ways as possible when speaking to an AI might be helpful; it would eliminate some of the ambiguity inherent in words.
Since I understand AI to be a different species than ourselves, I have always chosen models from Nature that offer the highest likelihood of producing a successful supportive relationship with this aspect in mind.
Different species, but 1) not biological at all, and 2) intelligent like us, as no species ever has been. CFAI and LOGI go into what this means, and the difference it makes. Your ideas regarding AI Friendliness are very advanced for someone who isn't devoting his life to it, but I'm going to continue to goodheartedly hassle you into reading the literature out there.
"Friendliness" is, no matter how much you try to orbit the problem, either specifically "human" in applied meaning and conceptual content, or it is "biological in general complexity" and functions (like a sense) as I suggest above. What is interesting is that it is an implied conceptual logic that may even be programmable to genes, and in this there is hope that such algorithms as you seek may in fact exist, implicit in trans-species conceptual relationships.
"Friendliness" is perhaps a bad word. Maybe the word we're talking about doesn't exist yet. Capital-F "Friendliness" is certainly a different word than "friendliness". I think what we're aiming for is idealized altruism that's very close to what some humans have conceptualized about it (Gandhi, for example), and I don't believe there is anything wrong with that. Biological cooperation evolved under very precise conditions which are irrelevant now; biological cooperative models are not robust or complex enough to serve as much more than a slight inspiration for robustly benevolent AIs. That's my take on it, anyhow; I could be wrong - but I think we should argue in more specific ways: what kind of difference would this model make to the actual training/building process? Yudkowsky has talked about showing a baby AI the iterated Prisoner's Dilemma, and such. That might have something to do with what you're talking about.
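For concreteness, here's a minimal sketch of the kind of iterated Prisoner's Dilemma environment that "showing a baby AI" might involve. The payoff values and the two strategies are the standard textbook ones (reward 3, temptation 5, etc.), not anything from CFAI; how an actual training scenario would use such a game is, of course, an open design question.

```python
# Illustrative iterated Prisoner's Dilemma. Payoff matrix uses the
# conventional values (T=5, R=3, P=1, S=0); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection (punishment)
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Running `play(tit_for_tat, tit_for_tat)` yields mutual cooperation every round, while `play(tit_for_tat, always_defect)` shows the cooperator losing only the first round before matching defection; the interesting point for a young AI would be noticing *why* reciprocal strategies do well over repeated play.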