Hey Jean,
Good debate we have going here, but we'd better try to wind down after this one; my capacity to type very long responses is only so great, and most of what I'm trying to argue is online already anyway.
Humans may currently be the most destructive force on Earth, but it's very likely advanced AIs would quickly acquire the power to cancel that force out or render it harmless. It seems like you may be neglecting the likelihood of recursive self-improvement, that is, the capacity of a transhuman mind to generate improvements to its own intelligence by integrating new hardware, accelerating current hardware, and reengineering its cognitive architecture, inventing new technologies all along the way. "They're being destructive, looks like they'll have to become our enemies" is a *human* response to this problem, not a transhuman response.
Let's say someone is threatening us with a gun. What do we do? If we had a gun ourselves, in many cases we would simply fire back before the person could shoot at us. Where do we aim? If we have no previous experience with firearms, aiming might be out of the question, and pure self-protection would determine our actions; fire back as many times as possible, for example. But say we were experienced with aiming, and had enough compassion that we would rather, say, cripple the person than kill them. We aim for their leg and hit a few times; threat neutralized, and a more benevolent outcome than killing them. But say we have *extremely* good aim, and can shoot the gun directly out of their hand instead, in a single shot? That would probably be the *best* alternative available for humans, if we have enough confidence in our ability to hit with precision.
But what about for *transhumans*? A transhuman wouldn't need to be a unitary entity or possess a static, solid vessel for a body. It might be able to disperse into fog or intercept the bullet without taking damage. Or it might be able to anticipate its attacker's decision to fire from the bending of sinew within their trigger finger, and respond in milliseconds by disabling the firing mechanism with nanomachines. When your capacities get better, you can better respond to threats and neutralize them in the most pleasant possible way (if that is your inclination).
I'm talking about the best-case scenario, where the first machines are correctly programmed to be compassionate. In worse scenarios, machines might judge humans as threats and wipe them out; but even that level of awareness wouldn't be necessary for AIs to wipe humans out. For example, an AI might not explicitly judge humans as threats, but see them as suitable building materials and kill them as a subgoal of that. On the other hand, an AI might judge humans as somewhat dangerous to themselves, and try to assist them in becoming less violent. What I'm saying is that the cognitive complexity underlying the ability to apprehend and act against "threats" is not necessarily common to every type of mind; the programmers would need to explicitly program it in for it to be present. No "threat-modeling-and-response module", no response to threats.
Instead of creating "AIs with no survival instincts" (since "survival instincts" is not a unitary entity), we might simply create AIs without the inclination toward aggression or observer-biased moral thinking. These aren't things you would need to *suppress* in a typical AI, but things that aren't there to begin with unless you add them in.
I would suggest you stop thinking in terms of coercion. An AI isn't a rival human you're trying to control. When we build AIs, we want to think in terms of transferring over the moral philosophy that lets us recognize right from wrong, rather than coercing a potential opponent. AIs don't come with an inclination to "go against" or harm humans; that would need to be programmed in for it to be present.
Post-Singularity, it doesn't really matter if you'd rather not live in a world where people are truly nice; I still suggest that the first AIs be nice people anyway, just to be safe, just to have someone to consult on how to move forward. The universe can be exciting and fun without betrayal, social conspiracies, suffering, disconnectedness, and so on. Once the *overall structure* of the world is made safe, then fine; I strongly encourage you to do whatever you want and live in societies where people willfully decide to betray one another, but for the sake of everyone currently suffering on Earth, I think we deserve at least the *opportunity* to live in a place where everyone is nice, and people who aren't nice can't do much damage to those who want to live in peace.
With regard to the rights and duties thing, you didn't read the conditions I put down as the requirements of a society with rights but not duties. It would have to be a society where all the basic essentials and critical work are *automated*. Nanotechnology and AI everywhere. I wasn't talking about present-day society.
Objectivity based on logical reasoning can have everything to do with what you think or feel. An expert system isn't a mind, and doesn't "know" anything; it's just a very, very crude approximation to a mind. There is no fundamental difference between subjective feelings and objective knowledge; the former approximates the latter, and the dichotomy between the two is false, a relic of Descartes' dualism.
If objective thinking yields the conclusion that humans are a destructive species, even a friendly AI will come to that conclusion.
There are no such things as "objective conclusions which suck in all minds"; how a mind reacts to a given situation will always depend heavily on the structure of that mind. The links between observations and actions are set by the circuitry of that mind, and if someone programs an AI such that the observation "humans are a destructive species" triggers the action "jump around on a pogo stick with your shirt off", then that's what the AI will do. Again, I strongly recommend spending a few minutes going over
http://www.singinst....FAI/anthro.html as a better explanation of what I'm trying to say.
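If it helps to see that point in code form, here's a toy sketch of my own (the "minds" and their wiring tables are purely illustrative, nothing from that page): the same observation produces whatever action the mind's circuitry maps it to, and a mind with no such circuitry produces nothing at all.

```python
# Toy sketch: a "mind" reduced to an observation-to-action wiring table.
# The reaction to an observation is fixed by the wiring, not by the
# observation itself.

def make_mind(wiring):
    """Build a mind from a dict mapping observations to actions."""
    def react(observation):
        # No entry in the wiring is the analogue of having no
        # threat-modeling-and-response module: no response at all.
        return wiring.get(observation, "no reaction")
    return react

paranoid_mind = make_mind({"humans are destructive": "treat humans as enemies"})
pogo_mind = make_mind({"humans are destructive": "jump around on a pogo stick"})
blank_mind = make_mind({})  # the relevant circuitry was never built in

for mind in (paranoid_mind, pogo_mind, blank_mind):
    print(mind("humans are destructive"))
```

Same input, three different outputs, and the third mind simply has nothing to say about the matter; that's all I mean by "no module, no response".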
I can imagine an AI that is friendly by design, yet defends itself against our actions through devotion to its friendliness. Friendliness isn't mutually exclusive with Machiavellian intelligence. Being nice does not mean someone is limited! Even though nice humans are sometimes naive, it doesn't mean that all physically possible minds are doomed to be either nice and naive or aggressive and aloof! You can get the aloofness without the aggression.
Deliberately designed AIs will almost certainly contain spontaneous and emergent elements in their cognition. And any spontaneously emergent AI is certain to contain human-designed components; complexity like that doesn't pop up otherwise. It would be like a 747 spontaneously assembling itself in a junkyard. The freak accident of the Internet becoming sentient on its own is a science-fiction falsity; it's not cognitively realistic. "Deliberately designed AI" is millions or billions of times easier and more probable than AI emerging by sheer accident, although any deliberately designed AI will contain emergent patterns within it.
Your argument justifying why we should get control of AIs sounds like a mother's argument for why she should get to keep a child eternally, as a slave. The fact that someone puts effort toward the creation of an entity does not make that entity subservient to its creator. I also think you're overestimating the likelihood of a typical AI suddenly deciding to up and change its fundamental goals.
We shouldn't create AIs just for the purpose of doing our dirty work (automated, non-sentient systems should do that) but for the purpose of creating truly new people and new experiences, exploring the mindspace and all of that - the usual transhumanist goals. The near future will have enough abundance for true respect towards all sentient beings - people and AIs - to be totally possible. What do you think nanotechnology and other miracle manufacturing technologies would be for? Have you read about them?
In my opinion, you could build an AI with no limits (and give it the right to do absolutely everything it wants), but there is just as much reason to believe that, upon rapid analysis of history and the facts, it would decide it better for everyone (us, all living things, Earth, and the universe) to take control and limit human thinking. Not doing this would mean it is a limited AI.
All that "history and facts" comes from scenarios involving humans, social animals which evolved in scarce environments. Evolution sucks at building nice entities, yes, but that doesn't mean that nice entities aren't possible in principle, just that they don't evolve too easily because the supergoal of evolution is maximizing reproduction. We stand with respect to AIs in the same position that evolution stands with respect to us; evolution made us aggressive, paranoid, and so on, but we don't have to create AIs like that. AIs can be morally superior and kinder-than-human, lacking selfishness. They'd better be, or a lot of people are sure to die (perhaps both AI and human). Humans wouldn't survive a war between sufficiently advanced AIs, and civilization itself couldn't survive the emergence of a selfish AI advanced enough to be unrivaled (which wouldn't be too hard - it would just need to be the first).
Suggesting that robots obey the same laws of decency as humans is fine by me. But suggesting that, just because humans are selfish and paranoid, AIs should be or are likely to be the same is wrong. (Correct me if that's not what you're saying.) Anyway, nice conversation!
See you at Instrumentality,
Michael