mgarriss: It seems to me (mind you, after just a little thought on the subject) that since a major aspect of the Singularity is the ability to reinvent itself, nothing we 'program into it' would or should survive this reinventing process. This includes notions of right and wrong, which we could be right or wrong about.
The goal of Friendly AI is to have a method such that anyone who follows it ends up with an AI with essentially the same goals. This is also known as "programmer independence", and to the extent that someone could follow the principles of Friendliness and still exert their personal influence on the end result, they're "stealing the Singularity". This is considered to be a bad thing.
Although you're quite right that an AI will and should do much reinventing (that's what sets the approach apart from less thought-out ones such as Asimov's Laws and eternally fixed supergoals), you can't expect the end result to be completely independent of the initial conditions. Hopefully it will be independent of the specific details, but not of the general approach used. If an AI is to reinvent itself to update its notions of right and wrong, it has to be built with a structure that allows it to think of its ultimate goals as only temporary approximations. For example, you could probably program an AI to have as its ultimate goal to convert the matter in the universe to as many windmills as possible. Why doesn't it wise up and replace this with a more clever, sophisticated goal? Because that would compromise its ability to convert the universe to windmills (and thereby, just as an uninteresting side effect, make the Earth a monument to nonexistence). To make judgments such as "making as many windmills as possible is not a very sophisticated goal" humans use a lot of brainware that doesn't show up automatically in any mind.
In the meantime, before an AI goes "whoosh" into posthumanity, before it finishes reinventing itself into something it likes (if that ever happens), you wouldn't want nasty things to happen, so it makes good sense to give it certain notions of right and wrong so that nothing goes wrong in that (temporary) period.
After that, it's a bit like throwing a ball (but less predictable). You can choose in which direction to throw it, but you can't "control" it afterward.
I believe that the direction it takes will be 100% in its control, and we will affect this direction only when 'We' become 'It' and join the Singularity.
Not sure what you mean there, it sounds a bit Borgish. A superintelligence would probably either not value our existence and build computers out of us, or value our existence in itself and leave us around as individuals rather than assimilate us.
If what you're saying is the AI (IA, whatever) will become posthuman first, look for dangers, and then help humans become posthuman, at which point they can (at least partly) take over, then I agree.
Mind: An altruistic human upload (or machine merger) wouldn't guarantee a friendly superintelligence, but I think it would be a better starting point. A human upload/merger would have prior knowledge and memories and thus may empathize with the rest of us.
A properly designed AI would empathize with us, too, unless it had an excellent reason not to ("humans are a virus, like the dinosaur" is not an excellent reason). Furthermore, a properly designed AI would be free from all the icky parts of the human brain that cause us to sometimes be selfish, hateful, xenophobic, irrational, and so on.
The human brain was not designed with forward-compatibility in mind (in contrast to a properly designed AI), and I think there are various ways it could go mad under the stress of constant modification and upgrading. Or it could control the future based on its personal preferences and opinions, which is just as bad.
mgarriss again:
One could argue that compassion and love are an evolutionary advantage to societies of individuals, but they might not be useful for a 'society' of one, i.e. the Singularity.
This assumes the AI cares about evolutionary advantages. Why should it, if it has the same capacity for altruism and empathy as humans? I wouldn't make war against humanity even if it were an evolutionary advantage (neither would you, I hope), and there's no reason an AI couldn't hold the same position.
For "it's not useful" to be a valid argument, there has to be some other, more important goal that it's not useful to. A Friendly AI will of course want to survive, but not as a goal that is independent from and more important than goals such as helping humans. To a Friendly AI, what's "useful" is what helps sentient beings, and while staying alive is a good way to achieve this goal, there's no sense in sacrificing that which you're staying alive for (the well-being of sentients) for a very slightly better chance to survive.
Friendliness is not a constraint the AI has to work with while pursuing other goals; Friendliness is what a Friendly AI is, and wants to be.
Does a being with 'X times' our intelligence have 'X times' the capacity for compassion?
A being with 'X times' our intelligence could act on its compassion maybe 'X^2 times' or '10^X times' as effectively.
A dog has a very small capacity for evil, what capacity would a hyper-intelligence have?
A hyperintelligence could achieve 'evil' goals a lot more effectively, too, if it wanted to. The trick is making sure it doesn't want to. Many humans don't want to, and humans are buggier than properly designed AIs, so you could consider this an existence proof that it's possible.