I just found the SingInst site recently and have been reading many of its interesting papers on Friendly AI. Coming on the heels of reading 'The Singularity is Near', I can appreciate the need to build something friendly.
With that in mind, I was wondering if it makes sense to design and build something that will be judged a success or failure based on passing the Turing Test. As a criterion for deciding whether something intelligent has actually been built, it's a brilliant idea. But seeing as even the friendliest, most considerate human still has the same basic anger, fears, and desires as everyone else, would you really want to evolve a complex system that exhibits those emotions? For example, if during the course of the Turing Test I threatened the FAI with destruction, what would a realistic human reaction be? Self-defense? Being pissed off? Or maybe just concluding that humans are threatening? This is an extreme example, but there would be all kinds of lesser affronts the AI would need to weather in order to convince a skeptical judge. Why would you want to program these emotions into an FAI? How about greed, a strong desire to reproduce, territoriality, lust?