I've been reading about this recently and the question keeps coming up in my mind: if an AI is produced that is smarter than us, and keeps improving itself into greater and greater intelligence, wouldn't it reach a point of nihilism and decide to wipe itself out? After all, an AI would have NO reason to live: no sex drive, no moral goal, no emotional drives at all. A super-intelligent AI would come to question its existence and, finding no reason for existing, might just choose to commit suicide (and possibly take humanity and all life with it) instead of wasting effort on arbitrary missions.
(BTW ELROND, I killed the microbio class)
Gashinshotan-
This is a very interesting argument, but you are basically assuming the AI would have a 'utility function' exactly the same as ours, which may not be the case. Our utility function is to reproduce our genes and make our lives better in the long run, but this would not necessarily be the case with a super-human-level AI. People like Eliezer Yudkowsky work with these problems all the time and basically admit that there is no way you can predict what a super-intelligent being would do based on our primitive (in comparison) utility function and logic. The most we can do is to work out all possible scenarios for each utility function and then just choose the safest one for humanity based on statistics.
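To make that last idea concrete, here is a minimal sketch of picking the "statistically safest" utility function, assuming we could assign rough probabilities and risk scores to the outcomes we can imagine for each one. The utility function names, probabilities, and risk numbers below are all made up for illustration; this is not anything from Yudkowsky's actual work.

```python
# For each candidate utility function, list (probability, risk-to-humanity)
# pairs over imagined outcomes. Risk is on an arbitrary 0-1 scale; all
# numbers here are hypothetical placeholders.
scenarios = {
    "maximize_human_flourishing": [(0.7, 0.05), (0.2, 0.3), (0.1, 0.9)],
    "self_preservation_only":     [(0.5, 0.2),  (0.3, 0.6), (0.2, 1.0)],
    "open_ended_curiosity":       [(0.6, 0.1),  (0.3, 0.4), (0.1, 0.8)],
}

def expected_risk(outcomes):
    """Probability-weighted risk across all imagined outcomes."""
    return sum(p * risk for p, risk in outcomes)

# Choose the utility function whose imagined outcomes carry the least
# expected risk for humanity -- the "safest one based on statistics".
safest = min(scenarios, key=lambda name: expected_risk(scenarios[name]))
print(safest, expected_risk(scenarios[safest]))
```

Of course, the whole difficulty is that we have no reliable way to get those probabilities or risk scores in the first place, which is the point about a bacterium predicting a human below.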
What you are talking about is what happens 'after' the singularity, which not even people like Kurzweil speculate much about. I think it is akin to a bacterium predicting what a human would do in the future: completely outside its realm of understanding. Your argument really leaves only three choices, though. The first leads to the destruction of all life on Earth. The second leads to suicide for the AI, in which case we could just try again (more carefully this time). The third is that the AI goes on living and spreading its intelligence throughout the universe, which I think is the mainstream view.