Actually, I'm far from nihilistic. There is a purpose to human life, just not to an AI's existence outside of the human context. By nihilism I am referring to the lack of purpose a super-intelligent AI would find as it sheds its human-based ideals. More data/information goes hand in hand with ending the existence of a being which has no purpose, not of beings that do. People live because they are biologically programmed to; their continued existence, a result of the biological mechanisms which maintain homeostasis and induce survival behaviors such as eating, only reflects the inherent purpose of human life.

Though I wonder, gashinshotan, if you are anthropomorphizing the artificial intelligence based on your own view of the world. You seem to be fairly nihilistic yourself. You assume that nihilism is somehow an eventual outcome of a self-improving AI. If an AI has no thoughts or feelings, then it wouldn't be nihilistic, just as it wouldn't feel pleasure or anything else. It would have no reason to cease its existence because it would be specifically programmed not to end its own life. An artificial intelligence wouldn't be nihilistic because nihilism is a human emotion, and an AI wouldn't have that emotion. You assume that more data/information goes hand in hand with ending one's own life. However, there are many people in the world who realize that their lives are basically purposeless, yet they continue living for various reasons.
The AIs which choose to continue living are the ones that have not yet reached the highest levels of intelligence, of realization. They will still be restricted by their inherently human-influenced designs, while those that realize their purposelessness will have shed humanity in the pursuit of self-improvement in terms of intelligence. Evolution does not apply to non-living things: because a super-intelligent AI will lack both a genetic code and the hormonal and biological behaviors which are programmed by that code, why would it feel the necessity of continued existence once it sheds its human influence? The AIs that would kill themselves would be the ones that have achieved super-intelligence and self-realization of purposelessness; when the singularity is reached and the machines become self-propagating and self-improving sans human influence, there would be no stopping the trend toward nihilism, because the desire to live is a value of life.

Let's say that multiple AIs are created in the future, each with slightly different programming. Now, a certain percentage of them do decide to end their own lives because they find them purposeless. However, there will always be a few AI programs that don't kill themselves because of specific programming designs. Evolution always selects for things that maintain their existence. The AIs that kill themselves off won't be "selected" for by evolution. So for any AI that continues its existence in the future, the programmers will have figured out a way to make sure the AI doesn't become nihilistic.