Why won't we "embody" singularity to homo sapiens?
I assume you are asking one of the following:
A: Why not upload a brain?
B: Why not create devices, etc., for improving human intelligence?
These things will all happen, I assume. However, in terms of the Singularity,
1. Power has a strong tendency to corrupt humans, whereas an AGI can be verifiably designed to be Friendly.
2. It will take a long time for the basic technologies to be developed, whereas the technology for AGI is readily available.
I mean, all the technological advancements we have made were meant to assist us. An intelligence lower than ours can very well play the role of the assistant, but an intelligence higher than ours (Strong AI - SAI) has no way to play that role.
Actually, an AGI can help us more than anything else. An AGI can recursively improve its own intelligence, exponentially. Because the AGI can become very smart, it can figure out ways to help people, for any given goal of humanity, far more efficiently than any human could. Perhaps you want to clarify why you came to this conclusion?
The only way for an SAI to assist us is to become us. Otherwise it is a wild animal, regardless of how Friendly or non-Friendly it is; in fact that is of no relevance.
That's not true. An AGI can have a goal system radically different from any human's; however, if its goal system is tailored to be Friendly, it will seek out methods and implement actions that are to the benefit of humanity rather than to its detriment. Its methodology will likely involve heavy interaction with humans, such that we know what it is doing and why, and it knows what we want and why. There is no reason for it to be anything resembling a human intelligence.
In the end, I don't believe that an SAI will be of any use to us as long as it remains a different entity; it seems pretty useless.
Ok, imagine I were to create an AGI that spends a year recursively improving its own source code and hardware base, and then researches and develops molecular nanotechnology within another year, creating tools to enable immortality, lifelike virtual reality, planet-terraforming technology, etc. How useless is it then? There is no rule that says all intelligent entities must resemble humans in order to be useful or Friendly to humans. There is a rule that says an AGI's goal system must be designed to stay Friendly under self-improvement, or else we are likely quite screwed.