Here I will try to explain the future of humanity, the possibility of immortality, and the relation of these things to persistence. To get there, I will first go into the tool used for analysis.
Natural selection needn't be limited in application to organisms. In applying natural selection more broadly, in what I call a persistence pattern, I modify Darwin's statement, "But, if variations useful to any organic being do occur, assuredly individuals thus characterized will have the best chance of being preserved in the struggle for life", to, "But, if variations useful to any system do occur, assuredly systems thus characterized will have the best chance of persisting". And this pattern of persistence applies to atoms forming compounds, to market-driven efficiency, and to many other things.
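The persistence pattern can be sketched as a toy simulation. Everything below is an invented illustration, not a model of any real system: a "system" is reduced to a single robustness score, variation is random jitter, and selection keeps whichever candidates are most robust.

```python
import random

def step(population, capacity=100):
    """One round of variation and selection among generic 'systems'.

    Each system is just a robustness score in [0, 1]; higher scores
    persist more often when the population is culled back to capacity.
    """
    # Variation: each system spawns a slightly mutated copy of itself.
    offspring = [min(1.0, max(0.0, s + random.uniform(-0.05, 0.05)))
                 for s in population]
    # Selection: only the most persistent systems remain.
    return sorted(population + offspring, reverse=True)[:capacity]

random.seed(0)
population = [random.random() for _ in range(100)]
before = sum(population) / len(population)
for _ in range(50):
    population = step(population)
after = sum(population) / len(population)
print(f"mean robustness: {before:.2f} -> {after:.2f}")
```

The mean robustness of the population rises round after round; nothing "wants" to optimize, yet variation plus differential persistence produces optimization anyway.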
Technology is a natural product of this persistence pattern. In a sense, the persistence pattern is a matter of optimization: where, before, you had phone operators manually connecting callers, now systems do this automatically, in a much faster, optimized fashion. And this pattern of optimization will continue so long as there are further optimized states reachable from current states.
The non-reachability of more optimized states, or the prevention of optimization, can take the form of resource unavailability or of super-systems preventing sub-system optimization. To put the latter case in more grounded terms, cultural or societal paradigms can prevent optimization. Similarly, power monopolies can prevent optimization. A power monopoly can be likened to a solitary creature on an island: it doesn't matter whether the creature is maladapted to its environment; it will not be replaced through natural selection, because it is the only creature on the island.
Assuming that the persistence pattern continues to lead to further optimization, there are various things we can extrapolate.
If the only effect of this persistence pattern on humanity were to optimize away unpleasurable tasks and to favor pleasurable tasks, we would be seeing a much different world. Instead, what we have is a scenario where many non-human drivers drive the creative minds that fuel progress, and the resulting progress is implemented largely in ignorance of how it will affect the pleasure and pain of the humans who encounter it.
Non-human drivers of progress are largely derivatives of human desires. For example, a company, in an effort to make more money for its human owners, sets out to optimize (that is, to drive progress in) its internal systems. And the internal systems of this company could be just about anything - extracting oil, harvesting crops, surveying citizenry, providing news.
As a result of non-human drivers, one cannot rule out the emergence of AI or bio-mechanically integrated systems just because such things don't benefit humans. Instead, regardless of whether AI systems or human augmentation improve the quality of life of humans, these things will emerge by the fact that they are reachable optimized states.
And now, getting into immortality, the issue is how AI or human augmentation will influence the longevity and potential immortality of the individual.
There are many overlapping ways to categorize augmentation, and I initially wrote this part going into a few of them; instead, I will explain a simple dynamic. The degree to which an augmentation is integrated with the human brain reflects humanity's understanding of the human brain, and when the human brain is understood conceptually, it can be reproduced programmatically. And when the human brain can be reproduced, AI can be formed. And so, even assuming augmentation moves forward more quickly than AI, at some level of brain-integrated augmentation there will be AI, and then the question will be a matter of optimization.
The matter of optimization when it comes to AI has a number of parts to it. The human brain is a very complex and efficient bio-computer, and reproducing it with a non-bio-computer has a large potential expense; however, the human brain has a few downsides when compared to a computer brain:
1. The human brain is isolated with a small bandwidth for interfacing with other systems. This bandwidth takes the form of language and body movement.
2. The human brain may not be able to scale upwards. That is, its architecture is fixed and would not support expansion without extensive re-architecting.
3. The human brain has limited linear computational ability; and, as computers advance, the human brain will also be seen to have limited non-linear computational ability (non-linear meaning multi-process, entity-relation computations).
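The first downside, the bandwidth limit, can be put in rough numbers. Both figures below are assumptions for illustration only: human speech is estimated to carry on the order of tens of bits per second, while a commodity network link carries a billion.

```python
# Back-of-the-envelope comparison of interface bandwidth.
# Both figures are rough assumptions for illustration only.
speech_bits_per_sec = 39              # approximate information rate of speech
network_bits_per_sec = 1_000_000_000  # a commodity 1 Gb/s link

ratio = network_bits_per_sec / speech_bits_per_sec
print(f"machine interface is ~{ratio:,.0f}x wider")
```

Even if the speech figure is off by an order of magnitude in either direction, the gap between the two interfaces remains in the millions.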
Technological advancements have resulted in an increased complexity in systems. The growth of this complexity in consideration of human operators has led to two things:
1. Interfaces with more complex back ends and relatively simple front ends.
2. Humans with more aptitude for semi-technical matters.
The nature of the increased complexity in the back end of the system is represented by the increased number of back-end actions per user action. An action by a user in a user interface represents some desire for action by the system. A simplified example of this:
User presses 11 buttons to call a friend on the phone (10-digit phone number + call button)
User presses 2 buttons to call a friend on the phone (recent-calls entry + call button)
The increased complexity of the back end is simplified on the front end by anticipating user actions and grouping low-level actions under concepts the user can select from. This forward movement of technology, with increasingly complex systems behind relatively simple user interfaces, has a limit. The limit comes when the number of distinct concepts necessary to operate the system becomes too large for many human operators to conceive of in a reasonable amount of time, or to enter into the system in a reasonable amount of time. Consumer electronics do not have this problem, as they are designed specifically to be used by humans. However, as governments and corporations implement increasingly complex systems, they will find human operators less and less capable of operating those systems, and they will rely more and more on algorithmic system behavior. This algorithmic system behavior eventually equates to AI system control.
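The front-end/back-end dynamic can be sketched in code. Everything here is hypothetical - the function names, the particular low-level steps, and the phone number are all invented; the point is only that a single user-level concept fans out into many back-end actions.

```python
backend_log = []

def backend(action):
    """Record one low-level action performed by the system."""
    backend_log.append(action)

def call_contact(name):
    """One user-level concept: 'call this person'.

    The front end exposes a single action; the back end performs many.
    The steps below are invented for illustration.
    """
    backend(f"look up number for {name}")
    backend("acquire radio channel")
    for digit in "5551234567":  # a made-up ten-digit number
        backend(f"signal digit {digit}")
    backend("negotiate codec")
    backend("open audio stream")

call_contact("Alice")
print(f"1 user action -> {len(backend_log)} back-end actions")
```

One selection in the interface triggers fourteen logged back-end actions here; in a real system the fan-out is far larger, and it grows as the back end grows while the front end stays simple.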
The nature of any open system attempting to gain absolute control of a thing is to expand into other systems to gain control over formerly outside variables that influence the system's control over the thing. In terms of companies, this can be seen as vertical expansion. In addition to this, integration of systems with other systems often represents an optimized state. When a governmental center for disease control is integrated into a system like Google that has the ability to anticipate the emergence and spread of contagions based on user actions, there is a greater ability for the center for disease control to control a contagion. Or, when traffic systems (currently represented by traffic control lights) are integrated with automobiles, there is a greater ability to optimize the flow of traffic.
This system expansion and integration increases the complexity of the systems and decreases the ability of human operators to operate them at any level apart from a very abstracted, high level. It also decreases the number of systems which are not complex. The effect of these things is to decrease the need for human operators, and as a result, jobs fall into three categories:
1. Jobs in which robots have not yet economically outperformed humans.
2. Jobs building systems, for as long as the systems can't build themselves.
3. High level, abstracted control of systems.
And so, we have a scenario in which AI and robotics will largely replace humans in controlling systems at any level other than the high level. And when I say high level, I refer back to the part in which a user interface is simple while the back end is complex, with an increasing number of back-end actions corresponding to each single user action. For example, a user action might be to say "raise taxes", after which the system set up to calculate and collect taxes changes to follow that command, and this system change may equate to millions of "back end" actions.
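To make the "millions of back-end actions" figure concrete, here is a toy count. The branching factor and depth are invented; the point is only that a few levels of fan-out, where each command expands into sub-commands, reach millions of low-level actions very quickly.

```python
def backend_actions(branching=30, depth=5):
    """Count leaf actions when a command expands into `branching`
    sub-commands at each of `depth` levels."""
    count = 1
    for _ in range(depth):
        count *= branching
    return count

# e.g. "raise taxes" -> departments -> offices -> forms -> records -> fields
print(f"{backend_actions():,} low-level actions")  # 30**5 = 24,300,000
```

With only thirty sub-commands per level and five levels, a single high-level command already equates to over twenty-four million leaf actions.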
A person seeking immortality amid the above eventualities has hope of finding it in two places.
The first idea of a place to find immortality is the idea that the increasingly complex systems will simply serve to accommodate the pleasure and pain desires of some number of humans. An example of this is the scenario put forth in the movie WALL-E. This idea, however, appears to ignore the fact that the emergence of such complex systems came from non-human-driven demands, and a likely consequence is that the systems will not stop optimizing merely to accommodate human-driven desires. It is a common notion that in the creation of AI, humanity will get a "bad" AI that turns on its creators. I would look at the scenario in a much less "good" and "evil" light. Any system AI will do just what we have done; it will optimize the system, and if that means discounting the needs of humans, that is precisely what it will do.
The second idea of a place to find immortality is in uploading the mind into a computer and having persistence of that mind in a computer system. The idea that the mind will persist in a controlling fashion is rather trivial to disprove. As a human, the personality traits and characteristics that one might consider defining oneself are largely irrelevant when considering optimizations of systems and AI control of systems. In other words, when human minds are built into systems with much higher bandwidth capacity, not only are the personality traits stemming from the limitations and capabilities of the human body irrelevant, the whole language that the human mind knew and used to communicate becomes largely obsolete in the presence of much more efficient forms of computer system communication.
And so, there is no immortality of the "self". Instead, you will have longevity while the technology has not yet made you obsolete. As for the persistence of the brain: the knowledge and algorithms in your brain that are useful may be used for as long as they stay useful, but this extraction represents a continuation of self only to the extent that extracting the knowledge of arithmetic from a brain represents a continuation of that brain's "self". And, where you once had individuals, you will have largely interconnected systems with standard libraries of algorithms - a sort of modular hive-mind.
In conclusion, two questions come out of the ideas presented here. The first: "Do we want to follow this persistence pattern?". Of course, even if the answer were no, can one actually stop progress? Try as he might, the Unabomber did not stop progress. There are, however, various ways in which progress can be halted:
1. Societal paradigms or systems getting "stuck".
2. Elimination of humanity on Earth, by humanity or by natural disaster.
3. Societies reducing themselves to previous, more primitive states. Potentially, societies can bomb themselves back to the stone age, or can crumble through corruption. In these scenarios, societies can rise and fall in an infinite loop and never progress out of the loop.
The second question is: "What will we do between now and being made obsolete?". Well, for those who want to live as long as possible, I imagine working towards advances in health-maintaining technologies and medicines would be the way to go. After all, you can always hope for future "nature preserves" where you, as a human, can live forever foraging around in the "wilderness" of some former city, with your life-sustaining technologies, in a park maintained by T-1000 park rangers. For people like me, I care to build useful societal systems that help both to promote progress and to better the lives of the members of society; and there are certainly synergies between building such societal systems and the longevity of societal members. And, in making the choice to progress instead of attempting to stop progress, a society based on a persistence goal may help to avoid the ways in which progress can be halted. But what to do in the interim is really just a matter of taste.