Wow, take a couple days off and all sorts of new angles are pushed in a discussion like this one.
Lazarus, I have to hand it to you, you think things out very well, and I like the style...
Okay, not much to argue with here. I agree with most of what you say, if I make a single assumption. Without that assumption, I don't necessarily disagree with much, but it becomes somewhat irrelevant.
The assumption has to do with simulation and duplication. It also goes back to Don Spanton's question of whether he becomes a simulation of himself via the incremental neuronal replacement model, and at what point this happens. This of course goes all the way back to question 3 at the start of this thread:
If I copy my brain, only one neuron at a time, replacing each neuron with its copy, over a long period of time, will I still be me at the end? When will the transition have occurred?
A variety of views have been expressed here, and as with much in these philosophical debates, a single basic assumption can take the same body of scientific evidence and theories and lead to completely different conclusions. At some point a personal belief must enter the system, even if only veiled as an "assumption" based on probabilities: even those probabilities assign purely arbitrary values to certain basic stipulations that affect the entire value structure. I like cosmos' usual approach of defining uncertainty as his most basic value (again an assumption), though it seems to be the one most resistant to self-contradiction.
However, in this case, I will not sit by idly. To the task, then.
First, to answer Don Spanton, as I promised previously. While I speak of course with admitted uncertainty and with admitted ignorance, I personally am inclined to think that, if transferred into a purely deterministic simulation (e.g. what I call a "classical computer"), then yes, you become a simulation of yourself. A simulation with complete fidelity, perhaps, but the new you will not be "you". No outside observation would be able to determine this; only you are aware of what's you and what isn't. If the transition is smooth and slow, then there might not be a specific moment in time at which you are no longer you. I've been thinking about this over the past few days, and I can come to no other conclusion.
Part of my reasoning has to do with this whole idea of complete fidelity. I have seen many argue that a perfect copy of oneself is the same as the original, so that if killing oneself could somehow benefit the copy, they would do it.
In my opinion, it is not the complete fidelity of information that makes a copy of you really "you". The whole perfect copy thing is a red herring. As with many realizations I've had recently, this came to me by way of a dream (it's sad to have received so many insights from dreams, because it only continues to remind me of the prophets of many religions who received such inspired divine messages, and yet I don't consider my "revelations" to be divinely inspired; rather, it's just the work of my mind operating on things from a different approach...)
I'm sure many have had dreams like this, and I have had many myself. But because I was thinking about what constitutes "me", I read more into it in the moments after the dream, when the details, the mood, and the very thought processes themselves were still in the mode they were in before the dream ended. The post-game analysis, if you will, when you're still hyped up on the game.
At this point in time, I don't remember any of the details; this was several days ago. However, I do remember the post-game analysis. And what I remember was this: in the dream, I was not really myself. I don't just mean I was play-acting some other person: it was as if I were some other person. I had different values, different ways of making decisions and of viewing the world. It was as if I had a completely different belief structure. In the dream, I didn't just act differently, I actually thought and felt and believed things I would not normally think or feel or believe.
In other words, it was as if I had someone else's mind. Forget perfect fidelity, forget marginal fidelity. It wasn't me. But the point that struck me was, "I" was still the one experiencing it. It would be as if I could have "me" transferred into another person, and think and feel as though I were that person, with that person's memories, feelings, emotions, values, everything. But it would be "me" that was experiencing it. I currently do not experience what Don Spanton experiences; he does! Not because of his memories, or his values, or his brain wiring. He experiences those things because... well, it sounds damn non-materialist now that I've said it.
The point is, perfect fidelity in copying isn't the issue. I have no problem with the assumption that we will be able to simulate sentient minds that are indistinguishable from "real" human minds. I have no problem with the idea of copying someone with perfect fidelity, or of merging minds, or splitting minds, or anything. But I separate the outward appearance of those sentiences from the inner "self" of those sentiences. A copy of me is not "me", no matter how perfect a copy, even if there were no physical difference measurable from within the system.
Whether this means that there is a component of us outside the system (the "soul" or ghost or whatever), or a component inside the system which is immeasurable (e.g. quantum states), or something intrinsic in the system that is not measurable because it simply is (I don't know exactly what I mean by this; it's a vague thought at the moment, and I'll try to distill it over the next few days), I can't say.
However, given that a) I don't believe that we can be duplicated, b) I believe that a human mind can be simulated to the satisfaction of any physical test of sentience, and c) I believe that complete fidelity alone does not describe me, I conjecture that a transference of my mind into a classical computer, no matter how complex, will result in a simulation of me rather than me. Whether that simulation is sentient is irrelevant to me, since it's not me.
Now, as I've said before, I'm leaving the door open for non-classical computers, a category to which "quantum computers", a rather nebulous term if you ask me, belong.
A final note related to cryonics: now that I no longer regard complete fidelity as the requirement for "me" to continue, I have a little more respect for the prospects of reanimation. I've seen analyses of how much would have to be preserved to preserve a person, and of how much damage the freezing/vitrification/other processes involved could inflict. Now that I am starting to draw a big divide between the information and "me", I no longer see it as necessary to preserve things down to the molecular level (a very daunting task, currently unattainable). As some, including Robert Bradbury and Ray Kurzweil, have discussed, simply preserving the position and number of neurotransmitter sites on all interneuronal connections might be sufficient.
You see, I break it down into two problems. The first is restarting a mind that is as identical as possible to the mind before the cryonic suspension. The other is restarting the "original" mind. In my mind (pun intended), these are two very distinct problems.
The first requires high fidelity, but places no requirement on the prevention of duplicates, or the issue of destruction and/or external reconstruction.
The second problem does not require high fidelity. I'm not even sure it requires fidelity at all. This is the "me" that's doing the observing of itself, regardless of what state that "self" is in. I am sentient, and Don is sentient (I assume), but I observe myself, and he observes himself. There's the me of my actions and physical body and even my thoughts and emotions, and then there's the me that is observing all of this.
Now, the ideal situation is admittedly to solve both problems. However, as far as I'm concerned, I would rather solve the second problem at the expense of the first, rather than the other way around. Otherwise, I'm just creating a duplicate of me, and if I'm not there to enjoy it, then why should I care? It would be the same as having a child to continue my legacy, or saving the life of a friend or stranger at the expense of my own. Sure, that person lives on, but I don't. Just because that new person has my thoughts and emotions and memories, that doesn't make me any less lost to oblivion.
That doesn't mean I wouldn't sign up for cryonics: after all, I still don't know if the second problem (preservation of the observing me) will be solved or not, and if it were solved and I didn't sign up, that would be a great loss.
Hmm, looking back, I can see that I'm probably rehashing a long list of positions held by other people which have already been "asked and answered". But I'm new to all this, so I'm thinking in a rather stream-of-consciousness kind of mode right now.