Posted 16 November 2003 - 01:12 AM
Now that we have some basics down, we can look at restoring the boxed-up computer to its previous functionality.
In principle we need only repair the frozen brain, for the brain is the most critical and important structure in the body. Faithfully repairing the liver (or any other secondary tissue) molecule by molecule (or perhaps atom by atom) appears to offer no benefit over simpler techniques, such as replacement. The calculations and discussions that follow are therefore based on the size and composition of the brain. It should be clear that if repair of the brain is feasible, then the methods employed could (if we wished) be extended in the obvious way to the rest of the body.
The brain, like all the familiar matter in the world around us, is made of atoms. It is the spatial arrangement of these atoms that distinguishes an arm from a leg, the head from the heart, and sickness from health. This view of the brain is the framework within which we must work. Our problem, broadly stated, is that the atoms in a frozen brain are in the wrong places. We must put them back where they belong (with perhaps some minor additions and removals, as well as just rearrangements) if we expect to restore the natural functions of this most wonderful organ.
In principle, the most that we could usefully know about the frozen brain would be the coordinates of each and every atom in it. This knowledge would put us in the best possible position to determine where each and every atom should go. This knowledge, combined with a technology that allowed us to rearrange atomic structures in virtually any fashion consistent with the laws of chemistry and physics, would clearly let us restore the frozen structure to a fully functional and healthy state. In short, we must answer three questions. Where are the atoms? Where should they go? How do we move them from where they are to where they should be?
Regardless of the specific technical details involved, any method of restoring a person in suspension must answer these three questions, if only implicitly. Current efforts to freeze and then thaw tissue (e.g., experimental work aimed at freezing and then reviving sperm, kidneys, etc.) answer these three questions indirectly and implicitly. Ultimately, technical advances should allow us to answer these questions in a direct and explicit fashion.
Rather than confront all of these questions at once, we shall first consider a simpler problem: how would we go about describing the position of every atom if somehow this information were known to us? The answer to this question will let us better understand the harder questions.
How many bits to describe one atom? Each atom has a location in three-space that we can represent with three coordinates: x, y, and z. Atoms are usually a few tenths of a nanometer apart. If we could record the position of each atom to within 0.01 nanometers, we would know its position accurately enough to know what chemicals it was part of, what bonds it had formed, and so on. The brain is roughly 0.1 meters across, so 0.01 nanometers is about 1 part in 10^10.
That is, we would have to know the position of the atom in each coordinate to within one part in ten billion. A number of this size can be represented with about 33 bits. There are three coordinates, x, y, and z, each of which requires 33 bits to represent, so the position of an atom can be represented in 99 bits. An additional few bits are needed to store the type of the atom (whether hydrogen, oxygen, carbon, etc.), bringing the total to slightly over 100 bits.
Thus, if we could store 100 bits of information for every atom in the brain, we could fully describe its structure in as exacting and precise a manner as we could possibly need. A memory device of this capacity should be quite literally possible. To quote Feynman: "Suppose, to be conservative, that a bit of information is going to require a little cube of atoms 5x5x5, that is, 125 atoms." This is indeed conservative. Single-stranded DNA already stores a single bit in about 16 atoms (excluding the water that it's in). It seems likely we can reduce this to only a few atoms. The work at IBM suggests a rather obvious way in which the presence or absence of a single atom could be used to encode a single bit of information (although some sort of structure for the atom to rest upon and some method of sensing the presence or absence of the atom will still be required, so we would actually need more than one atom per bit in this case).
If we conservatively assume that the laws of chemistry inherently require 10 atoms to store a single bit of information, we still find that the 100 bits required to describe a single atom in the brain can be represented by about 1,000 atoms. Put another way, the location of every atom in a frozen structure is (in a sense) already encoded in that structure in an analog format. If we convert from this analog encoding to a digital encoding, we will increase the space required to store the same amount of information. That is, an atom in three-space encodes its own position in the analog value of its three spatial coordinates. If we convert this spatial information from its analog format to a digital format, we inflate the number of atoms we need by perhaps as much as 1,000. If we digitally encoded the location of every atom in the brain, we would need 1,000 times as many atoms to hold this encoded data as there are atoms in the brain. This means we would require roughly 1,000 times the volume. The brain is somewhat over one cubic decimeter, so it would require somewhat over one cubic meter of material to encode the location of each and every atom in the brain in a digital format suitable for examination and modification by a computer.
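The thousand-fold inflation and the resulting volume work out as follows (a sketch using the text's figures; the conversion of 1,000 cubic decimeters to one cubic meter is the only step added):

```python
bits_per_atom = 100          # bits to describe one atom of the brain
atoms_per_bit = 10           # conservative assumption: 10 atoms store one bit

# storage atoms needed for each brain atom described
storage_atoms = bits_per_atom * atoms_per_bit          # 1,000

# so the memory needs roughly 1,000 times the brain's volume
brain_volume_dm3 = 1.0       # the brain is somewhat over one cubic decimeter
memory_volume_m3 = brain_volume_dm3 * storage_atoms / 1000.0   # 1,000 dm^3 = 1 m^3

print(storage_atoms, memory_volume_m3)
```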
While this much memory is remarkable by today’s standards, its construction clearly does not violate any laws of physics or chemistry. That is, it should literally be possible to store a digital description of each and every atom in the brain in a memory device that we will eventually be able to build.
While such a feat is remarkable, it is also much more than we need. Chemists usually think of atoms in groups called molecules; water, for example, is a molecule made of three atoms. If we describe each atom separately, we will require 100 bits per atom, or 300 bits total. If, however, we give the position of the oxygen atom and the orientation of the molecule, we need 99 bits for the location of the oxygen atom, plus 20 bits to describe the type of molecule ("water," in this case) and perhaps another 30 bits to give the orientation of the water molecule (10 bits for each of the three rotational axes). This means we can store the description of a water molecule in only 150 bits, instead of the 300 bits required to describe the three atoms separately. (The 20 bits used to describe the type of the molecule can describe up to 1,000,000 different molecules, many more than are present in the brain.)
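The bookkeeping for the water molecule tallies up in a few lines (figures from the text; note that 2^20 is a little over one million, which is why 20 bits suffice for the molecule type):

```python
# describing the three atoms of water separately, at 100 bits each
per_atom_encoding = 3 * 100                   # 300 bits

# describing the molecule as a single unit
position_bits = 99       # location of the oxygen atom (33 bits per axis)
type_bits = 20           # names up to 2**20 (just over a million) molecule types
orientation_bits = 30    # 10 bits for each of three rotational axes

per_molecule_encoding = position_bits + type_bits + orientation_bits  # 149 bits

print(per_atom_encoding, per_molecule_encoding, 2**20)
```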
As the molecule we are describing gets larger and larger, the savings in storage gets bigger and bigger. A whole protein molecule will still require only 150 bits to describe, even though it is made of thousands of atoms. The canonical position of every atom in the molecule is specified once the type of the molecule (which occupies a mere 20 bits) is given. A large molecule might adopt many configurations, so it might at first seem that we'd require many more bits to describe it. However, biological macromolecules typically assume one favored configuration rather than a random configuration, and it is this favored configuration that I will describe. (Because proteins are always produced as a linear chain, they must of necessity be able to adopt an appropriate three-dimensional configuration by themselves. Usually, the correct configuration is unique; if it isn't, it is usually the case that the molecule will spontaneously cycle through appropriate configurations by itself, e.g., an ion channel will open and close at appropriate times regardless of whether it was initially started in the "open" or "closed" configuration. If any remaining cases should prove to be a problem, a few additional bits can be used to describe the specific configuration desired.)
We can do even better. The molecules in the brain are packed in next to each other. Having once described the position of one, we can describe the position of the next molecule as being such-and-such a distance from the first. If we assume that two adjacent molecules are within 10 nanometers of each other (a reasonable assumption) then we need only store 10 bits of "delta x," 10 bits of "delta y," and 10 bits of "delta z" rather than 33 bits of x, 33 bits of y, and 33 bits of z. This means our molecule can be described in only 10 + 10 + 10 + 20 + 30 or 80 bits.
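The delta-encoding saving follows from the same 0.01 nm resolution used earlier: a 10 nm offset spans only 1,000 distinguishable steps, which fits in 10 bits per axis. A quick check:

```python
offset_range_nm = 10.0       # adjacent molecules assumed within 10 nanometers
resolution_nm = 0.01         # positional resolution from earlier

steps = offset_range_nm / resolution_nm       # 1,000 distinguishable offsets
assert 2**10 >= steps                         # so 10 bits per axis are enough

absolute_encoding = 3 * 33 + 20 + 30          # 149 bits with absolute coordinates
delta_encoding = 3 * 10 + 20 + 30             # 80 bits with relative coordinates

print(int(steps), absolute_encoding, delta_encoding)
```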
We can compress this further by using various other clever stratagems (50 bits or less is quite achievable), but the essential point should be clear. We are interested in molecules, and describing a molecule takes fewer bits than describing an atom.
A further point will be obvious to any biologist. Describing the exact position and orientation of a hemoglobin molecule within a red blood cell is completely unnecessary. Each hemoglobin molecule bounces around within the red blood cell in a random fashion, and it really doesn't matter precisely where it is, nor exactly which way it's pointing. All we need do is say, "it's in that red blood cell!" So, too, for any other molecule that is floating at random in a cellular compartment: we need only say which compartment it's in. Many other molecules, even though they do not diffuse freely within a cellular compartment, are still able to diffuse fairly freely over a significant range. The description of their position can be appropriately compressed.
While this reduces our storage requirements quite a bit, we could go much further. Instead of describing molecules, we could describe entire subcellular organelles. It seems excessive to describe a mitochondrion by describing each and every molecule in it. It would be sufficient simply to note the location and perhaps the size of the mitochondrion, for all mitochondria perform the same function: they produce energy for the cell. While there are indeed minor differences from mitochondrion to mitochondrion, these differences don't matter much and could reasonably be neglected.
We could go still further, and describe an entire cell with only a general description of the function it performs: this nerve cell has synapses of a certain type with that other cell, it has a certain shape, and so on. We might even describe groups of cells in terms of their function: this group of cells in the retina performs a “center surround” computation, while that group of cells performs edge enhancement.
Cherniak said, "On the usual assumption that the synapse is the necessary substrate of memory, supposing very roughly that (given anatomical and physiological noise) each synapse encodes about one binary bit of information, and a thousand synapses per neuron are available for this task: 10^10 cortical neurons x 10^3 synapses = 10^13 bits of arbitrary information (1.25 terabytes) that could be stored in the cerebral cortex."
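Cherniak's estimate is simple arithmetic, and the terabyte figure checks out at 8 bits per byte:

```python
cortical_neurons = 10**10        # 10^10 cortical neurons
synapses_per_neuron = 10**3      # a thousand synapses per neuron
bits_per_synapse = 1             # each synapse encodes about one bit

total_bits = cortical_neurons * synapses_per_neuron * bits_per_synapse  # 10^13
terabytes = total_bits / 8 / 10**12          # 1.25 terabytes

print(total_bits, terabytes)
```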
This kind of logic can be continued, but where does it stop? What is the most compact description which captures all the essential information? While many minor details of neural structure are irrelevant, our memories clearly matter. Any method of describing the human brain which resulted in loss of long-term memory has rather clearly gone too far. When we examine this quantitatively, we find that preserving the information in our long-term memory might require as little as 10^9 bits (somewhat over 100 megabytes). We can say rather confidently that it will take at least this much information to adequately describe an individual brain. The gap between this lower bound and the molecule-by-molecule upper bound is rather large, and it is not immediately obvious where in this range the true answer falls. I shall not attempt to answer this question, but will instead (conservatively) simply adopt the upper bound.
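The lower-bound figure converts the same way: 10^9 bits at 8 bits per byte is 1.25 x 10^8 bytes, i.e. somewhat over 100 megabytes:

```python
long_term_memory_bits = 10**9            # estimated lower bound from the text
megabytes = long_term_memory_bits / 8 / 10**6   # 125 MB: "somewhat over 100 megabytes"
print(megabytes)
```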
Determining when "permanent cessation of all vital functions" has occurred is not easy. Historically, premature declarations of death and subsequent burials alive have been a major problem. In the first century, Celsus wrote, "Democritus, a man of well merited celebrity, has asserted that there are, in reality, no characteristics of death sufficiently certain for physicians to rely upon."
Montgomery, reporting on the evacuation of the Fort Randall Cemetery, states that nearly two percent of those exhumed were buried alive.
Many people in the nineteenth century, alarmed by the prevalence of premature burial, requested, as part of the last offices, that wounds or mutilations be made to assure that they would not awaken. Embalming received a considerable impetus from the fear of premature burial.
Current criteria of "death" are sufficient to insure that spontaneous recovery in the mortuary or later is a rare occurrence. When examined closely, however, such criteria are simply a codified summary of symptoms that have proven resistant to treatment by available techniques. Historically, they derive from the fear that the patient will spontaneously recover in the morgue or crypt. There is no underlying theoretical structure to support them, only a continued accumulation of ad hoc procedures supported by empirical evidence. To quote Robert Veatch: "We are left with rather unsatisfying results. Most of the data do not quite show that persons meeting a given set of criteria have, in fact, irreversibly lost brain function. They show that the patients lose heart function soon, or that they do not "recover." Autopsy data are probably the most convincing. Even more convincing, though, is that over the years not one patient who has met the various criteria and then been maintained, for whatever reason, has been documented as having recovered brain function. Although this is not an elegant argument, it is reassuring." In short, current criteria are adequate to determine when current medical technology will fail to revive the patient, but are silent on the capabilities of future medical technology.
Each new medical advance forces a reexamination and possible change of the existing ad hoc criteria. The criteria used by the clinician today to determine “death” are dramatically different from the criteria used 100 years ago, and have changed more subtly but no less surely in the last decade. It seems almost inevitable that the criteria used 200 years from now will differ dramatically from the criteria commonly employed today.
These ever-shifting criteria for "death" raise an obvious question: is there a definition which will not change with advances in technology? A definition which does have a theoretical underpinning and is not dependent on the technology of the day?
The answer arises from the confluence and synthesis of many lines of work, ranging from information theory, neuroscience, physics, biochemistry, and computer science to the philosophy of mind and the evolving criteria historically used to define death.
When someone has suffered a loss of memory or mental function, we often say they "aren't themselves." As the loss becomes more serious and all higher mental functions are lost, we begin to use terms like "persistent vegetative state." While we will often refrain from declaring such an individual "dead," this hesitation does not usually arise because we view their present state as "alive," but because there is still hope of recovery to a healthy state with memory and personality intact. From a physical point of view we believe there is a chance that their memories and personalities are still present within the physical structure of the brain, even though their behavior does not provide direct evidence for this. If we could reliably determine that the physical structures encoding memory and personality had in fact been destroyed, then we would abandon hope and declare the person dead.
Clearly, if we knew the coordinates of each and every atom in a person's brain then we would (at least in principle) be in a position to determine with absolute finality whether their memories and personality had been destroyed in the information theoretic sense, or whether their memories and personality were preserved but could not, for some reason, be expressed. If such final destruction had taken place, then there would be little reason for hope. If such destruction had not taken place, then it would in principle be possible for a sufficiently advanced technology to restore the person to a fully functional and healthy state with their memories and personality intact.
Considerations like this lead to the information theoretic criterion of death. A person is dead according to the information theoretic criterion if their memories, personality, hopes, dreams, etc., have been destroyed in the information theoretic sense. That is, if the structures in the brain that encode memory and personality have been so disrupted that it is no longer possible in principle to restore them to an appropriate functional state, then the person is dead. If the structures that encode memory and personality are sufficiently intact that inference of the memory and personality is feasible in principle, and therefore restoration to an appropriate functional state is likewise feasible in principle, then the person is not dead.
A simple example from computer technology is in order. If a computer is fully functioning, then its memory and "personality" are completely intact. If it fell out of a seventh-floor window to the concrete below, it would rapidly cease to function. However, its memory and "personality" would still be present in the pattern of magnetizations on the disk. With sufficient effort, we could completely repair the computer, with its memory and personality intact.
In a similar fashion, as long as the structures that encode the memory and personality of a human being have not been irretrievably "erased" (to use computer jargon), then restoration to a fully functional state with memory and personality intact is in principle feasible. Any technology-independent definition of "death" should conclude that such a person is not dead, for a sufficiently advanced technology could restore the person to a healthy state.
On the flip side of the coin, if the structures encoding memory and personality have suffered sufficient damage to obliterate them beyond recognition, then death by the information theoretic criterion has occurred. An effective method of insuring such destruction is to burn the structure and stir the ashes. This is commonly employed to insure the destruction of classified documents. Under the name of “cremation” it is also employed on human beings and is sufficient to insure that death by the information theoretic criterion takes place.
It is not obvious that the preservation of life requires the physical repair or even the preservation of the brain. Although the brain is made of neurons, synapses, protoplasm, DNA, and the like, most modern philosophers of consciousness view these details as no more significant than hair color or clothing style. Three samples follow.
The ethicist and prolific author Robert Veatch said, in Death, Dying, and the Biological Revolution, "An artificial brain is not possible at present, but a walking, talking, thinking individual who had one would certainly be considered living." The noted philosopher of consciousness Paul Churchland said, in Matter and Consciousness, "If machines do come to simulate all of our internal cognitive activities, to the last computational detail, to deny them the status of genuine persons would be nothing but a new form of racism." Hans Moravec, renowned roboticist and director of the Mobile Robot Lab at Carnegie Mellon, said, "Body-identity assumes that a person is defined by the stuff of which a human body is made. Only by maintaining continuity of body stuff can we preserve an individual person. Pattern-identity, conversely, defines the essence of a person, say myself, as the pattern and the process going on in my head and body, not the machinery supporting that process. If the process is preserved, I am preserved. The rest is mere jelly."