  LongeCity
              Advocacy & Research for Unlimited Lifespans


Frozen Assets


37 replies to this topic

#1 thefirstimmortal

  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 15 November 2003 - 05:51 PM


If we play our cards right, we should, in short, have a blank check on life. More than that, we may have a pretty substantially underwritten guarantee of continuing life itself.

At the temperature of liquid helium, chemistry stops. On this fact, and on one reasonable hope, the largest industry of the 21st century will be built.

The reasonable hope is that the progress of medicine in past years would be matched by similar progress in the future, so that, no matter what a person might die of, at some future time a way would be found to cure it, to repair it, or at least to make it irrelevant to continuing life and activity (including a method of repairing the damage done by freezing a body to that temperature).

The fact is that freezing stops time.

#2 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 November 2003 - 06:24 PM

The fact is that freezing stops time.


The fact is this is more than a slight overstatement of fact.

Freezing (the reduction of the "temperature" of matter to its lowest limits) at best slows time ALMOST to a stop, and only at that elusive, ever-receding limit called "absolute zero" might it stop.

Nothing yet theorized from the combination of empirically derived data and mathematical reasoning stops time per se, except perhaps black holes and the ends of the defined universe.

The attainment of true absolute zero may also have something to do with creating a "quantum singularity" and "superconductive states," both of which may be the result of relativistic time influencing mass/energy dominated by Lorentz transformations at a subatomic scale.

In other words being "Frozen in Time" may be more than a figure of speech as the density at absolute zero approaches an infinite quantity which looks very much like the physics on the other side of the event horizon surrounding Black Holes.

#3 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 15 November 2003 - 06:30 PM

I wonder if freezing, which at best slows time, as Ken indicates, rather than stops it, would work better if people decided to freeze themselves before they are clinically dead. I can imagine that there are some people who are terminal and would infer that it might be better to get the job done prior to their passing. Just think of the legal issues though.

Jace

#4 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 November 2003 - 06:33 PM

I have long argued, Jace, that the true search should be for suspended animation technology applied before death, but the law treats that as euthanasia, so it is proscribed. However, I have much more confidence in cryo if it were applied to a live subject rather than to just frozen meat.

#5 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 November 2003 - 12:09 AM

Ken, I am rather uninformed in the area of cryonics. Have there ever been experiments where scientists freeze live rats or something and then try to revive them, say, like months later to test suspended animation technology? I think that would be interesting.

Jace

#6 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:29 AM

The fact is this is more than a slight overstatement of fact.

Freezing (the reduction of the "temperature" of matter to its lowest limits) at best slows time ALMOST to a stop, and only at that elusive, ever-receding limit called "absolute zero" might it stop.

Nothing yet theorized from the combination of empirically derived data and mathematical reasoning stops time per se, except perhaps black holes and the ends of the defined universe.


The only reactions that can occur in frozen aqueous systems at -196 degrees C are photophysical events such as the formation of free radicals and the production of breaks in macromolecules as a direct result of hits by background ionizing radiation or cosmic rays.

Over a sufficiently long period of time, these direct ionizations can produce enough breaks or other damage in DNA to become deleterious after rewarming to physiological temperatures, especially since no enzymatic repair can occur at these very low temperatures. The dose of ionizing radiation that kills 63% of representative cultured mammalian cells at room temperature is 200-400 rads. Because terrestrial background radiation is some 0.1 rad/yr, it ought to require some 2,000-4,000 years at -196 degrees C to kill that fraction of a population of typical mammalian cells.
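As a rough check on that figure, here is a back-of-the-envelope sketch using only the dose and background-rate numbers quoted above:

```python
# Back-of-the-envelope check of the "2,000-4,000 years" figure quoted above.
# Uses only the numbers given in the post: a 63% lethal dose of 200-400 rads
# and a terrestrial background rate of about 0.1 rad per year.

lethal_dose_rads = (200, 400)           # dose killing ~63% of cultured mammalian cells
background_rate_rads_per_year = 0.1

for dose in lethal_dose_rads:
    years = dose / background_rate_rads_per_year
    print(f"{dose} rads / {background_rate_rads_per_year} rad/yr = {years:,.0f} years")

# Output:
# 200 rads / 0.1 rad/yr = 2,000 years
# 400 rads / 0.1 rad/yr = 4,000 years
```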

Needless to say, direct experimental confirmation of this prediction is lacking, but there is no confirmed case of cell death ascribable to storage at -196 degrees C for some 2-15 years, and none even when cells are exposed to levels of ionizing radiation some 100 times background for up to 5 years. Furthermore, there is no evidence that storage at -196 degrees C results in the accumulation of chromosomal or genetic changes.

Stability for centuries or millennia requires temperatures below -130 degrees C. Many cells stored above -80 degrees C are not stable, probably because traces of unfrozen solution still exist. They will die at rates ranging from several percent per hour to several percent per year depending on the temperature, the species and type of cell, and the composition of the medium in which they are frozen.

Most implications and applications of freezing to biology arise from the effective stoppage of time at -196 degrees C. Tissue preserved in liquid nitrogen can survive centuries without deterioration. This simple fact provides an imperfect time machine that can transport us almost unchanged from the present to the future: we need merely freeze ourselves in liquid nitrogen. If freezing damage can someday be cured, then a form of time travel to the era when the cure is available would be possible.

#7 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:32 AM

There is little dispute that the condition of a person stored at the temperature of liquid nitrogen is stable, but the process of freezing inflicts a level of damage which cannot be reversed by current medical technology. Whether or not the damage inflicted by current methods can ever be reversed depends both on the level of damage and on the ultimate limits of future medical technology. The failure to reverse freezing injury with current methods does not imply that it can never be reversed in the future, just as the inability to build a personal computer in 1890 did not imply that such machines would never be economically built. We should consider the limits of what medical technology should eventually be able to achieve (based on the currently understood laws of chemistry and physics) and the kinds of damage caused by current methods of freezing.

#8 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:34 AM

What we're talking about here is essentially stopping biological time. Contrary to the usual impression, the challenge to cells during freezing is not their ability to endure storage at very low temperatures; rather, it is the lethality of an intermediate zone of temperature (-15 to -60 degrees C) that a cell must traverse twice. No thermally driven reactions occur in aqueous systems at the temperature of liquid N2 (-196 degrees C), the refrigerant commonly used for low temperature storage. The only physical states that exist at -196 degrees C are crystalline or glassy, and in both states the viscosity is so high that diffusion is insignificant over less than geological time spans. Moreover, at -196 degrees C there is insufficient thermal energy for chemical reactions.
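To give a sense of just how completely thermally driven chemistry shuts down at that temperature, here is a rough Arrhenius-style sketch; the 50 kJ/mol activation energy is an assumed "typical" value for illustration, not a figure from the post:

```python
# Rough Arrhenius-style illustration of why thermally driven chemistry is
# negligible at liquid-nitrogen temperature. The 50 kJ/mol activation energy
# is an assumed "typical" value, not a number from the post.
import math

R = 8.314          # gas constant, J/(mol*K)
Ea = 50_000.0      # assumed activation energy, J/mol
T_body = 310.0     # ~37 degrees C, in kelvin
T_LN2 = 77.0       # ~-196 degrees C, in kelvin

# Ratio of rates for the same reaction at the two temperatures:
ratio = math.exp(-Ea / (R * T_LN2)) / math.exp(-Ea / (R * T_body))
print(f"rate at 77 K / rate at 310 K ~ {ratio:.1e}")
# ~3e-26: a reaction that takes one second at body temperature would take
# on the order of 10^18 years at -196 degrees C.
```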

#9 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:36 AM

Cryonics, far from being idle speculation, is available to anyone who so chooses. Of course, the most important question in evaluating this option is its technical feasibility.

Given the remarkable progress of science during the past few centuries, it is difficult to dismiss cryonics out of hand. The structure of DNA was unknown prior to 1953; the chemical (rather than "vitalistic") nature of living beings was not appreciated until early in the 20th century; it was not until 1864 that spontaneous generation was put to rest by Louis Pasteur, who demonstrated that no organisms emerged from heat-sterilized growth medium kept in sealed flasks; and Sir Isaac Newton's Principia established the laws of motion in 1687, just over 300 years ago. If progress of the same magnitude occurs in the next few centuries, then it becomes difficult to argue that the repair of frozen tissue is inherently and forever infeasible. Ultimately cryonics will either (a) work or (b) fail to work. It would seem useful to know in advance which of these two outcomes to expect. If it can be ruled out as infeasible, then we need not waste further time on it; if it seems likely that it will be technically feasible, then a number of nontechnical issues should be addressed in order to obtain a good probability of overall success.

#10 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:38 AM

While many isolated tissues (and a few particularly hardy organs) have been successfully cooled to the temperature of liquid nitrogen and rewarmed, further successes have proven elusive. While there is no particular reason to believe that a cure for freezing damage would violate any laws of physics (or is otherwise obviously infeasible), it is likely that the damage done by freezing is beyond the self-repair and recovery capabilities of the tissue itself. This does not imply that the damage cannot be repaired, only that significant elements of the repair process would have to be provided from an external source. In deciding whether such externally provided repair will (or will not) eventually prove feasible, we must keep in mind that such repair techniques can quite literally take advantage of scientific advances made during the next few centuries. Forecasting the capabilities of future technologies is therefore an integral component of determining the feasibility of cryonics.

Such a forecast should, in principle, be feasible. The laws of physics and chemistry as they apply to biological structures are well understood and well defined. Whether the repair of frozen tissue will (or will not) eventually prove feasible within the framework defined by those laws is a question which we should be able to answer based on what is known today.

#11 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:40 AM

Current research supports the idea that we will eventually be able to examine and manipulate structures molecule by molecule and even atom by atom. Such a technical capability has very clear implications for the kinds of damage that can (and cannot) be repaired. The most powerful repair capabilities that should eventually be possible can be defined with remarkable clarity. The question we wish to answer is conceptually straightforward: will the most powerful repair capability that is likely to be developed in the long run (perhaps over a few centuries) be adequate to repair tissue that is frozen using the best available current methods? There is no implication here that the most powerful repair method either will (or will not) be used or be necessary. The fact that we can kill a gnat with a double-barreled shotgun does not imply that a fly-swatter won't work just as well. If we aren't certain whether we face a gnat or a tiger, we'd rather be holding the shotgun than the fly-swatter. The shotgun will work in either case, but the fly-swatter can't deal with the tiger. In a similar vein, we will consider the most powerful methods that should be feasible rather than the minimal methods that might be sufficient. While this approach can reasonably be criticized on the grounds that simpler methods are likely to work, it avoids the complexities and problems that must be dealt with in trying to determine exactly what those simpler methods might be in any particular case, and it provides additional margin for error.

#12 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:41 AM

There is widespread belief that such a capability will eventually be developed, though exactly how long it will take is unclear. Sources include:

  • Engines of Creation, by K. Eric Drexler
  • "Nanotechnology: Wherein Molecular Computers Control Tiny Circulatory Submarines"
  • Foresight Update, a publication of the Foresight Institute
  • "Scanning Tunneling Microscopy: Application to Biology and Technology"
  • "Molecular Manipulation Using a Tunnelling Microscope"
  • "Molecular Engineering: An Approach to the Development of General Capabilities for Molecular Manipulation," by K. Eric Drexler
  • "Rod Logic and Thermal Noise in the Mechanical Nanocomputer," Proceedings of the Third International Symposium on Molecular Electronic Devices
  • "Machines of Inner Space," Yearbook of Science and the Future
  • "A Small Revolution Gets Underway," by Robert Pool
  • "Positioning Single Atoms with a Scanning Tunnelling Microscope," by D.M. Eigler
  • "Nonexistent Technology Gets a Hearing," by I. Amato, Science News
  • "The Invisible Factory"
  • Nanosystems: Molecular Machinery, Manufacturing, and Computation (John Wiley)
  • "Atom by Atom, Scientists Build 'Invisible' Machines of the Future," by Andrew Pollack
  • "Theoretical Analysis of a Site-Specific Hydrogen Abstraction Tool," by Charles Musgrave and William A. Goddard III
  • Nanotechnology, Jason Perry
  • Nanotechnology Research and Perspectives, B.C. Crandall and James Lewis
  • "Self Replicating Systems and Molecular Manufacturing," by Ralph C. Merkle
  • "Computational Nanotechnology," by Ralph C. Merkle
  • "NASA and Self Replicating Systems," by Ralph C. Merkle
  • Nanotechnology 1991, special issue on molecular manufacturing

#13 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:48 AM

New York University Scientists recently announced the development of a machine made out of a few strands of DNA, representing the first step toward building nanorobots capable of repairing cell damage at the molecular level and restoring cells, organs and entire organisms to youthful vigor.

The long storage times possible with cryonic suspension make the precise development time of such technologies noncritical. Development anytime during the next few centuries would be sufficient to save the lives of those suspended with current technology.

You are already familiar with nanotechnology so I will just clarify the technical issues involved in applying it in the conceptually simplest and most powerful fashion to the repair of frozen tissue.

Broadly speaking, the central thesis of nanotechnology is that almost any structure consistent with the laws of chemistry and physics that can be specified can in fact be built. This possibility was first advanced by Richard Feynman when he said, “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom”.

This concept is receiving increasing attention in the research community. There have been two international research conferences directly on molecular manufacturing as well as a broad range of conferences on related subjects.

The ability to design and manufacture devices that are only tens or hundreds of atoms across promises rich rewards in electronics, catalysis, and materials. The scientific rewards should be just as great, as researchers approach an ultimate level of control: assembling matter one atom at a time.

Within the decade, some scientist is likely to learn how to piece together atoms and molecules one at a time using the Scanning Tunnelling Microscope.

Eigler and Schweizer at IBM reported on the use of the STM at low temperatures to position individual xenon atoms on a single-crystal nickel surface with atomic precision. This capacity has allowed us to fabricate rudimentary structures of our own design, atom by atom. The processes I describe are in principle applicable to molecules also. In view of the device-like characteristics reported for single atoms on surfaces, the possibilities for perhaps the ultimate in device miniaturization are evident.

Scientists involved in nanotechnology will be central to the next epoch of the information age, one that will be as revolutionary as science and technology at the micron scale have been since the early '70s. Indeed, we will have the ability to make electronic and mechanical devices atom by atom when that is appropriate to the job at hand.

Scientists are beginning to gain the ability to manipulate matter by its most basic components, molecule by molecule and even atom by atom. That ability, while now very crude, might one day allow people to build almost unimaginably small electronic circuits and machines, producing, for example, a supercomputer invisible to the naked eye. Some futurists even imagine building tiny robots that could travel through the body performing surgery on damaged cells.

Drexler has proposed the assembler, a small device resembling an industrial robot which would be capable of holding and positioning reactive compounds in order to control the precise location at which chemical reactions take place. This general approach should allow the construction of large atomically precise objects by a sequence of precisely controlled chemical reactions.

Possibly the best technical discussion of nanotechnology recently provided to mankind is Engines of Creation by Drexler.

The plausibility of this approach can be illustrated by the ribosome. Ribosomes manufacture all the proteins used in all living things on this planet. A typical ribosome is relatively small (a few thousand cubic nanometers) and is capable of building almost any protein by stringing together amino acids (the building blocks of proteins) in a precise linear sequence. To do this, the ribosome has a means of grasping a specific amino acid (more precisely, it has a means of selectively grasping a specific transfer RNA, which in turn is chemically bonded by a specific enzyme to a specific amino acid), of grasping the growing polypeptide, and of causing the specific amino acid to react with and be added to the end of the polypeptide.

The instructions that the ribosome follows in building a protein are provided by mRNA (messenger RNA). This is a polymer formed from the four bases adenine, cytosine, guanine, and uracil. A sequence of several hundred to a few thousand such bases codes for a specific protein. The ribosome "reads" this "control tape" sequentially and acts on the directions it provides.

In an analogous fashion, an assembler will build an arbitrary molecular structure following a sequence of instructions. The assembler, however, will provide three-dimensional positional and full orientation control over the molecular component (analogous to the individual amino acid) being added to a growing complex molecular structure (analogous to the growing polypeptide). In addition, the assembler will be able to form any one of several different kinds of chemical bonds, not just the single kind (the peptide bond) that the ribosome makes.

Calculations indicate that an assembler need not inherently be very large. Enzymes typically weigh about 10^5 amu, while the ribosome itself is about 3x10^6 amu. The smallest assembler might be a factor of ten or so larger than a ribosome. Current design ideas for an assembler are somewhat larger than this: cylindrical arms about 100 nanometers in length and 30 nanometers in diameter, rotary joints to allow arbitrary positioning of the tip of the arm, and a worst-case positional accuracy at the tip of perhaps 0.1 to 0.2 nanometers, even in the presence of thermal noise. Even a solid block of diamond as large as such an arm weighs only sixteen million amu, so we can safely conclude that a hollow arm of such dimensions would weigh less; six such arms would weigh less than 10^8 amu.

Edited by thefirstimmortal, 16 November 2003 - 01:04 AM.


#14 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 12:51 AM

The assembler requires a detailed sequence of control signals, just as the ribosome requires mRNA to control its actions. Such detailed control signals can be provided by a computer. A feasible design for a molecular computer has been presented by Drexler. This design is mechanical in nature, and is based on sliding rods that interact by blocking or unblocking each other at "locks." This design has a size of about 5 cubic nanometers per lock (roughly equivalent to a single logic gate). Quadrupling this size to 20 cubic nanometers (to allow for power, interfaces, and the like) and assuming that we require a minimum of 10^4 locks to provide minimal control results in a volume of 2x10^5 cubic nanometers (0.0002 cubic microns) for the computational element. This many gates is sufficient to build a simple 4-bit or 8-bit general purpose computer. For example, the 6502 8-bit microprocessor can be implemented in about 100,000 gates, while an individual 1-bit processor in the Connection Machine has about 3,000 gates. Assuming roughly 1,000 amu per cubic nanometer, such a computer will have a mass of about 2x10^8 amu.

An assembler might have a kilobyte of high-speed (rod-logic based) RAM (similar to the amount of RAM used in a modern one-chip computer) and 100 kilobytes of slower but more dense "tape" storage; this tape storage would have a mass of 10^8 amu or less (roughly 10 atoms per bit). Some additional mass will be used for communications (sending and receiving signals from other computers) and power. In addition, there will probably be a "tool kit" of interchangeable tips that can be placed at the ends of the assembler's arms. When everything is added up, a small assembler, with arms, computer, and "tool kit," should weigh less than 10^9 amu.
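The volume and tape-storage figures in the last two paragraphs can be checked with a few lines of arithmetic; the ~12 amu per atom used for the tape estimate is an assumed carbon-like average, not a number from the text:

```python
# Sketch of the volume and tape-mass arithmetic in the two paragraphs above.
# The ~12 amu per atom (carbon-like) average is an assumption for illustration.

# Computational element: 20 nm^3 per lock, 10^4 locks
volume_nm3 = 20 * 10**4                      # 2e5 cubic nanometers
volume_cubic_microns = volume_nm3 * 1e-9     # 1 cubic micron = 1e9 nm^3
print(f"computer volume: {volume_nm3:.0e} nm^3 = {volume_cubic_microns:.4f} cubic microns")

# Tape storage: 100 kilobytes at ~10 atoms per bit
bits = 100 * 1000 * 8
atoms = bits * 10
mass_amu = atoms * 12                        # assumed ~12 amu per atom
print(f"tape storage: {atoms:.1e} atoms, roughly {mass_amu:.0e} amu")

# Output: ~0.0002 cubic microns for the computer and ~1e8 amu for the tape,
# matching the figures quoted above.
```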

E. coli (a common bacterium) weighs about 10^12 amu, so you're talking about an assembler being much larger than a ribosome, but much smaller than a bacterium.

It is also interesting to compare Drexler's architecture for an assembler with the von Neumann architecture for a self-replicating device. Von Neumann's "universal constructing automaton" had both a universal Turing machine to control its functions and a "constructing arm" to build the "secondary automaton." The constructing arm can be positioned in a two-dimensional plane, and the "head" at the end of the constructing arm is used to build the desired structure. While von Neumann's construction was theoretical (existing in a two-dimensional cellular automaton world), it still embodied many of the critical elements that now appear in the assembler.

Further work on self-replicating systems was done by NASA in 1980 in a report that considered the feasibility of implementing a self-replicating lunar manufacturing facility with conventional technology. One of their conclusions was that "The theoretical concept of machine duplication is well developed. There are several alternative strategies by which machine self-replication can be carried out in a practical engineering setting." They estimated it would require 20 years (and many billions of dollars) to develop such a system. While they were considering the design of a macroscopic self-replicating system (the proposed "seed" was 100 tons), many of the concepts and problems involved in such systems are similar regardless of size.

#15 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 November 2003 - 12:51 AM

Bill, what is the reference for the last post that begins with, "Although how long is unclear..."?

#16 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:07 AM

Chemists have been remarkably successful at synthesizing a wide range of compounds with atomic precision. Their successes, however, are usually small in size (with the notable exception of various polymers). Thus, we know that a wide range of atomically precise structures with perhaps a few hundreds of atoms in them are quite feasible. Larger atomically precise structures with complex three-dimensional shapes can be viewed as a connected sequence of small atomically precise structures. While chemists have the ability to precisely sculpt small collections of atoms, there is currently no ability to extend this capability in a general way to structures of larger size. An obvious structure of considerable scientific and economic interest is the computer. The ability to manufacture a computer from atomically precise logic elements of molecular size, and to position those logic elements into a three-dimensional volume with a highly precise and intricate interconnection pattern, would have revolutionary consequences for the computer industry.

A large atomically precise structure, however, can be viewed as simply a collection of small atomically precise objects which are then linked together. To build a truly broad range of large atomically precise objects requires the ability to create highly specific positionally controlled bonds. A variety of highly flexible synthetic techniques have been considered by Drexler.

#17 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:10 AM

Let's assume that positional control is available and that all reactions take place in a hard vacuum. The use of a hard vacuum allows highly reactive intermediate structures to be used, e.g., a variety of radicals with one or more dangling bonds. Because the intermediates are in a vacuum, and because their position is controlled (as opposed to solutions, where the position and orientation of a molecule are largely random), such radicals will not react with the wrong thing, for the very simple reason that they will not come into contact with the wrong thing.

It is difficult to maintain biological structures in a hard vacuum at room temperature because of water vapor and the vapor of other small compounds. By sufficiently lowering the temperature, however, it is possible to reduce the vapor pressure to effectively 0.

Normal solution-based chemistry offers a smaller range of controlled synthetic possibilities. For example, highly reactive compounds in solution will promptly react with the solution.

In addition, because positional control is not provided, compounds randomly collide with other compounds. Any reactive compound will collide randomly and react randomly with anything available (including itself). Solution-based chemistry requires extremely careful selection of compounds that are reactive enough to participate in the desired reaction, but sufficiently non-reactive that they do not accidentally participate in undesired side reactions.

Synthesis under these conditions is somewhat like placing the parts of a radio into a box, shaking it, and pulling out an assembled radio. The ability of chemists to synthesize what they want under these conditions is amazing.

Much of current solution-based chemical synthesis is devoted to preventing unwanted reactions. With assembler-based synthesis, such prevention is a virtually free by-product of positional control.
To illustrate positional synthesis in vacuum somewhat more concretely, let us suppose we wish to bond two compounds, A and B. As a first step, we could utilize positional control to selectively abstract a specific hydrogen atom from compound A. To do this, we would employ a radical that has two spatially distinct regions. One region would have a high affinity for hydrogen, while the other region could be built into a larger tip structure that would be subject to positional control. A simple example would be the 1-propynyl radical, which consists of three co-linear carbon atoms and three hydrogen atoms bonded to the sp3 carbon at the "base" end. The carbon at the radical end is triply bonded to the middle carbon, which in turn is bonded to the base carbon.

In a real abstraction tool, the base carbon would be bonded to other carbon atoms in a larger diamondoid structure which would provide positional control, and the tip might be further stabilized by a surrounding "collar" of unreactive atoms attached near the base that would limit lateral motions of the reactive tip.

The affinity of this structure for hydrogen is quite high. Propyne (the same structure but with a hydrogen atom bonded to the "radical" carbon) has a hydrogen-carbon bond dissociation energy in the vicinity of 132 kilocalories per mole. As a consequence, a hydrogen atom will prefer being bonded to the 1-propynyl hydrogen abstraction tool over being bonded to almost any other structure. By positioning the hydrogen abstraction tool over a specific hydrogen atom on compound A, we can perform a site-specific hydrogen abstraction reaction. This requires positional accuracy of roughly a bond length (to prevent abstraction of an adjacent hydrogen). Quantum chemical analysis of this reaction by Musgrave et al. shows that the activation energy for this reaction is low, and that for the abstraction of hydrogen from the hydrogenated diamond surface (modeled by isobutane) the barrier is very likely zero.

Having once abstracted a specific hydrogen atom from compound A, we can repeat the process for compound B. We can now join compound A to compound B by positioning the two compounds so that the two dangling bonds are adjacent to each other, and allowing them to bond.

This illustrates a reaction using a single radical. With positional control, we could also use two radicals simultaneously to achieve a specific objective. Suppose, for example, that two atoms A1 and A2, which are part of some larger molecule, are bonded to each other. If we were to position the two radicals X1 and X2 adjacent to A1 and A2, respectively, then a bonding structure of much lower free energy would be one in which the A1-A2 bond was broken and two new bonds, A1-X1 and A2-X2, were formed. Because this reaction involves breaking one bond and making two bonds (i.e., the reaction product is not a radical and is chemically stable), the exact nature of the radicals is not critical. Breaking one bond to form two bonds is a favored reaction for a wide range of cases. Thus, the positional control of two radicals can be used to break any of a wide range of bonds.

A range of other reactions involving a variety of reactive intermediate compounds (carbenes are among the more interesting ones) are proposed in Nanosystems: Molecular Machinery, Manufacturing, and Computation (John Wiley), along with the results of semi-empirical and ab initio quantum calculations and the available experimental evidence.

Another general principle that can be employed with positional synthesis is the controlled use of force. Activation energy, normally provided by thermal energy in conventional chemistry, can also be provided by mechanical means. Pressures of 1.7 megabars have been achieved experimentally in macroscopic systems. At the molecular level, such pressure corresponds to forces that are a large fraction of the force required to break a chemical bond. A molecular vise made of hard, diamond-like material, with a cavity designed with the same precision as the reactive site of an enzyme, can provide activation energy by the extremely precise application of force, thus causing a highly specific reaction between two compounds.
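To give a feel for the scale involved, here is a rough conversion of that pressure into a per-atom force; the ~0.04 square-nanometer atomic cross-section and the nanonewton comparison are assumed illustrative values, not figures from the text:

```python
# Rough conversion of a 1.7 megabar pressure into force on a single atomic site.
# The ~0.04 nm^2 cross-section (0.2 nm x 0.2 nm) is an assumed illustrative value.

pressure_pa = 1.7e6 * 1e5          # 1.7 megabar; 1 bar = 1e5 pascal
area_m2 = (0.2e-9) ** 2            # ~0.04 nm^2 per atomic site
force_newtons = pressure_pa * area_m2
print(f"force per atomic site: {force_newtons * 1e9:.1f} nN")
# ~7 nN, i.e. on the order of the few-nanonewton forces involved in rupturing
# a covalent bond, consistent with "a large fraction of the force required
# to break a chemical bond."
```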

Achieving the low activation energy needed in reactions involving radicals requires little force, allowing a wider range of reactions to be caused by simpler devices (e.g., devices that are able to generate only small forces).

Feynman said: "The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed, a development which I think cannot be avoided." Drexler has provided the substantive analysis required before this objective can be turned into a reality. We are nearing an era when we will be able to build virtually any structure that is specified in atomic detail and which is consistent with the laws of chemistry and physics. This has substantial implications for future medical technologies and capabilities.

A repair device is an assembler which is specialized for the repair of tissue in general, and frozen tissue in particular. We assume that a repair device has a mass of between 10^9 and 10^10 amu (e.g., we assume that a repair device might be as much as a factor of 10 more complicated than a simple assembler). This provides ample margin for increasing the capabilities of the repair device if this should prove necessary.

A single repair device of the kind described will not, by itself, have sufficient memory to store the programs required to perform all the repairs. However, if it is connected to a network (in the same way that current computers can be connected into a local area network), then a single large "file server" can provide the needed information for all the repair devices on the network. The file server can be dedicated to storing information: all the software and data that the repair devices will need. Almost the entire mass of the file server can be dedicated to storage. It can service many repair devices, and it can be many times the size of one device without greatly increasing system size. Combining these advantages implies the file server will have ample storage to hold whatever programs might be required during the course of repair. In a similar fashion, if further computational resources are required, they can be provided by "large" computer servers located on the network.
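Purely as a toy illustration of the division of labor being described (many small devices, one shared file server), with all class names and numbers hypothetical:

```python
# Toy sketch of the architecture described above: many small repair devices
# with little local memory fetch programs on demand from one large shared
# "file server." All names and sizes here are hypothetical illustrations.

class FileServer:
    """Holds all repair programs and data; almost all of its mass is storage."""
    def __init__(self):
        self.programs = {}                 # program name -> program data

    def store(self, name, program):
        self.programs[name] = program

    def fetch(self, name):
        return self.programs[name]

class RepairDevice:
    """A small device with little local memory; it relies on the network."""
    def __init__(self, server):
        self.server = server

    def repair(self, structure):
        # Fetch only the program needed for this structure, use it, discard it.
        program = self.server.fetch(structure)
        return f"applied '{program}' to {structure}"

server = FileServer()
server.store("membrane", "membrane-repair-procedure")
devices = [RepairDevice(server) for _ in range(1000)]   # many devices, one server
print(devices[0].repair("membrane"))
```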

One consequence of the existence of assemblers is that they are cheap. Because an assembler can be programmed to build almost any structure, it can in particular be programmed to build another assembler. Thus, self-reproducing assemblers should be feasible, and in consequence the cost of assemblers would be primarily the cost of the raw materials and energy required in their construction. Eventually (after amortization of possibly quite high development costs), the price of assemblers (and of the objects they build) should be no higher than the price of other complex structures made by self-replicating systems. Potatoes, which have a staggering design complexity involving tens of thousands of different genes and different proteins directed by many megabits of genetic information, cost well under a dollar per pound.

#18 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:12 AM

Now that we have some basics down, we can look at restoring the boxed-up computer to its previous functionality.

In principle we need only repair the frozen brain, for the brain is the most critical and important structure in the body. Faithfully repairing the liver (or any other secondary tissue) molecule by molecule (or perhaps atom by atom) appears to offer no benefit over simpler techniques such as replacement. The calculations and discussions that follow are therefore based on the size and composition of the brain. It should be clear that if repair of the brain is feasible, then the methods employed could (if we wished) be extended in the obvious way to the rest of the body.

The brain, like all the familiar matter in the world around us, is made of atoms. It is the spatial arrangement of these atoms that distinguishes an arm from a leg, the head from the heart, and sickness from health. This view of the brain is the framework within which we must work. Our problem, broadly stated, is that the atoms in a frozen brain are in the wrong places. We must put them back where they belong (with perhaps some minor additions and removals, as well as just rearrangements) if we expect to restore the natural functions of this most wonderful organ.

In principle, the most that we could usefully know about the frozen brain would be the coordinates of each and every atom in it. This knowledge would put us in the best possible position to determine where each and every atom should go. This knowledge, combined with a technology that allowed us to rearrange atomic structures in virtually any fashion consistent with the laws of chemistry and physics, would clearly let us restore the frozen structure to a fully functional and healthy state. In short, we must answer three questions. Where are the atoms? Where should they go? How do we move them from where they are to where they should be?

Regardless of the specific technical details involved, any method of restoring a person in suspension must answer these three questions, if only implicitly. Current efforts to freeze and then thaw tissue (e.g., experimental work aimed at freezing and then reviving sperm, kidneys, etc.) answer these three questions indirectly and implicitly. Ultimately, technical advances should allow us to answer these questions in a direct and explicit fashion.

Rather than directly considering these questions at once, we shall first consider a simpler problem: how would we go about describing the position of every atom if somehow this information were known to us? The answer to this question will let us better understand the harder questions.

How many bits does it take to describe one atom? Each atom has a location in three-space that we can represent with three coordinates: x, y, and z. Atoms are usually a few tenths of a nanometer apart. If we could record the position of each atom to within 0.01 nanometers, we would know its position accurately enough to know what chemicals it was part of, what bonds it had formed, and so on. The brain is roughly 0.1 meters across, so 0.01 nanometers is about 1 part in 10^10.

That is, we would have to know the position of the atom in each coordinate to within one part in ten billion. A number of this size can be represented with about 33 bits. There are three coordinates, x, y, and z, each of which requires 33 bits to represent, so the position of an atom can be represented in 99 bits. An additional few bits are needed to store the type of the atom (whether hydrogen, oxygen, carbon, etc.), bringing the total to slightly over 100 bits.

Thus, if we could store 100 bits of information for every atom in the brain, we could fully describe its structure in as exacting and precise a manner as we could possibly need. A memory device of this capacity should be quite literally possible. To quote Feynman: "Suppose, to be conservative, that a bit of information is going to require a little cube of atoms 5x5x5, that is, 125 atoms." This is indeed conservative. Single-stranded DNA already stores a single bit in about 16 atoms (excluding the water that it's in). It seems likely we can reduce this to only a few atoms. The work at IBM suggests a rather obvious way in which the presence or absence of a single atom could be used to encode a single bit of information (although some sort of structure for the atom to rest upon and some method of sensing the presence or absence of the atom will still be required, so we would actually need more than one atom per bit in this case).

If we conservatively assume that the laws of chemistry inherently require 10 atoms to store a single bit of information, we still find that the 100 bits required to describe a single atom in the brain can be represented by about 1,000 atoms. Put another way, the location of every atom in a frozen structure is (in a sense) already encoded in that structure in an analog format. If we convert from this analog encoding to a digital encoding, we will increase the space required to store the same amount of information. That is, an atom in three-space encodes its own position in the analog value of its three spatial coordinates. If we convert this spatial information from its analog format to a digital format, we inflate the number of atoms we need by perhaps as much as 1,000. If we digitally encoded the location of every atom in the brain, we would need 1,000 times as many atoms to hold this encoded data as there are atoms in the brain. This means we would require roughly 1,000 times the volume. The brain is somewhat over one cubic decimeter, so it would require somewhat over one cubic meter of material to encode the location of each and every atom in the brain in a digital format suitable for examination and modification by a computer.
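The arithmetic of the last few paragraphs can be checked directly; the 1.35 liter brain volume used below is an assumed round figure for "somewhat over one cubic decimeter":

```python
# Check of the bits-per-atom and total-volume arithmetic above. The 1.35 liter
# brain volume is an assumed round figure for "somewhat over one cubic decimeter."
import math

# Position resolution: 0.01 nm across a brain roughly 0.1 m across
positions_per_axis = 0.1 / 0.01e-9            # 1e10, i.e. 1 part in 10^10
bits_per_coordinate = math.log2(positions_per_axis)
print(f"{positions_per_axis:.0e} positions per axis -> "
      f"{bits_per_coordinate:.1f} bits per coordinate")   # ~33.2, call it 33

bits_per_atom = 3 * 33 + 4                    # x, y, z plus a few bits for atom type
atoms_per_bit = 10                            # conservative storage assumption above
atoms_to_describe_one_atom = bits_per_atom * atoms_per_bit
print(f"~{bits_per_atom} bits per atom -> ~{atoms_to_describe_one_atom} atoms "
      f"of storage per atom described")       # roughly 1,000 atoms

brain_volume_m3 = 1.35e-3                     # "somewhat over one cubic decimeter"
print(f"digital copy volume: ~{brain_volume_m3 * atoms_to_describe_one_atom:.2f} m^3")
# -> a bit over one cubic meter, as stated above
```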

While this much memory is remarkable by today’s standards, its construction clearly does not violate any laws of physics or chemistry. That is, it should literally be possible to store a digital description of each and every atom in the brain in a memory device that we will eventually be able to build.

While such a feat is remarkable, it is also much more than we need. Chemists usually think of atoms in groups called molecules. So you're talking about water as a molecule made of three atoms. If we describe each atom separately, we will require 100 bits per atom, or 300 bits total. If, however, we give the position of the oxygen atom and the orientation of the molecule, we need 99 bits for the location of the oxygen atom, 20 bits to describe the type of molecule ("water", in this case), and perhaps another 30 bits to give the orientation of the water molecule (10 bits for each of the three rotational axes). This means we can store the description of a water molecule in only 150 bits, instead of the 300 bits required to describe the three atoms separately. (The 20 bits used to describe the type of the molecule can describe up to 1,000,000 different molecules, many more than are present in the brain.)

As the molecule we are describing gets larger and larger, the savings in storage get bigger and bigger. A whole protein molecule will still require only 150 bits to describe, even though it is made of thousands of atoms. The canonical position of every atom in the molecule is specified once the type of the molecule (which occupies a mere 20 bits) is given. A large molecule might adopt many configurations, so it might at first seem that we'd require many more bits to describe it. However, biological macromolecules typically assume one favored configuration rather than a random configuration, and it is this favored configuration that I will describe. (Because proteins are always produced as a linear chain, they must of necessity be able to adopt an appropriate three-dimensional configuration by themselves. Usually, the correct configuration is unique; if it isn't, it is usually the case that the molecule will spontaneously cycle through appropriate configurations by itself, e.g., an ion channel will open and close at appropriate times regardless of whether it was initially started in the "open" or "closed" configuration. If any remaining cases should prove to be a problem, a few additional bits can be used to describe the specific configuration desired.)

We can do even better. The molecules in the brain are packed in next to each other. Having once described the position of one, we can describe the position of the next molecule as being such-and-such a distance from the first. If we assume that two adjacent molecules are within 10 nanometers of each other (a reasonable assumption), then we need only store 10 bits of "delta x," 10 bits of "delta y," and 10 bits of "delta z" rather than 33 bits of x, 33 bits of y, and 33 bits of z. This means our molecule can be described in only 10 + 10 + 10 + 20 + 30, or 80, bits.

We can compress this further by using various other clever stratagems (50 bits or less is quite achievable), but the essential point should be clear. We are interested in molecules, and describing a molecule takes fewer bits than describing an atom.
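A small sketch comparing the three encodings discussed above, using exactly the bit-widths given in the text:

```python
# Comparison of the per-molecule encodings discussed above, using the
# bit-widths given in the text.

bits_position = 3 * 33        # absolute x, y, z at 33 bits each
bits_type = 20                # enough to describe ~10^6 molecule types
bits_orientation = 3 * 10     # 10 bits per rotational axis
bits_delta_position = 3 * 10  # offset from the previous molecule (<10 nm away)

atom_by_atom_water = 3 * (bits_position + 4)             # three atoms, ~100 bits each
molecule_absolute = bits_position + bits_type + bits_orientation
molecule_delta = bits_delta_position + bits_type + bits_orientation

print(f"water, atom by atom:          ~{atom_by_atom_water} bits")   # ~300 bits
print(f"molecule, absolute position:  ~{molecule_absolute} bits")    # ~150 bits
print(f"molecule, delta-coded:         {molecule_delta} bits")       #   80 bits
```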

A further point will be obvious to any biologist. Describing the exact position and orientation of a hemoglobin molecule within a red blood cell is completely unnecessary. Each hemoglobin molecule bounces around within the red blood cell in a random fashion, and it really doesn't matter precisely where it is, nor exactly which way it's pointing. All we need do is say, "it's in that red blood cell!" So, too, for any other molecule that is floating at random in a "cellular compartment": we need only say which compartment it's in. Many other molecules, even though they do not diffuse freely within a cellular compartment, are still able to diffuse fairly freely over a significant range. The description of their position can be appropriately compressed.

While this reduces our storage requirements quite a bit, we could go much further. Instead of describing molecules, we could describe entire subcellular organelles. It seems excessive to describe a mitochondrion by describing each and every molecule in it. It would be sufficient simply to note the location and perhaps the size of the mitochondrion, for all mitochondria perform the same function: they produce energy for the cell. While there are indeed minor differences from mitochondrion to mitochondrion, these differences don't matter much and could reasonably be neglected.

We could go still further, and describe an entire cell with only a general description of the function it performs: this nerve cell has synapses of a certain type with that other cell, it has a certain shape, and so on. We might even describe groups of cells in terms of their function: this group of cells in the retina performs a “center surround” computation, while that group of cells performs edge enhancement.

Cherniak said: "On the usual assumption that the synapse is the necessary substrate of memory, supposing very roughly that (given anatomical and physiological noise) each synapse encodes about one binary bit of information, and a thousand synapses per neuron are available for this task: 10^10 cortical neurons x 10^3 synapses = 10^13 bits of arbitrary information (1.25 terabytes) that could be stored in the cerebral cortex."
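Cherniak's estimate is easy to reproduce:

```python
# Reproducing the synapse-count estimate quoted above.
neurons = 10**10              # cortical neurons
synapses_per_neuron = 10**3
bits_per_synapse = 1

total_bits = neurons * synapses_per_neuron * bits_per_synapse    # 10^13 bits
total_terabytes = total_bits / 8 / 1e12
print(f"{total_bits:.0e} bits = {total_terabytes:.2f} terabytes")
# -> 1e+13 bits = 1.25 terabytes, as quoted
```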

This kind of logic can be continued, but where does it stop? What is the most compact description which captures all the essential information? While many minor details of neural structure are irrelevant, our memories clearly matter. Any method of describing the human brain which resulted in loss of long-term memory has rather clearly gone too far. When we examine this quantitatively, we find that preserving the information in our long-term memory might require as little as 10^9 bits (somewhat over 100 megabytes). We can say rather confidently that it will take at least this much information to adequately describe an individual brain. The gap between this lower bound and the molecule-by-molecule upper bound is rather large, and it is not immediately obvious where in this range the true answer falls. I shall not attempt to answer this question, but will instead (conservatively) simply adopt the upper bound.

Determining when "permanent cessation of all vital functions" has occurred is not easy. Historically, premature declarations of death and subsequent burials alive have been a major problem. In the seventh century, Celsus wrote, "Democritus, a man of well merited celebrity, has asserted that there are, in reality, no characteristics of death sufficiently certain for physicians to rely upon."

Montgomery, reporting on the evacuation of the Fort Randall Cemetery, states that nearly two percent of those exhumed were buried alive.

Many people in the nineteenth century, alarmed by the prevalence of premature burial, requested, as part of the last offices, that wounds or mutilations be made to assure that they would not awaken. Embalming received a considerable impetus from the fear of premature burial.

Current criteria of "death" are sufficient to insure that spontaneous recovery in the mortuary or later is a rare occurrence. When examined closely, however, such criteria are simply a codified summary of symptoms that have proven resistant to treatment by available techniques. Historically, they derive from the fear that the patient will spontaneously recover in the morgue or crypt. There is no underlying theoretical structure to support them, only a continued accumulation of ad hoc procedures supported by empirical evidence. To quote Robert Veatch: "We are left with rather unsatisfying results. Most of the data do not quite show that persons meeting a given set of criteria have, in fact, irreversibly lost brain function. They show that the patients lose heart function soon, or that they do not 'recover.' Autopsy data are probably the most convincing. Even more convincing, though, is that over the years not one patient who has met the various criteria and then been maintained, for whatever reason, has been documented as having recovered brain function. Although this is not an elegant argument, it is reassuring." In short, current criteria are adequate to determine when current medical technology will fail to revive the patient, but are silent on the capabilities of future medical technology.

Each new medical advance forces a reexamination and possible change of the existing ad hoc criteria. The criteria used by the clinician today to determine “death” are dramatically different from the criteria used 100 years ago, and have changed more subtly but no less surely in the last decade. It seems almost inevitable that the criteria used 200 years from now will differ dramatically from the criteria commonly employed today.

These ever-shifting criteria for "death" raise an obvious question: is there a definition which will not change with advances in technology? A definition which does have a theoretical underpinning and is not dependent on the technology of the day?

The answer arises from the confluence and synthesis of many lines of work, ranging from information theory, neuroscience, physics, biochemistry, and computer science to the philosophy of mind and the evolving criteria historically used to define death.

When someone has suffered a loss of memory or mental function, we often say they "aren't themselves." As the loss becomes more serious and all higher mental functions are lost, we begin to use terms like "persistent vegetative state." While we will often refrain from declaring such an individual "dead," this hesitation does not usually arise because we view their present state as "alive" but because there is still hope of recovery to a healthy state with memory and personality intact. From a physical point of view, we believe there is a chance that their memories and personalities are still present within the physical structure of the brain, even though their behavior does not provide direct evidence for this. If we could reliably determine that the physical structures encoding memory and personality had in fact been destroyed, then we would abandon hope and declare the person dead.

Clearly, if we knew the coordinates of each and every atom in a person's brain, then we would (at least in principle) be in a position to determine with absolute finality whether their memories and personality had been destroyed in the information theoretic sense, or whether their memories and personality were preserved but could not, for some reason, be expressed. If such final destruction had taken place, then there would be little reason for hope. If such destruction had not taken place, then it would in principle be possible for a sufficiently advanced technology to restore the person to a fully functional and healthy state with their memories and personality intact.

Considerations like this lead to the information theoretic criterion of death. A person is dead according to the information theoretic criterion if their memories, personality, hopes, dreams, etc., have been destroyed in the information theoretic sense. That is, if the structures in the brain that encode memory and personality have been so disrupted that it is no longer possible in principle to restore them to an appropriate functional state, then the person is dead. If the structures that encode memory and personality are sufficiently intact that inference of the memory and personality is feasible in principle, and therefore restoration to an appropriate functional state is likewise feasible in principle, then the person is not dead.

A simple example from computer technology is in order. If a computer is fully functioning then its memory and “personality” are completely intact. If it fell out the seventh floor window to the concrete below, it would rapidly cease to function. However, its memory and “personality” would still be present in the pattern of magnetizations on the disk. With sufficient effort, we could completely repair the computer with its memory and personality intact.

In a similar fashion, as long as the structures that encode the memory and personality of a human being have not been irretrievably "erased" (to use computer jargon), restoration to a fully functional state with memory and personality intact is in principle feasible. Any technology-independent definition of "death" should conclude that such a person is not dead, for a sufficiently advanced technology could restore the person to a healthy state.

On the flip side of the coin, if the structures encoding memory and personality have suffered sufficient damage to obliterate them beyond recognition, then death by the information theoretic criterion has occurred. An effective method of insuring such destruction is to burn the structure and stir the ashes. This is commonly employed to insure the destruction of classified documents. Under the name of “cremation” it is also employed on human beings and is sufficient to insure that death by the information theoretic criterion takes place.

It is not obvious that the preservation of life requires the physical repair or even the preservation of the brain. Although the brain is made of neurons, synapses, protoplasm, DNA, and the like, most modern philosophers of consciousness view these details as no more significant than hair color or clothing style. Three samples follow.

The ethicist and prolific author Robert Veatch said, in Death, Dying, and the Biological Revolution, "An artificial brain is not possible at present, but a walking, talking, thinking individual who had one would certainly be considered living." The noted philosopher of consciousness Paul Churchland said, in Matter and Consciousness, "If machines do come to simulate all of our internal cognitive activities, to the last computational detail, to deny them the status of genuine persons would be nothing but a new form of racism." Hans Moravec, renowned roboticist and director of the Mobile Robot Lab at Carnegie Mellon, said, "Body-identity assumes that a person is defined by the stuff of which a human body is made. Only by maintaining continuity of body stuff can we preserve an individual person. Pattern-identity, conversely, defines the essence of a person, say myself, as the pattern and the process going on in my head and body, not the machinery supporting that process. If the process is preserved, I am preserved. The rest is mere jelly."

#19 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:17 AM

Restoration of the existing structure will be more difficult than building an artificial brain (particularly if the restoration is down to the molecular level). Despite this, we will examine the technically more exacting problem of restoration because it is more generally acceptable. Most people accept the idea that restoring the brain to a healthy state in a healthy body is a desirable objective. A range of increasingly less restrictive objectives is possible. To the extent that more relaxed criteria are acceptable, the technical problems are much less demanding. By deliberately adopting such a conservative position, we lay ourselves open to the valid criticism that the methods described here are unlikely to prove necessary. Simpler techniques that relax to some degree the philosophical constraints we have imposed might well be adopted in practice. In this letter we will eschew the more exotic possibilities (without, however, adopting any position on their desirability).

Another issue is not so much philosophical as emotional. Major surgery is not a pretty sight. There are few people who can watch a surgeon cut through living tissue with equanimity. In a heart transplant, for example, surgeons cut open the chest of a dying patient to rip out their dying heart, cut open a fresh cadaver to seize its still-beating heart, and then stitch the cadaver’s heart into the dying patient’s chest. Despite this (which would have been condemned in the Middle Ages as the blackest of black magic), we cheer the patient’s return to health and are thankful that we live in an era when medicine can save lives that were formerly lost.

The mechanics of examining and repairing the human brain, possibly down to the level of individual molecules, might not be the best topic for after-dinner conversation. While the details will vary with the specific method used, the process could likewise be described in lurid language that fails to capture the central issue: the restoration of a human being to full health.

A final issue that should be addressed is that of changes introduced by the process of restoration itself. The exact nature and extent of these changes will vary with the specific method. Current surgical techniques, for example, result in substantial tissue changes. Scarring, permanent implants, prosthetics, etc. are among the more benign outcomes. In general, methods based on a sophisticated ability to rearrange atomic structure should result in minimal undesired alterations to the tissue.

“Minimal changes” does not mean “no changes.” A modest amount of change in molecular structure, whatever technique is used, is both unavoidable and insignificant. The molecular structure of the human brain is in a constant state of change during life: molecules are synthesized, utilized, and catabolized in a continuous cycle. Cells continuously undergo slight changes in morphology. Cells also make small errors in building their own parts. For example, ribosomes make errors when they build proteins: about one amino acid in every 10,000 added to a growing polypeptide chain by a ribosome is incorrect. Changes and errors of a similar magnitude introduced by the process of restoration can reasonably be neglected.
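To get a feel for how much ordinary error this represents, here is a back-of-the-envelope sketch (assuming a typical protein of about 400 amino acids, a figure used later in this letter, and treating each incorporation as independent; only the 1-in-10,000 rate comes from the text):

```python
# Per-residue misincorporation rate quoted in the text.
error_rate = 1.0 / 10_000

# Assumed typical protein length of ~400 amino acids.
protein_length = 400

# Probability that at least one residue in the chain is wrong,
# treating each incorporation as an independent event.
p_at_least_one_error = 1 - (1 - error_rate) ** protein_length
print(f"{p_at_least_one_error:.1%}")  # roughly 4% of proteins carry at least one error
```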

It is normally a matter of small concern whether a physician of 2200 would or would not concur with the diagnosis of “death” applied by a contemporary physician to a specific patient in 2001. A physician of today who found himself in 1800 would be able to do little for a patient whose heart had stopped, even though he knew intellectually that an intensive care unit would likely be able to save the patient’s life. Intensive care units were simply not available in 1800, no matter what the physician knew was possible. So, too, with the physician of today when informed that a physician 200 years hence could save the life of the patient that he has just pronounced “dead.” There is nothing he can do, for he can only apply the technologies of today, except in the case of cryonic suspension.

In this one instance, we must ask not whether the person is dead by today’s (clearly technology-dependent) criteria, but whether the person is dead by all future criteria. In short, we must ask whether death by the information theoretic criterion has taken place. If it has not, then cryonic suspension is a reasonable (and indeed life-saving) course of action.

It is often said that “cryonics is freezing the dead.” It is more accurate to say that “cryonics is freezing the terminally ill; whether or not they are dead remains to be seen.”

The scientifically correct experiment to verify that cryonics works (or to demonstrate that it does not) is quite easy to describe:
1. Select N experimental subjects
2. Freeze them
3. Wait 100 years
4. See if the technology available 100 years from now can (or cannot) cure them

The drawback of this experimental protocol is obvious: we can’t get the results for 100 years. This problem is fundamental. The use of future technology is an inherent part of cryonics. Criticisms of cryonics based on the observation that freezing and thawing mammals with present technology does not work are irrelevant, for that is not what is being proposed.

This kind of problem is not entirely unique to cryonics. A new AIDS treatment might undergo clinical trials lasting a few years. The ethical dilemma posed by the terminally ill AIDS patient who might be assisted by the experimental treatment is well known. If the AIDS patient is given the treatment prior to completion of the clinical trials, it is possible that his situation could be made significantly worse. On the other hand, to deny a potentially life-saving treatment to someone who will soon die anyway is ethically untenable.

In the case of cryonics this is not an interim dilemma pending the (near term) outcome of clinical trials. It is a dilemma inherent in the nature of the proposal. Clinical trials, the bulwark of modern medical practice, are useless in resolving the effectiveness of cryonics in a timely fashion.

Further, cryonics (virtually by definition) is a procedure used only when the patient has exhausted all other available options. In current practice the patient is suspended after legal death; the fear that the treatment might prove worse than the disease is absent. Of course, suspension of the terminally ill patient somewhat before legal death has significant advantages. A patient suffering from a brain tumor might view suspension following the obliteration of his brain as significantly less desirable than suspension prior to such obliteration, even if the suspension occurred at a point in time when the patient was legally “alive.”

In such a case, it is inappropriate to disregard or override the patient’s own wishes. To quote the American College of Physicians Ethics Manual: “Each patient is a free agent entitled to full explanation and full decision-making authority with regard to his medical care. John Stuart Mill expressed it as: ‘Over himself, his own body and mind, the individual is sovereign.’ The legal counterpart of patient autonomy is self-determination. Both principles deny legitimacy to paternalism by stating unequivocally that, in the final analysis, the patient determines what is right for him.” “If the terminally ill patient is a mentally competent adult, he has the legal right to accept or refuse any form of treatment, and his wishes must be recognized and honored by his physician.”

If clinical trials cannot provide us with an answer, are there any other methods of evaluating the proposals? Can we do more than say that (a) cryonic suspension can do no harm (in keeping with the Hippocratic oath), and (b) it has some difficult-to-define chance of doing good?

Trying to prove something false is often the simplest method of clarifying exactly what is required to make it true. A consideration of the information theoretic criterion of death makes it clear that, from a technical point of view (ignoring various non-technical issues), there are two and only two ways in which cryonics can fail.

Cryonics will fail if:
1. Information theoretic death occurs prior to reaching liquid nitrogen temperature.
2. Repair technology that is feasible in principle is never developed and applied in practice, even after the passage of centuries.

The first failure criterion can only be considered against the background of current understanding of freezing damage, ischemic injury and mechanisms of memory and synaptic plasticity. Whether or not memory and personality are destroyed in the information theoretic sense by freezing and the ischemic injury that might precede it can only be answered by considering both the physical nature of memory and the nature of the damage to which the brain is subjected before reaching the stability provided by storage in liquid nitrogen.

As you may readily appreciate, the following information will consider only the most salient points that are of the greatest importance in determining overall feasibility.

This is necessarily too short to consider the topics in anything like full detail, but should provide sufficient information to give you an overview of the relevant issues.

There is extensive literature on the damage caused by both cooling and freezing to liquid nitrogen temperature. We will briefly review the nature of such damage and consider whether it is likely to cause information theoretic death. Damage per se is not meaningful except to the extent that it obscures or obliterates the nature of the original structure.

While cooling tissue to around 0 degrees C creates a number of problems, the ability to cool mammals to this temperature or even slightly below (with no ice formation) using current methods, followed by subsequent complete recovery, shows that this problem can be controlled and is unlikely to cause information theoretic death.

Further, some freezing damage in fact occurs upon re-warming. Current work supports this idea because the precise method used to re-warm tissue can strongly affect the success or failure of present experiments even when freezing conditions are identical. If we presume that future repair methods avoid the step of re-warming the tissue prior to analysis and instead analyze the tissue directly in the frozen state, then this source of damage will be eliminated. Several current methods can be used to distinguish between damage that occurs during freezing and damage that occurs during thawing. At present, it seems likely that some damage occurs during each process. While significant, this damage does not induce structural changes which obliterate the cell.

Many types of tissue, including human embryos, sperm, skin, bone, red and white blood cells, bone marrow, and others, have been frozen in liquid nitrogen, thawed, and have recovered. This is not true (as you have pointed out) of whole mammals. The brain seems more resistant than most organs to freezing damage. Recovery of overall brain function following freezing to liquid nitrogen temperature has not been demonstrated, although recovery of unit-level electrical activity following freezing to -50 degrees C has been demonstrated.

Perhaps the most dramatic injury caused by freezing is macroscopic fractures. Tissue becomes extremely brittle at or below the “glass transition temperature.” Continued cooling to the temperature of liquid nitrogen creates tensile stress in the glassy material. This is exacerbated by the skull, which inhibits shrinkage of the cranial contents. This stress causes readily evident macroscopic fractures in the tissue. Fractures that occur below the glass transition temperature result in very little information loss. While dramatic, this damage is unlikely to cause or contribute to information theoretic death.

The damage most commonly associated with freezing is that caused by ice. Contrary to common belief, freezing does not cause cells to burst open like water pipes on a cold winter’s day. Quite the contrary: ice formation takes place outside the cells, in the extracellular region. This is largely due to the presence of extracellular nucleating agents on which ice can form, and the comparative absence of intracellular nucleating agents. Consequently the intracellular liquid supercools.

Extracellular ice formation causes an increase in the concentration of the extracellular solute; that is, the chemicals in the extracellular liquid are increased in concentration by the decrease in available water. The immediate effect of this increased extracellular concentration is to draw water out of the cells by osmosis. Thus, freezing dehydrates cells.
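As a rough illustration of the concentration effect, the sketch below assumes (simplifying heavily) that solutes are excluded from the ice and remain in the shrinking unfrozen liquid, so their concentration scales as the reciprocal of the unfrozen fraction; the ice fractions used are the damage thresholds quoted elsewhere in this letter.

```python
def concentration_factor(ice_fraction: float) -> float:
    """Factor by which extracellular solute is concentrated if a given
    fraction of the initial liquid volume has been converted to (pure) ice."""
    if not 0 <= ice_fraction < 1:
        raise ValueError("ice_fraction must be in [0, 1)")
    return 1.0 / (1.0 - ice_fraction)

# Thresholds mentioned in the text: ~40% ice is often damaging for many
# tissues, while the brain tolerates roughly 60%, and freeze-tolerant
# animals reach their limit near 65%.
for f in (0.40, 0.60, 0.65):
    print(f"{f:.0%} frozen -> solutes ~{concentration_factor(f):.1f}x more concentrated")
```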

Damage can be caused by the extracellular ice, by the increased concentration of solute, or by the reduced temperature itself. All three mechanisms can play a role under appropriate conditions.

#20 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:21 AM

The damage caused by extracellular ice formation depends largely on the fraction of the initial liquid volume that is converted to ice. (The initial liquid volume might include a significant amount of cryoprotectant as well as water.) When the fraction of the liquid volume converted to ice is small, damage is often reversible even by current techniques. In many cases, conversion of significantly more than 40% of the liquid volume to ice is damaging. The brain is more resistant to such injury: conversion of up to 60% of the liquid volume in the brain to ice is associated with recovery of neuronal function. Storey and Storey said, “If the cell volume falls below a critical minimum, then the bilayer of phospholipids in the membrane becomes so greatly compressed that its structure breaks down. Membrane transport functions cannot be maintained, and breaks in the membrane spill cell contents and provide a gate for ice to propagate into the cell. Most freeze-tolerant animals reach the critical minimum cell volume when about 65% of total body weight is sequestered as ice.”

Appropriate treatment with cryoprotectants (in particular glycerol) prior to freezing will keep 40% or more of the liquid volume from being converted to ice even at liquid nitrogen temperatures.

Fahy has said, “All of the postulated problems in cryobiology (cell packing, channel size constraints, optimal cooling rate differences for mixed cell populations, osmotically mediated injury, and the rest) can be solved in principle by the selection of a sufficiently high concentration of cryoprotectant prior to freezing. In the extreme case, all ice formation could be suppressed completely by using a concentration of agent sufficient to ensure vitrification of the biological system in question. Unfortunately, a concentration of cryoprotectant sufficiently high to protect the system from all freezing injury would itself be injurious. It should be possible to trade the mechanical injury caused by ice formation for the biochemical injury caused by the cryoprotectant, which is probably advantageous.” Current suspension protocols at Alcor call for the introduction of greater than 6 molar glycerol. Both venous and arterial glycerol concentrations have exceeded 6 molar in several recent suspensions. If this concentration of cryoprotectant is also reaching the tissues, it should keep over 60% of the initial liquid volume from being converted to ice at liquid nitrogen temperatures.

“Dehydration and concentration of solutes past some critical level may disrupt metabolism and denature cell proteins and macromolecular complexes.” The functional losses caused by this mechanism seem unlikely to result in significant information loss. One qualification to this conclusion is that cell membranes appear to be weakened by increased solute concentration. To the extent that structural elements are weakened by increased solute concentrations the vulnerability of the cell to structural damage is increased.

Finally, denaturing of proteins might occur at low temperature. In this process the tertiary and perhaps even secondary structure of the protein might be disrupted leading to significant loss of protein function. However, the primary structure of the protein (the linear sequence of amino acids) is still intact and so inference of the correct functional state of the protein is in principle trivial. Further, the extent of protein denaturation caused by freezing must necessarily be limited given the relatively wide range of tissues that have been successfully frozen and thawed.

Intracellular freezing is another damaging event which might occur. If cooling is slow enough to allow the removal of most of the water from the cell’s interior by osmosis, then the high concentration of solute will prevent the small amount of remaining water from freezing. If cooling is too rapid, there will be insufficient time for the water within the cell to escape before it freezes. In the latter case, the intracellular contents are supercooled and freezing is abrupt (the cell “flashes”). While this correlates with a failure to recover function, it is difficult to believe that rapid freezing results in significant loss of information.

Intracellular freezing is largely irrelevant to cryonic suspensions because of the slow freezing rates dictated by the large mass of tissue being frozen. Such freezing rates are too slow for intracellular freezing to occur except when membrane rupture allows extracellular ice to penetrate the intracellular region. If the membrane does fail, one would expect the interior of the cell to flash.

Spontaneous recovery of function following freezing to liquid nitrogen temperatures using the best currently available techniques appears unlikely for mammalian organs, including the brain. Despite this, the level of structural preservation can be quite good. The complexity of the systems that have been successfully frozen and rewarmed is remarkable, and supports the claim that good structural preservation is often achieved. The mechanisms of damage that have been postulated in the literature are sufficiently subtle that information loss is likely to be small; that is, death by the information theoretic criterion is unlikely to have occurred. Further research aimed specifically at addressing this issue is needed.

Although modern cryonic suspensions can involve minimal delay and future suspensions might eliminate delay entirely, delay is sometimes unavoidable. The most significant type of damage that such delay causes is ischemic injury.

Broadly speaking, the structure of the human brain remains intact for several hours or more following the cessation of blood flow, or ischemia. The tissue changes that occur subsequent to ischemia have been well studied. There have also been studies of the “postmortem” changes that occur in tissue. Perhaps the most interesting of these studies was conducted by Kalimo.

In order to study immediate “postmortem” changes, Kalimo perfused the brains of 5 patients with aldehydes within half an hour of “clinical death.” Subsequent examination of the preserved brain tissue with both light and electron microscopy showed the level of structural preservation. In two cases, the changes described were consistent with approximately one to two hours of ischemic injury. (Ischemic injury often begins prior to declaration of “clinical death,” hence the apparently longer ischemic period compared with the interval following declaration of death and prior to perfusion of fixative.) Physical preservation of cellular structure and ultrastructure was excellent. It is difficult to avoid the conclusion that information loss was negligible in these cases. In two other cases, elevated intraparenchymal pressure prevented perfusion with the preservative, thus preventing examination of the tissue.

Without such an examination, it is difficult to draw conclusions about the extent of information loss. In the final case, “the most obvious abnormality was the replacement of approximately four-fifths of the parenchyma of the brain by a fluid-containing cavity that was lined by what seemed to be very thin remnants of the cerebral cortex.” Cryonic suspension in this last case would not be productive.

As an aside, the vascular perfusion of chemical fixatives to improve stability of tissue structures, prior to perfusion with cryoprotectants and subsequent storage in liquid nitrogen, would seem to offer significant advantages. The main issue that would require resolution prior to such use is the risk that fixation might obstruct circulation, thus impeding subsequent perfusion with cryoprotectants. Other than this risk, the use of chemical fixatives (such as aldehydes, and in particular glutaraldehyde) would reliably improve structural preservation and would be effective at halting almost all deterioration within minutes of perfusion. The utility of chemical preservation has been discussed by Drexler and by Olson, among others.

The events following ischemia have been reasonably well characterized. Following experimental induction of ischemia in cats, Kalimo found “The resulting cellular alterations were homogeneous and uniform throughout the entire brain: they included early chromatin clumping, gradually increasing electron lucency of the cell sap, distention of endoplasmic reticulum and Golgi cisternae, transient mitochondrial condensation followed by swelling and appearance of flocculent densities, and dispersion of ribosomal rosettes.” Energy levels within the cell drop sharply within a few minutes of cessation of blood flow. The chromatin clumping is a reversible early change. The loss of energy results fairly quickly in failure to maintain transmembrane concentration gradients (for example, the Na+/K+ pump stops working, resulting in increased extracellular K+). The uneven equilibration of concentration gradients results in changes in osmotic pressure with consequent flows of water. Swelling of mitochondria and other structures occurs. The appearance of “flocculent densities” in the mitochondria is thought to indicate severe internal membrane damage which is “irreversible.”

Ischemic changes do not appear to result in any damage that would prevent repair (e.g., changes that would result in significant loss of information about structure) for at least a few hours. Temporary functional recovery has been demonstrated in optimal situations after as long as 60 minutes of total ischemia.

Hossmann, for example, reported results on 143 cats subjected to one hour of normothermic global brain ischemia. “Body temperature was maintained at 36 degrees to 37 degrees C with a heating pad. Completeness of ischemia was tested by injecting Xe into the innominate artery immediately before vascular occlusion and monitoring the absence of decay of radioactivity from the head during ischemia, using external scintillation detectors. In 50% of the animals, even major spontaneous EEG activity returned after ischemia. One cat survived for 1 year after one hour of normothermic cerebrocirculatory arrest with no electrophysiologic deficit and with only minor neurologic and morphologic disturbances.” Functional recovery is a more stringent criterion than the more relaxed information theoretic criterion, which merely requires adequate structural preservation to allow inference about the preexisting structure. Reliable identification of the various cellular structures is possible hours (and sometimes even days) later. Detailed descriptions of ischemia and its time course also clearly show that cooling substantially slows the rate of deterioration. Thus, even moderate cooling “postmortem” slows deterioration significantly.

The theory that lysosomes (“suicide bags”) rupture and release digestive enzymes into the cell that result in rapid deterioration of chemical structure appears to be incorrect. More broadly, there is a body of work suggesting that structural deterioration does not take place rapidly.

Kalimo said, “It is noteworthy that after 120 min of complete blood deprivation we saw no evidence of membrane lysosomal breakdown, an observation which has also been reported in studies of in vitro lethal cell injury, and in regional cerebral ischemia.”

Hawkins said, “Lysosomes did not rupture for approximately 4 hours and in fact did not release the fluorescent dye until after reaching the postmortem necrotic phase of injury. The original suicide bag mechanism of cell damage thus is apparently not operative in the systems studied. Lysosomes appear to be relatively stable organelles.”

Morrison and Griffin said, “We find that both rat and human cerebellar mRNAs are surprisingly stable under a variety of postmortem conditions and that biologically active, high-molecular-weight mRNAs can be isolated from postmortem tissue. A comparison of RNA recoveries from fresh rat cerebella and cerebella exposed to different postmortem treatments showed that 83% of the total cytoplasmic RNAs present immediately postmortem was recovered when rat cerebella were left at room temperature for 16 hours postmortem, and 90% was recovered when the cerebella were left at 9 degrees C for this length of time. In neither case was RNA recovery decreased by storing the cerebella in liquid nitrogen prior to analysis. Control studies on protein stability in postmortem rat cerebella show that the spectrum of abundant proteins is also unchanged after up to 16 hours at room temperature.”

The ability of DNA to survive for long periods was dramatically illustrated by its recovery and sequencing from a 17- to 20-million-year-old magnolia leaf. “Sediments and fossils seem to have accumulated in an anoxic lake-bottom environment; they have remained unoxidized and water-saturated to the present day.” “Most leaves are preserved as compression fossils, commonly retaining intact cellular tissue with considerable ultrastructural preservation, including cell walls, leaf phytoliths, and intracellular organelles, as well as many organic constituents such as flavonoids and steroids. There is little evidence of postdepositional (diagenetic) change in many of the leaf fossils.”

Gilden reports that “nearly two-thirds of all tissue acquired in less than six hours after death was successfully grown, whereas only one-third of all tissue acquired more than six hours after death was successfully grown in tissue culture.” While it would be incorrect to conclude that widespread cellular survival occurred based on these findings, they do show that structural deterioration is insufficient to disrupt function in at least some cells. This supports the idea that structural deterioration in many other cells should not be extensive.

It is currently possible to initiate suspension immediately after legal death. In favorable circumstances legal death can be declared upon cessation of heartbeat in an otherwise revivable terminally ill patient who wishes to die a natural death and has refused artificial means of prolonging the dying process. In such cases, the ischemic interval can be short (two or three minutes). It is implausible that ischemic injury would cause information theoretic death in such a case.

As the ischemic interval lengthens, the level of damage increases. It is not clear exactly when information loss begins or when information theoretic death occurs. Present evidence supports but does not prove the hypothesis that information theoretic death does not occur for at least a few hours following the onset of ischemia. Quite possibly many hours of ischemia can be tolerated. Freezing of tissue within that time frame followed by long term storage in liquid nitrogen should provide adequate preservation of structure to allow repair.

It is essential to ask whether the important structural elements underlying “behavioral plasticity” (human memory and human personality) are likely to be preserved by cryonic suspension. Clearly, if human memory is stored in a physical form which is obliterated by freezing, then cryonic suspension won’t work. In this section we briefly consider a few major aspects of what is known about long term memory and whether known or probable mechanisms are likely to be preserved by freezing.

It appears likely that short term memory, which can be disrupted by trauma or a number of other processes, will not be preserved by cryonic suspension. Consolidation of short term memory into long term memory is a process that takes several hours. We will focus attention exclusively on long term memory, for this is far more stable. While the retention of short term memory cannot be excluded (particularly if chemical preservation is used to provide rapid initial fixation), its greater fragility renders this significantly less likely.

Seeing the Mona Lisa or Niagara Falls changes us, as does seeing a favorite television show or reading a good book. These changes are both figurative and literal, and it is the literal (or neuroscientific) changes that we are interested in: what are the physical alterations that underlie memory?

Briefly, the available evidence supports the idea that memory and personality are stored in identifiable physical changes in the nerve cells, and that alterations in the synapses between nerve cells play a critical role.

Shepherd in “Neurobiology” said: “The concept that brain functions are mediated by cell assemblies and neuronal circuits has become widely accepted, as will be obvious to the reader of this book, and most neurobiologists believe that plastic changes at synapses are the underlying mechanisms of learning and memory.”

Kupfermann in “Principles of Neural Science” said: “Because of the enduring nature of memory, it seems reasonable to postulate that in some way the changes must be reflected in long-term alterations of the connections between neurons.”

Eric R. Kandel in “Principles of Neural Science” said: “Morphological changes seem to be a signature of the long-term process. These changes do not occur with short-term memory. Moreover, the structural changes that occur with the long-term process are not restricted to growth. Long-term habituation leads to the opposite change: a regression and pruning of synaptic connections. With long-term habituation, where the functional connections between the sensory neurons and motor neurons are inactivated, the number of terminals per neuron is correspondingly reduced by one-third and the proportion of terminals with active zones is reduced from 40% to 10%.”

Squire in “Memory and Brain” said: “The most prevalent view has been that the specificity of stored information is determined by the location of synaptic changes in the nervous system and by the pattern of altered neuronal interactions that these changes produce. This idea is largely accepted at the present time, and will be explored further in this and succeeding chapters in the light of current evidence.”

Lynch in “Synapses, Circuits, and the Beginnings of Memory” said: “The question of which components of the neuron are responsible for storage is vital to attempts to develop generalized hypotheses about how the brain encodes and makes use of memory. Since individual neurons receive and generate thousands of connections and hence participate in what must be a vast array of potential circuits, most theorists have postulated a central role for synaptic modifications in memory storage.”

Turner and Greenough said: “Two nonmutually exclusive possible mechanisms of brain information storage have remained the leading theories since their introduction by Ramón y Cajal and Tanzi. The first hypothesis is that new synapse formation, or selected synapse retention, yields altered brain circuitry which encodes new information. The second is that altered synaptic efficacy brings about similar change.”

Greenough and Bailey, in “The Anatomy of a Memory: Convergence of Results across a Diversity of Tests,” say: “More recently it has become clear that the arrangement of synaptic connections in the mature nervous system can undergo striking changes even during normal functioning. As the diversity of species and plastic processes subjected to morphological scrutiny has increased, convergence upon a set of structurally detectable phenomena has begun to emerge. Although several aspects of synaptic structure appear to change with experience, the most consistent potential substrate for memory storage during behavioral modification is an alteration in the number and/or pattern of synaptic connections.”

#21 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:23 AM

It seems likely that human long term memory is encoded by detectable physical changes in cell structure and in particular in synaptic structure.

What, exactly, might these changes be? Very strong statements are possible in simple “model systems.” Bailey and Chen, for example, identified several specific changes in synaptic structure that encoded learned memories in sea slugs (Aplysia californica) by direct examination of the changed synapses with an electron microscope.

“Using horseradish peroxidase (HRP) to label the presynaptic terminals (varicosities) of sensory neurons and serial reconstruction to analyze synaptic contacts, we compared the fine structure of identified sensory neuron synapses in control and behaviorally modified animals. Our results indicate that learning can modulate long-term synaptic effectiveness by altering the number, size, and vesicle complement of synaptic active zones.”

Examination by transmission electron microscopy in vacuum of sections 100 nanometers (several hundred atomic diameters) thick recovers little or no chemical information. Lateral resolution is at best a few nanometers (tens of atomic diameters), and depth information (within the 100 nanometer section) is entirely lost. Specimen preparation included removal and desheathing of the abdominal ganglion, which was then bathed in seawater for 30 minutes before impalement and intrasomatic pressure injection of HRP. Two hours later the ganglia were fixed, histochemically processed, and embedded. Following this treatment, Bailey and Chen concluded that “clear structural changes accompany behavioral modification, and those changes can be detected at the level of identified synapses that are critically involved in learning.”

The following observations about this work seem in order. First, several different types of changes were present. This provides redundant evidence of synaptic alteration. Inability to detect one type of change, or obliteration of one specific type of change, would not be sufficient to prevent recovery of the “state” of the synapse. Second, examination by electron microscopy is much cruder than the techniques considered here, which literally propose to analyze every molecule in the structure. Further alterations in synaptic chemistry will be detectable when the synapse is examined in more detail at the molecular level. Third, there is no reason to believe that freezing would obliterate the structure beyond recognition.

Such satisfying evidence is at present confined to “model systems.” What can we conclude about more complex systems, e.g., humans? Certainly, it seems safe to say that synaptic alterations are also used in the human memory system, that synaptic changes of various types take place when the synapse “remembers” something, and that the changes involve alterations in at least many thousands of molecules and probably involve mechanisms similar to those used in lower organisms (evolution is notoriously conservative).

It seems likely that knowledge of the morphology and connectivity of nerve cells, along with some specific knowledge of the biochemical state of the cells and synapses, would be sufficient to determine memory and personality. Perhaps, however, some fundamentally different mechanism is present in humans? Even if this were to prove true, any such system would be sharply constrained by the available evidence. It would have to persist over the lifetime of a human being, and thus would have to be quite stable. It would have to tolerate the natural conditions encountered by humans and the experimental conditions to which primates have been subjected without loss of memory and personality (presuming that the primate brain is similar to the human brain). And finally, it would almost certainly involve changes in tens of thousands of molecules to store each bit of information. Functional studies of human long term memory suggest it has a capacity of roughly 10^9 bits, which suggests that, independent of the specific mechanism, a great many molecules are required to remember each bit. It even suggests that many synapses are used to store each bit (recall that there are perhaps 10^15 synapses, which implies some 10^6 synapses per bit of information stored in long term memory).
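A minimal sketch of that arithmetic follows; the 10^9-bit capacity and the 10^4 molecules per synaptic change are assumptions chosen to match the round figures above, not measurements.

```python
synapses = 1e15          # rough count of synapses in the human brain (from the text)
memory_bits = 1e9        # assumed functional capacity of long term memory, in bits

synapses_per_bit = synapses / memory_bits
print(f"{synapses_per_bit:.0e} synapses per stored bit")   # ~1e+06

# Even if each synaptic change involves only tens of thousands of molecules,
# that still implies an enormous number of molecules per remembered bit.
molecules_per_synapse = 1e4
print(f"{synapses_per_bit * molecules_per_synapse:.0e} molecules per stored bit")  # ~1e+10
```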

Given that future technology will allow the molecule by molecule analysis of the structures that store memory, and given that such structures are large on the molecular scale (involving at least tens of thousands of molecules each) then it appears unlikely that such structures will survive the lifetime of the individual only to be obliterated beyond recognition by freezing. Freezing is unlikely to cause information theoretic death.

Even if information theoretic death has not occurred, a frozen brain is not a healthy structure. While repair might be feasible in principle, it would be comforting to have at least some idea about how such repairs might be done in practice. As long as we assume that the laws of physics, chemistry, and biochemistry with which we are familiar today will still form the basic framework within which repair will take place in the future, we can draw well-founded conclusions about the capabilities and limits of any such repair technology.

Once again, to decide whether or not to pursue cryonic suspension we must answer one question: will restoration of frozen tissue to a healthy and functional state ever prove feasible? And again, if the answer is yes, then cryonics will save lives. And once again, if the answer is no, then it can be ignored. As discussed earlier, effectively the most that we can usefully learn about frozen tissue is the type, location, and orientation of each molecule. If this information is sufficient to permit inference of the healthy state with memory and personality intact, then repair is in principle feasible. The most that future technology could offer is the ability to restore the structure whenever such restoration was feasible in principle. I propose that just this limit will be closely approached by future advances in technology.

It is unreasonable to think that my current proposal will in fact form the basis for future repair methods for two reasons.

First, better technologies and approaches are likely to be developed. Necessarily, we must restrict ourselves to methods and techniques that can be analyzed and understood using the currently understood laws of physics and chemistry. Future scientific advances, not anticipated at this time, are likely to result in cheaper, simpler, or more reliable methods. Given the history of science and technology to date, the probability of future unanticipated advances is good.

Second, this proposal was selected because of its conceptual simplicity and its obvious power to restore virtually any structure where restoration is in principle feasible. These are unlikely to be design objectives of future systems. Conceptual simplicity is advantageous when the resources available for the design process are limited. Future design capabilities can reasonably be expected to outstrip current capabilities, and the efforts of a large group can reasonably be expected to allow analysis of much more complex proposals than considered here.

Further, future systems will be designed to restore specific individuals suffering from specific types of damage, and can therefore use specific methods that are less general but which are more efficient or less costly for the particular type of damage involved. It is easier for a general purpose proposal to rely on relatively simple and powerful methods, even if those methods are less efficient.

#22 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:26 AM

Why discuss a powerful, general purpose method that is inefficient, fails to take advantage of the specific types of damage involved, and will almost certainly be superseded by future technology?

The purpose of this post is not to lay the groundwork for future systems, but to answer the question: will cryonics work? The value of cryonics is clearly and decisively based on technical capabilities that will not be developed for several decades (or longer). If some relatively simple proposal appears likely to work, then the value of cryonics is established. Whether or not that simple proposal is actually used is irrelevant. The fact that it could be used, in the improbable case that all other technical progress and all other approaches fail, is sufficient to let us decide today whether or not cryonic suspension is of value. Long-range planning does not deal with future decisions, but with the future of present decisions.

The philosophical issues involved in this type of long range technical forecasting, and the methodologies appropriate to this area, are addressed by work in “exploratory engineering.” The purpose of exploratory engineering is to provide lower bounds on future technical capabilities based on currently understood scientific principles. A successful example is Konstantin Tsiolkovsky’s forecast around the turn of the century that multistaged rockets could go to the moon. His forecast was based on well understood principles of Newtonian mechanics. While it did not predict when such flights would take place, nor who would develop the technology, nor the details of the Saturn V booster, it did predict that the technical capability was feasible and would eventually be developed. In a similar spirit, we will discuss the technical capabilities that should be feasible and what those capabilities should make possible.

Conceptually, the approach that we will follow is simple.

1. Determine coordinates and orientations of all major molecules, and store this information in a data base.
2. Analyze the information stored in the data base with a computer program which determines what changes in the existing structure should be made to restore it to a healthy and functional state.
3. Take the original molecules and move them, one at a time, back to their correct locations.
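A purely illustrative sketch of what such a database record and repair loop might look like is given below; every name, field, and unit here is hypothetical, chosen only to make the three steps concrete, and step 2 is left as a trivial placeholder rather than a real analysis program.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MoleculeRecord:
    """One entry in the hypothetical structural database (step 1)."""
    species: str                              # e.g. "lipid" or "protein:tubulin"
    position: Tuple[float, float, float]      # coordinates, in nanometers
    orientation: Tuple[float, float, float]   # e.g. Euler angles, in radians

def plan_repairs(db: List[MoleculeRecord]) -> List[MoleculeRecord]:
    """Step 2 (placeholder): decide where each molecule should go in the
    restored, healthy structure. Here it simply returns the records unchanged."""
    return db

def restore(db: List[MoleculeRecord]) -> None:
    """Step 3 (placeholder): move each molecule back to its target location."""
    for record in plan_repairs(db):
        # A real system would direct a repair device here; we just report the plan.
        print(f"place {record.species} at {record.position}")

restore([MoleculeRecord("protein:tubulin", (1.0, 2.0, 3.0), (0.0, 0.0, 0.0))])
```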

You will no doubt agree that this proposal is conceptually simple, but might be concerned about a number of technical issues. The major issues are addressed in the following analysis.

An obvious inefficiency of this approach is that it will take apart and then put back together again structures and whole regions that are in fact functional, or only slightly damaged. Simply leaving a functional region intact, or using relatively simple special case repair methods for minor damage would be faster and less costly. Despite these obvious drawbacks, the general purpose approach demonstrates the principles involved. As long as the inefficiencies are not so extreme that they make the approach infeasible or uneconomical in the long run, then this simpler approach is easier to evaluate.

The brain has a volume of 1350 cubic centimeters (about one and a half quarts) and a weight of slightly more than 1400 grams (about three pounds). The smallest normal human brain weighed 1100 grams, while the largest weighed 2050 grams. It is almost 80% water by weight. The remaining 20% is slightly less than 40% protein, slightly over 50% lipids, and a few percent of other material. Thus, an average brain has slightly over 100 grams of protein, about 175 grams of lipids, and some 30 to 40 grams of “other stuff.”

If we are considering restoration down to the molecular level, an obvious question is: how many molecules are there? We can easily approximate the answer, starting with the proteins. An “average” protein molecule has a molecular weight of about 50,000 amu. One mole of “average” protein is 50,000 grams, so the 100 grams of protein in the brain is 100/50,000, or 0.002 moles. One mole is 6.02 x 10^23 molecules, so 0.002 moles is 1.2 x 10^21 molecules.

We proceed in the same way for the lipids. A “typical” lipid might have a molecular weight of 500 amu, which is 100 times less than the molecular weight of a protein. This implies the brain has about 175/500 x 6.02 x 10^23, or about 2 x 10^23, lipid molecules.

Finally, water has a molecular weight of 18, so there will be about 1400 x 0.8 / 18 x 6.02 x 10^23, or about 4 x 10^25, water molecules in the brain. In many cases a substantial percentage of the water will have been replaced with cryoprotectant during the process of suspension (glycerol at a concentration of 4 molar or more, for example). Both water and glycerol will be treated in bulk, and so the change from water molecules to glycerol (or other cryoprotectants) should not have a significant impact on any calculations.
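The counts above follow directly from the quoted masses and molecular weights; here is a minimal sketch of the arithmetic, using the text's round figures (the variable names are mine).

```python
AVOGADRO = 6.02e23          # molecules per mole

brain_mass_g = 1400.0
protein_mass_g = 100.0      # from the text
lipid_mass_g = 175.0        # from the text
water_mass_g = brain_mass_g * 0.8

protein_mw = 50_000.0       # "average" protein, amu (grams per mole)
lipid_mw = 500.0            # "typical" lipid, amu
water_mw = 18.0

proteins = protein_mass_g / protein_mw * AVOGADRO   # ~1.2e21
lipids = lipid_mass_g / lipid_mw * AVOGADRO         # ~2.1e23
waters = water_mass_g / water_mw * AVOGADRO         # ~3.7e25

print(f"proteins: {proteins:.1e}, lipids: {lipids:.1e}, water: {waters:.1e}")
```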

The numbers are fundamental. Repair of the brain down to the molecular level will require that we cope with them in some fashion.

Another parameter whose value we must decide is the amount of repair time per molecule. We assume that such repair time includes the time required to determine the location of the molecule in the frozen tissue and the time required to restore the molecule to its correct location, as well as the time to diagnose and repair any structural defects in the molecule. The computational power required to analyze larger-scale structural damage (e.g., this mitochondrion has suffered damage to its internal membrane structure, the so-called “flocculent densities”) should be less than the power required to analyze each individual molecule. An analysis at the level of sub-cellular organelles involves several orders of magnitude fewer components and will require correspondingly less computational power. Analysis at the cellular level involves even fewer components. We therefore neglect the time required for these additional computational burdens. The total time required for repair is just the sum over all molecules of the time required by one repair device to repair that molecule, divided by the number of repair devices. The more repair devices there are, the faster the repair will be. The more molecules there are, and the more time it takes to repair each molecule, the slower repair will be.

The time required for a ribosome to manufacture a protein molecule of 400 amino acids is about 10 seconds, or about 25 milliseconds to add each amino acid. DNA polymerase III can add an additional base to a replicating DNA strand in about a millisecond. In both cases, synthesis takes place in solution and involves significant delays while the needed components diffuse to the reactive sites. The speed of assembler-directed reactions is likely to prove faster than current biological systems: the arm of an assembler should be capable of making a complete motion and causing a single chemical transformation in about a microsecond. However, we will conservatively base our computations on the speed of synthesis already demonstrated by biological systems, and in particular on the slower speed of protein synthesis.

We must do more than synthesize the required molecules; we must analyze the existing molecules, possibly repair them, and also move them from their original location to the desired final location. Existing antibodies can identify specific molecular species by selectively binding to them, so identifying individual molecules is feasible in principle. Even assuming that the actual technology employed is different, it seems unlikely that such analysis will require substantially longer than the synthesis time involved, so it seems reasonable to multiply the synthesis time by a factor of a few to provide an estimate of time spent per molecule. This should, in principle, allow time for the complete disassembly and reassembly of the selected molecule using methods no faster than those employed in biological systems. While the precise size of this multiplicative factor can reasonably be debated, a factor of 10 should be sufficient. The total time required to simply move a molecule from its original location to its correct final location in the repaired structure should be smaller than the time required to disassemble and reassemble it, so we will assume that the total time required for analysis, repair, and movement is 100 seconds per protein molecule.

Warming the tissue before determining its molecular structure creates definite problems: everything will move around. A simple solution to this problem is to keep the tissue frozen until after all the desired structural information is recovered. In this case the analysis will take place at a low temperature. Whether or not subsequent operations should be performed at the same low temperature is left open. A later section considers the various approaches that can be taken to restore the structure after it has been analyzed.

In practice, most molecules will probably be intact; they would not have to be either disassembled or reassembled. This should greatly reduce repair time. On a more philosophical note, existing biological systems generally do not bother to repair macromolecules (a notable exception is DNA; a host of molecular mechanisms for the repair of this molecule are used in most organisms). Most molecules are generally used for a period of time and then broken down and replaced. There is a slow and steady turnover of molecular structure: the atoms in the roast beef sandwich eaten yesterday are used today to repair and replace muscles, skin, nerve cells, etc. If we adopted nature’s philosophy we would simply discard and replace any damaged molecules, greatly simplifying molecular repair.

#23 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:28 AM

Carried to its logical conclusion, we would discard and replace all the molecules in the structure. Having once determined the type, location and orientation of a molecule in the original (frozen) structure, we would simply throw that molecule out without further examination and replace it. This requires only that we be able to identify the location and type of individual molecules. It would not be necessary to determine if the molecule was damaged, nor would it be necessary to correct any damage found. By definition, the replacement molecule would be taken from a stock-pile of structurally correct molecules that had been previously synthesized, in bulk, by the simplest and most economical method available.

Discarding and replacing even a few atoms might disturb some people. This can be avoided by analyzing and repairing any damaged molecules. However, for those who view the simpler removal and replacement of damaged molecules as acceptable, the repair process can be significantly simplified. For purposes of this letter, we will continue to use the longer time estimate based on the premise that full repair of every molecule is required. This appears to be conservative. (Those who feel that replacing their atoms will change their identity should think carefully before eating their next meal!)

We shall assume that the repair time for other molecules is similar per unit mass. That is, we shall assume that the repair time for the lipids (which each weigh about 500 amu, 100 times less than a protein) is about 100 times less than the repair time for a protein. The repair time for one lipid molecule is assumed to be 1 second. We will neglect water molecules in this analysis, assuming that they can be handled in bulk.
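A short sketch of the per-molecule time budget assumed so far (the 10-second synthesis time, the factor-of-ten allowance, and the per-unit-mass scaling for lipids) may make the numbers easier to follow; every figure is taken from the text.

```python
# Per-molecule repair time budget assumed in this letter (all figures from the text).
synthesis_time_protein_s = 10.0   # ribosome time for a ~400-residue protein
allowance_factor = 10             # margin for analysis, disassembly/reassembly, and movement
t_protein_s = synthesis_time_protein_s * allowance_factor   # 100 s per protein molecule

protein_mw_amu = 50_000.0
lipid_mw_amu = 500.0
t_lipid_s = t_protein_s * (lipid_mw_amu / protein_mw_amu)   # scaled per unit mass -> 1 s

print(t_protein_s, t_lipid_s)     # 100.0 1.0
```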

We have assumed that the time required to analyze and synthesize an individual molecule will dominate the time required to determine its present location, the time required to determine the appropriate location it should occupy in the repaired structure, and the time required to put it in this position. These assumptions are plausible but will be considered further when the methods of gaining access to and of moving molecules during the repair process are considered.

This analysis accounts for the bulk of the molecules; it seems unlikely that other molecular species will add significant additional repair time.

Based on these assumptions, we find that we require 100 seconds x 1.2 x 10^21 protein molecules + 1 second x 2 x 10^23 lipid molecules, or 3.2 x 10^23 repair-machine-seconds. This number is not as fundamental as the number of molecules in the brain. It is based on the (probably conservative) assumption that repair of 50,000 amu requires 100 seconds. Faster repair would imply repair could be done with fewer repair machines, or in less time.

If we now fix the total time required for repair, we can determine the number of repair devices that must function in parallel. We shall rather arbitrarily adopt 10^8 seconds, which is very close to three years, as the total time in which we wish to complete repairs.

If the total repair time is 10^8 seconds, and we require 3.2 x 10^23 repair-machine-seconds, then we require 3.2 x 10^15 repair machines for complete repair of the brain. This corresponds to 3.2 x 10^15 / (6.02 x 10^23), or 5.3 x 10^-9 moles, or 5.3 nanomoles of repair machines. If each repair device weighs 10^10 to 10^11 amu, then the total weight of all the repair devices is 53 to 530 grams, a few ounces to just over a pound. Thus, the weight of the devices required to repair each and every molecule in the brain, assuming the repair devices operate no faster than current biological methods, is about 4% to 40% of the total mass of the brain.
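Putting these assumptions together, the following sketch reproduces the device count, the nanomole figure, and the mass range; the per-device mass range of 10^10 to 10^11 amu is the assumption used above, and all other inputs are the text's round numbers.

```python
AVOGADRO = 6.02e23
AMU_IN_GRAMS = 1.66e-24

proteins, lipids = 1.2e21, 2.0e23        # molecule counts estimated earlier
t_protein, t_lipid = 100.0, 1.0          # seconds of repair time per molecule
total_repair_seconds = proteins * t_protein + lipids * t_lipid   # ~3.2e23

repair_budget_s = 1e8                    # roughly three years
devices = total_repair_seconds / repair_budget_s                 # ~3.2e15 devices
print(f"devices needed: {devices:.1e} ({devices / AVOGADRO * 1e9:.1f} nanomoles)")

for device_mass_amu in (1e10, 1e11):     # assumed mass per repair device
    total_mass_g = devices * device_mass_amu * AMU_IN_GRAMS
    print(f"device mass {device_mass_amu:.0e} amu -> total {total_mass_g:.0f} g "
          f"({total_mass_g / 1400 * 100:.0f}% of brain mass)")
```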

By way of comparison, there are about 10^14 cells in the human body and each cell has about 10^7 ribosomes, giving 10^21 ribosomes. So you’re talking about six orders of magnitude more ribosomes in the human body than the number of repair machines we estimate are required to repair the human brain.

It seems unlikely that either more or larger repair devices are inherently required. However, it is comforting to know that errors in these estimates of even several orders of magnitude can be easily tolerated. A requirement for 530 kilograms of repair devices (1,000 to 10,000 times more than we calculate is needed) would have little practical impact on feasibility. Although repair scenarios that involve deployment of the repair devices within the volume of the brain could not be used if we required 530 kilograms of repair devices, a number of other repair scenarios would still work. Given that nanotechnology is feasible, manufacturing costs for repair devices will be small. The cost of even 530 kilograms of repair devices should eventually be significantly less than a few hundred dollars. The feasibility of repair down to the molecular level is insensitive to even large errors in the projections given here.

We now turn to the physical deployment of these repair devices. That is, although the raw number of repair devices is sufficient, we must devise an orderly method of deploying these repair devices so they can carry out the needed repairs.

We shall broadly divide repair scenarios into two classes: on-board and off-board. In the on-board scenarios, the repair devices are deployed within the volume of the brain. Existing structures are disassembled in place, their component molecules examined and repaired, and rebuilt on the spot. (We here class as “on-board” those scenarios in which the repair devices operate within the physical volume of the brain, even though there might be substantial off-board support. That is, there might be a very large computer outside the tissue directing the repair process, but we would still refer to the overall repair approach as “on-board.”) The on-board repair scenario has been considered in some detail by Drexler. We will give a brief outline of the on-board repair scenario here, but will not consider it in any depth. For various reasons, it is quite plausible that on-board repair scenarios will be developed before off-board repair scenarios.

The first advantage of on-board repair is an easier evolutionary path from partial repair systems deployed in living human beings to the total repair systems required for repair of the more extensive damage found in the person who has been cryonically suspended. That is, a simple repair device for finding and removing fatty deposits blocking the circulatory system could be developed and deployed in living humans, and need not deal with all the problems involved in the total repair of more complex damage (perhaps identifying and killing cancer cells), again within a living human. Once developed, there will be continued pressure for evolutionary improvements in on-board repair capabilities, which should ultimately lead to repair of virtually arbitrary damage. This evolutionary path should eventually produce a device capable of repairing frozen tissue.

It is interesting to note that MITI’s Agency of Industrial Science and Technology (AIST) will submit a budget request for 30 million to launch a “microrobot” project next year, with the aim of developing tiny robots for the internal medical treatment and repair of human beings. MITI is planning to pour 170 million into the microrobot project over the next ten years. Iwao Fujimasa said their objective is a robot less than .04 inches in size that will be able to travel through veins and inside organs. While such devices are substantially larger than the proposals considered here, the direction of future evolutionary improvements should be clear.

A second advantage of on-board repair is emotional. In on-board repair, the original structure is left intact at the macroscopic and even light microscopic level. The disassembly and reassembly of the component molecules is done at a level smaller than can be seen, and might therefore prove less troubling than other forms of repair in which the disassembly and reassembly processes are more visible. Ultimately, though, correct restoration of the structure is the overriding concern.

A third advantage of on-board repair is the ability to leave functional structures intact. That is, in on-board repair we can focus on those structures that are damaged while leaving working structures alone. If only minor damage has occurred, then an on-board repair system need make only minor repairs.

The major drawback of on-board repair is the increased complexity of the system. As discussed earlier, this is only a drawback when the design tools and the resources available for the design are limited. We can reasonably presume that future design tools and future resources will greatly exceed present efforts. Developments in computer-aided design of complex systems will put the design of remarkably complex systems within easy grasp.

In on-board repair, we might first logically partition the volume of the brain into a matrix of cubes, and then deploy each repair device in its own cube. Repair devices would first get as close as possible to their assigned cube by moving through the circulatory system (which we presume would be cleared out as a first step) and would then disassemble the tissue between them and their destination. Once in position, each repair device would analyze the tissue in its assigned volume and perform any repairs required.
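The cube-matrix partitioning lends itself to a very short sketch in Python. The brain volume and the cube edge length below are illustrative assumptions, not figures from the text; the point is only to show how the volume decomposes into per-device work assignments.

# Minimal sketch of the "matrix of cubes" partitioning described above.
# The brain volume and cube size are assumptions chosen for illustration.

BRAIN_VOLUME_M3 = 1.4e-3   # roughly 1.4 litres (assumed typical adult brain volume)
CUBE_EDGE_M = 20e-6        # assign each repair device a 20-micrometre cube (assumption)

n_cubes = BRAIN_VOLUME_M3 / CUBE_EDGE_M ** 3
print(f"cubes, one repair device each: {n_cubes:.1e}")   # about 1.8e11 with these numbers

def cube_assignments(nx, ny, nz):
    """Yield (i, j, k) grid indices; each index names one device's work volume."""
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                yield (i, j, k)

# Toy 3x3x3 grid instead of the full matrix: each tuple is one device's destination,
# reached via the (previously cleared) circulatory system and then local disassembly.
for destination in cube_assignments(3, 3, 3):
    print(destination)

With a 20-micrometre cube the count comes out near 2 x 10^11 work volumes, roughly one per cell-sized region, under these assumed numbers.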

The second class of repair scenarios, the off-board scenarios, allows the total volume of repair devices to greatly exceed the volume of the human brain.

The primary advantage of off-board repair is conceptual simplicity. It employs simple brute force to ensure that a solution is feasible and to avoid complex design issues. As discussed earlier, these are virtues in thinking about the problem today, but they are unlikely to carry much weight in the future when an actual system is being designed.

The other advantages of this approach are fairly obvious. Lingering concerns about volume and heat dissipation can be eliminated: if a ton of repair devices should prove necessary, then a ton can be provided. Concerns about design complexity can be greatly reduced. Off-board repair scenarios do not require that the repair devices be mobile, which simplifies communications and power distribution and eliminates the need for locomotor and navigational abilities. The only previous paper on off-board repair scenarios was by Merkle.

Off-board repair scenarios can be naturally divided into three phases. In the first phase, we must analyze the structure to determine its state. The primary purpose of this phase is simply to gather information about the structure, although in the process the disassembly of the structure into its component molecules will also take place. Various methods of gaining access to and analyzing the overall structure are feasible; here we shall primarily consider one approach.

We shall presume that the analysis phase takes place while the tissue is still frozen. While the exact temperature is left open, it seems preferable to perform analysis prior to warming. The thawing process itself causes damage and, once thawed, continued deterioration will proceed unchecked by the mechanisms present in healthy tissue. This cannot be tolerated during a repair time of several years. Either faster analysis or some means of blocking deterioration would have to be used if analysis were to take place after warming.
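To give a rough sense of why analysis on still-frozen tissue is attractive, here is an Arrhenius-type estimate in Python. The activation energy is an assumed "typical" value for biochemical reactions, so the result should be read as order-of-magnitude only; at liquid-nitrogen temperature diffusion in the solid is also arrested, which slows deterioration further still.

# Rough Arrhenius estimate of how much chemical deterioration slows when the
# analysis phase is performed on tissue held at liquid-nitrogen temperature.
# The activation energy is an assumed, illustrative value, not a sourced figure.

import math

R = 8.314          # gas constant, J/(mol*K)
EA = 50_000.0      # assumed activation energy, J/mol (typical biochemical scale)
T_BODY = 310.0     # about 37 C, in kelvin
T_LN2 = 77.0       # liquid nitrogen, in kelvin

slowdown = math.exp(EA / R * (1.0 / T_LN2 - 1.0 / T_BODY))
print(f"reaction rates roughly {slowdown:.1e} times slower at 77 K than at body temperature")

With these assumptions the slowdown is on the order of 10^25, which is why a repair time measured in years poses no deterioration problem so long as the tissue stays cold, whereas the same years at or near body temperature would not be tolerable.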

#24 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 November 2003 - 01:30 AM

Bill, the information you are providing in this thread I'm sure is very useful. My last post in this thread asked you what the reference was to the post that was two above it beginning with "Although how long is unclear, New York University Scientists recently announced the development of a machine made out of a few strands of DNA, representing the first step toward building nanorobots capable of repairing cell damage at the molecular level and restoring cells, organs and entire organisms to youthful vigor."

You emailed me and thanked me for bringing it to your attention and that you edited it. As far as I can see the edit was a deletion of "Although how long is unclear" which would render my last post ambiguous to anyone who is visiting this thread for the first time.

Can you clarify why you are posting articles that are clearly not your work without the appropriate referencing?

Jace

#25 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:45 AM

Bill, the information you are providing in this thread I'm sure is very useful. My last post in this thread asked you what the reference was to the post that was two above it beginning with "Although how long is unclear, New York University Scientists recently announced the development of a machine made out of a few strands of DNA, representing the first step toward building nanorobots capable of repairing cell damage at the molecular level and restoring cells, organs and entire organisms to youthful vigor."

You emailed me and thanked me for bringing it to your attention and that you edited it. As far as I can see the edit was a deletion of "Although how long is unclear" which would render my last post ambiguous to anyone who is visiting this thread for the first time.

Can you clarify why you are posting articles that are clearly not your work without the appropriate referencing?

Jace


The above and below posts are a combination of notes drawn from many books and research papers I have read, some of which are referenced above, and this was written long before I ever intended to post it on the web.

#26 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 November 2003 - 01:48 AM

Then why didn't you just say that instead of the attempt at deceit?

#27 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:50 AM

Bill, the information you are providing in this thread I'm sure is very useful. My last post in this thread asked you what the reference was to the post that was two above it beginning with "Although how long is unclear, New York University Scientists recently announced the development of a machine made out of a few strands of DNA, representing the first step toward building nanorobots capable of repairing cell damage at the molecular level and restoring cells, organs and entire organisms to youthful vigor."

You emailed me and thanked me for bringing it to your attention and that you edited it. As far as I can see the edit was a deletion of "Although how long is unclear" which would render my last post ambiguous to anyone who is visiting this thread for the first time.

Can you clarify why you are posting articles that are clearly not your work without the appropriate referencing?

Jace



The best I could do is give a partial list, which is listed above. Sources include:

Engines of Creation, by K. Eric Drexler
"Nanotechnology: Wherein Molecular Computers Control Tiny Circulatory Submarines"
Foresight Update, a publication of the Foresight Institute
"Scanning Tunneling Microscopy: Application to Biology and Technology"
"Molecular Manipulation Using a Tunnelling Microscope"
"Molecular Engineering: An Approach to the Development of General Capabilities for Molecular Manipulation", by K. Eric Drexler
"Rod Logic and Thermal Noise in the Mechanical Nanocomputer", Proceedings of the Third International Symposium on Molecular Electronic Devices
"Machines of Inner Space", Yearbook of Science and the Future
"A Small Revolution Gets Underway", by Robert Pool
"Positioning Single Atoms with a Scanning Tunnelling Microscope", by D. M. Eigler
"Nonexistent Technology Gets a Hearing", by I. Amato, Science News
"The Invisible Factory"
Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley
"Atom by Atom, Scientists Build Invisible Machines of the Future", Andrew Pollack
"Theoretical Analysis of a Site-Specific Hydrogen Abstraction Tool", by Charles Musgrave and William A. Goddard III
Nanotechnology, Jason Perry
Nanotechnology: Research and Perspectives, B. C. Crandall and James Lewis
"Self Replicating Systems and Molecular Manufacturing", by Ralph C. Merkle
"Computational Nanotechnology", by Ralph C. Merkle
"NASA and Self Replicating Systems", also by Ralph C. Merkle
Nanotechnology (1991), special issue on molecular manufacturing

Edited by thefirstimmortal, 16 November 2003 - 02:43 AM.


#28 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 01:57 AM

Bill, the information you are providing in this thread I'm sure is very useful. My last post in this thread asked you what the reference was to the post that was two above it beginning with "Although how long is unclear, New York University Scientists recently announced the development of a machine made out of a few strands of DNA, representing the first step toward building nanorobots capable of repairing cell damage at the molecular level and restoring cells, organs and entire organisms to youthful vigor."

You emailed me and thanked me for bringing it to your attention and that you edited it. As far as I can see the edit was a deletion of "Although how long is unclear" which would render my last post ambiguous to anyone who is visiting this thread for the first time.

Can you clarify why you are posting articles that are clearly not your work without the appropriate referencing?

Jace



Where I have detailed notes that reference the authors, those names are tied to the quotes. For example:

Shepherd in "Neurobiology" said: "The concept that brain functions are mediated by cell assemblies and neuronal circuits has become widely accepted, as will be obvious to the reader of this book, and most neurobiologists believe that plastic changes at synapses are the underlying mechanisms of learning and memory."

Kupfermann in "Principles of Neural Science" said: "Because of the enduring nature of memory, it seems reasonable to postulate that in some way the changes must be reflected in long-term alterations of the connections between neurons."

Eric P. Kandel in "Principles of Neural Science" said: "Morphological changes seem to be a signature of the long-term process. These changes do not occur with short-term memory. Moreover, the structural changes that occur with the long-term process are not restricted to growth. Long-term habituation leads to the opposite change - a regression and pruning of synaptic connections. With long-term habituation, where the functional connections between the sensory neurons and motor neurons are inactivated, the number of terminals per neuron is correspondingly reduced by one-third and the proportion of terminals with active zones is reduced from 40% to 10%."

Squire in “Memory and Brain” said: “The most prevalent view has been that the specificity of stored information is determined by the location of synaptic changes in the nervous system and by the pattern of altered neuronal interactions that these changes produce. This idea is largely accepted at the present time, and will be explored further in this and succeeding chapters in the light of current evidence.”

Lynch in "Synapses, Circuits, and the Beginnings of Memory" said: "The question of which components of the neuron are responsible for storage is vital to attempts to develop generalized hypotheses about how the brain encodes and makes use of memory. Since individual neurons receive and generate thousands of connections and hence participate in what must be a vast array of potential circuits, most theorists have postulated a central role for synaptic modifications in memory storage."

Turner and Greenough said: "Two nonmutually exclusive possible mechanisms of brain information storage have remained the leading theories since their introduction by Ramón y Cajal and Tanzi. The first hypothesis is that new synapse formation, or selected synapse retention, yields altered brain circuitry which encodes new information. The second is that altered synaptic efficacy brings about similar change."

Greenough and Bailey, in "The Anatomy of a Memory: Convergence of Results across a Diversity of Tests", say: "More recently it has become clear that the arrangement of synaptic connections in the mature nervous system can undergo striking changes even during normal functioning. As the diversity of species and plastic processes subjected to morphological scrutiny has increased, convergence upon a set of structurally detectable phenomena has begun to emerge. Although several aspects of synaptic structure appear to change with experience, the most consistent potential substrate for memory storage during behavioral modification is an alteration in the number and/or pattern of synaptic connections."

#29 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 November 2003 - 02:03 AM

That's fine. I simply didn't understand why my misunderstanding wasn't simply followed with a, "This is my work, Jace. You are mistaken," instead of the suspect tactic of deception.

#30 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 02:17 AM

That's fine. I simply didn't understand why my misunderstanding wasn't simply followed with a, "This is my work, Jace. You are mistaken," instead of the suspect tactic of deception.


In much the same way I construct a legal brief, I compile other data. To give you a perfect example, in the last case I tried, I read several hundred court cases, read over 50 books (not all legal), compiled the data, took notes, then constructed a legal brief. While all of the case law was cited (e.g., Amos v. Mosley, 74 Fla. 555; 77 So), all of the notes were constructed into arguments and not referenced. I suppose if I were writing a book, I would pay more attention to that, and have pages of footnotes, but I'm not that organized.



