  LongeCity
              Advocacy & Research for Unlimited Lifespans


Frozen Assets



#31 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 02:38 AM

Jace, here is an example. Below is a small section of notes from my constitution challenge. In these notes you will find case cites (which do get referenced in my brief), but in addition to that, I have notes on newspaper articles, books, thoughts I wrote down, opinions of people I talked to, legislative committee meetings and so forth. The brief has been constructed from this compiled data base.

And it proves that I'm not that organized.



Dismiss in the Interest of Justice

The purpose of a motion to dismiss in the interest of justice is to allow justice to prevail over the strict letter of the law so as to prevent a miscarriage of justice. (See, People v Stern, 83 Misc.2d 935.) In entertaining such a motion, the court must scrutinize the merits of defendant's application and weigh the respective interests of the defendant, the complainant, and the community at large. (See, People v Clayton, 41 A.D.2d 204.)

We accordingly issue the writ and strike the challenged portion of City ordinance 97-4019. The order of the circuit court is quashed.

Case questions city noise ordinance. In an unrelated ruling, a state Supreme Court justice finds the law unconstitutional. Court Justice John Brunetti noted that Syracuse's noise ordinance was just as unconstitutionally vague as a similar law in Poughkeepsie that was struck down for the same reason by the Court of Appeals 20 years ago under the New York State Constitution. (James Johnson, New York.)

BOROUGH ORDINANCE FOUND UNCONSTITUTIONAL

Penn State Changes Speech Policy

On February 11, 1999, Julian Heicklen and Diane Fornbacher were arrested for using a battery-powered bullhorn at the weekly Marijuana Smoke Out in downtown State College, PA. They were handcuffed and carried away for arraignment. On March 3, 1999, they were bound over for trial for violating the State College Borough Noise Ordinance and for disorderly conduct.

Through their respective attorneys, Simon Grill and Ron Rojas, Heicklen and Fornbacher filed writs of habeas corpus to have the cases dismissed. A hearing was held on May 21, 1999. On July 19, 1999, Judge Thomas King Kistler found the municipal noise ordinance unconstitutional on its face and dismissed all charges.

In his order Judge Kistler stated: "The prohibition against the use of sound amplification devices found in Section 103(b)(2) is an absolute prohibition of such devices and does not contain reasonable manner, place, or time regulations. Presumably, the use of amplification is prohibited even if a person uses such a device to emit sounds in a mere 'whisper' level. Such an absolute prohibition is an impermissible restraint on free speech and is not a reasonable regulation according to Guess, supra, and Saia, supra. Consequently, this Court holds the Borough of State College Noise Ordinance, Section 103(b)(2), unconstitutional on its face."

(Crockett Promotion v City of Charlotte, 706 F.2d 486, 493 [1983].)


content-based). Here, on the other hand, Section 2079-A contains no such exception and is clearly content neutral.
other hand, a statute that interferes with the right of free speech requires “a more stringent vagueness test.” Id. at 499.

~ The defendant also argued that the ordinance was unconstitutionally vague. That aspect of Kovacs is discussed in Section II, below.

Several cases from the Supreme Court and the Law Court illustrate the application of the standards discussed above to regulations restricting noise. In Kovacs v. Cooper, 336 U.S. 77 (1949), a city ordinance prohibited the use on public streets of sound trucks emitting “loud and raucous noises.” The Court found that while “loud and raucous” are “abstract words, they have through daily use acquired a content that conveys to any interested person a sufficiently accurate concept of what is forbidden.” Id. at 79. Thus, the Court held that the ordinance was not unconstitutionally vague.

Another illustrative case is Grayned v. City of Rockford, 408 U.S. 104 (1972). There, an "antinoise ordinance" prohibited persons adjacent to any school from making, while the school is in session, "any noise or diversion which disturbs or tends to disturb the peace or good order" of the school session. Id. at 107-08. The Court concluded that the statute was not impermissibly vague. Id. at 109. The Court noted that "although the prohibited quantum of disturbance is not specified in the ordinance, it is apparent from the statute's announced purpose that the measure is whether normal school activity has been or is about to be disrupted." Id. at 112. Thus, the Court found that the ordinance gave "fair warning as to what is prohibited." Id. at 114.

Another relevant - and arguably controlling - case is Town of Baldwin v. Carter, 2002 ME 52, 794 A.2d 62. There, a town ordinance prohibited allowing a dog to "unnecessarily annoy or disturb any person by continued or repeated barking, howling, or other loud or unusual noises anytime day or night." Carter, 2002 ME at ¶ 2, 794 A.2d at 64. The defendant argued that the ordinance was unconstitutionally vague because "it did not provide objective standards for determining if a violation had occurred." Carter, 2002 ME at ¶ 11, 794 A.2d at 67. The court rejected this argument. The court began by interpreting "any person" as "any reasonable person" and thus construing the statute as prohibiting only continuous or repeated dog barking that is unreasonable. Carter, 2002 ME at ¶ 12, 794 A.2d at 68. It then noted that "'reasonableness' is a well defined concept under the common law" and "is not an unconstitutionally vague concept." Carter, 2002 ME at ¶ 13, 794 A.2d at 68 (quoting Tri-State Rubbish, Inc. v. Town of New Gloucester, 634 A.2d 1284, 1287 (Me. 1993)). After noting that the ordinance provided that the barking must also be "continued or repeated" and "unnecessary" to be actionable - and required the town to give a warning before taking legal action - the court held that the statute was not unconstitutionally vague. Carter, 2002 ME at ¶ 14, 794 A.2d at 68-69; see also State v. Singer, 945 P.2d 359 (Ariz. Ct. App. 1997) (ordinance prohibiting keeping a dog "which is in the habit of barking or howling or disturbing the peace and quiet of any person" was not unconstitutionally vague); State v. Taylor, 495 S.E.2d 413 (N.C. Ct. App. 1998) (ordinance prohibiting keeping an animal that "habitually or repeatedly makes excessive noises that tend to annoy, disturb, or frighten" a person was not unconstitutionally vague).

A final noteworthy case - and one relied upon by the Carter court - is State v. Sylvain, 344 A.2d 407 (Me. 1975). There, a motor vehicle statute provided that "no signaling device shall be unnecessarily sounded nor any braking or acceleration unnecessarily made so as to cause a harsh, objectionable or unreasonable noise." Id. at 408. Defendant, who was alleged to have "squealed" his tires while accelerating, argued that the statute was unconstitutionally vague. The court rejected this argument. The court found that while the statute "does not set exact decibel limitations, its proscriptions are framed in words of common use and understanding." Id. at 409. It noted that "only such noises harsh and loud enough to offend the sensibilities of the hearing public to an unreasonable degree are prohibited." Id. The court concluded: "We have no doubt that the familiar language employed in the statute conveys a sufficiently accurate concept of what is forbidden." Id.

Under the relevant standards, as illustrated by the cases discussed above, Section 2079-A is not unconstitutionally vague. The statute provides two separate objective standards by which one can determine whether the volume of a car stereo is excessive. First, it is excessive if it "is audible at a distance of greater than 25 feet" and "exceeds 85 decibels." 29-A M.R.S.A. § 2079-A. Clearly, this standard is objective and can be readily determined. Alternatively, the volume is excessive if it "is greater than reasonable with due regard to the location of the vehicle and the effect on persons in proximity to the vehicle." [10] While this standard is less objective than the first, it is sufficiently definite to survive a vagueness challenge. [11] Indeed, it survives attack for the same reason that the statutes in Carter and Sylvain survived - it adopts an objective "reasonableness" standard. Thus, the volumes of car stereos are measured not by what an individual police officer or driver might consider an appropriate level, but by what a reasonable person would consider appropriate.

~ Courts in other states have generally rejected vagueness challenges to regulations limiting muffler and other vehicle noise. See, e.g., St. Louis County v. McClune, 762 S.W.2d 91 (Mo. Ct. App. 1988) (ordinance prohibited vehicles from making "excessive and unnecessary noises"); People v. Byron, 215 N.E.2d 345 (N.Y. 1966) (statute prohibited excessive or unusual muffler noise); State v. Olsson, 895 P.2d 867 (Wash. Ct. App. 1995) (same); County of Jefferson v. Renz, 588 N.W.2d 267 (Wis. Ct. App. 1998) (same), rev'd on other grounds, 603 N.W.2d 541 (1999). But see Meisner v. State, 907 S.W.2d 664 (Tex. Ct. App. 1995) (ordinance prohibiting "unnecessary noise" when accelerating was unconstitutionally vague).
The State notes that the statute provides perhaps a third standard, stating that it is a "prima facie violation... if the vehicle is located near buildings and the buildings or windows in the buildings are shaken or rattled by the sound of the sound system." Again, this is a readily determined objective standard.


Broadrick v. Oklahoma, 413 U.S. 601, 612-615 (1973); Town of Kittery v. Campbell, 455 A.2d 30, 31-32 (Me. 1983) (litigant may bring facial First Amendment challenge even when his or her own actions were not protected by First Amendment); Gabriel v. Town of Old Orchard Beach, 390 A.2d 1065, 1068 (Me. 1978) (same).

Speech and thought were enduring values, essential to the excavation of truth. And what sort of example was set by a government that resorted to criminality to enforce its laws?

~ The State is attempting to interpret Mr. O'Rights's pro se motion as fairly as possible, but, at times, the motion makes little sense. For example, Mr. O'Rights repeatedly argues that Section 2079-A prohibits "objectional" [sic] or "annoying" noises and that such standards are unconstitutional. Motion to Dismiss, 4-5. In fact, the statute makes no reference to "objectionable" or "annoying" noises and instead speaks only to volume levels without regard to content.

How such a group of privileged eighteenth-century aristocrats, oligarchs, monarchists, lawyers, businessmen and bankers were led by reason and experience to understand that their lives and interests were best protected in a democracy - a democracy that the future would have to perfect - is the story of this book.

... the noise ordinance in no way allows for arbitrary or discriminatory enforcement. (Crockett Promotion v City of Charlotte, 706 F.2d 486, 493 [1983].)

MARTIN v. STRUTHERS, 319 U.S. 141 (1943)
A municipal ordinance forbidding any person to knock on doors, ring doorbells, or otherwise summon to the door the occupants of any residence for the purpose of distributing to them handbills or circulars, held - as applied to a person distributing advertisements for a religious meeting - invalid under the Federal Constitution as a denial of freedom of speech and press. Pp. 142, 149.

  The right of freedom of speech and press has broad scope. The authors of the First Amendment knew that novel and unconventional ideas might disturb the complacent, but they chose to encourage a  freedom which they believed essential if vigorous enlightenment was ever to triumph over slothful ignorance.[fn3] This freedom embraces the right to distribute literature, Lovell v. Griffin, 303 U.S. 444, 452, and necessarily protects the right to receive it. The privilege may not be withdrawn even if it creates the minor nuisance for a community of cleaning litter from its streets. Schneider v. State, 308 U.S. 147, 162.

"The only security of all is in a free press. The force of public opinion cannot be resisted, when permitted freely to be expressed. The agitation it produces must be submitted to. It is necessary to keep the waters pure." Jefferson to Lafayette,
Writings of Thomas Jefferson, Washington ed., v. 7, p. 325.

Regulations like this simply pave the way for more repressive legislation; they are a harbinger for the future, a future where all "noise" is regulated, with those that are "offensive to local sensibilities" banned and the performers jailed. The exclusion of sporting events illuminates the selective enforcement of this law and shows the type of "noise" they are truly worried about: music!

The case of Ward v. Rock Against Racism, 491 U.S. 781 (1989), in which the Supreme Court held that a regulation limiting the volume of outdoor concerts did not violate the First Amendment, illustrates the application of this test. In Ward, the City of New York regulated the volume at which music could be played at a bandshell in Central Park. Id. at 784. The sponsor of a rock concert brought a First Amendment challenge to the regulation, but the Supreme Court held that it was a reasonable regulation of the place and manner of protected speech and thus rejected the challenge. Id. at 803.

Distinguish case

The legislature apparently added this second standard in recognition of the fact that most police officers do not carry decibel meters. Legis. Rec. S-437 (Apr. 11, 2001); Legis. Rec. S-467 (Apr. 24, 2001). LD 497, 120th Legislature, First Regular Session.


Divided Report
Majority Report of the Committee on TRANSPORTATION reporting Ought to Pass as Amended by Committee Amendment “A” (S-33) on Bill “An Act to Reduce Noise Pollution”
(S.P. 153) (L.D. 497)
Signed:
Senators:
SAVAGE of Knox
O’GARA of Cumberland
GAGNON of Kennebec
Representatives:
MARLEY of Portland
McNEIL of Rockland
COLLINS of Wells
WHEELER of Eliot
WHEELER of Bridgewater
FISHER of Brewer
BOUFFARD of Lewiston
McKENNEY of Cumberland
BUNKER of Kossuth Township
Minority Report of the same Committee
reporting Ought Not to Pass on same Bill.
Signed:
Representative:
PARADIS of Frenchville
Came from the Senate with the Majority OUGHT TO PASS
AS AMENDED Report READ and ACCEPTED and the Bill
PASSED TO BE ENGROSSED AS AMENDED BY COMMITTEE
AMENDMENT “A” (S-33). READ.
Representative FISHER of Brewer moved that the House ACCEPT the Majority Ought to Pass as Amended Report.
The SPEAKER: The Chair recognizes the Representative from Frenchville, Representative Paradis.
Representative PARADIS: Mr. Speaker, Men and Women of the House. This is highly reminiscent of Twelve Angry Men standing alone. The reason I am opposing this is that I would like to see a comprehensive bill on noise pollution. If that had been included in such a bill, I would have joined the majority. We have many sources of noise pollution. There are some that are supported by very strong lobbyists. The laws are not enforced adequately. I think this is singling out one group, the young people, maybe it is the teacher in me. I don’t like to see that, but somewhere along the line I hope there is a bill introduced that would call for a study of noise pollution and more stringent enforcement of the laws. Anybody who wishes to join me in the red column, I would appreciate it. Thank you.
The SPEAKER: The Chair recognizes the Representative
from Wells, Representative Collins.
Representative COLLINS: Mr. Speaker, Ladies and Gentlemen of the House. I seldom rise to speak to you, but this morning I feel obligated to speak to you. I think we have all heard the cars with the deep bass on the radios. They are noise polluters. You wonder sometimes how the operator of that vehicle can hear emergency sirens coming up behind him to move out of the way. You question where the motive is to interfere with our constituents when they are trying to sleep at night during the summer months. I have had a vehicle come by my house and it seems as though I could hear it a quarter of a mile before it even got to my house with this deep bass tone to the radio. I have constituents on a number of occasions say to me, Ron, there ought to be a law. This is a nuisance. I was reluctant to even present legislation, but fortunately a member of
the other body did. I am a member of the Transportation Committee and we supported this, not unanimously, however, a majority supported this legislation. I ask that you would also. Thank you.
The SPEAKER: The Chair recognizes the Representative from Brewer, Representative Fisher.
Representative FISHER: Mr. Speaker, Men and Women of the House. I saw Twelve Angry Men also and as my good friend from Aroostook, he was right like Henry Fonda. This bill is considerably narrower than the gentleman from Aroostook would like it to be. It specifically aims itself at the boom box heavy bass songs emanating from vehicles today. It gives us a tool to kind of tone things down in the neighborhoods. The Representative from Wells mentioned our former colleague who is down at the other end of the building now, the good Senator from York. He did a great presentation. We are going to name him an honorary choirboy for his rendition of the noises that come from cars. If you see him in the halls, I would ask to request to hear him do his song again.
The Chair ordered a division on the motion to ACCEPT the Majority Ought to Pass as Amended Report.
A vote of the House was taken. 80 voted in favor of the same and 32 against, and accordingly the Majority Ought to Pass as Amended Report was ACCEPTED.
The Bill was READ ONCE. Committee Amendment “A” (S-33) was READ by the Clerk and ADOPTED. The Bill was assigned for SECOND READING Wednesday, April 4, 2001.


Majority Report of the Committee on STATE AND LOCAL




An Act to Reduce Noise Pollution
S.P. 153 L.D. 497
(C “A” S-33)

THE PRESIDENT PRO TEM: The Chair recognizes the Senator from Aroostook, Senator Martin.

Senator MARTIN: Mr. President, members of the Senate. The
title sort of intrigued me and I would ask you to take L.D. 497 with the amendment and take a look at the bill. “An Act to Reduce Noise Pollution.” I was involved with a similar idea a number of years ago dealing with a problem we had in Aroostook County on agricultural noise. When we got through it, we realized that in order to enforce anything dealing with noise, someone needed to have a noise meter. What this bill does is to say that you can’t be riding a vehicle, and the amendment is amended to say “on a public way at a volume that is audible at a distance of greater than 25 feet and exceeds 85 decibels.” So as you’re riding your vehicle, or someone is but it won’t be me as I don’t like to hear the sound that loud to begin with but some of the teenagers that we all know do, someone is going to stop them and say it’s too loud. Now someone’s going to need a decibel meter. State Police do not carry those. Are they then going to cease the car at the noise level that it’s at and then go find a decibel meter and then test it? I think we’re asking for trouble. The fine sponsor is a member of this body, the Senator from York, and I’d like him or members of the Transportation Committee to explain how we’re going to be able to enforce it, what the cost will be of the decibel meters that we’re going to have to provide to Municipal and State Police, and whether or not this potentially becomes a method of harassment for Maine’s young people. So I hate to pose those
S-437
LEGISLATIVE RECORD - SENATE, WEDNESDAY, APRIL 11, 2001



#32 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 02:48 AM

The second phase of off-board repair is determination of the healthy state. In this phase, the structural information derived from the analysis phase is used to determine what the healthy state of the tissue had been prior to suspension and any preceding illness. This phase involves only computation based on the information provided by the analysis phase.

The third phase is repair. In this phase, we must restore the structure in accordance with the blueprint provided by the second phase, the determination of the healthy state.

Repair methods in general start with frozen tissue, and end with healthy tissue. The nature of the intermediate states characterizes the different repair approaches. In off-board repair the tissue undergoing repair must pass through three highly characteristic states.

The first state is the starting state, prior to any repair efforts. The tissue is frozen (unrepaired).

In the second state, immediately following the analysis phase, the tissue has been disassembled into its individual molecules. A detailed structural data base has been built which provides a description of the location, orientation, and type of each molecule, as discussed earlier. For those who are concerned that their identity or “self” is dependent in some fundamental way on the specific atoms which compose their molecules, the original molecules can be retained in a molecular “filing cabinet”. While keeping physical track of the original molecules is more difficult technically, it is feasible and does not alter off-board repair in any fundamental fashion.
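As a concrete illustration of the kind of structural data base being described, here is a minimal sketch of what one record in it might look like. All field names and types are assumptions made for this example, not a specification taken from the text.

```python
# Illustrative sketch only: one possible record layout for the structural data
# base described above. Field names and types are assumptions for this example.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MoleculeRecord:
    molecule_id: int                                  # index into the molecular "filing cabinet"
    molecule_type: str                                # e.g. a lipid or protein species label
    position_nm: Tuple[float, float, float]           # location, in nanometers
    orientation: Tuple[float, float, float, float]    # orientation, e.g. as a unit quaternion

# The structural data base is then a large collection of such records; the
# "filing cabinet" maps molecule_id to the physically retained molecule.
structural_data_base: List[MoleculeRecord] = []
```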

In the third state the tissue is restored and fully functional.

By characterizing the intermediate state which must be achieved during the repair process, we reduce the problem from “start with frozen tissue and generate healthy tissue” to “start with frozen tissue and generate a structural data base and a molecular filing cabinet. Take the structural data base and the molecular filing cabinet and generate healthy tissue.” It is characteristic of off-board repair that we disassemble the molecular structure into its component pieces prior to attempting repair.

As an example, suppose we wish to repair a car. Rather than try to diagnose exactly what's wrong, we decide to take the car apart into its component pieces. Once the pieces are spread out in front of us, we can easily clean each piece and then reassemble the car. Of course, we'll have to keep track of where all the pieces go so we can reassemble the structure, but in exchange for this bookkeeping we gain a conceptually simple method of ensuring that we actually can get access to everything and repair it. While this is a rather extreme method of repairing a broken carburetor, it certainly is a good argument that we should be able to repair even rather badly damaged cars. So too with off-board repair: while it might be an extreme method of fixing any particular form of damage, it provides a good argument that damage can be repaired under a wide range of circumstances.

Regardless of the initial level of damage, regardless of the functional integrity or lack thereof of any or all of the frozen structure, and regardless of whether easier and less exhaustive techniques might or might not work, we can take any frozen structure and convert it into the canonical state so described.

Further, this is the best that we can do. Knowing the type, location and orientation of every molecule in the frozen structure under repair and retaining the actual physical molecules (thus avoiding any philosophical objections that replacing the original molecules might somehow diminish or negate the individuality of the person undergoing repair) is the best that we can hope to achieve. We have reached some sort of limit with this approach, a limit that will make repair feasible under circumstances which would astonish most people today.

One particular approach to off-board repair is divide-and-conquer. This method is one of the technically simplest approaches.

Divide-and-conquer is a general purpose problem-solving method frequently used in computer science and elsewhere. In this method, if a problem proves too difficult to solve it is first divided into sub-problems, each of which is solved in turn. Should the sub-problems prove too difficult to solve, they are in turn divided into sub-subproblems. This process is continued until the original problem is divided into pieces that are small enough to be solved by direct methods.
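The recursion itself is simple; here is a minimal Python sketch of the idea, with an arbitrary piece size and threshold standing in for real tissue and real repair devices:

```python
# Minimal sketch of the divide-and-conquer recursion described above. The piece
# being divided is represented only by its volume; the size threshold and the
# direct_analysis placeholder are assumptions made for illustration.

ANALYSIS_THRESHOLD_UM3 = 0.1   # assumed volume one repair device can analyze directly

def direct_analysis(volume_um3):
    # Placeholder for direct molecular disassembly and analysis of one small piece.
    return [volume_um3]

def divide_and_conquer(volume_um3):
    """Recursively split a piece into two roughly equal halves until each piece
    is small enough for direct analysis, then combine the results."""
    if volume_um3 <= ANALYSIS_THRESHOLD_UM3:
        return direct_analysis(volume_um3)
    half = volume_um3 / 2.0
    return divide_and_conquer(half) + divide_and_conquer(half)

print(len(divide_and_conquer(1.0)), "pieces analyzed")   # a 1 um^3 starting piece
```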

If we apply divide-and-conquer to the analysis of a physical object - such as the brain - then we must be able to physically divide the object of analysis into two pieces and recursively apply the same method to the two pieces. This means that we must be able to divide a piece of frozen tissue, whether it be the entire brain or some smaller part, into roughly equal halves. Given that tissue at liquid nitrogen temperatures is already prone to fracturing, it should require only modest effort to deliberately induce a fracture that would divide such a piece into two roughly equal parts. Fractures made at low temperatures (when the material is below the glass transition temperature) are extremely clean, and result in little or no loss of structural information. Indeed, freeze-fracture techniques are used for the study of synaptic structures. Hayat says, "Membranes split during freeze-fracturing along their central hydrophobic plane, exposing intramembranous surfaces. The fracture plane often follows the contours of membranes and leaves bumps or depressions where it passes around vesicles and other cell organelles. The fracturing process provides more accurate insight into the molecular architecture of membranes than any other ultrastructural methods." It seems unlikely that the fracture itself will result in any significant loss of structural information.

The freshly exposed faces can now be analyzed by various surface analysis techniques. Work with STMs supports the idea that very high resolution is feasible. For example, optical absorption microscopy generates an absorption spectrum of the surface with a resolution of 1 nanometer. Kumar Wickramasinghe of IBM's T.J. Watson Research Center said, "We should be able to record the spectrum of a single molecule" on a surface. Williams and Wickramasinghe said, "The ability to measure variations in chemical potential also allows the possibility of selectively identifying subunits of biological macromolecules either through a direct measurement of their chemical-potential gradients or by decorating them with different metals. This suggests a potentially simple method for sequencing DNA." While current devices are large, the fundamental physical principles on which they rely do not require large size. Many of the devices depend primarily on the interaction between a single atom at the tip of the STM probe and the atoms on the surface of the specimen under analysis. Clearly, substantial reductions in size in such devices are feasible.

High resolution optical techniques can also be employed. Near-field microscopy, employing light with a wavelength of hundreds of nanometers, has achieved a resolution of 12 nanometers (much smaller than a wavelength of light). To quote the abstract of a recent review article on the subject: "The near-field optical interaction between a sharp probe and a sample of interest can be exploited to image, spectroscopically probe, or modify surfaces at a resolution (down to 12 nm) inaccessible by traditional far-field techniques. Many of the attractive features of conventional optics are retained, including noninvasiveness, reliability and low cost. In addition, most optical contrast mechanisms can be extended to the near-field regime, resulting in a technique of considerable versatility. This versatility is demonstrated by several examples, such as the imaging of nanometric-scale features in mammalian tissue sections and the creation of ultrasmall, magneto-optic domains having implications for high-density data storage. Although the technique may find uses in many diverse fields, two of the most exciting possibilities are localized optical spectroscopy of semiconductors and the fluorescence imaging of living cells."

Another article said, "Our signals are currently of such magnitude that almost any application originally conceived for far-field optics can now be extended to the near-field regime, including: dynamical studies at video rates and beyond; low noise, high resolution spectroscopy (also aided by the negligible auto-fluorescence of the probe); minute differential absorption measurements; magneto-optics; and super-resolution lithography."

The division into halves continues until the pieces are small enough to allow direct analysis by repair devices. If we presume that division continues until each repair device is assigned its own piece to repair, then there will be 3.2 x 10^15 repair devices and an equal number of pieces. If the 1,350 cubic centimeter volume of the brain is divided into this many cubes, each such cube would be about .4 microns (442 nanometers) on a side. Each cube could then be directly analyzed (disassembled into its component molecules) by a repair device during our three year repair period.

One might view these cubes as the pieces of a three-dimensional jig-saw puzzle, the only difference being that we have cheated and carefully recorded the position of each piece. Just as the picture on a jig-saw puzzle, is clearly visible despite the fractures between the pieces, so too the three-dimensional “picture” of the brain is clearly visible despite its division into pieces.

#33 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 02:49 AM

There are a great many possible methods of handling the mechanical problems involved in dividing and moving the pieces. It seems unlikely that mechanical movement of the pieces will prove an insurmountable impediment, and therefore we do not consider it in detail. However, for the sake of concreteness, we outline one possibility. Human arms are about 1 meter in length, and can easily handle objects from 1 to 10 centimeters in size (.01 to .1 times the length of the arm). It should be feasible, therefore, to construct a series of progressively shorter arms which handle pieces of progressively smaller size. If each set of arms were ten times shorter than the preceding set, then we would have devices with arms of 1 meter, 1 decimeter, 1 centimeter, 1 millimeter, 100 microns, 10 microns, 1 micron, and finally .1 microns (100 nanometers).

So you're talking about us needing to design 8 different sizes of manipulators. At each succeeding size the manipulators would be more numerous, and so would be able to deal with the many more pieces into which the original object was divided. Transport and mechanical manipulation of an object would be done by arms of the appropriate size. As objects were divided into smaller pieces that could no longer be handled by arms of a particular size, they would be handed to arms of a smaller size.
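Enumerating that ladder of sizes programmatically (purely illustrative; the figures are just the ones listed above):

```python
# Enumerate the assumed ladder of manipulator arm lengths: each set of arms is
# ten times shorter than the preceding set, from 1 meter down to 0.1 micron.
sizes_m = [10.0 ** (-k) for k in range(8)]   # 1 m, 1 dm, 1 cm, ..., 1e-7 m

print(len(sizes_m), "sizes of manipulator arms:")
for s in sizes_m:
    print(f"  {s:.0e} m")
```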

If it requires about three years to analyze each piece then the time required both to divide the brain into pieces and to move each piece to an immobile repair device can reasonably be neglected. It seems unlikely that moving the pieces will take a significant fraction of three years.

The information storage requirements for a structural data base that holds the detailed description and location of each major molecule in the brain can be met by projected storage methods. DNA has an information storage density of about 10^21 bits per cubic centimeter. Conceptually similar but somewhat higher density molecular "tape" systems that store 10^22 bits per cubic centimeter should be quite feasible. If we assume that every lipid molecule is "significant" but that water molecules, simple ions and the like are not, then the number of significant molecules is roughly the same as the number of lipid molecules (the number of protein molecules is more than two orders of magnitude smaller, so we will neglect it in this estimate). The digital description of these 2 x 10^23 molecules then requires about 10^25 bits (assuming that 50 bits are required to encode the location and description of each molecule). This is about 1,000 cubic centimeters (1 liter, roughly a quart) of "tape" storage. If a storage system of such capacity strikes you as infeasible, consider that a human being has about 10^14 cells and that each cell stores 10^10 bits in its DNA.
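For readers who want to check the arithmetic, a small sketch using the figures stated above (2 x 10^23 significant molecules, 50 bits each, tape at 10^22 bits per cubic centimeter):

```python
# Sanity check of the storage estimate, using the figures stated in the text.
significant_molecules = 2e23        # roughly the number of lipid molecules in the brain
bits_per_molecule = 50              # assumed bits to encode type, location, orientation
tape_density_bits_per_cm3 = 1e22    # assumed density of molecular "tape" storage

total_bits = significant_molecules * bits_per_molecule          # ~1e25 bits
tape_volume_cm3 = total_bits / tape_density_bits_per_cm3        # ~1,000 cm^3 (about 1 liter)

print(f"description size: {total_bits:.1e} bits")
print(f"tape volume:      {tape_volume_cm3:.0f} cm^3")
```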

So you're talking about: every human being that you see is a device which (among other things) has a raw storage capacity of 10^24 bits, and human beings are unlikely to be optimal information storage devices.

A simple method of reducing storage requirements by several orders of magnitude would be to analyze and repair only a small amount of tissue at a time. This would eliminate the need to store the entire 10^25 bit description at one time. A smaller memory could hold the description of the tissue actually under repair, and this smaller memory could then be cleared and reused during repair of the next section of tissue.

The computational power required to analyze a data base with 10^25 bits is well within known theoretical limits. It has been seriously proposed that it might be possible to increase the total computational power achievable within the universe beyond any fixed bound in the distant future. More conservative lower bounds to nearer term future computation capabilities can be derived from the reversible rod-logic molecular model of computation, which dissipates about 10^-23 joules per gate operation when operating at 100 picoseconds at room temperature. A wide range of other possibilities exist. Likharev proposed a computational element based on Josephson junctions which operates at 4 K and in which energy dissipation per switching operation is 10^-24 joules with a switching time of 10^-9 seconds. Continued evolutionary reductions in the size and energy dissipation of properly designed NMOS and CMOS circuits should eventually produce logic elements that are both very small (though significantly larger than Drexler's mechanical proposals) and which dissipate extraordinarily small amounts of energy per logic operation. Extrapolation of current trends suggests that energy dissipations in the 10^-23 joule range will be achieved before 2030. There is no presently known reason to expect the trend to stop or even slow down at that time.

Energy costs appear to be the limiting factor in rod logic (rather than the number of gates, or the speed of operation of the gates). Today, electric power costs about 10 cents per kilowatt hour. Future costs of power will almost certainly be much lower. Molecular manufacturing should eventually sharply reduce the cost of solar cells and increase their efficiency to close to the theoretical limits. With a manufacturing cost of under 10 cents per kilogram, the cost of a one square meter solar cell will be less than a penny. As a consequence the cost of solar power will be dominated by other costs, such as the cost of the land on which the solar cell is placed. While solar cells can be placed on the roofs of existing structures or in otherwise unused areas, we will simply use existing real estate prices to estimate costs. Low cost land in the desert southwestern United States can be purchased for less than $1,000 per acre. (This price corresponds to about 25 cents per square meter, significantly larger than the projected future manufacturing cost of a one square meter solar cell.) Land elsewhere in the world (arid regions of the Australian outback, for example) is much cheaper. For simplicity and conservatism, though, we'll simply adopt the $1,000 per acre price for the following calculations. Renting an acre of land for a year at an annual price of 10% of the purchase price will cost $100. Incident sunlight at the earth's surface provides a maximum of 1,353 watts per square meter, or 5.5 x 10^6 watts per acre. Making allowances for inefficiencies in the solar cells, atmospheric losses, and losses caused by the angle of incidence of the incoming light reduces the actual average power production by perhaps a factor of 15, to about 3.5 x 10^5 watts; over a year, this produces 1.1 x 10^13 joules or 3.1 x 10^6 kilowatt hours. The land cost $100, so the cost per joule is .9 nanocents and the cost per kilowatt hour is 3.3 millicents. Solar power, once we can make the solar cells cheaply enough, will be several thousand times cheaper than electric power is today. We'll be able to buy over 10^15 joules for under $10,000.
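A short sketch reproducing the solar-power cost arithmetic above; every input is simply a figure assumed in that paragraph:

```python
# Reproduce the land-cost / solar-power arithmetic from the text.
SECONDS_PER_YEAR = 3.15e7
SQ_METERS_PER_ACRE = 4047.0

land_cost_per_acre = 1000.0                     # dollars
annual_rent = 0.10 * land_cost_per_acre         # $100 per acre-year
peak_insolation_w_per_m2 = 1353.0               # maximum incident sunlight
peak_watts_per_acre = peak_insolation_w_per_m2 * SQ_METERS_PER_ACRE   # ~5.5e6 W
average_watts_per_acre = peak_watts_per_acre / 15.0                   # losses: ~3.5e5 W

joules_per_year = average_watts_per_acre * SECONDS_PER_YEAR           # ~1.1e13 J
kwh_per_year = joules_per_year / 3.6e6                                # ~3.1e6 kWh

cents_per_joule = (annual_rent * 100.0) / joules_per_year             # ~0.9 nanocents
cents_per_kwh = (annual_rent * 100.0) / kwh_per_year                  # ~3.3 millicents
cost_of_1e15_joules = 1e15 * cents_per_joule / 100.0                  # dollars, under $10,000

print(f"{joules_per_year:.2e} J/yr, {kwh_per_year:.2e} kWh/yr")
print(f"{cents_per_joule * 1e9:.2f} nanocents/J, {cents_per_kwh * 1e3:.2f} millicents/kWh")
print(f"10^15 joules cost about ${cost_of_1e15_joules:,.0f}")
```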

While the energy dissipation per logic operation estimated by Drexler is about 10^-23 joules, we'll content ourselves with the higher estimate of 10^-22 joules per logic operation. Our 10^15 joules will then power 10^37 gate operations: 10^12 gate operations for each bit in the structural data base, or 5 x 10^13 gate operations for each of the 2 x 10^23 lipid molecules present in the brain.
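And the gate-operation budget implied by those figures:

```python
# Gate-operation budget implied by the stated energy figures.
energy_budget_joules = 1e15
joules_per_gate_op = 1e-22          # the deliberately conservative estimate used above

total_gate_ops = energy_budget_joules / joules_per_gate_op     # 1e37 operations
ops_per_bit = total_gate_ops / 1e25                            # per bit of the data base
ops_per_molecule = total_gate_ops / 2e23                       # per significant molecule

print(f"{total_gate_ops:.0e} gate operations in total")
print(f"{ops_per_bit:.0e} per data base bit, {ops_per_molecule:.0e} per molecule")
```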

It should be emphasized that in off-board repair warming of the tissue is not an issue because the overwhelming bulk of the calculations and hence almost all of the energy dissipation takes place outside the tissue. Much of the computation takes place when the original structure has been entirely disassembled into its component molecules.

Is this enough computational power? We can get a rough idea of how much computer power might be required if we draw an analogy from image recognition. The human retina performs about 100 "operations" per pixel, and the human brain is perhaps 1,000 to 10,000 times larger than the retina. This implies that the human image recognition system can recognize an object after devoting some 10^5 to 10^6 "operations" per pixel. (This number is also in keeping with informal estimates made by individuals expert in computer image analysis.) Allowing for the fact that such "retinal operations" are probably more complex than a single "gate operation" by a factor of 1,000 to 10,000, we arrive at 10^9 to 10^10 gate operations per pixel - which is well below our estimate of 10^12 operations per bit or 5 x 10^13 operations per molecule.

To give a feeling for the computational power this represents, it is useful to compare it to estimates of the raw computational power of the human brain. The human brain has been variously estimated as being able to do 10^13, 10^15 or 10^16 operations a second (where "operation" has been variously defined but represents some relatively simple and basic action). The 10^37 total logic operations will support about 10^29 logic operations per second for three years, which is the raw computational power of something like 10^13 human beings (even when we use the high end of the range for the computational power of the human brain). This is 10 trillion human beings, or some 2,000 times more people than currently exist on the earth today. By present standards, this is a large amount of computational power. Viewed another way, if we were to divide the human brain into tiny cubes that were about 5 microns on a side (less than the volume of a typical cell), each such cube could receive the full and undivided attention of a dedicated human analyst for a full three years.
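The human-equivalent comparison can be checked the same way (three calendar years, and 10^16 operations per second as the high-end estimate for one human brain):

```python
# Human-equivalent comparison for the 1e37-operation budget over a 3-year repair.
SECONDS_PER_YEAR = 3.15e7

total_ops = 1e37
repair_seconds = 3 * SECONDS_PER_YEAR
ops_per_second = total_ops / repair_seconds          # ~1e29 operations per second

human_brain_ops_per_second = 1e16                    # high end of the quoted range
human_equivalents = ops_per_second / human_brain_ops_per_second   # ~1e13 "analysts"

print(f"{ops_per_second:.1e} ops/s sustained for three years")
print(f"equivalent to roughly {human_equivalents:.0e} human brains")
```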

The next paragraph analyzes memory costs, and can be skipped without loss of continuity.

This analysis neglects the memory required to store the complete state of these computations. Because this estimate of computational abilities and requirements depends on the capabilities of the human brain, we might require an amount of memory roughly similar to the amount of memory required by the human brain as it computes. This might require about 10^16 bits (10 bits per synapse) to store the "state" of the computation. (We assume that an exact representation of each synapse will not be necessary in providing capabilities that are similar to those of the human brain. At worst, the behavior of small groups of cells could be analyzed and implemented by the most efficient method; e.g., a "center surround" operation in the retina could be implemented as efficiently as possible, and would not require detailed modeling of each neuron and synapse. In point of fact, it is likely that algorithms employed in the human brain will prove to be the most efficient for this rather specialized type of analysis, and so our use of estimates derived from a low-level parts count of the human brain is likely to be conservative.) For 10^13 programs each equivalent in analytical skills to a single human being, this would require 10^29 bits. At 100 cubic nanometers per bit, this gives 10,000 cubic meters. Using the cost estimates provided by Drexler this would be an uncomfortable $1,000,000. We can, however, easily reduce this cost by partitioning the computation to reduce memory requirements. Instead of having 10^13 programs each able to "think" at about the same speed as a human being, we could have 10^10 programs each able to "think" at a speed 1,000 times faster than a human being. Instead of having 10 trillion dedicated human analysts working for 3 years each, we would have 10 billion dedicated human analysts working for 3,000 virtual years each. The project would still be completed in 3 calendar years, for each computer "analyst" would be a computer program running 1,000 times faster than an equally skilled human analyst. Instead of analyzing the entire brain at once, we would logically divide the brain into 1,000 pieces, each of about 1.4 cubic centimeters in size, and analyze each such piece fully before moving on to the next piece.
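A sketch of the memory-partitioning arithmetic above, using the stated assumptions (10^16 bits of state per human-scale analyst and 100 cubic nanometers per bit):

```python
# Memory requirement with and without partitioning the computation.
BITS_PER_ANALYST_STATE = 1e16        # ~10 bits per synapse for a human-scale "analyst"
NM3_PER_BIT = 100.0                  # assumed storage volume per bit
NM3_PER_M3 = 1e27

def memory_volume_m3(num_programs):
    total_bits = num_programs * BITS_PER_ANALYST_STATE
    return total_bits * NM3_PER_BIT / NM3_PER_M3

unpartitioned = memory_volume_m3(1e13)   # 10^13 human-speed programs -> ~10,000 m^3
partitioned = memory_volume_m3(1e10)     # 10^10 programs running 1,000x faster -> ~10 m^3

print(f"unpartitioned: {unpartitioned:,.0f} m^3")
print(f"partitioned:   {partitioned:,.0f} m^3 (a factor of {unpartitioned / partitioned:,.0f} smaller)")
```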

This reduces our memory requirements by a factor of 1,000 and the cost of that memory to a manageable $1,000.

It should be emphasized that the comparisons with human capabilities are used only to illustrate the immense capabilities of 10^37 logic operations. It should not be assumed that the software that will actually be used will have any resemblance to the behavior of the human brain.

I will now argue that even more computational power will in fact be available, and so our margins for error are much larger.

Energy loss in rod logic, in Likharev's parametric quantron, in properly designed NMOS and CMOS circuits, and in many other proposals for computational devices is related to speed of operation. By slowing down the operating speed from 100 picoseconds to 100 nanoseconds or even 100 microseconds we should achieve corresponding reductions in energy dissipation per gate operation. This will allow substantial increases in computational power for a fixed amount of energy (10^15 joules). We can both decrease the energy dissipated per gate operation (by operating at a slower speed) and increase the total number of gate operations (by using more gates). Because the gates are very small to start with, increasing their number by a factor of as much as 10^10 (to approximately 10^27 gates) would still result in a total volume of 100 cubic meters (recall that each gate plus overhead is about 100 cubic nanometers). This is a cube less than 5 meters on a side. Given that manufacturing costs will eventually reflect primarily material and energy costs, such a volume of slowly operating gates should be economical and would deliver substantially more computational power per joule.

We will not pursue this approach here for two main reasons. First, published analyses use the higher 100 picosecond speed of operation and 10^-22 joules of energy dissipation. Second, operating at 10^-22 joules at room temperature implies that most logic operations must be reversible and that less than one logic operation in 30 can be irreversible. Irreversible logic operations (which erase information) must inherently dissipate at least kT ln(2) for fundamental thermodynamic reasons. The average thermal energy of a single atom or molecule at a temperature T (measured in degrees K) is approximately kT, where k is Boltzmann's constant. At room temperature, kT is about 4 x 10^-21 joules. Thus, each irreversible operation will dissipate almost 3 x 10^-21 joules. The number of such operations must be limited if we are to achieve an average energy dissipation of 10^-22 joules per logic operation. While it should be feasible to perform computations in which virtually all logic operations are reversible (and hence need not dissipate any fixed amount of energy per logic operation), current computer architectures might require some modification to fully support this style of operation. By contrast, it should be feasible to use current computer architectures while at the same time performing a major percentage (e.g., 99% or more) of their logic operations in a reversible fashion.
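A small check of the thermodynamic argument: kT ln(2) at room temperature, and the fraction of irreversible operations compatible with an average of 10^-22 joules per operation:

```python
import math

# Landauer bound at room temperature and the allowed fraction of irreversible ops.
k_boltzmann = 1.38e-23          # J/K
T = 300.0                       # room temperature, kelvin

kT = k_boltzmann * T                        # ~4e-21 J
landauer_per_erased_bit = kT * math.log(2)  # ~3e-21 J per irreversible operation

target_average = 1e-22                      # desired average energy per logic operation
max_irreversible_fraction = target_average / landauer_per_erased_bit   # roughly 1/30

print(f"kT       = {kT:.2e} J")
print(f"kT ln(2) = {landauer_per_erased_bit:.2e} J")
print(f"at most 1 in {1 / max_irreversible_fraction:.0f} operations can be irreversible")
```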

Various electronic proposals show that almost all of the existing combinational logic in present computers can be replaced with reversible logic with no change in the instruction set that is executed. Further, while some instructions in current computers are irreversible and hence must dissipate at least kT ln(2) joules for each bit of information erased, other instructions are reversible and need not dissipate any fixed amount of energy if implemented correctly. Optimizing compilers could then avoid using the irreversible machine instructions and favor the use of the reversible instructions. Thus, without modifying the instruction set of the computer, we can make most logic operations in the computer reversible.

Further work on reversible computation can only lower the minimum energy expenditure per basic operation and increase the percentage of reversible logic operations. Much greater reductions in energy dissipation might be feasible. While it is at present unclear how far the trend towards lower energy dissipation per logic operation can go, it is clear that we have not yet reached a limit and that no particular limit is yet visible.

We can also expect further decreases in energy costs. By placing solar cells in space the total incident sunlight per square meter can be greatly increased (particularly if the solar cell is located closer to the sun), while at the same time the total mass of the solar cell can be greatly decreased. Most of the mass in earth-bound structures is required not for functional reasons but simply to ensure structural integrity against the forces of gravity and the weather. In space both these problems are virtually eliminated. As a consequence a very thin solar cell of relatively modest mass can have a huge surface area and provide immense power at much lower cost than estimated here.

If we allow for the decreasing future cost of energy and the probability that future designs will have lower energy dissipation than 10^-22 joules per logic operation, it seems likely that we will have a great deal more computational power than required. Even ignoring these more than likely developments, we will have adequate computational power for repair of the brain down to the molecular level.

Another issue is the energy involved in the complete disassembly and reassembly of every molecule in the brain. The total chemical energy stored in the proteins and lipids of the human brain is quite modest in comparison with 10^15 joules. When lipids are burned, they release about 9 kilocalories per gram. (Calorie-conscious dieters are actually counting "kilocalories," so a "300 calorie diet dinner" really has 300,000 calories, or 1,254,000 joules.) When protein is burned, it releases about 4 kilocalories per gram. Given that there are 100 grams of protein and 175 grams of lipid in the brain, this means there is almost 2,000 kilocalories of chemical energy stored in the structure of the brain, or about 8 x 10^6 joules. This much chemical energy is over 10^8 times less than the 10^15 joules that one person can reasonably purchase in the future. It seems unlikely that the construction of the human brain must inherently require substantially more than 10^7 joules, and even more unlikely that it could require over 10^15 joules. The major energy cost in repair down to the molecular level appears to be in the computations required to "think" about each major molecule in the brain and the proper relationships among those molecules.
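A quick check of the chemical-energy estimate, using the standard 9 kcal/g for lipid and 4 kcal/g for protein and the brain composition figures above:

```python
# Chemical energy stored in the brain's protein and lipid, per the figures above.
KCAL_TO_JOULES = 4184.0

protein_grams, protein_kcal_per_g = 100.0, 4.0
lipid_grams, lipid_kcal_per_g = 175.0, 9.0

total_kcal = protein_grams * protein_kcal_per_g + lipid_grams * lipid_kcal_per_g
total_joules = total_kcal * KCAL_TO_JOULES

print(f"{total_kcal:.0f} kcal = {total_joules:.1e} J")              # ~2,000 kcal, ~8e6 J
print(f"ratio to the 1e15 J energy budget: {1e15 / total_joules:.1e}")   # ~1e8
```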

In the second phase of the analysis, determination of the healthy state, we determine what the repaired (healthy) tissue should look like at the molecular level. That is, the initial structural data base produced by the analysis phase describes unhealthy (frozen) tissue.

In determination of the healthy state, we must generate a revised structural data base that describes the corresponding healthy (functional) tissue. The generation of this revised data base requires a computer program that has an intimate understanding of what healthy tissue should look like, and the correspondence between unhealthy (frozen) tissue and the corresponding healthy tissue. As an example, this program would have to understand that healthy tissue does not have fractures in it, and that if any fractures are present in the initial data base (describing the frozen tissue) then the revised data base (describing the resulting healthy tissue) should be altered to remove them. Similarly, if the initial data base describes tissue with swollen or non-functional mitochondria, then the revised data base should be altered so that it describes fully functional mitochondria. If the initial data base describes tissue which is infected (viral or bacterial infestations) then the revised data base should be altered to remove the viral or bacterial components.
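Purely as an illustration of the kind of program being described (the real thing would encode detailed biological knowledge, not a handful of hard-coded rules), a toy sketch in which repair rules rewrite entries of the structural data base:

```python
# Toy illustration of rules mapping the "frozen tissue" data base to a
# "healthy tissue" data base. The record fields, rules, and conditions are all
# invented for illustration only.
from typing import Callable, Dict, List

Record = Dict[str, object]          # one entry of the structural data base
Rule = Callable[[Record], Record]   # a rule maps an entry to its repaired form

def remove_fracture_artifacts(rec: Record) -> Record:
    rec = dict(rec)
    rec.pop("fracture_face", None)          # healthy tissue has no fracture faces
    return rec

def restore_mitochondria(rec: Record) -> Record:
    rec = dict(rec)
    if rec.get("organelle") == "mitochondrion" and rec.get("state") == "swollen":
        rec["state"] = "functional"
    return rec

def drop_infectious_agents(recs: List[Record]) -> List[Record]:
    return [r for r in recs if r.get("origin") not in ("viral", "bacterial")]

def determine_healthy_state(initial_db: List[Record], rules: List[Rule]) -> List[Record]:
    revised = list(initial_db)
    for rule in rules:
        revised = [rule(r) for r in revised]
    return drop_infectious_agents(revised)

if __name__ == "__main__":
    frozen = [{"organelle": "mitochondrion", "state": "swollen"},
              {"origin": "viral"},
              {"molecule": "lipid", "fracture_face": True}]
    print(determine_healthy_state(frozen, [remove_fracture_artifacts, restore_mitochondria]))
```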

While the revised data base describes the healthy state of the tissue that we desire to achieve, it does not specify the methods to be used in restoring the healthy structure. There is in general no necessary implication that restoration will or will not be done at some specific temperature, or will not be done in any particular fashion. Any one of a wide variety of methods could be employed to actually restore the specified structure. Further, the actual restored structure might differ in minor details from the structure described by the revised data base.

The complexity of the program that determines the healthy state will vary with the quality of the suspension and the level of damage prior to suspension. Clearly, if cryonic suspension "almost works" then the initial data base and the revised data base will not greatly differ. Cryonic suspension under favorable circumstances preserves the tissue with good fidelity down to the molecular level. If, however, there was significant pre-suspension injury, then deducing the correct (healthy) structural description is more complex. However, it should be feasible to deduce the correct structural description even in the face of significant damage. Only if the structure is obliterated beyond recognition will it be infeasible to deduce the undamaged state of the structure.

A brief philosophical aside is in order. Once we have generated an acceptable revised structural data base, we can in fact pursue either of two distinctly different possibilities. The obvious path is to continue with the repair process, eventually producing healthy tissue. An alternative path is to use the description in the revised structural data base to guide the construction of a different but "equivalent" structure (e.g., an "artificial brain"). This possibility has been much discussed, and has recently been called "uploading" (or "downloading"). Whether or not such a process preserves what is essentially human is often hotly debated, but it has advantages wholly unrelated to personal survival. As an example, the knowledge and skills of an Einstein or a Turing need not be lost; they could be preserved in a computational model. On a more commercial level, the creative skills of a Spielberg (whose movies have produced combined revenues in the billions) could also be preserved. Whether or not the computational model was viewed as having the same essential character as the biological human after which it was patterned, it would indisputably preserve that person's mental abilities and talents.

It seems likely that many people today will want complete physical restoration (despite the philosophical possibilities considered above) and will continue through the repair planning and repair phases.

#34 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 November 2003 - 02:50 AM

I appreciate the clarification, Bill. Thank you.

Jace

#35 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 02:51 AM

In the third phase of repair we start with an atomically precise description (the revised data base) of the structure that we wish to restore, and a filing cabinet holding the molecules that will be needed during restoration. Optionally, the molecules in the filing cabinet can be from the original structure. This deals with the concerns of those who want restoration with the original atoms. Our objective is to restore the original structure with a precision sufficient to support the original functional capabilities. Clearly, this would be achieved if we were to restore the structure with atomic precision. Before discussing this most technically exacting approach, we will briefly mention the other major approaches that might be employed.

We know it is possible to make a human brain, for this has been done by traditional methods for many thousands of years. If we were to adopt a restoration method that was as close as possible to the traditional technique for building a brain, we might use a "guided growth" strategy. That is, in simple organisms the growth of every single cell and of every single synapse is determined genetically. "All the cell divisions, deaths, and migrations that generate the embryonic, then the larval, and finally the adult forms of the roundworm Caenorhabditis elegans have now been traced." "The embryonic lineage is highly invariant, as are the fates of the cells to which it gives rise." The appendix says, "Parts list: Caenorhabditis elegans (Bristol) Newly Hatched Larva. This index was prepared by condensing a list of all cells in the adult animal, then adding comments and references. A complete listing is available on request." The adult organism has 959 cells in its body, 302 of which are nerve cells.

Restoring a specific biological structure using this approach would require that we determine the total number and precise growth patterns of all the cells involved. The human brain has roughly 10^12 nerve cells, plus perhaps ten times as many glial cells and other support cells. While simply encoding this complex a structure into the genome of a single embryo might prove to be overly complex, it would certainly be feasible to control critical cellular activities by the use of on-board nanocomputers. That is, each cell would be controlled by an on-board computer, and that computer would in turn have been programmed with a detailed description of the growth pattern and connections of that particular cell. While the cell would function normally in most respects, critical cellular activities, such as replication, mobility, and synapse growth, would be under the direct control of the on-board computer. Thus, as in C. elegans but on a larger scale, the growth of the entire system would be "highly invariant." Once the correct final configuration had been achieved, the on-board nanocomputers would terminate their activities and be flushed from the system as waste.

This approach might be criticized on the grounds that the resulting person was a mere "duplicate," and so "self" had not been preserved. Certainly, precise atomic control of the structure would appear to be difficult to achieve using guided growth, for biological systems do not normally control the precise placement of individual molecules. While the same atoms could be used as in the original, it would seem difficult to guarantee that they would be in the same places.

Concerns of this sort lead to restoration methods that provide higher precision. In these methods, the desired structure is restored directly from molecular components by placing the molecular components in the desired locations. A problem with this approach is the stability of the structure during restoration. Molecules might drift away from their assigned locations, destroying the structure.

An approach that we might call "minimal stabilization" would involve synthesis in liquid water, with mechanical stabilization of the various lipid membranes in the system. A three-dimensional grid or scaffolding would provide a framework that would hold membrane anchors in precise locations. The membranes themselves would thus be prevented from drifting too far from their assigned locations. To prevent chemical deterioration during restoration, it would be necessary to remove all reactive compounds (e.g., oxygen).

In this scenario, once the initial membrane "framework" was in place and held there by the scaffolding, further molecules would be brought into the structure and put in the correct locations. In many instances, such molecules could be allowed to diffuse freely within the cellular compartment into which they had been introduced. In some instances, further control would be necessary. For example, a membrane-spanning channel protein might have to be confined to a specific region of a nerve cell membrane and prevented from diffusing freely to other regions of the membrane. One method of achieving this limited kind of control over further diffusion would be to enclose a region of the membrane with a diffusion barrier (much as the spread of oil on water can be prevented by placing a floating barrier on the water).
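
A minimal sketch of what such a placement plan might look like as a data structure, assuming a purely hypothetical record format: membranes are anchored to the scaffold, soluble molecules are merely assigned to a compartment (free diffusion is acceptable), and selected membrane proteins carry an explicit confinement region.

# Sketch of a "minimal stabilization" placement plan. Names and numbers are
# invented placeholders; nothing here is a real molecular inventory.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Placement:
    molecule: str                      # e.g. "hexokinase", "voltage-gated Na channel"
    compartment: str                   # cellular compartment it belongs to
    anchored: bool = False             # True if tied directly to the scaffolding
    confinement: Optional[Tuple[float, float, float, float]] = None
    # confinement = (x, y, z, radius): region the molecule must stay inside

plan = [
    Placement("lipid bilayer patch 17", "axon membrane", anchored=True),
    Placement("hexokinase", "cytoplasm"),                       # free diffusion is fine
    Placement("voltage-gated Na channel", "axon membrane",
              confinement=(4.0, 0.0, 0.0, 0.5)),                # keep within a small patch
]

def needs_barrier(p: Placement) -> bool:
    """Only molecules with an explicit confinement region need a diffusion barrier."""
    return p.confinement is not None and not p.anchored

print([p.molecule for p in plan if needs_barrier(p)])   # ['voltage-gated Na channel']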

While it is likely that some further cases would arise in which it was necessary to prevent or control diffusion, the emphasis in this method is on providing the minimal control over molecular position needed to restore the structure.

While this approach does not achieve atomically precise restoration of the original structure, the kinds of changes it introduces (diffusion of a molecule within a cellular compartment, diffusion of a membrane protein within the membrane) would be very similar to the kinds of diffusion that take place in a normal biological system. Thus, the restored result would have the same molecules with the same atoms, and the molecules would be in locations similar (though not identical) to those they occupied prior to restoration.

To achieve even more precise control over the restored structure we might adopt a "full stabilization" strategy. In this strategy, each major molecule would be anchored in place, either to the scaffolding or to an adjacent molecule. This would require the design of a stabilizing molecule for each specific type of molecule found in the body. The stabilizing molecule would have a specific end attached to the target molecule and a general end attached either to the scaffolding or to another stabilizing molecule. Once restoration was complete, the stabilizing molecules would release the molecules being stabilized and normal function would resume. This release might be triggered by the simple diffusion of an enzyme that attacked and broke down the stabilizing molecules. This kind of approach was considered by Drexler.
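
The bookkeeping for full stabilization can be sketched in the same spirit. The toy code below assumes one stabilizer (tether) type per molecule type and models the enzyme-mediated release as simply clearing the list of tethers; all names are illustrative.

# Sketch of "full stabilization" bookkeeping. Each tether has a specific end
# (binds its target molecule type) and a general end (binds the scaffold or
# another tether). A final release pass models the diffusing enzyme that
# degrades the tethers once restoration is complete.

stabilizer_for = {            # molecule type -> stabilizer (tether) type, illustrative
    "actin": "tether-A",
    "tubulin": "tether-T",
    "hemoglobin": "tether-H",
}

tethers = []                  # records of (molecule instance, tether type, anchor point)

def stabilize(molecule_type, instance_id, anchor):
    tether = stabilizer_for[molecule_type]
    tethers.append((instance_id, tether, anchor))

def release_all():
    """Model the enzyme that cleaves every tether after restoration is complete."""
    tethers.clear()

stabilize("hemoglobin", "Hb-000231", anchor="scaffold node 42")
stabilize("actin", "actin-77", anchor="tether of tubulin-12")
print(len(tethers))   # 2 tethers holding molecules in place
release_all()
print(len(tethers))   # 0 -> normal function can resume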

Finally, we might achieve stability of the intermediate structure by using low temperatures. If the structure were restored at a sufficiently low temperature, a molecule put in a certain place would simply not move. We might call this method “low temperature restoration.”

In this scenario, each new molecule would simply be stacked (at low temperature) in the right location. This can be roughly likened to stacking bricks to build a house. A hemoglobin molecule could simply be thrown into the middle of the half-restored red blood cell. Other molecules whose precise position was not critical could likewise be positioned rather inexactly. Lipids in the lipid bilayer forming the cellular membrane would have to be placed more precisely (probably with an accuracy of several angstroms). An individual molecule, once positioned more or less correctly on a lipid bilayer under construction, would be held in place (at sufficiently low temperatures) by van der Waals forces. Membrane-bound proteins could also be "stacked" in their proper locations. Because biological systems make extensive use of self-assembly, it would not be necessary to achieve perfect accuracy in the restoration process. If a biological macromolecule is positioned with reasonable accuracy, it will automatically assume the correct position upon warming.

Large polymers, used either for structural or other purposes, pose special problems. The monomeric units are covalently bonded to each other, so simple "stacking" is inadequate. If such polymers cannot be added to the structure as entirely pre-formed units, then they could be incrementally restored during assembly from their individual monomers using the techniques discussed earlier involving positional synthesis with highly reactive intermediates. Addition of monomeric units to the polymer could then be done at the most convenient point during the restoration operation.

The chemical operations required to make a polymer from its monomeric units at reduced temperatures are unlikely to use the same reaction pathways used by living systems. In particular, the activation energies of most reactions that take place at 310 K (98.6 degrees Fahrenheit) cannot be met at 77 K; most conventional compounds simply do not react at that temperature. However, as discussed earlier, assembler-based synthesis techniques using highly reactive intermediates in near-perfect vacuum, with mechanical force providing the activation energy, will continue to work quite well even if we assume that thermal activation energy is entirely absent (i.e., that the system is close to 0 kelvin).
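
A back-of-the-envelope Arrhenius estimate shows why ordinary thermally activated chemistry is frozen out near 77 K. The 50 kJ/mol activation energy used below is an assumed "typical" value for a biochemical reaction, not a figure from the discussion above.

# Why conventional reactions effectively stop at liquid nitrogen temperature:
# Arrhenius rate ~ exp(-Ea / (R*T)); compare the rate at 310 K and 77 K.
import math

R = 8.314              # gas constant, J/(mol*K)
Ea = 50_000.0          # assumed activation energy, J/mol (illustrative)
T_body, T_LN2 = 310.0, 77.0

slowdown = math.exp(Ea / R * (1.0 / T_LN2 - 1.0 / T_body))
print(f"Thermally activated reactions run ~{slowdown:.1e} times slower at 77 K")
# Roughly 3e25 times slower: thermal activation is effectively absent, so
# mechanically driven (assembler-style) chemistry must supply the activation energy.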

An obvious problem with low temperature restoration is the need to re-warm the structure without incurring further damage. Much “freezing” injury takes place during rewarming, and this would have to be prevented.

Generally, the revised structural data base can be further altered to make restoration easier. While certain alterations to the structural data base must be banned (anything that might damage memory, for example), many alterations would be quite safe. One set of safe alterations would be those that correspond to real-world changes that are non-damaging. For example, moving sub-cellular organelles within a cell would be safe, since such motion occurs spontaneously in living tissue. Likewise, small changes in the precise physical location of cell structures that did not alter cellular topology would also be safe. Indeed, some operations that might at first appear dubious are almost certainly safe. For example, any alteration that produces damage that can be repaired by the tissue itself once it is restored to a functional state is in fact safe, though we might well seek to avoid such alterations (and they do not appear necessary). While the exact range of alterations that can be safely applied to the structural data base is unclear, it is evident that the range is fairly wide.

An obvious modification that would allow us to re-warm the structure safely would be to add cryoprotectants. Because we are restoring the frozen structure with atomic precision, we could use different concentrations and different types of cryoprotectants in different regions, thus matching the cryoprotectant requirements with exquisite accuracy to the tissue type. This is not feasible with present technology because cryoprotectants are introduced using simple diffusive techniques.
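
Purely as an illustration of what region-by-region matching would mean, one can picture the cryoprotectant plan as a small table. The region names, agents, and concentrations below are invented placeholders, not recommendations.

# Hypothetical per-region cryoprotectant "recipe" table: the point is only that
# agent and concentration can be chosen per tissue type instead of by bulk diffusion.

cryoprotectant_plan = {
    # region             (agent,      % w/v) -- all values are placeholders
    "gray matter":       ("glycerol",  30),
    "white matter":      ("DMSO",      25),
    "capillary lumen":   ("sucrose",   10),
}

for region, (agent, pct) in cryoprotectant_plan.items():
    print(f"{region}: load {pct}% {agent}")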

Extremely precise control over the heating rate would also be feasible, as would very rapid heating. Rapid heating would allow less time for damage to take place. Rapid heating, however, might introduce problems of stress and resulting fractures. Two approaches for eliminating this problem are (1) modifying the structure so that the coefficient of thermal expansion is very small and (2) increasing the strength of the structure.

One simple method of ensuring that the volume occupied before and after warming was the same (i.e., of making a material with a very small thermal expansion coefficient) would be to disperse many small regions with the opposite thermal expansion tendency throughout the material. For example, if a volume tended to expand upon warming, the initial structure could include "nanovacuoles," regions about a nanometer in diameter which are empty. Such regions would be stable at low temperatures but would collapse upon warming. By finely dispersing such nanovacuoles it would be possible to eliminate any tendency of even small regions to expand on heating. Most materials expand upon warming, a tendency which can be countered by the use of nanovacuoles.
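
A rough calculation suggests how little empty volume would be needed. Assuming an illustrative volumetric expansion coefficient (not a measured property of frozen tissue), the required nanovacuole fraction is just the fractional expansion over the warming range.

# Rough arithmetic for the nanovacuole idea: if a region expands by a fraction f
# on warming, dispersing the same fraction f of collapsible empty volume cancels
# the net size change. The expansion coefficient below is an assumed value.

beta = 1.5e-4                 # assumed volumetric thermal expansion coefficient, 1/K
T_cold, T_warm = 77.0, 310.0
delta_T = T_warm - T_cold

expansion_fraction = beta * delta_T       # fractional volume increase on warming
vacuole_fraction = expansion_fraction     # empty volume needed to absorb it

print(f"Expansion over {delta_T:.0f} K: {expansion_fraction:.1%}")
print(f"Nanovacuole volume fraction needed: ~{vacuole_fraction:.1%}")
# With ~1 nm vacuoles, a few percent of empty volume can be dispersed finely
# enough that no small region has a net tendency to expand.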

Of course, ice has a smaller volume after it melts, and the introduction of nanovacuoles would only exacerbate its tendency to shrink upon melting. In this case we could use vitrified H2O rather than the usual crystalline variety. H2O in the vitreous state is disordered (as in the liquid state) even at low temperatures and has a lower volume than crystalline ice. This eliminates, and even reverses, the tendency to contract on warming: vitrified water at low temperature is denser than liquid water at room temperature.

Increasing the strength of the material can be done in any of a variety of ways. A simple method would be to introduce long polymers into the frozen structure. Proteins are one class of strong polymers that could be incorporated into the structure with minimal tissue-compatibility concerns. Any potential fracture plane would be criss-crossed by the newly added structural protein, and so fractures would be prevented. By also including an enzyme to degrade this artificially introduced structural protein, it would be automatically and spontaneously digested immediately after warming. Very large increases in strength could be achieved by this method.

By combining (1) rapid, highly controlled heating, (2) atomically precise introduction of cryoprotectants, (3) the addition of small nanovacuoles and the use of vitrified H2O to reduce or eliminate thermal expansion and contraction, and (4) the addition of structural proteins to protect against any remaining thermally induced stresses, the damage that might otherwise occur during rewarming should be completely avoidable.

#36 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 02:58 AM

I appreciate the clarification, Bill. Thank you.

Jace


No problem Jace,
As a side note, I spend an unusual amount of time reading and writing every winter. Sometimes I don't leave the house for days. I write too much to really spend much time tracking it all. You live in a warmer climate, correct?

#37 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 16 November 2003 - 03:00 AM

If I were to bet on it, I would say that significantly increased lifetimes are well within our reach, and that I will live to see it. I would further bet that cryonic suspension can transport a terminally ill person to the future, and that the damage done by current freezing methods is likely to be reversible at some point in the future.
In general, for cryonics to fail, one of the following “failure criteria” must be met.
1). Pre-suspension and suspension injury would have to be sufficient to cause information theoretic death. In the case of the human brain, the damage would have to obliterate the structures encoding human memory and personality beyond recognition.
2). Repair technologies that are clearly feasible in principle based on our current understanding of physics and chemistry would have to remain undeveloped in practice, even after several centuries.

An examination of potential future technologies supports the argument that unprecedented capabilities are likely to be developed. Restoration of the brain down to the molecular level should eventually prove technically feasible. Off-board repair utilizing divide-and-conquer is a particularly simple and powerful method which illustrates some of the principles that future technologies could use to restore tissue. Calculations support the idea that this method, if implemented, would be able to repair the human brain within about three years. For several reasons, better methods are likely to be developed and used in practice.

Off-board repair consists of three major steps: (1) determine the coordinates and orientation of each major molecule; (2) determine a set of appropriate coordinates in the repaired structure for each major molecule; (3) move each molecule from the former location to the latter. The various technical problems involved are likely to be met by future advances in technology. Because storage times in liquid nitrogen can literally extend for several centuries, the development time of these technologies is not critical.

A broad range of technical approaches to this problem is feasible. The particular form of off-board repair that uses divide-and-conquer requires only that (1) the tissue can be divided by some means (such as fracturing) which does not itself cause significant loss of structural information, (2) the pieces into which the tissue is divided can be moved to appropriate destinations (for further division or for direct analysis), (3) a sufficiently small piece of tissue can be analyzed, (4) a program capable of determining the healthy state of the tissue given its unhealthy state is feasible, (5) sufficient computational resources for execution of this program in a reasonable time frame are available, and (6) restoration of the original structure given a detailed description of that structure is feasible.
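
A minimal Python sketch of the divide-and-conquer recursion may make the structure of the method clearer. The split() and analyze() functions are placeholders for capabilities the text argues are feasible in principle; nothing here is a real instrument interface, and the volume threshold is an assumption.

# Divide-and-conquer off-board repair, as a recursion: split a tissue volume
# until pieces are small enough to analyze directly, then merge the descriptions.

ANALYZABLE_VOLUME = 1.0e-18      # assumed largest directly analyzable piece, m^3

def split(piece):
    """Divide a piece (e.g. by low-temperature fracturing) into two halves."""
    half = dict(piece, volume=piece["volume"] / 2.0)
    return [half, dict(half)]

def analyze(piece):
    """Placeholder: return the molecular description of one small piece."""
    return [f"molecule map of {piece['volume']:.2e} m^3 fragment"]

def describe(piece):
    """Recursively build the structural description of an arbitrary piece."""
    if piece["volume"] <= ANALYZABLE_VOLUME:
        return analyze(piece)                    # base case: analyze directly
    left, right = split(piece)                   # divide...
    return describe(left) + describe(right)      # ...and merge the results

test_piece = {"volume": 8.0e-18}                 # small example that runs quickly
print(len(describe(test_piece)))                 # 8 fragments, each directly analyzable
# A whole brain (~1.4e-3 m^3) would yield on the order of 10^15 such fragments,
# which is why massive parallelism is assumed for the analysis step; computing
# the healthy state and restoring the structure (requirements 4-6 above) follow
# from the merged description.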

It is impossible to conclude based on present evidence that either failure criterion is likely to be met.

Further study of cryonics by the technical community is needed. At present, there is a remarkable paucity of technical papers on the subject. As should be evident from this letter, multidisciplinary analysis is essential in evaluating its feasibility, for specialists in any single discipline have a background too narrow to encompass the whole. Given the life-saving nature of cryonics, it would be tragic if it were to prove feasible but was little used by our society.

#38 thefirstimmortal

  • Topic Starter
  • Life Member The First Immortal
  • 6,912 posts
  • 31

Posted 20 November 2003 - 04:11 PM

Bob Ettinger says, most are cold, but few are frozen. There are very few people currently in the deep freeze, although there are some persons who would be, except that they haven’t yet happened to die.



