LongeCity - Advocacy & Research for Unlimited Lifespans





uploading - still you or just a copy?



#31 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 12:53 AM

Cryonics and uploading are perfectly realistic.


Forgive me, but I am not entirely sure how uploading is perfectly realistic whereas the philosopher's zombie is a flying spaghetti monster! :)

#32 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 01:08 AM

That they are all still considered sci-fi? Interesting, compelling, but unrealistic?

Cryonics and uploading are perfectly realistic.

My annoyance is the fact that...

The popular opinion is a very poorly defined, nonsensical philosophical position based on supernatural phenomena! It is accepted dogmatically without any basis in facts, physics, or any evidence at all!

... and everything else said above by me and in the links to Yudkowsky, when this is such a fundamental philosophical hurdle to things as important as cryonics, and as philosophically basic as reductionism.

One thing about the Philosopher's Zombie I appreciate is that it is a thought experiment that is non-quantitative and yet it has become kind of well known in many circles and also has the potential to advance the frontiers of science. Pretty cool.

It has no potential to advance the frontiers of science in any way. It offers no testable hypotheses, there is no associated evidence, or anything scientific about it at all. It is a purely philosophical construct exactly like flying spaghetti monsters.

There is little reliance on philosophy in the postmodern age to advance scientific knowledge.

That's not true. Philosophy is integral in any field of science. Most fields have matured enough that the philosophical problems have been long settled. Other fields are still young enough that philosophy is unsettled to varying degrees, such as in evolution, or artificial intelligence.

Imagine, a genuine advance in applied science via pure philosophy!

That's hilarious because it makes absolutely no sense, and it really further highlights the absurdity of zombie-ism.

from wikipedia: "Science"

Science (from the Latin scientia, meaning "knowledge" or "knowing") is the effort to discover, and increase human understanding of how the physical world works. Through controlled methods, scientists use observable physical evidence of natural phenomena to collect data, and analyze this information to explain what and how things work...


You get nowhere with "pure philosophy" in science.

For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

--Eliezer Yudkowsky, Twelve Virtues of Rationality



Forgive me, but I am not entirely sure how uploading is perfectly realistic whereas the philosopher's zombie is a flying spaghetti monster!

well... that's exactly the whole point of this thread, isn't it?

generally, if you accept that zombie-ism is bunk, uploading is perfectly realistic.

Edited by Savage, 14 November 2008 - 01:14 AM.



#33 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 01:10 AM

For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

I can see that I deeply and fundamentally disagree with this person, and I have a real-life example to demonstrate it, a rarity in my rarefied, false-happy mindscape:

I am employed in telecom: I do site acquisition at almost every level from title examination to zoning, permitting, RF compliance, coverage areas, etc., and I recently had to determine whether a zoning variance was required for a rooftop installation.

Darn, I don't have the attention span to explicate this matter entirely, but, in brief:
Believe it or not (and I can't let the telecom carrier know this! :) ), I rested back in bed and astrally traveled to the rooftop in order to make the determination.

I later visited the site, but learned nothing, was unable to access the rooftop, and was more muddled afterward than anything:
The abstraction and vividness of psychic travel are often impossible to match with physical travel, at first anyway.
Yes, some basic archetypes and universals are necessary at first: I need to know what a town or city is and what its components are, but, thenceforth, I may fly and conjure, create and collect at a fanciful whim.

I just laughed out loud thinking of what the project manager would say about this method of verification.
Well, since the analysis was correct, he may not care: show me the money, he would likely say.

In learning a field anew, psychic journeys are as meaningful and important as quantitative facts, imho.

Note: astral travel, psychic journey, and similar phrasings are not meant to imply new-age mysticism or gullibility.
They are just convenient ways to express learning that comes purely from within and does not involve deduction, logic, or anything more exact.

Edited by paulthekind, 14 November 2008 - 01:10 AM.


#34 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 01:19 AM

For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

Note: astral travel, psychic journey, and similar phrasings are not meant to imply new-age mysticism or gullibility.
They are just convenient ways to express learning that comes purely from within and does not involve deduction, logic, or anything more exact.

The point is that you must draw lines that correspond to physical reality.

What you said was not a deep or fundamental disagreement, unless you actually are implying new-age mysticism or other supernatural/psychic phenomena.

When you draw lines that have no correspondence to any physical reality, you are creating a flying spaghetti monster. So when you assert that zombies are possible without having any basis in facts, physics, or any evidence at all, you are drawing lines with no correspondence to any physical reality, and thus I assert that zombies are exactly like flying spaghetti monsters.

Edited by Savage, 14 November 2008 - 01:24 AM.


#35 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 01:25 AM

Employing the lengthy thought example of replacing one neuron at a time as a means of 'proving' uploading hardly seems more convincing than utilizing the zombie example as a means of 'proving' epiphenomenalism.

What am I failing to see? They seem equally likely or unlikely, equally persuasive or unpersuasive.

#36 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 01:27 AM

For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

Note: astral travel, psychic journey, and similar phrasings are not meant to imply new-age mysticism or gullibility.
They are just convenient ways to express learning that comes purely from within and does not involve deduction, logic, or anything more exact.

The point is that you must draw lines that correspond to physical reality.

What you said was not a deep or fundamental disagreement, unless you actually are implying new-age mysticism or other supernatural/psychic phenomena.

When you draw lines that have no correspondence to any physical reality, you are creating a flying spaghetti monster. So when you assert that zombies are possible without having any basis in facts, physics, or any evidence at all, you are drawing lines with no correspondence to any physical reality, and thus I assert that zombies are exactly like flying spaghetti monsters.

And what separates the evidence for uploading from this dustbin?

#37 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 01:29 AM

Employing the lengthy thought example of replacing one neuron at a time as a means of 'proving' uploading hardly seems more convincing than utilizing the zombie example as a means of 'proving' epiphenomenalism.

What am I failing to see? They seem equally likely or unlikely, equally persuasive or unpersuasive.


And what separates the evidence for uploading from this dustbin?


I don't know what you are asking.

Edited by Savage, 14 November 2008 - 01:29 AM.


#38 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 01:34 AM

I don't know what you are asking.


So when you assert that zombies are possible without having any basis in facts, physics, or any evidence at all, you are drawing lines with no correspondence to any physical reality, and thus I assert that zombies are exactly like flying spaghetti monsters.

I may be missing something, but from what I know thus far, there is the same lack of any basis in fact and physics whatsoever for uploading as there is for zombies, the same lack of any correspondence to physical reality.

Where is the quantitative difference?
What actual evidence of any sort exists for uploading?
It seems as purely speculative as the zombie, as purely philosophical, with perhaps a tinge of sci-fi added for good measure, mostly due to Star Trek and Isaac Asimov, I would suppose.

#39 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 01:43 AM

I don't know what you are asking.


So when you assert that zombies are possible without having any basis in facts, physics, or any evidence at all, you are drawing lines with no correspondence to any physical reality, and thus I assert that zombies are exactly like flying spaghetti monsters.

I may be missing something, but from what I know thus far, there is the same lack of any basis in fact and physics whatsoever for uploading as there is for zombies, the same lack of any correspondence to physical reality.

Where is the quantitative difference?
What actual evidence of any sort exists for uploading?
It seems as purely speculative as the zombie, as purely philosophical, with perhaps a tinge of sci-fi added for good measure, mostly due to Star Trek and Isaac Asimov, I would suppose.

Brains, neurons, and all of biology exist within physics. If you don't buy into the quantum mind theories, then uploading is perfectly possible within physical reality, and not speculative (in the sense of supernatural zombies) or purely philosophical at all.

Check out the 2008 technical report Whole Brain Emulation: A Roadmap by Anders Sandberg and Nick Bostrom.

This is from pages 13-14.

Table 2: Levels of emulation

1. Computational module: "Classic AI", high-level representations of information and information processing.
2. Brain region connectivity: Each area represents a functional module, connected to others according to a (species universal) "connectome" (Sporns, Tononi et al., 2005).
3. Analog network population model: Neuron populations and their connectivity. Activity and states of neurons or groups of neurons are represented as their time-averages. This is similar to connectionist models using ANNs, rate-model neural simulations and cascade models.
4. Spiking neural network: As above, plus firing properties, firing state and dynamical synaptic states. Integrate-and-fire models, reduced single-compartment models (but also some minicolumn models, e.g. Johansson and Lansner, 2007).
5. Electrophysiology: As above, plus membrane states (ion channel types, properties, state), ion concentrations, currents and voltages. Compartment model simulations.
6. Metabolome: As above, plus concentrations of metabolites in compartments.
7. Proteome: As above, plus concentrations of proteins and gene expression levels.
8. States of protein complexes: As above, plus quaternary protein structure.
9. Distribution of complexes: As above, plus "locome" information and internal cellular geometry.
10. Stochastic behaviour of single molecules: As above, plus molecule positions, or a molecular mechanics model of the entire brain.
11. Quantum: Quantum interactions in and between molecules.


WBE assumptions
Philosophical assumptions
Physicalism (everything supervenes on the physical) is a convenient but not necessary
assumption, since some non‐physicalist theories of mental properties could allow them to
appear in the case of WBE. Success criterion 6b emulation assumes multiple realizability
(that the same mental property, state, or event can be implemented by different physical
properties, states, and events). Sufficient apparent success with WBE would provide
persuasive evidence for multiple realizability. Generally, emulation up to and including level
6a does not appear to depend on any strong metaphysical assumptions.


Computational assumptions
Computability: brain activity is Turing‐computable, or if it is uncomputable, the
uncomputable aspects have no functionally relevant effects on actual behaviour.

Non‐organicism: total understanding of the brain is not required, just component parts and
their functional interactions.

Scale separation: at some intermediary level of simulation resolution between the atomic and
the macroscopic there exists one (or more) cut‐offs such that meeting criterion 2 at this level is
sufficient for meeting one or more of the higher criteria.

Component tractability: the actual brain components at the lowest emulated level can be
understood well enough to enable accurate simulation.

Simulation tractability: simulation of the lowest emulated level is computationally tractable
with a practically realizable computer.

Neuroscience assumptions
Brain‐centeredness: in order to produce accurate behaviour only the brain and some parts of
the body need to be simulated, not the entire body.

WBE appears to be a way of testing many of these assumptions experimentally. In acquiring
accurate data about the structure and function of the brain and representing it as emulations
it should be possible to find major discrepancies if, for example, Computability is not true.
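
For a concrete sense of what that table means in practice, here is my own toy sketch (not anything from the report, and the parameter values are made up): a level-4 "spiking neural network" entry corresponds roughly to simulating neurons as leaky integrate-and-fire units. Something like the following few lines of Python is the basic object a level-4 emulation would have to replicate billions of times over, with realistic parameters and full connectivity:

    # Toy leaky integrate-and-fire neuron (illustrative parameters only,
    # not taken from the report).
    dt = 0.1          # ms, integration time step
    tau_m = 20.0      # ms, membrane time constant
    v_rest = -70.0    # mV, resting potential
    v_thresh = -54.0  # mV, spike threshold
    v_reset = -80.0   # mV, post-spike reset
    r_m = 10.0        # megaohm, membrane resistance

    v = v_rest
    spike_times = []
    for step in range(10000):                     # 1 second of simulated time
        t = step * dt
        i_inj = 2.0 if 200 <= t <= 800 else 0.0  # nA, injected current pulse
        v += (-(v - v_rest) + r_m * i_inj) / tau_m * dt
        if v >= v_thresh:                         # threshold crossing -> spike
            spike_times.append(t)
            v = v_reset

    print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")

Levels 5 and up in the table add the membrane, chemical and molecular detail on top of this kind of dynamics; level 3 throws the individual spikes away and keeps only rates.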


Edited by Savage, 14 November 2008 - 01:57 AM.


#40 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 02:09 AM

Reproduction, recreation, duplication...certainly. This report is a development.

But, I was under the assumption that the crux of the entire matter, and indeed of most of consciousness studies, and of the zombie, is whether the 'uploaded' or duplicated entity would be the same 'me' as the original entity, and, in this respect, Table 2 and the related assumptions, though a useful summary, offer no evidence.

The technical report undoubtedly borrows from mechanical and software engineering in fashioning a possible method for uploading. But engineering is generally process, and it lacks metaphysical transcendence.

#41 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 02:15 AM

You asked whether uploading had any basis in physical reality, and I'm telling you it does.

Zombies have no basis in physical reality, by definition.

The whole point of zombies is that you can have a person who is exactly the same in all physical respects, yet does not have some specific supernatural quality, and thus they are a zombie.

Hence my statement earlier that a zombie is exactly like a flying spaghetti monster.

Edited by Savage, 14 November 2008 - 02:20 AM.


#42 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 02:17 AM

In fact, from the little I have gathered thus far, the authors of the report willfully or innocently cloak a huge gap in knowledge and realizability with an exact description of what is known thus far. And, upon closer, Heisenbergian examination, the 'exactness' of the various levels is really quite threadbare and ungrounded, especially in the admittedly hazy scale-separation areas.

Well, I guess it is useful to describe what is known hitherto so that the problem junction gleams brighter via contrast.

But, this is a commendable report. It must be said. A great intro for someone looking to get up to speed on a bunch of exciting fields of mind science. :)

#43 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 02:28 AM

You asked whether uploading had any basis in physical reality, and I'm telling you it does.

Zombies have no basis in physical reality, by definition.

The whole point of zombies is that you can have a person who is exactly the same in all physical respects, yet does not have some specific supernatural quality, and thus they are a zombie.

I see.
Then I am in agreement.
Uploading is possible if described thus.

But, I would like to say: please do not too easily give in to those monists and materialists who try to diminish the importance of consciousness: saying it is not a mystery, that it is obviously a brain byproduct.
They are in well over their heads and have nothing to support their claims. They just sound skeptical and, therefore, convincing to some.
But, in truth, they are as clueless and uncertain as any fortune teller.

Consciousness is THE mystery. Anyone who proposes to take the mystery out of it with dismissive, devaluing remarks just does not get it.
It has so many perturbations that dwelling on any too long can cause one to doubt all.

I personally believe that the mystery of consciousness will not be solved one neuron at a time via some genome project-like mapping, but, rather, will be solved subjectively, qualitatively, if it is ever solved.
But, to reassert: I believe we are seeing eye to eye at the moment in some respects.

#44 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 03:05 AM

You asked whether uploading had any basis in physical reality, and I'm telling you it does.

Zombies have no basis in physical reality, by definition.

The whole point of zombies is that you can have a person who is exactly the same in all physical respects, yet does not have some specific supernatural quality, and thus they are a zombie.

I see.
Then I am in agreement.
Uploading is possible if described thus.

But, I would like to say: please do not too easily give in to those monists and materialists who try to diminish the importance of consciousness: saying it is not a mystery, that it is obviously a brain byproduct.
They are in well over their heads and have nothing to support their claims. They just sound skeptical and, therefore, convincing to some.
But, in truth, they are as clueless and uncertain as any fortune teller.

Consciousness is THE mystery. Anyone who proposes to take the mystery out of it with dismissive, devaluing remarks just does not get it.
It has so many perturbations that dwelling on any too long can cause one to doubt all.

...

But, to reassert: I believe we are seeing eye to eye at the moment in some respects.

If consciousness is not produced by the brain, then by what? The toes?
Or are you just throwing an unspecified, unobservable, supernatural quality back into this mix, in exactly the same manner as zombie-ism and flying spaghetti monsters?

I think you are putting too much value in "Grand Mysteries".

Mysteriousness exists in your mind, not out in reality.

read Yudkowsky on Mysterious Answers to Mysterious Questions

To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.


"That which can be destroyed by the truth should be" - P. C. Hodgell, oft quoted by Eliezer Yudkowsky

To understand something, or even to suppose it is possible to understand something, is not to diminish its importance or value in any way.

Ah.. there is another terrific quote here that I have forgotten. It says something to the effect of "Do the stars shine less brightly because we now understand nuclear physics?" (yes I butchered that one, but that's the idea. I wish I could find the real quote here...)

But I do agree that consciousness is quite possibly the greatest intellectual challenge humanity will ever face. More have been driven insane than have produced something of use on the subject. Consciousness is extremely hard to understand and should not be underestimated.

Edited by Savage, 14 November 2008 - 03:55 AM.


#45 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 06:24 AM

If consciousness is not produced by the brain, then by what? The toes?
Or are you just throwing an unspecified, unobservable, supernatural quality back into this mix, in exactly the same manner as zombie-ism and flying spaghetti monsters?

I think you are putting too much value in "Grand Mysteries".

Possibly. I think that what consciousness directly experiences is somehow brain-field generated, and not the physical universe, which I do not think consciousness experiences directly at all.
The eminent Mr. Yudkowsky acknowledged this partially, without agreeing, by epiphenomenally describing consciousness as the feckless listener and not the speaker, I believe.

Mysteriousness exists in your mind, not out in reality.

True, but some realities are so far removed from our ability to grasp that they are grand mysteries: relatively speaking, of course.

read Yudkowsky on Mysterious Answers to Mysterious Questions

I had never heard of this person before today, but it appears that he is a rock star in the uploading and consciousness areas.

To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.

Tell that to John Keats, Samuel Taylor Coleridge, Tesla and Goya... but that might not sway Mr. Yudkowsky, who, regarding his own self-perceived intelligence, would probably say: 'Let me put it to you this way: ever hear of Plato, Aristotle, Socrates? ...Morons.' (Princess Bride quote, btw!)

Ah.. there is another terrific quote here that I have forgotten. It says something to the effect of "Do the stars shine less brightly because we now understand nuclear physics?" (yes I butchered that one, but that's the idea. I wish I could find the real quote here...)

Nice! An implied appreciation of qualia here, of the experience of redness as opposed to its description. This person sounds like a dualist fully aware of the hard problem. He sees the experience of a star's brightness as a constancy, as beauty, separate from quantitative knowledge: pure dualism.

But I do agree that consciousness is quite possibly the greatest intellectual challenge humanity will ever face. More have been driven insane than have produced something of use on the subject.

This may be one of those fields of study that turns quacks into meaningful contributors, because such a field really needs the wild, playful, ridiculous stuff that only quacks come up with to jar it into forward motion.

Cool stuff.

#46 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 November 2008 - 01:06 PM

Possibly. I think that what consciousness directly experiences is somehow brain-field generated, and not the physical universe, which I do not think consciousness experiences directly at all.

Ok, so we are right back where we started:

just throwing an unspecified, unobservable, supernatural quality back into this mix, in exactly the same manner as zombie-ism and flying spaghetti monsters



#47 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 14 November 2008 - 01:55 PM

Ok, so we are right back where we started:

Wouldn't have it any other way! :)
Circular reasoning rules.

#48 Heliotrope

  • Guest
  • 1,145 posts
  • 0

Posted 20 November 2008 - 03:57 PM

I guess it's a connected copy, similar to a clone.

#49 Clifford Greenblatt

  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 01 December 2008 - 10:02 AM

I think that a fundamental assumption in mind uploading is that the mind is an information system, and preservation of information is what counts in the preservation of the person. What is missed is the significance of sentience to a person’s identity.

I see sentience as a most profound phenomenon that is associated with and yet distinct from the data processes of the mind. There is a great diversity of information processes in a person’s mind, but I see sentience as possessing a fundamental unity throughout all of its associations with that diversity of information processes. This is analogous to a universe having the same physical constants throughout its spacetime despite its vast diversity of processes. Just as two different universes, with two different sets of physical constants, could have some similar physical processes going on in them, so could two different persons have some similar data processes going on in their minds but have a fundamental difference in their sentient identity.

Unity of sentience within a person is not something I can prove but is something that I perceive through strong intuition, like I perceive myself as a sentient being through strong intuition. If all the information in a mind is uploaded, but the identity of the person’s sentience is not preserved, then the person is lost. If the identity of a person’s sentience is preserved, then the person is preserved, even if a great deal of information is changed.

#50 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 December 2008 - 02:20 PM

I think that a fundamental assumption in mind uploading is that the mind is an information system, and preservation of information is what counts in the preservation of the person.



I agree with this observation but I suggest that when you say:

What is missed is the significance of sentience to a person’s identity.


It is you that is missing the significance by addressing the issue of information as limited to memory and what amounts to *data*.

Software is a form of information, which is determined not only by its content (form) but by its function as well. Identity and *sentience* are more analogous to an operating system than to memory alone. It is not the preservation of just the memory of experience that matters, but how that memory has contributed to the organization of a *being* in the form of its *OS*, which is organized uniquely based on its specific memory.
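
To put that analogy in concrete terms, here is a toy sketch of my own (the classes and events are made up, nothing from the thread): two systems can hold identical memories yet be different *beings*, because the rule that assimilates new experience differs.

    # Toy illustration: identity as memory *plus* the rule that processes new
    # experience, not memory alone.
    class Mind:
        def __init__(self, memories, assimilate):
            self.memories = list(memories)   # the stored *data*
            self.assimilate = assimilate     # the *OS*: how experience is integrated

        def experience(self, event):
            self.memories.append(self.assimilate(self.memories, event))

    cautious = Mind(["fell off a bike"], lambda mem, e: "avoid " + e)
    reckless = Mind(["fell off a bike"], lambda mem, e: "try " + e + " again")

    for mind in (cautious, reckless):
        mind.experience("cycling downhill")

    print(cautious.memories)  # ['fell off a bike', 'avoid cycling downhill']
    print(reckless.memories)  # ['fell off a bike', 'try cycling downhill again']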

I see sentience as a most profound phenomenon that is associated with and yet distinct from the data processes of the mind. There is a great diversity of information processes in a person’s mind, but I see sentience as possessing a fundamental unity throughout all of its associations with that diversity of information processes. This is analogous to a universe having the same physical constants throughout its spacetime despite its vast diversity of processes. Just as two different universes, with two different sets of physical constants, could have some similar physical processes going on in them, so could two different persons have some similar data processes going on in their minds but have a fundamental difference in their sentient identity.


DNA, for example, is not as deterministic as most view it, and as such it forms a biological equivalent of a *plastic* OS that continues its organization as it grows and assimilates experience; hence the view of sentience as information covers both these aspects (function and form of information), and perhaps more that are not yet fully understood. The difficult issue for many strict materialists is that it introduces a new form of duality they philosophically reject, or it forces strict materialists to develop a new definition of material that better addresses the paradigm of information.

Unity of sentience within a person is not something I can prove but is something that I perceive through strong intuition, like I perceive myself as a sentient being through strong intuition. If all the information in a mind is uploaded, but the identity of the person’s sentience is not preserved, then the person is lost. If the identity of a person’s sentience is preserved, then the person is preserved, even if a great deal of information is changed.


I think we are in basic accord with respect to what you identify as a "unity of sentience" but what we may yet disagree on is what defines the different aspects of that *unity*. The perspective I present above also depends on such a unity but can still be defined as sufficiently about information alone to make uploading possible in theory. The issue then becomes one of transcription methodology more than fundamental impossibility.

The real debate is how much of the biology (DNA and brain) is essential to that sentience as opposed to what part is merely wetware versus software. That is unless what you are trying to appeal to is the third option of an immaterial *soul* as the core of identity. That is a debate I am not sure would be fruitful.

I also want to add how good it is to see you contributing here again, Clifford. It has been a long time since your last visit.

Edited by Lazarus Long, 01 December 2008 - 04:52 PM.
clarification and some added ideas


#51 squ1d

  • Guest
  • 18 posts
  • 0

Posted 01 December 2008 - 07:33 PM

Making a small parenthesis:

I believe consciousness to be the generator of wave functions that constantly collapse into thought. What is it? Probably the whole brain acts as a quantum computer with some *it* that makes everyone think differently. Can we capture this *it*? Can we move consciousness into another quantum computer?

I don't know.

But I wouldn't be the one to try that. I'd rather live infinitely with my own brain and body. At least the brain, anyway.


And by the way, the argument of changing every neuron to nano units that do the same job is not valid. All that you know is that your brain will be mechanical and work the same way. Not that your consciousness will be intact afterwards.


PS: A fascinating read on the subject is Penrose's books. Fascinating, and in my opinion right on target. Especially "The Emperor's New Mind".

Edited by squ1d, 01 December 2008 - 07:38 PM.


#52 Vgamer1

  • Guest, F@H
  • 763 posts
  • 39
  • Location:Los Angeles

Posted 01 December 2008 - 08:20 PM

Before even knowing the term "uploading" I always thought that the only way to preserve a consciousness for an indefinite period is to basically constantly repair and/or replace its parts.

Scanning a brain, making a computer program copy, and then vaporizing the old you would effectively mean your death even though a being would continue to exist with your memories and beliefs.

Yes there are two ways that I could see preserving a brain (and the consciousness associated with it) happening. The first would be to have some kind of nanotechnology living in your skull constantly keeping your brain healthy by repairing any damaged tissue and removing foreign agents.

Of course there would be cases in which a brain would be "damaged beyond repair" by severe injury, in which case the consciousness would be permanently destroyed. Now how much damage is too much damage? I don't think we currently have a good enough understanding of the brain and consciousness to answer that question.

The other option would be to gradually replace your organic brain with compatible inorganic parts. We lose brain cells every day and I don't think any of us is worried that our original consciousness has been lost in the process. Replacing a single neuron with an artificial one would not disturb the continuity that I think we want. So the neurons would have to be replaced very "slowly" in order for the same consciousness to be preserved. Again I don't know exactly what "slowly" means, but I think time will reveal that answer.
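
Here is a toy Python sketch of my own (the classes and numbers are made up, and it proves nothing about consciousness) of what "replace one neuron at a time without disturbing function" means at the input-output level:

    import random

    class BioNeuron:
        """Stand-in for an organic neuron: a weighted threshold unit."""
        def __init__(self, weights):
            self.weights = weights
        def fire(self, inputs):
            return sum(w * x for w, x in zip(self.weights, inputs)) > 0.5

    class ArtificialNeuron(BioNeuron):
        """The replacement part: different substrate, same input-output rule."""
        pass  # behaviour identical by construction

    def network_output(neurons, inputs):
        # A trivial "brain": one layer of neurons responding to a stimulus.
        return [n.fire(inputs) for n in neurons]

    random.seed(0)
    brain = [BioNeuron([random.uniform(-1, 1) for _ in range(4)]) for _ in range(100)]
    probe = [0.3, -0.2, 0.9, 0.1]
    before = network_output(brain, probe)

    # Swap in one artificial neuron at a time, checking behaviour after each swap.
    for i in range(len(brain)):
        brain[i] = ArtificialNeuron(brain[i].weights)
        assert network_output(brain, probe) == before

    print("All 100 neurons replaced; input-output behaviour unchanged.")

Whether the continuity we care about survives such a swap is exactly the open question; the sketch only pins down what "same function, different substrate" means.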

Of course both of these solutions are completely theoretical at this point and would require a much greater understanding of the human brain before they could be implemented. That's why I'm a Cognitive Science major, of course! I also think that these two answers might be one and the same. We could end up repairing our brain tissue with artificial/inorganic material, or we could end up gradually replacing our old neurons with new, "real" organic neurons.

I also heard one other idea on this forum that might make uploading a possibility. That is creating a computer copy of your brain, then linking your new and old consciousnesses together through some virtual interface and then destroying the old organic brain. Now this one makes less sense to me, but that may be because I'm not familiar with it. It seems "scarier" to take that kind of all-or-nothing approach.

Either way I won't personally upload until I'm sure that I'll come out as me on the other side. I have a feeling that a lot of people are going to be stupid about it and I don't want to be one of them.

#53 Clifford Greenblatt

  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 08 December 2008 - 12:31 AM

It is you that is missing the significance by addressing the issue of information as limited to memory and what amounts to *data*.


When writing my opinion that mind uploading assumes the mind is an information system, I did not mean that the mind is just a memory bank, but I meant the system as a whole, including system architecture, firmware, software, operating system, peripherals, configuration, and provision for continual self-reconfiguration. A computer could be made so plastic as to fit Daniel Dennett's Joycean Machine model for consciousness, but this would still fail to address the issue of sentience. Much progress is being made in AI, but science is still very much in the dark about sentience.

There may be some confusion here caused by different meanings of the term sentience in the literature. What I have in mind is what David Chalmers calls the hard problem of consciousness. Both David Chalmers and Daniel Dennett speculate that sentience is associated with just any information system, including something as simple as a thermostat. Such speculation is just a matter of hand waving, not being based on any scientific evidence about sentience. I suspect that sentience is associated only with certain organic systems that meet rather strict requirements.

To a large extent, the operation of the mind can be explained as an information system. Sentience is very much associated with that information system, but it is definitely distinct from it. The association between sentience and the information system of the mind is so mysterious that some think that sentience is an effect that does not in turn affect anything. This cannot be true, because we could never talk or write about sentience if it were without effect. Sentience does have very real physical effects, but the mechanics of those effects continue to be invisible to scientific inquiry. I do not claim that it is impossible to do a scientific investigation of sentience, but to this day there has been nothing but speculation.

The real debate is how much of the biology (DNA and brain) is essential to that sentience as opposed to what part is merely wetware versus software. That is unless what you are trying to appeal to is the third option of an immaterial *soul* as the core of identity. That is a debate I am not sure would be fruitful.


I am not making any claims about whether sentience is material or immaterial. However, I do claim that it continues to be a deep mystery to science. The failure of contemporary science to explain sentience may be analogous to the problem of classical science being unable to explain quantum phenomena. If sentience were not such a profound phenomenon, we might have no need to bother viewing the mind as anything more than an information system. However, sentience is such a profound phenomenon that it would be terribly negligent to ignore its importance. Analogously, classical physics may have been fine for nineteenth century science and technology, but we would never have the solid-state electronics we have today if we had never moved on to quantum physics. If science ever advances to the point of being able to explain sentience, then there could be a discovery that the sentience of each person has some unifying intrinsic properties that distinguish one person from another. I do not present this as a theory, but as a hypothesis. My reason for making this hypothesis is that it is consistent with my intuition that I have one, unique, personal sentience that is associated with a wide variety of information processes in my mind, including hearing, vision, smelling, feeling, emotion, etc.

I will now illustrate my hypothesis with an analogy. One of Victor Stenger's pet theories is his multiverse theory. He claims that the intrinsic physical constants of our universe froze out about one microsecond from its beginning. He also claims that our universe is part of a whole multiverse of universes. Each universe in the multiverse has its own set of intrinsic physical constants that froze out extremely early in its history. Whether multiverse theory is correct or not, it does provide for an excellent analogy to my hypothesis about sentience. I place the intrinsic physical constants of a universe in analogy with the intrinsic properties of sentience within a person.

Just as the intrinsic physical constants of a universe are associated with a vast variety of processes within it, so would sentience of particular intrinsic properties be associated with a vast variety of information processes in the mind. Just as different universes in the multiverse have different intrinsic physical constants, so would sentience in a different person have different intrinsic properties. In universe U1, there may be a process X1 in some place/time and there may be another process Y1 in some other place/time. In universe U2 there may be a process X2 in some place/time. Process X1 may be very similar to X2 and very different from Y1. However, the same intrinsic physical constants that are associated with X1 are also associated with X2 and they are different from the intrinsic physical constants that are associated with Y1. Likewise, sentience may be associated with information process V1 at some place/time in person P1 and with information process W1 in another place/time in person P1. Sentience may be associated with information process V2 in some place/time in person P2. Information process V1 may be very similar to information process V2 but very different from W1. With my hypothesis, sentience in person P1 has exactly the same intrinsic properties in its association with V1 that it has in its association with W1. However, the intrinsic sentience properties of person P1 associated with information process V1 is definitely different from the intrinsic sentience properties of person P2 associated with V2. This difference is of profound importance because sentience is the most profound of all phenomena.
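
If it helps, here is a toy rendering of that analogy in code, purely my own illustration (the names V1, W1, V2 follow the paragraph above; everything else is invented): each person carries one intrinsic "sentience constant" shared by all of their processes, and similarity of processes across persons does not imply a shared constant.

    from dataclasses import dataclass, field

    @dataclass
    class Person:
        sentience_constant: str      # intrinsic, identical across all of this person's processes
        processes: dict = field(default_factory=dict)

    P1 = Person("S1", {"V1": "vision", "W1": "hearing"})
    P2 = Person("S2", {"V2": "vision"})

    # V1 and V2 are very similar information processes...
    assert P1.processes["V1"] == P2.processes["V2"]
    # ...and V1 is very different from W1 within the same person...
    assert P1.processes["V1"] != P1.processes["W1"]
    # ...yet the intrinsic property attached to V1 and W1 is the same (P1's),
    # while the one attached to V2 is different (P2's).
    assert P1.sentience_constant != P2.sentience_constant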

For purposes of argument and debate, I could present a rival hypothesis, which I do not accept, but which is consistent with the views of reductionists, such as Derek Parfit. In the rival hypothesis, sentience has no intrinsic properties that differ from one person to another. With this rival hypothesis, the key to personal survival is found in the particular details of the information system, not in any sentience associated with it. With my hypothesis, the key to survival is found not in the particular details of the information system, but in particular intrinsic properties of a person’s sentience. This does not mean that a person’s sentience can exist apart from an information system, but it does mean that personal survival depends on particular intrinsic properties of sentience rather than on details of the information system with which sentience is associated.

I also want to add how good it is to see you contributing here again, Clifford. It has been a long time since your last visit.


I am happy to get back into the Immortality Institute forums. My long absence was due to time management issues. I desire to continue in the forums, but I will have to limit myself to threads that can tolerate long delays between responses.

#54 Guest_advancdaltruist_*

  • Lurker
  • 0

Posted 08 December 2008 - 02:24 AM

[quoting Clifford Greenblatt's post #53 above in full]


Artificial Mysterious Intelligence

#55 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 08 December 2008 - 04:05 AM

It is you that is missing the significance by addressing the issue of information as limited to memory and what amounts to *data*.


When writing my opinion that mind uploading assumes the mind is an information system, I did not mean that the mind is just a memory bank, but I meant the system as a whole, including system architecture, firmware, software, operating system, peripherals, configuration, and provision for continual self-reconfiguration. A computer could be made so plastic as to fit Daniel Dennett's Joycean Machine model for consciousness, but this would still fail to address the issue of sentience. Much progress is being made in AI, but science is still very much in the dark about sentience.


Why would it fail?

You appear to be presuming a quality of sentience predicated on being 'self-aware,' a factor that is not inherently beyond the ability of a complex synthetic consciousness. You know yourself, and hence you are distinct from all other sentient beings that 'know' themselves to be distinct from you. This a machine can do.

If you are not describing 'self awareness' then could you better describe this distinct characteristic of sentience that cannot be incorporated into a synthetic substrate?

I think the problem is that sentience is both more complex AND simpler than we are granting the meaning to be. Its most extreme complexity is rooted in human semantics and memetics and how we have imbued the idea with notions of uniqueness that come from a 'selfish genetic' perspective AND from a religious one (souls). We mix ideas of mythical merit in to explain the unknown and we have done so for tens of thousands of years. It is not just a hard habit to break, it is even intertwined with our immune system through our brains.

As a species we really do function better with "faith and reason" together rather than by either alone and in a sense we really have mystified the subject of sentience somewhat in order to better preserve the mystery. We have also exploited the complexity of being for sentience in order to develop grand social constructs that promote thought and organization on a global level; religions, economies, and ethnicities.

Anyway, we all love a good mystery; it is a great incentive for our species to continue to advance. It is just that we may have to look farther afield for one soon if we do cross this threshold. I alternatively suggest that why we need to operate on faith can be adequately described through evolutionary psychology. As our minds evolved, we not only began a naming-and-remembering meme structure for social evolution, we also began a process of trying to understand our experience. Without the tools, we invented ideas; however, from those ideas we came to invent real tools (physical tools AND language).

More importantly a mind that became so preoccupied with a problem that it could not act in a decisive manner to defend and promote its own survival would not long survive. In order to act on insufficient information our evolving organic information systems developed faith and reason as a means of moderating the fight/flight/feed and f*ck “programs” that we associate with the classic 4F's of essentially “instinctive” behavior.

Rigid machine-type information systems cannot invent answers that determine behavioral choices without sufficient data to work with, even though they will work with incorrect data and produce GIGO (garbage in, garbage out). Sentient minds, however, will invent a reason, and act. Being able to do that enhances survival probability; hence intelligence, and more importantly sentience, forms a survival advantage in the competition for resources described by natural selection.

However, these may become distracting complexities, not really germane to better understanding sentience. I realize you have not included them but they are not insurmountable qualia to define and address; well the evolutionary psychology part that is. We will not make this discussion work in short order if we make it theological or about the evolutionary memetics of social development and the mind.

So how about I go for the simple part?

The simple part is that the evolution of organic intelligence didn't just make humans sentient; it makes all such creatures sentient so long as they achieve sufficient complexity of neurological function. Clearly, when addressing the self-awareness test of sentience, we share this characteristic with many of our primate cousins and with other mammalian species like dolphins and orcas.

They too can recognize themselves as distinct from others and identify in social groups. In fact I would be willing to include social behavior as a quality of sentience, although that can get difficult too.

Sentience is simple in the sense that it is clearly a result of cumulative complexity for information processing AND behavioral determination. Note I did not say that all who are sentient must think alike, only that sentience appears to be found on a scalar relationship in nature, ranging from the minimal neural activity of a single-celled organism's nucleus to the human brain. Interestingly enough, the actual process of cumulative construction all hinges on a few unique molecules: adenine, guanine, thymine and cytosine.

It has even been demonstrated that we can make a synthetic organic computer out of just DNA and it will perform mathematical computations.
http://expertvoices..../dna-computing/

I suggest that sentience is not a unique characteristic of organic processing, it just has had a few billion years longer to advance itself than we have had with our technology. Our rate of progress is really quite phenomenal in geological terms. :-D


There may be some confusion here caused by different meanings of the term sentience in the literature. What I have in mind is what David Chalmers calls the hard problem of consciousness. Both David Chalmers and Daniel Dennett speculate that sentience is associated with just any information system, including something as simple as a thermostat. Such speculation is just a matter of hand waving, not being based on any scientific evidence about sentience. I suspect that sentience is associated only with certain organic systems that meet rather strict requirements.

To a large extent, the operation of the mind can be explained as an information system. Sentience is very much associated with that information system, but it is definitely distinct from it. The association between sentience and the information system of the mind is so mysterious that some think that sentience is an effect that does not in turn affect anything. This cannot be true, because we could never talk or write about sentience if it were without effect. Sentience does have very real physical effects, but the mechanics of those effects continue to be invisible to scientific inquiry. I do not claim that it is impossible to do a scientific investigation of sentience, but to this day there has been nothing but speculation.



You and I are reflecting the Dennett-Chalmers debate, and I hope you don't mind if we just get past the preliminaries and I refer to this unique quality of consciousness you identify as sentience as a "qualia"?

I do not consider sentience as 'distinct from that organic based information system' nearly so much as an emergent property of that process. In essence I am describing a biological behavior that is analogous to quantum behavior but not based on quantum mechanics.

So instead of qualia for sentience I would propose it as a "threshold" characteristic of intelligence. However, if you want to see sentience as "very much associated with that information system, but... definitely distinct from it", then you risk falling into the trap of having to explain that duality and demonstrate it in tangible terms. There may, however, be another way to address the issue of the property of sentience, and that is with "thresholds" of intelligence, analogous to how we speak of quantum levels. It does not require "... that sentience is associated only with certain organic systems that meet rather strict requirements."

What we have not yet done well with intelligence is to create a taxonomy of it describing distinct threshold levels wherein the qualitative characteristics of intelligence are profoundly different from what they evolved from. I suspect that when we look at the evolutionary record from this perspective we can find some clear examples of this idea at work.

Once seen in this light, some glaring examples begin to “emerge”: eukaryote behavior at one extreme, for example, and human minds at the other. However, all of this is predicated on being able to combine quantitatively more functional DNA into physical and informational processing behaviors (cognitive processing ability) that serve the survival and general advantage of individual members of a species. The “qualitative” difference is not just the quantity of DNA the species is working with but also its physical organization. Hence sentience can be seen as an emergent property of increasingly complex DNA processing power: not just “more DNA” but better uses of it, in more complex neural nets, with better wetware.

Thus the “certain organic systems” are not distinct “properties” apart from the information processing but a direct result of evolved complexity: a neural net with greater utility, faster processing power, deeper and more complex memory, and greater cognitive flexibility. One critically important characteristic of this evolution is the evolution of language, which as a tool is becoming more complex and thus better able to cope with the fundamentals of the concept of sentience, or consciousness.

Increasingly complex language is both a better tool in itself (defined descriptors, meme labels, basically data definition and precision) AND it functions internally for the brain, providing greater cognitive flexibility or "processing power", analogous to an Operating System upgrade, at *demonstrable thresholds* that I suspect can be identified by re-examining the evolutionary record in light of what we have learned from the evolution of computing power.
This approach could even provide a better model of future computing advances, one that operates like Moore's law and thus becomes more accurately predictive of what it takes, and when, to cross the threshold for true hard AI. It would also improve our understanding of our own consciousness.
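
To make the Moore's-law framing concrete, here is a minimal toy extrapolation in Python; the starting capacity, the doubling period and the threshold figure are illustrative placeholders rather than estimates:

```python
import math

# Toy extrapolation of a Moore's-law-style threshold crossing.
# All three numbers below are illustrative placeholders, not estimates.
capacity_now   = 1e15   # assumed present-day capacity (operations/second)
threshold      = 1e18   # hypothetical capacity needed for "hard AI"
doubling_years = 2.0    # assumed doubling period

# Capacity after t years: capacity_now * 2**(t / doubling_years)
years_to_cross = doubling_years * math.log2(threshold / capacity_now)
print(f"Threshold crossed in roughly {years_to_cross:.1f} years")  # ~19.9
```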

Clifford Greenblatt

The real debate is how much of the biology (DNA and brain) is essential to that sentience as opposed to what part is merely wetware versus software. That is unless what you are trying to appeal to is the third option of an immaterial *soul* as the core of identity. That is a debate I am not sure would be fruitful.


I am not making any claims about whether sentience is material or immaterial. However, I do claim that it continues to be a deep mystery to science. The failure of contemporary science to explain sentience may be analogous to the problem of classical science being unable to explain quantum phenomena. If sentience were not such a profound phenomenon, we might have no need to bother viewing the mind as anything more than an information system.


I need to clarify something, Clifford: if something as important as sentience is also elegantly simple, that would not diminish its importance and profundity one iota. In fact, to my mind it would enhance it.

It is a mystery to us because, until now, we have not only lacked the tools and information to study sentience at both the minute and the grand scale, but we have also obscured our understanding of it with a significant amount of camouflage in the form of paradigms derived from our diverse cultural perspectives and relatively distinct social evolutionary experience. I will grant that it is still difficult to know which ideas to discard and which to hold close, but it is clear that we need a fresh start on the project of explaining sentience.

Clifford Greenblatt
However, sentience is such a profound phenomenon that it would be terribly negligent to ignore its importance.


I am not ignoring its importance, nor am I trying to suggest we should. I am simply suggesting that if sentience is an emergent property of profoundly simple origin, that does not diminish its beautiful elegance as a concept, nor the wonder of experiencing it; I think we are a lot closer to the truth of the matter than our hunter-gatherer ancestors were when they sat around a campfire trying to explain the sky. Yes, I agree sentience is still a mystery to science, but perhaps not such an impossible one to contemplate anymore. It is a study in which, as we understand sentience better, we will inevitably reverse engineer our thinking machines to become cognitively complex, and as they achieve sufficient complexity they may become sentient.

Here are some questions for you Clifford.

If we can create a self-aware “sentient” machine, is it alive?

Is the definition of “life” strictly organic, or is the concept of a “being” also predicated on “sentience”?

Are consciousness and sentience synonymous for this discussion?

Are you falling into a tautological trap of presuming sentience dependent on an organic substrate because that is how it has evolved and is currently found?

Is sentience a unique property or characteristic apart from the utility of the information system known as the mind/brain, and a separate product of that same organ in partnership with the body, or is it a function that the mind/body serves to create and operate?

If for this last query you answer that it is the first, then it is only a matter of changing the substrate, building a more complex information-processing machine out of something other than the organic one, and it becomes possible to create a sentience, but not necessarily to move who 'I am' into it, even if 'I' were copied down to the smallest subatomic particle. The new being is distinct from the old, no matter how identical and functional.

However, if it is the second choice, then this thread has merit and a sentient being can be seen as only a function of the mind/brain, and may be a transferable, distinct entity unto itself: a complex amalgam of information (experience), learning, biological sensory memory, developmental perception, utility and cognitive organization which, once operating, can adapt to alternative substrates, just as it can adapt to new environments if given sufficient means, with its pre-programmed imperative to survive.

#56 Clifford Greenblatt

  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 22 December 2008 - 05:18 AM

You appear to be presuming a quality of sentience predicated on being 'self aware,' a factor that is not inherently beyond the ability of a complex synthetic consciousness. You know yourself and hence that is distinct from all other sentient beings that 'know' themselves to be distinct from you. This a machine can do.

If you are not describing 'self awareness' then could you better describe this distinct characteristic of sentience that cannot be incorporated into a synthetic substrate?

I am certainly not defining sentience as self-awareness. I can imagine a nonsentient information system with more coherent self-awareness than some sentient persons. Daniel Dennett wrote that sentience was never given a proper definition. The problem with defining sentience is that any definition takes on the risk of confusing sentience with properties that could possibly be possessed by a nonsentient system. I have a very strong intuitive sense of what sentience is, but defining it in a way that avoids confusion is quite difficult. I will avoid using the word ineffable (as some do in defining qualia), because use of such a word in a definition shuts out inquiry. I will attempt to define sentience as follows. Sentience is a most profound phenomenon that is associated with but distinct from information processes of the mind.

I do not claim that sentience cannot be incorporated in some synthetic substrate. I suspect that this may not be possible, but I would not go so far as to make this a claim. We simply do not know enough about sentience to make a claim one way or another.

I think the problem is that sentience is both more complex AND simpler than we are granting the meaning to be. Its most extreme complexity is rooted in human semantics and memetics and how we have imbued the idea with notions of uniqueness that come from a 'selfish genetic' perspective AND from a religious one (souls). We mix ideas of mythical merit in to explain the unknown and we have done so for tens of thousands of years. It is not just a hard habit to break, it is even intertwined with our immune system through our brains.

In one sense, sentience is so simple that even a child can perceive it. On the other hand, it has so far eluded scientific inquiry. I am not making a point about how simple or complex sentience may be, but I am making a point about how it has eluded scientific inquiry to this day.

Rigid machine-type information systems cannot invent answers that determine behavioral choices without sufficient data to work with, even though they will work with incorrect data and produce GIGO (garbage in, garbage out). Sentient minds, however, will invent a reason and act. Being able to do that enhances survival probability; hence intelligence, and more importantly sentience, forms a survival advantage in the competition for resources described by Natural Selection.

I can imagine a nonsentient machine that could invent reasons and act on an ocean of fuzzy data with greater power than some sentient persons can. I think the sentience of a person whose thought process is crippled by a preoccupation with some intense pain is much greater in its intensity than that of a highly gifted person who is solving an advanced problem with ease.

The simple part is that the evolution of organic intelligence didn't just make humans sentient; it makes all such creatures sentient so long as they achieve sufficient complexity of neurological function. Clearly, when addressing the self-awareness test of sentience, we share this characteristic with many of our primate cousins and with other mammalian species such as dolphins and orcas.

How do we know which organisms are sentient and which are not? Is there a continuum of sentience from the simplest of organisms to the most complex or is there a threshold that must be crossed before there is any sentience at all? Both David Chalmers and Daniel Dennett speculate that sentience is present in even the simplest of information systems, such as a thermostat. David Chalmers views sentience as another property of matter, like magnetic, electric, and mass properties. He describes his view as follows.

In particular, a nonreductive theory of experience will specify basic principles telling us how experience depends on physical features of the world. These psychophysical principles will not interfere with physical laws, as it seems that physical laws already form a closed system. Rather, they will be a supplement to a physical theory. A physical theory gives a theory of physical processes, and a psychophysical theory tells us how those processes give rise to experience. We know that experience depends on physical processes, but we also know that this dependence cannot be derived from physical laws alone. The new basic principles postulated by a nonreductive theory give us the extra ingredient that we need to build an explanatory bridge.

He also wrote the following, which can be found on page 19 of Explaining Consciousness: The 'Hard Problem', edited by Jonathan Shear, The MIT Press, 1997.

Although a remarkable number of phenomena have turned out to be explicable wholly in terms of entities simpler than themselves, this is not universal. In physics, it occasionally happens that an entity has to be taken as fundamental. Fundamental entities are not explained in terms of anything simpler. Instead, one takes them as basic, and gives a theory of how they relate to everything else in the world. For example, in the nineteenth century it turned out that the electromagnetic process could not be explained in terms of the wholly mechanical processes that previous physical theories appealed to, so Maxwell and others introduced electromagnetic charge and electromagnetic forces as new fundamental components of a physical theory.

We can measure electrical, magnetic, and mass properties, but I am unaware of any mainstream scientific methods for measuring sentience. If we do not have the ability to measure sentience, how can we make claims as to what is sentient and what is not? In my view, sentience is not associated with all information processes but only with certain processes under certain conditions. I do not present this view as a claim but as a hypothesis.

They too can recognize themselves as distinct from others and identify in social groups. In fact I would be willing to include social behavior as a quality of sentience, although that can get difficult too.

I do not identify sentience with self-awareness. I view self-awareness as a function that could possibly be realised in a nonsentient system with greater coherence than is found in some sentient persons.

What do you think of the possibility of a highly advanced mind in which there is no sentience at all? Both David Chalmers and I can imagine such a thing. However, I do not know how David Chalmers can reconcile his zombie idea with his hypothesis that there is some sentience (or what he calls conscious experience) in every information process. I do not like to use the term conscious experience because I can imagine a Joycean machine that is conscious in the Dennett sense. Such a machine would have experiences with regard to its consciousness, so it could be said to have conscious experiences. This is why I use the sentience term instead. However, even use of the term sentience can lead to confusion. Therefore, use of this term must be restricted by my definition of sentience--a most profound phenomenon that is associated with but distinct from the information processes of the mind. You may think that sentience emerges from sufficiently advanced or complex information processes, but this is only speculation as long as you have no scientific means to measure sentience.

David Chalmers’ zombie has no conscious experiences but is physically identical to a person with conscious experiences. I will avoid this kind of thought experiment, because I seriously doubt that it is possible for anything that is physically identical to a person with conscious experiences to not have conscious experiences. Instead, I prefer a thought experiment in which a machine has a highly coherent self-awareness and communicates with us fluently in our language but is not sentient. It is about this kind of machine that I ask you, "Do you think this could be possible?"

A Joycean machine is rather futuristic, but I can provide a practical, present day example for the problem of determining whether something is sentient. It is common practice to drop live lobsters into a pot of boiling water. For about three minutes, the lobsters struggle violently in a futile effort to escape. They are obviously in a state of severe pain. However, is that pain a sentient pain? We would need to know the answer to this question in order to determine whether the live boiling process is cruel to lobsters. If there is no sentience associated with the lobster's pain, then I see no cause to view the live boiling process as cruel. However, if there is sentience associated with the lobsters' pain, then I would have to say that the live boiling of lobsters is a case of cruel torture of sentient beings. With no scientific means of measuring sentience, how can we say whether a lobster in pain is sentient or not?

The hypothesis I presented in my previous post has to do with the role of sentience in survival of the person through any kind of mind/brain transformation. If we are not on the same page in our definitions of sentience, then it would be useless to proceed with further discussion of my hypothesis. Therefore, our present need is to come to an agreement as to what sentience is and is not. If dictionary definitions enter into the discussion process, then we may need to find or invent a new term in place of sentience to reference the essential meaning that I am attempting to communicate.

You and I are reflecting the Dennett-Chalmers debate, and I hope you don't mind if we get past the preliminaries and I refer to this unique quality of consciousness you identify as sentience as “qualia”?

The problem with identifying sentience with qualia is that qualia, as Daniel Dennett has shown, can simply be identified with our reactive dispositions. We perceive the redness of a red colour because certain visual inputs trigger certain information processes in our minds. I can imagine such reactive dispositions in a nonsentient information system. In this way, I could imagine a nonsentient information system having qualia.

I do not consider sentience as 'distinct from that organic based information system' nearly so much as an emergent property of that process. In essence I am describing a biological behavior that is analogous to quantum behavior but not based on quantum mechanics.

There are properties that emerge from processes. The same statistical distribution can emerge from a variety of processes, both physical and simulated. If sentience emerges from an organic information system, then any properties that sentience may have would supervene on properties of the organic information system. This would be contrary to my hypothesis and would be consistent with the rival hypothesis I mentioned in my last post. However, we must keep in mind that a property being associated with a structure or process does not necessarily mean that the property is emergent from the structure or process. Since sentience has so far eluded all scientific means of measurement, there is no scientific basis to make a claim for or against sentience being emergent from either organic or inorganic processes.

My hypothesis appeals to a higher level of reality than that which is understood in contemporary science. We can explain a wide variety of information systems without any appeal to a higher level of reality. However, this does not mean that there is no higher level of reality with significant relevance to us. Just as classical physics can explain many phenomena well but cannot explain other phenomena at all, so could contemporary science explain many phenomena well but fail to explain sentience, because it requires knowledge of a level of reality that contemporary science has never reached.

Attempting to explain sentience without appealing to a level of reality yet unexplored by contemporary science may be like ancient efforts to build astronomical models with a geocentric assumption. The models did have some excellent predictive power, but they were not useful for understanding the physical principles.

I will have to quote several of your questions here together before my replies to avoid exceeding the quote block limit.

1
If we can create a self-aware “sentient” machine, is it alive?

2
Is the definition of “life” strictly organic, or is the concept of a “being” also predicated on “sentience”?

3
Are consciousness and sentience synonymous for this discussion?

4
Are you falling into a tautological trap of presuming sentience dependent on an organic substrate because that is how it has evolved and is currently found?

5
Is sentience a unique property or characteristic apart from the utility of the information system known as the mind/brain, and a separate product of that same organ in partnership with the body, or is it a function that the mind/body serves to create and operate?

1
Bacteria are alive, but I doubt they are sentient. If it is possible to create an inorganic machine that is sentient, the fact that it is sentient is of radically greater significance to me than the technicalities of defining whether it is alive or not.

2
Plastic materials are organic but not alive. Again, bacteria are alive, but I doubt they are sentient. I am much more concerned with what is sentient than with defining what is alive.

3
Daniel Dennett devoted a best selling book to explaining consciousness. I can imagine a machine possessing the consciousness explained in his book without sentience. I can also imagine a being with sentience having a consciousness that is less functional than the consciousness of some futuristic nonsentient machine.

4
Although I doubt that sentience can be a phenomenon or property of an inorganic machine, I would not include mention of any organic substrate in the definition of sentience and would not claim that sentience cannot be associated with anything but an organic system. We do not have a scientific basis to make a claim one way or the other.

5
I do not view sentience as separate from mind/brain utility but rather as integrated with it. However, I can imagine most of the same utility being accomplished without sentience. One thing that a nonsentient system would not have is any intuitive sense for what sentience is. I am sure that there are other things that nonsentient information systems would not likely do. However, I can imagine a population of nonsentient systems with superior intelligence and a highly advanced science, technology, and social order.

Neither do I view sentience as a function that the mind/body serves to create and operate. Rather, I view sentience as a property or phenomenon existing at a higher level of reality than anything science has yet been able to explore. As a limited analogy, it would make no sense to view a computer's hardware as a function that the computer's software serves to create and operate. Logical operation of the software could be explained without knowledge of the computer's hardware, but this does not mean that the hardware is created by or emergent from the software.

#57 drus

  • Guest
  • 278 posts
  • 20
  • Location:?

Posted 22 December 2008 - 05:07 PM

I think that a fundamental assumption in mind uploading is that the mind is an information system, and preservation of information is what counts in the preservation of the person.



I agree with this observation but I suggest that when you say:

What is missed is the significance of sentience to a person’s identity.


It is you who are missing the significance, by treating the issue of information as limited to memory and what amounts to *data*.

Software is a form of information, which is determined not only by its content (form) but by its function as well. Identity and *sentience* are more analogous to an operating system than to memory alone. What must be preserved is not just the memory of experience but the way that memory has contributed to the organization of a *being*, in the form of an *OS* that is organized uniquely around its specific memories.

I see sentience as a most profound phenomenon that is associated with and yet distinct from the data processes of the mind. There is a great diversity of information processes in a person’s mind, but I see sentience as possessing a fundamental unity throughout all of its associations with that diversity of information processes. This is analogous to a universe having the same physical constants throughout its spacetime despite its vast diversity of processes. Just as two different universes, with two different sets of physical constants, could have some similar physical processes going on in them, so could two different persons have some similar data processes going on in their minds but have a fundamental difference in their sentient identity.


DNA, for example, is not as deterministic as most view it, and as such forms the biological equivalent of a *plastic* OS that continues its organization as it grows and assimilates experience; hence the view of sentience as information in both these aspects (the function and the form of information), and perhaps more that are not yet fully understood. The difficult issue for many strict materialists is that this introduces a new form of duality they philosophically reject, or it forces them to develop a new definition of material that better addresses the paradigm of information.

Unity of sentience within a person is not something I can prove but is something that I perceive through strong intuition, like I perceive myself as a sentient being through strong intuition. If all the information in a mind is uploaded, but the identity of the person’s sentience is not preserved, then the person is lost. If the identity of a person’s sentience is preserved, then the person is preserved, even if a great deal of information is changed.


I think we are in basic accord with respect to what you identify as a "unity of sentience", but what we may yet disagree on is what defines the different aspects of that *unity*. The perspective I present above also depends on such a unity, but treats it as sufficiently a matter of information alone to make uploading possible in theory. The issue then becomes one of transcription methodology rather than fundamental impossibility.

The real debate is how much of the biology (DNA and brain) is essential to that sentience as opposed to what part is merely wetware versus software. That is unless what you are trying to appeal to is the third option of an immaterial *soul* as the core of identity. That is a debate I am not sure would be fruitful.

I also want to add how good it is to see you contributing here again, Clifford. It has been a long time since your last visit.



I like the way you have addressed the question here and I totally agree with you!
In short, I believe that a copy of a person would think itself to be the original, but existentially, from a 1st-person perspective, it would be a separate being once created.

#58 VirtuaKess

  • Guest
  • 12 posts
  • 0

Posted 05 February 2009 - 12:11 AM

First off, I'm rather materialist. I don't believe in a soul, a quantum consciousness, or anything else of the sort. It's my opinion that the traits we attribute to the mind (cognizance, original thought, self-awareness, and consciousness) arise from the synaptic structure within the brain. I base this conclusion on the fact that simulated neural networks have shown the capacity for memory storage and for pattern recognition and reconstruction. I believe that these basic properties of neural networks are the basis for the more 'human' qualities of original thought and cognizance. This is my belief right now, and I have no empirical proof of it, but there are currently two projects taking place that will give answers to this: the Blue Brain project, and a project jointly handled by IBM and DARPA. Neither project is likely to generate sapience, but they will shed light on what within the brain provides it. I do postulate, however, that if the mind resides beyond the brain as a soul or quantum consciousness, then AI is not possible as currently conceived (as an artificial neural net or coded using a symbolic language).
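
As a small, concrete illustration of that claim about simulated neural networks (a textbook toy, not a brain model, with arbitrary example patterns), a classic Hopfield network stores binary patterns in a weight matrix and reconstructs a stored pattern from a corrupted cue:

```python
import numpy as np

# Minimal Hopfield network: store binary (+/-1) patterns in a weight
# matrix via Hebbian learning, then recover a stored pattern from a
# corrupted cue. Pattern contents here are arbitrary examples.

def train(patterns):
    """Hebbian rule: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, sweeps=10):
    """Asynchronous updates: each unit flips toward the sign of its input."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
W = train(patterns)

cue = patterns[0].copy()
cue[0] *= -1          # corrupt two of the eight units
cue[3] *= -1
print("cue:      ", cue)
print("recalled: ", recall(W, cue))   # settles back to the first stored pattern
print("original: ", patterns[0])
```

Because the two stored patterns are orthogonal, the corrupted cue settles back to the stored pattern under the update rule, which is the kind of memory storage and pattern reconstruction referred to above.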

With that said, there are a few thought problems we can use to gain an understanding of what uploading is and its relationship to identity. The common conception is that a copy of the mind is made and that this copy is assumed to be the individual, and this is not so. Imagine having a portion of your brain replaced with a computer chip, say the occipital lobe. It seamlessly integrates with the rest of your brain, communicating in such a way that the individual neurons don't realize this interloper is there. We can generally agree that in a purely materialistic view of the mind there is no central brain structure containing the mind, and so a part of your 'mind', your awareness of self and cognizance, lies partly in your meat matter and partly within this computer chip. Why is this?

The brain is composed of many parts; in this case I'm dividing it into its solid matter (the actual neurons and synapses) and the electrochemical systems that make the brain function. These two conceptual parts are intertwined: you can't have mind without both, and to remove one is to destroy the mind. The brain without electrochemical impulses is a wet paperweight, and the electrochemical impulses without the brain are so much ambient stench, so both the logical layer and the physical layer of the brain must be preserved. This can be related (loosely) to computers of today, where the processor, memory, and so on are the physical layer and the electrical signals the logical layer. You have to have both for the computer to operate, and you can't conceivably remove one and put it someplace else. You can, however, provide a transitional method to allow the actual data to transfer seamlessly, by providing a moment when the logical flow of information (the information within the electrochemical signals) can exist in both the old state (meat state) and the new state (silicon state) at the same time. The computer chip replacing the occipital lobe is this in-between state, where signals are seamlessly translated between silicon and biological and back, allowing those components that make up the consciousness to operate on both layers at the same time.

Now what if, over time, this chip replacing the occipital lobe grows, taking over additional sections of the mind, replicating the synaptic architecture it replaces, and interfacing with the neurons still left? The speed at which this can take place has an upper limit based on the firing time of neurons and the speed at which a given signal can propagate through the brain. Exceed this speed and you're merely copying the person's brain; keep the transition below it and it becomes a transferal of the flow of consciousness, represented in the information stored within those bioelectric signals and given shape by the synaptic structure, to the new medium. This is non-copying uploading.

The trick is the gradual replacement, to give the transition of the information that is the consciousness time to properly propagate across the new medium. It hypothetically works because it has been shown, through such oddities as alien hand syndrome, split-brain syndrome, and the like, that consciousness is not centralized; it is an asynchronous, decentralized effect of the structure and activity of our brains, and so replacing parts over time allows the transition of our consciousness between media.
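
To make the contrast between copying and gradual replacement concrete, here is a deliberately crude toy model in Python; the module names, state contents and substrate labels are invented, and nothing here models real neurons:

```python
from dataclasses import dataclass, field
from copy import deepcopy

# Toy model only: a "brain" as a list of modules, each carrying some
# running state. Module names and fields are invented for illustration.

@dataclass
class Module:
    name: str
    substrate: str                      # "biological" or "silicon"
    state: dict = field(default_factory=dict)

def make_brain():
    return [Module(n, "biological", {"activity": i})
            for i, n in enumerate(["occipital", "temporal", "parietal", "frontal"])]

def wholesale_copy(brain):
    """Scan-and-copy: a second, independent instance comes into existence."""
    return deepcopy(brain)              # the original keeps running; the copy diverges from here

def gradual_replacement(brain):
    """Swap one module's substrate at a time, carrying its running state
    over in place, so at no point do two complete instances exist."""
    for module in brain:
        module.substrate = "silicon"    # state is untouched: continuity of the running process
    return brain

me = make_brain()
copy_of_me = wholesale_copy(me)         # two complete instances now exist
me = gradual_replacement(me)            # still one instance, now on a new substrate
print([m.substrate for m in me])        # ['silicon', 'silicon', 'silicon', 'silicon']
print(copy_of_me is me)                 # False: the copy is a distinct object
```

The point is purely structural: the scan-and-copy route produces a second complete instance that diverges from the moment of copying, while the stepwise route never yields two complete instances at once.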

#59 zorba990

  • Guest
  • 1,602 posts
  • 315

Posted 21 February 2009 - 08:30 PM

The trick is the gradual replacement, to give the transition of the information that is the consciousness time to properly propagate across the new medium. It hypothetically works because it has been shown, through such oddities as alien hand syndrome, split-brain syndrome, and the like, that consciousness is not centralized; it is an asynchronous, decentralized effect of the structure and activity of our brains, and so replacing parts over time allows the transition of our consciousness between media.



I basically agree with this. I consider the 'me' that is me to be the real-time running software, not the hard drive, limbs, brain, or whatever. Another copy of the software running in another machine is not me. So take my existing, running software and give it access to a new machine. It slowly learns to control the new machine, see through the eyes, control the limbs, and regulate the body functions. 'Slowly' is a relative term; it may eventually take only milliseconds.

#60 Taelr

  • Guest
  • 29 posts
  • 0
  • Location:Sunnyvale, CA

Posted 14 June 2009 - 02:48 AM

“uploading - still you or just a copy?”

That depends on how “you” is defined. If you mean identity, memories, personality, and emotional tendencies, then yes, the upload will be you. It is these attributes that a successful upload must preserve.

But what does “just a copy” mean? If the copy is exact, then there will be two of you. If the bio-you and the upload-you continue to exist simultaneously, then as you both go your separate ways you will each begin to gain different experiences and form new, unique memories. But both will retain your original identity. There will now be two versions of you. Which is the real one? They both are, but they will become increasingly different as time passes. How much weight should we place on experiences and memories to establish identity?

Identity will become a significant issue in an uploaded universe. The above situation will be temporary, as it is assumed your bio-version will be terminated, will never regain consciousness from the upload process, will be destroyed as part of the scanning process, will be placed in cryo-sleep, or whatever. Whatever happens, the intent is that you make a permanent transition from bio to upload and the bio ceases to take an active role in the universe.

Now at this point we begin some interesting scenarios.

As an essential part of the uploading process you will be stored as a data-image. This implies that multiple identical copies of you can be produced. Now we have some real practical difficulties. To conduct a meaningful life in most societies, identity will be important, if for no other purpose than taxation and property ownership. The effect of a successful upload should be considered similar to the birth of a new individual, who would need to be registered, assigned an SSN, and so on. We will have chaos otherwise.

While in theory an unlimited number of you-copies could be produced, in practice we must consider what you are being uploaded into, as this will have cost implications that will likely limit the actual number of copies produced.

The Shell.

The simplest to comprehend is that you will be uploaded into a brain-emulator-processing-engine (BEPE) housed in some form of mobile, android-style shell. These will be massively advanced computer systems and pieces of advanced mechanical engineering. None of that will be cheap until mass production takes place, at which point the cost would probably be the equivalent of buying a luxury house. I suspect mortgages and the like will be commonplace, with perhaps 100-year-plus repayment schemes. Yes, you will likely still need to have a job. In the same way that most people do not own multiple homes, you are unlikely to have multiple copies of these, at least not in the early stages.

A variation on this theme would be the ownership of multiple shells with a single BEPE that can be plugged into any shell. I'm assuming that the BEPE will be the truly expensive part. Note that your shell could take any form, not necessarily android-like.

The virtual city.

This scenario has your data plugged into a massive shared processing center with many other uploaded individuals, and you exist in a virtual world or city. You would not be physically mobile, but the virtual world would present all the senses so convincingly that you would not know the difference. You would rent time in this world, and only a single version of you would be permitted. The world would in some way need to generate an economy so that you can pay your way. I would imagine that if you became bankrupt you would likely be disconnected from the system, and it would take a friend to resurrect you.

Travel and vacations.

Want to travel to another virtual city, or perhaps visit another planet? There is no need to physically move; you are held as data, so just have your data transmitted electronically. In the case of other planets, yes, someone must have gone there physically first to set up a receiver. Now we have the issue that you would coexist in two geographically different locations. The obvious answer here would be to have the transmitting end switch you off for the duration of your vacation and reactivate you when your updated data from the trip returns. If you are a shell type, then you would likely need to rent a shell and BEPE at the destination, much as you would rent a car for a vacation trip nowadays. Note that BEPEs are really computers and there are likely to be many versions, some better than others. You will likely always be looking to afford upgrades, or perhaps take a vacation, rent an advanced model, and enjoy extra-super intelligence for a brief period.

Backups

You are digitally immortal only if you retain frequent backups and have a procedure in place that will restore you to a new BEPE if your own BEPE is fatally damaged in some accident or similar.
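
Purely as a toy sketch of that backup-and-restore discipline (the state contents, file name and snapshot cadence are invented, and nothing here is a real uploading mechanism):

```python
import pickle
import time

# Toy sketch of "frequent backups plus a restore procedure".
# The state contents, file name, and snapshot cadence are all invented.

BACKUP_PATH = "mind_backup.pkl"

def snapshot(state, path=BACKUP_PATH):
    """Write the latest state to durable storage."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path=BACKUP_PATH):
    """Reload the most recent snapshot, e.g. into a replacement BEPE."""
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"identity": "example-upload", "tick": 0}
for _ in range(3):            # stand-in for ongoing experience
    state["tick"] += 1
    snapshot(state)           # frequent backups
    time.sleep(0.1)

recovered = restore()         # run after a fatal hardware failure
print(recovered["tick"])      # you resume from the last snapshot, not the live moment
```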

Clones

Want a companion? Buy/rent a new BEPE and upload your last backup into the new BEPE, and of course register a new identity with the authorities etc.



