
What Constitutes "me"?
#91
Posted 03 November 2004 - 02:06 AM
Don't be too quick to ingrain yourself with the belief that we may never be able to grasp many of these fundamental questions within the realm of science. I'm leaving that door open unless something demonstrates otherwise.
Cryonics, at its best, is a re-initiation of brain function after however long it has been without any activity. Do I think the person who is revived would be the same person who went into cryonic suspension? I do, as much as I think a person whose brain activity totally ceases for a moment is the same person the moment activity starts again, or as much as a person who goes to sleep is the same person when they wake up. In both cases, that is not entirely certain in my mind.
If consciousness is a unique semblance of neural activity, and it is interrupted at many points in life (during sleep, during surgery, losing consciousness for various reasons, etc.), then perhaps one could suggest that at each "interruption" one consciousness dies out and another coalesces. Following that, one could propose that consciousness is an illusion and that there are much more complicated processes going on in their entirety that make up one person's "me"-ness. If we assume those two assumptions are true, then a person who loses all brain activity temporarily before regaining activity may die and come back as a separate being. However, if we add a third assumption, that the neural structure itself is retained, then ceasing and re-initiating brain activity would not "kill" the original being. At this point things become rather abstract, and one starts to wonder whether you can singularly identify "me"-ness, or whether it's simply a semblance of brain structure and brain activity.
I'm not intending to push materialist views; this is conjecture about things I do not know at this time, and I don't hold materialist views as belief.
Suffice to say, we need more scientific understanding of how neural systems become sentient and conscious. Science won't be the be-all and end-all of this discussion, but it would greatly contribute to our mostly philosophical discussion.
#92
Posted 07 November 2004 - 07:18 PM
Fairly intimate bridges to the brain are now possible. Electronic devices can be directly connected to some neurons on the surface of the brain. Technology to replace the eyes with a pair of cameras that are directly interfaced to the brain may not be very far away. The brain would both receive images from the cameras and send commands to the cameras for focusing, aperture adjustment, and viewing direction. This kind of bridge to the brain would not require replacing one neuron at a time as is suggested for mind uploading. If every optical nerve connection is mapped out in advance, all natural connections could be cut before any of the artificial connections are made. If the camera system is designed sufficiently well, the person would have little or no adjustment to make to the new system.
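Just to make the two directions of data flow concrete, here is a toy sketch of such a bridge; every name in it (CameraEye, Command, and so on) is hypothetical, not a real device API:
[code]
from dataclasses import dataclass

@dataclass
class Command:
    # hypothetical efferent signal: brain -> camera
    focus: float     # focal distance in metres
    aperture: float  # f-number
    pan: float       # viewing direction in degrees

class CameraEye:
    # hypothetical stand-in for the artificial eye, not a real device API
    def __init__(self) -> None:
        self.state = Command(focus=1.0, aperture=2.8, pan=0.0)

    def adjust(self, cmd: Command) -> None:
        self.state = cmd  # apply the brain's focusing/aiming command

    def capture(self) -> list:
        # afferent payload: camera -> brain; a real device would return pixel data
        return [[0] * 4 for _ in range(4)]

eye = CameraEye()
eye.adjust(Command(focus=0.5, aperture=4.0, pan=15.0))
frame = eye.capture()  # the "optic nerve" payload handed to the brain
[/code]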
Sudden changes in brain structure have been made without destroying core identity. Dr. Benjamin Carson performed brain surgery to deal with life-threatening seizures by removing an entire hemisphere of a child's brain. The child was in a coma for a few weeks after the surgery but recovered extremely well. I do not think the child woke up with a new core identity.
Although I do not think that neurons would have to be replaced one at a time to maintain core identity, I question whether sentience is possible with different substrates. I know that non-sentient cognition can be done with a wide variety of substrates. It is actually possible to duplicate all the functions of the most advanced of computers with electromechanical relays. The speed of the process would be scaled down by eight orders of magnitude, peripheral events would have to be time-scaled accordingly, and the size of the computer would be enormous, but the flow of data bits would be identical. I suspect that duplication of sentience is a much different issue from duplication of a non-sentient cognition process. I am convinced that sentient cognition does, to a large extent, consist of a flow of data bits. However, I am also convinced that sentient cognition involves something very essential in addition to the flow of data bits. Perhaps many functions of the brain could be done with a different substrate, but sentient cognition may not be possible with any different substrate. This would be the case if particular elements, such as carbon and hydrogen, have fine properties that are employed in very specific and essential ways for the sentient cognition process. Since scientific knowledge of sentience is still extremely primitive, this question is too difficult to answer at this time.
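As an aside on substrate-independent bit flow, here is a minimal sketch (mine, not from the post above): a full adder built entirely from one primitive gate, where the primitive could just as well be a relay as a transistor circuit; only the speed and size would differ, never the bits.
[code]
def nand(a: int, b: int) -> int:
    # the primitive gate; a relay or a transistor circuit computes the same table
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # standard 4-NAND construction of exclusive-or
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a: int, b: int, carry_in: int) -> tuple:
    # returns (sum, carry_out), built entirely from NAND gates
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))
    return total, carry_out

# the truth table is fixed by the wiring, regardless of what the gates are made of
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, full_adder(a, b, c))
[/code]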
#93
Posted 08 November 2004 - 02:48 PM
It would not merely be a processing of zeroes and ones, however. It would be far more interconnected and reliant on "fuzzy" logic, to an even greater degree than we currently conceive of. I think that in 30 years we'll have sentient AIs which are as complex compared to today's neural networks as today's neural networks are to a full-adder made from binary transistors.
That said, I still have misgivings about whether "I" could be transferred into such a substrate. Part of this is my bias that there is something "unique" about "me" that is immaterial, that cannot be copied. In a purely material universe, there is nothing unique about a sentient human mind, and this goes against my core feelings. It's not a rational argument, and I can do my best to invent one that is rational and consistent, but... Well, like cosmos says, we'll have more answers with more time and research... Of course, those answers will probably only lead to more questions, but that's how science works...
Who knows, perhaps there is a "material" life essence that cannot be put into an inorganic substrate, something that makes our current minds non-transferable. The best we could hope for then is to optimize the functionality of our organic neurons, and integrate nanotech for improved senses and memory, etc. But this is just speculation.
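To make the contrast with crisp gates concrete, a toy sketch (the weights are arbitrary illustrative values of mine): a single graded unit computes a soft "fuzzy" AND instead of a hard 0/1 gate.
[code]
import math

def sigmoid_neuron(x1: float, x2: float,
                   w1: float = 6.0, w2: float = 6.0, bias: float = -9.0) -> float:
    # a single graded unit: a soft AND rather than a hard 0/1 gate
    # (weights and bias are arbitrary illustrative values)
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

for a in (0.0, 0.5, 1.0):
    for b in (0.0, 0.5, 1.0):
        print(a, b, round(sigmoid_neuron(a, b), 3))
# outputs vary smoothly between ~0 and ~1 instead of snapping to 0 or 1
[/code]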
#94
Posted 09 November 2004 - 03:35 AM
The brain appears as a computer when we consider it as a black box that sends and receives parallel data through a large but finite number of nerves that connect to it. Sentience is something that happens inside the black box. A non-sentient black box could be designed to produce identical input/output behaviour when connected to the same set of nerves. One interesting thing about the effect of sentience is that the very fact that we write about it indicates that it does actually affect the input/output behaviour of the black box. A non-sentient black box would need some kind of substitute for sentience within it to fool those outside of it into believing that it is really sentient. When I say that sentience is more than a flow of data bits, I am not attempting to prove that there is something about it that is immaterial or not natural; I am simply trying to point out that a flow of data bits is not sufficient to describe it. No amount of advanced computing can compute a proton into existence. Likewise, I do not believe that any amount of advanced computing can compute sentience into existence. Something more than a flow of data bits, whether deterministic or stochastic, is needed for sentience to be manifested.
#95
Posted 09 November 2004 - 07:29 AM
"Likewise, I do not believe that any amount of advanced computing can compute sentience into existence."
Perhaps, but consider that there may be a unique event that occurs in sufficiently complex systems that leads to their self-awareness. Let me pose a question to you: do you think that there is a possibility of a threshold complexity within a system (open or closed) that would lead to self-awareness? As far as I'm aware, human brains are the most complex systems of their size known to us. I may have brought this up before; bear with me if I have, it was not my intention.
#96
Posted 09 November 2004 - 09:53 AM
"Perhaps, but consider that there may be a unique event that occurs in sufficiently complex systems that leads to their self-awareness. Let me pose a question to you: do you think that there is a possibility of a threshold complexity within a system (open or closed) that would lead to self-awareness? As far as I'm aware, human brains are the most complex systems of their size known to us. I may have brought this up before; bear with me if I have, it was not my intention."
I am convinced that a sufficiently advanced computing system could fool an outside observer into believing it is sentient, because we do not observe each other's internal sentience when we communicate. All it would need is the right kind of input/output processing characteristics.
Possibly, a person's sentience could be duplicated by duplicating certain properties of the person's brain. I do not believe that a flow of data bits alone can manifest sentience, no matter how complex. However, sentience may become manifest when a certain kind of flow of data bits interacts with the atomic properties of a certain kind of substrate in a certain way.
#97
Posted 09 November 2004 - 10:50 PM
"However, sentience may become manifest when a certain kind of flow of data bits interacts with the atomic properties of a certain kind of substrate in a certain way."
This is where I think you may have a point. DNA computers and quantum computers may indeed be substrates with the ability to develop sentience. Taking that further, however, these are more complex systems than conventional transistor-based computer chips. Conventional CPUs have a limit to how small their circuitry can get before they begin parsing atoms, so to speak. So if we acknowledge that, then perhaps there may be a threshold complexity at which sentience may arise.
#98
Posted 10 November 2004 - 01:45 PM
#99
Posted 10 November 2004 - 07:21 PM
If sentience can be manifested by exceeding some complexity threshold in a computing system, then sentience could be manifested by a system of a sufficiently large number of transistors wired in the right way. Microscopic quantum states could be simulated by macroscopic gates. The transistor-based sentient computer may be several orders of magnitude larger than the brain and may need to operate on an expanded time scale, but its sentient experience would be identical to that of a biological brain. The part that I question is the idea that a sufficiently complex computing system can manifest sentience with any substrate. A different substrate may be capable of producing the same flow of data bits as a sentient brain, but those data bits would not have the same effect on the substrate in which they are based.
Well, if you question whether any complex computing system can properly emulate a neural network so as to allow sentience to arise, then that would be a more fundamental criticism. You suggest a different substrate may be necessary, one that produces the "same flow of data bits as a sentient brain but those data bits would not have the same effect on the substrate in which they are based". So within that supposed substrate, would you not agree that its system of managing data flow (for lack of a better word) would need to be sufficiently complex to allow sentience to arise? I say this because some animals with similar but smaller and far less complex brains do not seem to demonstrate sentience. This differs from the example you gave, where if computers were indeed capable of sentience it would have arisen, but on a different time frame than in humans.
I'd likely have more to say on this whole issue if I knew more about computer systems, their possible alternatives, and the way the brain deals with data.
#100
Posted 10 November 2004 - 08:40 PM
Perhaps the key of sentience versus a non-sentient simulation of sentience is to be found in quantum mechanics. In quantum mechanics, according to my very limited understanding of it (and I have a strong background in physics, but quantum mechanics is a very involved field to study), you cannot make more than one copy of a certain quantum state. As a simple example, if something can be in the state of 0 or 1, where 0 and 1 are arbitrary quantum states, then we could make a bunch of 0's, or a bunch of 1's. But if a particular particle were in state x, and we don't know what state x is, then we could not make two copies of that state and be absolutely sure that we didn't change the state and end up copying the wrong state. As I remember, we can only be sure that the new copies are indeed the same as the original 2/3rds of the time.
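As a sanity check on that 2/3 figure, here is a minimal Monte Carlo sketch (mine) of the naive copying strategy: measure the unknown qubit in a fixed basis and prepare a copy of whatever was observed, with states drawn uniformly from the Bloch sphere.
[code]
import random

trials, total = 200_000, 0.0
for _ in range(trials):
    u = random.uniform(-1.0, 1.0)   # cos(theta), uniform over the Bloch sphere
    p0 = (1.0 + u) / 2.0            # probability the measurement yields |0>
    # the copy is the observed basis state; its fidelity with the original
    # is the probability of observing that state again
    total += p0 if random.random() < p0 else (1.0 - p0)
print(total / trials)               # converges to 2/3
[/code]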
This is why quantum encryption cannot be spoofed, because you cannot copy the data stream. You can read it, which makes one copy, and there's nothing that prevents a single copy from being made. But preserving the original data stream means that you made two copies. By altering the original stream, the intended receiver knows that the signal has been compromised.
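A toy simulation of that detection effect (a sketch of the usual two-basis scheme, not of any particular system): an interceptor who measures and re-sends corrupts about a quarter of the bits the legitimate receiver later checks.
[code]
import random

def measure(bit: int, prep_basis: int, meas_basis: int) -> int:
    # same basis: the encoded bit is read out faithfully
    if prep_basis == meas_basis:
        return bit
    # wrong basis: the outcome is random and the original state is disturbed
    return random.randint(0, 1)

trials, errors = 100_000, 0
for _ in range(trials):
    bit = random.randint(0, 1)        # sender's data bit
    basis = random.randint(0, 1)      # sender's encoding basis
    eve_basis = random.randint(0, 1)  # interceptor guesses a basis
    eve_bit = measure(bit, basis, eve_basis)       # interception
    recv_bit = measure(eve_bit, eve_basis, basis)  # receiver uses sender's basis
    if recv_bit != bit:
        errors += 1
print(errors / trials)  # ~0.25: the disturbance that betrays the interception
[/code]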
Now everything in this world obeys the rules of quantum dynamics, and yet we can copy bits of memory with seeming absolute precision. What's the deal? Well, in the case of the bits in computer memory, the state is actually stored in a lot of quantum states, and thus we can make an imperfect copy where we have high fidelity through redundancy. There's always that non-zero possibility of copying the bit incorrectly, but as the amount of redundancy increases (for example, in larger transistors), that non-zero probability is still effectively zero in the real world.
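That "effectively zero" can be made quantitative. A short sketch, assuming each redundant carrier independently misreports with some fixed probability and the copy is taken by majority vote:
[code]
from math import comb

def majority_error(p: float, n: int) -> float:
    # probability that a majority vote over n independent carriers reads the
    # bit wrong, given each carrier individually flips with probability p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 51, 101):
    print(n, majority_error(0.1, n))
# a 0.1 error per carrier becomes ~1e-24 with 101 carriers:
# non-zero, but effectively zero in the real world
[/code]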
So what does this mean for sentience? Well, the brain can be simulated as a data processing system. But what if part of the brain's data processing happens on the quantum level? I don't think there's much evidence of this yet, as a lot of the literature I've been reading focuses on things at a much higher level than the quantum level (higher than the atomic or even molecular level, in fact).
But all that means is that to simulate a sentience, we could simulate things in a silicon substrate (or whatever other non-human-brain substrate). But where do we draw the line between actual sentience and a simulation of sentience that is non-sentient? Perhaps the answer lies in quantum mechanics: the complexity which must be imposed on the substrate must go down to the quantum level. In other words, a classical computer, no matter how complex, could not possibly ever become truly sentient, even if it were a simulation of a sentient human brain with "perfect" fidelity.
Another problem that would be solved, beyond the issue of sentience, is the problem of Duplicates. If a sentience is in a substrate that actively relies on quantum-level data processing and/or storage, then only one perfect copy could exist. I could copy myself only by destroying myself, so that there is never ANY fear of a duplicate being made.
These ideas are not new, and they are not originally mine. What I mean is that, while I came up with these thoughts on my own, I am sure that I am not the first, nor the most original, nor the most creative, nor the most well thought out. However, I thought I would apply them to this idea of substrates: a classical computer simulation would not suffice to "transfer" myself into. Quantum computing does hold some promise, but probably not for the better part of a century. It's not enough just to have quantum computers, which themselves will take a few decades to become available. Then the quantum engineers (electrical engineers using quantum computational components and theories) would have to collaborate with neurological scientists, etc., to figure out how to put them together. The invention of the integrated circuit didn't lead immediately to AI, and the invention of quantum computing, once realized, will not lead immediately to mind uploading. There will be a lot of lag time as we study the brain in the context of quantum computing, just as we are still currently studying the brain in the context of neural networks.
And when all is said and done, we still might not be in a position to upload our minds and hope for both sentience and avoidance of duplicates.
#101
Posted 11 November 2004 - 10:51 PM
"So what does this mean for sentience? Well, the brain can be simulated as a data processing system. But what if part of the brain's data processing happens on the quantum level? I don't think there's much evidence of this yet, as a lot of the literature I've been reading focuses on things at a much higher level than the quantum level (higher than the atomic or even molecular level, in fact).
But all that means is that to simulate a sentience, we could simulate things in a silicon substrate (or whatever other non-human-brain substrate). But where do we draw the line between actual sentience and a simulation of sentience that is non-sentient? Perhaps the answer lies in quantum mechanics: the complexity which must be imposed on the substrate must go down to the quantum level. In other words, a classical computer, no matter how complex, could not possibly ever become truly sentient, even if it were a simulation of a sentient human brain with 'perfect' fidelity."
I have made an effort to familiarize myself with QM in relation to consciousness. From what I gather, the relationship between the two is entirely speculative -- i.e., there's no data to back up this position. In many ways, the appeal to quantum forces appears no different from the claims of an "immortal soul" or a "Cartesian reality", except (of course) that arguments from QM are not denying the interconnected relationship between consciousness and physical processes, only claiming that the final point in the chain of causality (where consciousness really *exists*) is outside the physical world. Although I have materialist leanings, I can understand and sympathize with this position (I was not entirely satisfied with Dennett's CE); however, for the rational/scientific minds among us... whatever happened to Occam's Razor? [huh] You see, this is really why I consider myself a Materialist: until it is sufficiently demonstrated that there is something *beyond the physical*, I will continue to accept as my reality the simplest solution -- that consciousness is entirely the result of physical processes.
Jayd, is there something magical about the biological? What if I were to gradually replace my substrate with a silicon alternative? Would I be gradually turning myself into a simulation? [glasses]
#102
Posted 12 November 2004 - 12:06 AM
As for whether you would be turning yourself into a simulation, I don't have time, I'll have to get back to you, but I do have an answer... Not a good one, but I have one...

#103
Posted 12 November 2004 - 01:12 AM
I cannot accept that belief with absolute certainty at this time. Failing that, though, I can make a qualitative statement of likelihood and take materialist leanings.
This whole argument about consciousness, sentience, and artificial substrates that realistically and faithfully emulate such processes will, I suspect, come to a head as the supposed technology that will allow all this to occur is realized. I'm not saying philosophy will be removed from the discussion, but there will be a convergence or overlapping of philosophy and scientific knowledge that may narrow this argument, removing some of the unknowns and speculation.
#104
Posted 12 November 2004 - 03:36 AM
cosmos: "I'm uncomfortable with using the far reaching assumption that all that is can/will eventually be explained in terms of matter and physical phenomena."
cosmos, you may want to take a look at contemporary physicalism. It encompasses even that which probably can't ultimately be explained with serial language or in general.
cosmos: "I'm not saying philosophy will be removed from the discussion, but there will be a convergence or overlapping of philosophy and scientific knowledge that may narrow this argument, removing some of the unknowns and speculation."
Yes, this is what's happening now, per Quine.
#105
Posted 12 November 2004 - 05:07 AM
"cosmos, you may want to take a look at contemporary physicalism. It encompasses even that which probably can't ultimately be explained with serial language or in general."
Again I find this too far reaching. I understand that physicalism has been reconstructed from materialism as a more modern view to encompass things like physical forces, or so I've been informed by your link. However, the insistence that everything is of the physical and nothing exists outside of that is much too speculative, even if it is supported by what we've learned so far from physics. If I were to subscribe to this view I would have to hold it as belief, so as a result I would be arguing that everything is of the physical, and I cannot do that at this time. I would characterize myself as having physicalist leanings, but if there is something beyond the physical and it cannot be observed or by any means perceived, how can I be so presumptuous as to rule with certainty that nothing is there?
edit: The aforementioned may seem like a totally insignificant contention, but I think it's a valid point. Furthermore, there may be future developments/discoveries that contradict physicalism, yet that possibility has been entirely ruled out.
Edited by cosmos, 12 November 2004 - 05:35 AM.
#107
Posted 12 November 2004 - 07:59 AM
Let me link to a more concise description of Physicalism just for the sake of argument in this post:
http://en.wikipedia....iki/Physicalism
Now if we look at the section on supervenience, the article gives two statements that, according to the author, one can be reasonably sure every physicalist agrees with:
-No two worlds could be identical in every physical respect yet differ in some other respect.
-No two people (or beings, or things) could be identical in every physical respect yet differ in some mental respect.
While I would tend to agree with these statements, I cannot assuredly make those claims, ruling out any possibility that contrary scenarios exist. Also, to be more specific, in what respect can they differ? What if they differ in some respect yet undescribed or unknown; would one presume that all characteristics/properties are identical when the physical characteristics/properties are identical? In my case I would not make that leap of faith.
I'll add to this tomorrow after I've read your article and slept, perhaps I'm putting my foot in my mouth before understanding what I'm discussing.
#108
Posted 12 November 2004 - 01:42 PM
cosmos: "While I would tend to agree with these statements, I cannot assuredly make those claims, ruling out any possibility that contrary scenarios exist."
It is intellectually safe to have a measure of rational confidence when adopting a belief such as physicalism. It would not be a leap of faith. Essentially, a belief, its plausible alternatives, and their expected outcomes are compared against each other – sometimes quantitatively, sometimes qualitatively. But it is probably non-rational not to allow oneself to hypothesize toward maximizing expected gain (here, "gain" does not mean the subject must necessarily be central).
cosmos: "Also, to be more specific, in what respect can they differ? What if they differ in some respect yet undescribed or unknown; would one presume that all characteristics/properties are identical when the physical characteristics/properties are identical?"
I know you are still looking into it. No rush. I will just say that if a difference is probabilistically anticipated, this presupposes that there is some mind-in-general (a situationally optimized mind) that may, in principle, discern this difference. If so, the two worlds would not be identical in every physical respect (where physical means anything that would account for the difference).
#109
Posted 12 November 2004 - 02:22 PM
Me and only me, you mean?
Nothing. You are my coincarnations. Every one of you.
(It is not a body, nor memories ... just a self-averring routine; this is I. But they are the same at your place.)
#110
Posted 12 November 2004 - 06:19 PM

What do you mean you?
There are no *yous*, there is only me.
So East meets West and We are One. Only there is no *me*, only a *we.*
Oh, so sorry, no *we*, just *me.*
What?
My God!
You folks have been saying the same damn crap since the Pre-Socratics; lighten up, why don't you.
Actually I have two serious comments.
1. The problem of the computational complexity requirement for Super Intelligence and even functional *Consciousness* is sidestepped entirely by examining the ability of the human psyche to support more than one *me* through a technological augmentation of broad cerebral communicative exchange.
Impossible you say?
Too dangerous perhaps, but not impossible, as we already have people with *naturally* occurring severe multiple personality disorder. The reason this circumvents the argument on computational complexity is that the human mind/brain already exists and is proven operational, so all that is required of technology is to *link* minds in a manner that allows the complex aspects of identity to be *shared*.
We might not be able to build a true copy so soon, or even a new synthetic substrate, but we might be able to provide more than one consciousness the ability to merge with another technologically.
Before you say "too difficult", be very careful; it took a million or more years to learn to talk, but it doesn't require a voice so loud that it has to shout around the world to be heard. (a metaphor, folks, for the *literally* deprived)
*All* neurofunction data would not have to be exchanged to create a communicative *conscious link*, and the ability to merge the simpler and subtler physiological information would generate the ability to literally *feel* what another person is experiencing. At some point, the distinction of self as many of you try to define it may then become moot.
2. A merged consciousness combined with augmented quantum computational ability will equal or trump a purely *synthetic* AI, and it can do so much sooner than technology can equal or exceed the human brain in a stand-alone capacity. A Singularity event then would include not a single human mind but many, and these can in all likelihood build the machine you seek, as they would be a part of it too.
So you have all been asking the wrong question.
It is not about: What Constitutes me?
Hell, Thomas, it is not even: Do *you* exist?
As Thomas asserts, I am confident *I* do; the rest of you are all figments of my imagination [lol]
But if *I* & *We* exist, can we step beyond a single brain in a bio-synthetic matrix?
What constitutes me exists within the brain, as the body, piece by piece, is expendable, yet the brain really is not. You don't exist in all parts of your brain either, but our *self-perception* is a composite of experience, senses, and biofeedback from our bodies in the form of related rational and autonomic behavior.
Now if you want to talk of *souls*, this discussion belongs in a different thread, but those that favor that sort of thing generally argue that none of the body is needed, including the brain, and I am pointing out that is irrelevant to this discussion.
I exist as a consciousness supported by a brain that also supports my body and is supported by it. Thomas, you do nothing for me except make me laugh, so I do not need you to exist and be self-aware.
#111
Posted 13 November 2004 - 12:26 AM
Originally posted by Another God <-from bjklein.com
This is one of the oldest Philosophical questions, and I still don't think it is resolved.
as i was reading, this first statement caught my eye. let me first comment on it before stating my opinion. you stated this first comment as if you expect this question to be answered. i apologize if this was not your meaning, but if it was:
have you heard of Godel's Incompleteness Theorem? briefly, it states that any consistent formal system powerful enough to describe arithmetic (a computer program, perhaps the human mind, etc.) can pose questions that cannot be answered within itself, and must be answered from without. according to this theorem, we, as part of the "system" of the universe, are capable of posing questions about our "system" that we will never be able to answer, due to the fact that we are a part of the very system the questions are about. if this is true, then at least some of the questions posed by philosophy, seeing as they are the "difficult questions", could fall under the category of questions that we, as a part of the system of this universe, cannot answer ourselves. if this is true, then, although finding the answers is possible with the aid of a "part" from a system other than our own, the answers being found to such questions is an improbability (but, of course, not an impossibility).
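for reference, the standard statement of the first theorem is a bit narrower than my paraphrase above (this is the textbook form, not my own result):
[code]
% Godel's first incompleteness theorem, standard statement:
\textbf{G\"odel I.} If $T$ is a consistent, effectively axiomatizable
theory extending elementary arithmetic, then there exists a sentence
$G_T$ such that $T \nvdash G_T$ and $T \nvdash \lnot G_T$.
% i.e. G_T is neither provable nor refutable inside T itself.
[/code]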
now, to give my perspective on the question at hand, i would first like to make the usual statement i make when answering such questions: it depends upon your perspective.
i would also like to ask in reference to the first question: how, exactly, do you define "me"? "me" is an entirely relative word. to the uploaded "you", "me" would describe it, whereas the non-uploaded "you" would in turn define itself as "me". you would be two separate entities, both defined as "me" to your respective selves [now, if the actual question that you were meaning to ask was whether the uploaded "you" and the non-uploaded "you" were separate entities, then i, personally, believe that they are]. honestly, the more i think about the question, the more i find it to be a moot point. as i stated before, they are (granted, in my opinion) two separate entities, each defining themselves as "me", and, if that is true, then why ask the question? granted, if you are asking from the perspective of the non-uploaded "you", then, based on the assumption that you are two separate entities, the answer would be no, the uploaded "you" could not be defined as "me" because it is separate from you. it could be defined as "created" from you (at least from the sum of your knowledge and experience). but your parents do not define you as "me" (in reference to themselves), and you are "created" from them, so why should you consider the uploaded "you" to be defined as "me", assuming of course that you see it as having been "created" from you? but, i digress.
as to the second question (i tired of the first, so onward i move): assuming that you did, in fact, download your mind into a clone of yourself, then based upon all of the babble that i wrote while answering the first, you could once again say no.
to the final question, i have to ask several questions (and make several statements) before i can give my opinion.
first statement (it is actually a set of related statements): your question is rather disjointed, at best (i mean that in the most pleasant of ways possible). your sentence structure makes it difficult for me to decide upon what you were actually asking. first you wrote: if i copy my brain, only one neuron at a time, replacing each neuron with that copy, over a long period of time, will i still be me at the end? this brings me to my first question: are you suggesting that you are copying a neuron and then replacing the "original" with a copy inside the original brain (i.e. the source of the neuron which was copied)? my second question: when you say "over a long period of time", do you mean you wait a long time before replacing the neuron (e.g. you make the copy and then wait said "long period of time" and then replace it), or do you mean you make a copy and replace the "original" with its copy "immediately" and perform this process over a long period of time? the second question you posed along with the final question was: when will the transition have occurred? i understand the sentence, but not the reasoning behind asking it. do you mean when it was completed, started, or what? granted, if asked in any other context this question would be perfectly logical, but here, i feel as if i am missing something. it [the manner in which this question was presented] is not clear as to the purpose of the question.
now, to actually answer based upon my understanding of the question (i hesitated -- which led to the previous questions -- to answer this question outright because, if i am, in fact, misunderstanding your meaning, then it would be a waste of breath to answer the question. the reason i am answering it now is: i am on a forum and its participants seem slow to respond, so i can give an answer, even if it is ill founded, without having to wait an extended amount of time for answers to the questions that i ask while attempting to answer another question), i would say that it depends upon your perspective (amazing, i AM predictable...) on how the neuron and the human mind work. in all reality, i do not have a great enough grasp upon the inner workings of the mind to answer this question (which in turn means that i did in fact waste my breath; after all, i asked all those previous questions only to say that i can't answer the question). in the cases of the previous questions, you were suggesting the copying of the entire brain (and all that entails), which, as a whole, works in a way that i can understand, but only as a whole. when you break the mind up into its smaller components, i do not quite understand how they work in relation to the whole.
hm... well, that is my opinion. i apologize for the slightly abrasive manner in which i answered your question.
i am sure i have forgotten something i meant to say, or some other oddity...
#112
Posted 13 November 2004 - 03:00 AM
"It is intellectually safe to have a measure of rational confidence when adopting a belief such as physicalism. It would not be a leap of faith. Essentially, a belief, its plausible alternatives, and their expected outcomes are compared against each other – sometimes quantitatively, sometimes qualitatively. But it is probably non-rational not to allow oneself to hypothesize toward maximizing expected gain (here, 'gain' does not mean the subject must necessarily be central)."
This post is not addressing Physicalism, but rather the statement you made here. I can have rational confidence in a certain view without adopting it as belief. I can make an argument for a certain position without holding that view as absolute. I say this because I rarely lay down certainties when it comes to positions, even when they are well reasoned. Does this mean I can hold few if any positions with confidence? No. Instead, for those views that I have confidence in, I would place qualitative or quantitative likelihoods in their favour. In place of supposed certainties, in many (but not necessarily all) cases I can claim an infinitesimal probability of contradiction(s) to such a position.
edit: If this seems somehow nonsensical, point it out to me. I have very little formal education in philosophy, mostly smatterings of knowledge from philosophical areas I'm interested in.
Sorry for sidestepping Physicalism for the moment, but I'm in a medicated cream induced stupor. I'll be back in a bit to continue this discussion.
#113
Posted 13 November 2004 - 11:46 AM
Nice to talk with you again.
You find it funny, as I understand. Well, it isn't.
What we are is just a self-referencing computation running inside some hominid heads.
Get used to that.
#114
Posted 13 November 2004 - 01:27 PM
I would first like to say a little about the importance of the substrate in a system. In some applications, a great deal of variation in substrate choice is quite acceptable. For music, many storage substrates are acceptable but many transmission substrates are not. Music can be stored mechanically, optically, or magnetically. However, substrates for transmission of music cannot be regarded in the same way. A modulated light source could transmit exactly the same flow of music data as a sound source. The room would be filled with the same stream of information, but the impact on a listener would be radically different. I do not think very many people could appreciate a concert played from a light-emitting diode as much as they could appreciate a concert played from a speaker. This is why I suspect that more than a certain flow of data bits is essential for sentience. I suspect that the substrate in which the data flows, such as a biological system of neurons, must also meet exacting requirements.
Concerning the role of quantum states in sentience, the primitive state of scientific knowledge in this area leaves much room for speculation. Duplication may be a problem because any attempt to duplicate quantum states would disturb them. However, the quantum states are definite and finite, even if very large in number. Even if dynamic quantum states are crucial to sentient activity, the data for a cryonically suspended person would seem to be entirely static. A cryonically suspended person may have a potential for being restored to sentience but is not sentient during any of the time of cryonic suspension.
#115
Posted 13 November 2004 - 01:32 PM
cosmos: "I can have rational confidence in a certain view without adopting it as belief. I can make an argument for a certain position without holding that view as absolute."
Yes, you are right if "belief" is taken to mean the acceptance of something with 100 percent certainty. The meaning behind my usage was not as strong, more like a hypothesis. When a belief is being appraised along with other plausible beliefs of similar latitude, and the set of beliefs is incoherent, this renders each belief a hypothesis rather than something taken on faith. The belief with the strongest appraisal would be given the highest rational confidence. Since it is possible to have more than one investigation in progress, two or three beliefs may have high rational confidences even if they are inconsistent as a whole.
#116
Posted 13 November 2004 - 05:28 PM
I see now that Physicalism is too large a subject for me to outright say I have a high degree of confidence in it, particularly without sufficiently considering the possible contradictory evidence. At this time I can only say I have physicalist leanings, the same position I had before I read that entire article.
#117
Posted 14 November 2004 - 01:58 PM
"I would first like to say a little about the importance of the substrate in a system. In some applications, a great deal of variation in substrate choice is quite acceptable. For music, many storage substrates are acceptable but many transmission substrates are not. Music can be stored mechanically, optically, or magnetically. However, substrates for transmission of music cannot be regarded in the same way. A modulated light source could transmit exactly the same flow of music data as a sound source. The room would be filled with the same stream of information, but the impact on a listener would be radically different. I do not think very many people could appreciate a concert played from a light-emitting diode as much as they could appreciate a concert played from a speaker. This is why I suspect that more than a certain flow of data bits is essential for sentience. I suspect that the substrate in which the data flows, such as a biological system of neurons, must also meet exacting requirements."
The LED does not need to be of the same complexity as the music; the substrate only needs to be able to TRANSLATE the information of the music into a transmittable form of *signal information* and back again into the *original form.*
I am glad you have chosen this example, Clifford, because it focuses the debate on the most critical aspect of the entire discussion: not *what is the substrate* but what is the essence of it.
If the essence of consciousness is something tangible (physicality), then it can be quantified. If it is something *insubstantial*, then no substrate other than the one we are familiar with might do as a *vessel for these souls.* This last point, I am afraid, assumes too much ignorance to support a fully rational discussion, because it also lacks a *rational measure*; but if one were discovered, then the essence would no longer be exemplary of something *insubstantial* and too complex to be quantified.
You see, many of us suspect the essence of consciousness is likely to be found to be a mix of the tangible and something only a little less (or more, depending on your perspective) than substantial: information. Information is complex but reducible to a universal pure-language form we understand as mathematical logic.
ALL information, then, can be translated into a form that can be copied, transmitted, and received. The key is not merely by what *substrate* it can be *contained* but how it is converted. The wax disk or CD does not have to be as complex as Beethoven to reproduce those works. The language of *life* is information, and our psyches are a complex amalgam of experience, sensation, and genetic programming; all forms of information encrypted in our minds, which together define how we perceive our individual selves.
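A trivial concrete sketch of that point (mine, with arbitrary example data): the same message survives translation through several dissimilar carriers and comes back bit-for-bit.
[code]
import base64

# the same information carried through dissimilar "substrates" and recovered exactly
message = "Ode to Joy"                          # arbitrary example data
as_bytes = message.encode("utf-8")              # stored as raw bytes
as_text = base64.b64encode(as_bytes)            # transmitted as base64 text
as_numbers = list(base64.b64decode(as_text))    # held as a plain list of integers
recovered = bytes(as_numbers).decode("utf-8")   # translated back to the original
assert recovered == message
[/code]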
I would also suggest that this is in part why memes are so powerful, not only to our species but to all life: memes provide a medium for the quantum extension of the more primitive genetic information, which promotes faster and more effective adaptability than biological evolution (extending Natural Selection to a more complex imperative while fulfilling the biological imperatives at the same time). This is the profound counter-argument to Fukuyama et al, who see biological evolution as the pinnacle of the process of progress rather than merely one step. OK, I admit this part is really a separate discussion.
Information as encrypted data is a question of the complexity of the math, not the substrate, except in that, as a universal language, the math IS the substrate. Next comes a very different question: how could we use such a sufficiently complex mathematical language?
Now as to the human code, it turns out to be far more basic than complex, and this is the error of association that many people make. As in all engineering projects, KISS (keep it simple, stupid) is critical to survival, and Deus-ex-Machina Man is no exception. The machine is complex in its integrated external function (evolution, survival, behavior) but not in HOW it necessarily operates internally. Any substrate system too complex would create too many errors of operation and system failures, resulting in extinction. The internal body language of operation is only as complex as required to maintain the biological feedback loops and controls, no more and no less.
Now let me point out this is not to diminish the real complexity of our biological systems. I do respect and recognize how truly wonderful and complex those systems are, but it is not true to say they are beyond our ability to understand in full, and that is an aspect of their beauty too. There is no eternal mystery to them beyond our ability to quantify and appreciate, yet so much of your argument appears to rest on the assumption that there is a level of complexity beyond our grasp now that will always remain so.
My point about sidestepping the discussion is to offer the alternative test: if we can convert the information of *me* into an alternative form, then it becomes a question of how to convert it back, and by what medium and means we can accomplish this. The human brain is a proven vessel for such an essence (the information of *me*).
We are already a social species. We are already capable (albeit as a malady) of containing multiple personalities within a brain, and we are already a species capable of the complex communication of information. In a sense this is simply a quantum leap in the complexity of information, but it is not beyond the scope of credibility. Yes, the requirements are great, but they are not impossible and insurmountable.
The challenge of rationally coping with multiple personalities could be overcome if the individuals doing so were sufficiently compassionate, sympathetic, empathetic, intelligent, and aware of what was happening to them; it then becomes a matter of how much stress the biological and psychological system of the *self* can sustain. The questions of selfishness versus selflessness and ego versus fear are all relevant challenges, but not in themselves insurmountable.
It now becomes a question of how complex the *total* information, and the conduit for its translated communication, would need to be for such a joining. The process of making this happen begins with basic communication and existential sharing. That process can be done with far less risk than you are suggesting, because what I am describing is not an all-or-nothing process but one that can be tested and sampled a few bits at a time, one function at a time, until the language of the brain is fully deciphered and translated.
I would agree (a shared suspicion or belief) a priori that some of that language will be encrypted genetically, but even if it is, that doesn't mean that memories encrypted biologically as prions couldn't also be decrypted without destroying the originals, and thus shared. Nor does it logically preclude the possibility of simultaneous experience, such that two (or more) individuals linked in the manner I am describing would be able to perceive and share multiple sources of sensory information, thus becoming a larger whole (shared) experience while preserving the identity of self. This *Synthetic Super Self* then begins to exist in its own right, but its individual members can link or detach themselves as desired; when they return to that *Artificial Intelligent Entity* composed of merged intellects, they could upload their separate experience, and as such it is both preserved and can perhaps be put back into a sufficiently complex *substrate* once such is made available.
This form of Artificially Intelligent Entity can also be augmented with purely synthetic intelligence in the form of quantum computational ability, but such quantum computers do not have to be more complex than human brains to assist human minds in becoming vastly more complex than they presently are. This is why I call this merged consciousness a bio-synthetic construct: its links are artificial (what I call techlepathy), and it has the ability to access and be enhanced by machine intelligence.
Too much of your argument appears to be predicated on the idea that something exists beyond our ability to grasp, and also that the one system that works is the only one imaginable that could work.
By observing the brain/mind in direct communication and function, we will inevitably decrypt its internal *Operating System*, and this will allow us to create that more complex substrate that can stand alone, a substrate far more indestructible than the one we have inherited from 3 1/2 billion years of evolution. It is not logical to claim that it is impossible to create this, even with the level of information we currently have.
Edited by Lazarus Long, 15 November 2004 - 02:30 AM.
#118
Posted 15 November 2004 - 12:23 AM
#119
Posted 15 November 2004 - 04:42 AM
"With scientific knowledge of sentience being so primitive, it would be very difficult to say what requirements a system would need to meet to support sentience."
A key point.
It was said before, but it's worth repeating.
#120
Posted 15 November 2004 - 01:56 PM
The human brain is already proven to work. What I am asking is: can we tap a sufficient level of information within it, and then translate and transmit that information successfully between brains to share the experience?
This builds on a natural skill we already possess: social communication. If two individuals can share a common experience, then we go beyond the limits of a single brain without raising the substrate question, because we aren't changing the substrate; we are only altering the manner in which we access and utilize it.
Also, the science of sentience is without doubt still relatively nascent, but it is accelerating at a profound rate as a tremendous wealth of valid data is acquired daily. It has finally moved beyond para- and pseudo-science. What we came to understand about the body over the last five centuries was more than doubled in the last fifty, and what we are coming to understand about the brain/mind in the last ten years is equivalent to the last ten thousand.
We are finally acquiring real data and mapping functions at a molecular level. We are integrating the knowledge of genetics with neurophysiology, and yes, it will require time to associate all the data with all the theoretical models, but the profound difference is the quality and quantity of the data.
In addition, there have never before been such precise tools for accomplishing this task in all known history, and the *Will to Know* exists as well. The level of knowledge we have is more than cumulative: it is advancing faster than our current understanding, and what we learn is rapidly and synergistically modifying what we thought we knew. It may be primitive, but it is no longer neolithic, and we are far past the dawn of this watershed; much of the basics are mapped out accurately enough to work with, and the refinement of those maps and models of neurofunction is computer-assisted, advancing daily at a rate equivalent to decades in the past. We also have more people, in more places, working and sharing what they learn and know than at any time before.
The days of the *mystery of the mind* are numbered and that number is shrinking at an accelerating rate.