  LongeCity
              Advocacy & Research for Unlimited Lifespans



Does intelligence have intrinsic value?


81 replies to this topic

Poll: On which would you have more compassion? (26 members have cast votes)

  1. The sentient entity: 14 votes (70.00%)

  2. The intelligent entities: 6 votes (30.00%)

#61 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 21 May 2006 - 08:17 PM

Clifford, I think you're viewing the philosophical zombie thought experiment in the "wrong" light. The idea behind a philosophical zombie is that it has all the physical parameters in place, and hence is physically indistinguishable from a real sentient. However, it is assumed (e.g., by Chalmers) that the physical does not have to entail things such as qualia, even if they happen to arise from the physical (this is the "Hard Problem"). Since the qualia are not entailed, it is logically consistent* that a philosophical zombie could have all the physical characteristics and yet not have qualia.

The result is that a philosophical zombie will always claim to have qualia, and in fact, it would be indistinguishable from a sentient being by any physical test. Hence, there's no way to know whether anyone but yourself actually is sentient or just a zombie, though of course one could put a probability on such possible facts.

However, this strict sense of a philosophical zombie (the one referred to by Don and Dennett) isn't the only useful such zombie for philosophy experiments. Your version certainly has its uses, though it's quite far removed from the "standard" philosophical zombie, and people like Don and Dennett will laugh at it (hence the "THEY'RE ALL GONNA LAUGH AT YOU!" comments).

* = there is a bit of disagreement over whether such a philosophical zombie is actually logically consistent. Don (at least at one point) held that such a zombie would NOT be logically consistent, and Dennett has strongly argued this point as well.

BTW Don, I purchased about three books by Dennett last year, when we were going at it with gusto, and I still haven't had a chance to read them (well, I read the first four or five chapters of Consciousness Explained). My new job will soon afford me free time, so after the forum upgrade, I may actually get back to this. Of course, we'll likely have to start from scratch, since it's been a year and I suspect we're both in somewhat different places by now.

#62 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 22 May 2006 - 01:35 AM

jaydfox wrote:

Clifford, I think you're viewing the philosophical zombie thought experiment in the "wrong" light. The idea behind a philosophical zombie is that it has all the physical parameters in place, and hence is physically indistinguishable from a real sentient. However, it is assumed (e.g., by Chalmers) that the physical does not have to entail things such as qualia, even if they happen to arise from the physical (this is the "Hard Problem"). Since the qualia are not entailed, it is logically consistent* that a philosophical zombie could have all the physical characteristics and yet not have qualia.

This is helpful, because the brief reference I saw about philosophical zombies mentioned that they behave like a conscious person but are not conscious. That reference failed to mention that all physical parameters for a conscious person are in place.

jaydfox wrote:

The result is that a philosophical zombie will always claim to have qualia, and in fact, it would be indistinguishable from a sentient being by any physical test. Hence, there's no way to know whether anyone but yourself actually is sentient or just a zombie, though of course one could put a probability on such possible facts.

I could see physical tests being unable to distinguish a philosophical zombie from a sentient person if physics influences sentience but sentience does not influence physics. However, the idea of people physically communicating ideas about sentience with each other, no matter how vague, with sentience having no influence on physics, is patently absurd. Sentience must have at least a weak influence on physics for us to be able to talk about sentience at all.

jaydfox wrote:

However, this strict sense of a philosophical zombie (the one referred to by Don and Dennett) isn't the only useful such zombie for philosophy experiments. Your version certainly has its uses, though it's quite far removed from the "standard" philosophical zombie, and people like Don and Dennett will laugh at it (hence the "THEY'RE ALL GONNA LAUGH AT YOU!" comments).

In my version, the philosophical zombie would require some feature clever enough to trick any tester into registering that the criteria for sentience are present. A simple analogy (not example) of such a feature would be a pair of wavelengths of light that the eye cannot distinguish from a single wavelength of light. However, in this analogy, the physical parameters of the light are different. Since the standard model for the philosophical zombie specifies identical physical parameters, I find that model logically absurd, for reasons mentioned above.

Edited by Clifford Greenblatt, 22 May 2006 - 02:38 AM.


#63 tomjones

  • Guest
  • 36 posts
  • 0
  • Location:Internet

Posted 22 May 2006 - 03:23 AM

I would have more compassion for the intelligent entities

-Jack


#64 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 May 2006 - 04:39 AM

Clifford, at this point, revisiting my post here might help clear things up (or not...):
http://www.imminst.o...T&f=3&t=7745&s=

When I first heard of philosophical zombies, I quite directly took them as an attack on functionalism. But whereas Dennett seems to be talking of an unconscious human being, identical in every physical detail with a conscious human being, I was taking functionalists at their word. That statement might seem somewhat pedantic, but I suppose the answer's in the details.

The functionalists, or at least those of the strong AI variety, claim that we don't need biological neurons. Neurons perform a computable function, whether of the simple variety (when the sum of inputs in a given window of time reaches a certain threshold, fire) or the complex variety (a weighted sum of inputs, including decay of signal strength, plus internal computation performed by sub-neuronal structures {pixie dust or otherwise}), and this computable function can be simulated, whether by a non-biological piece of hardware, i.e. a synthetic neuron, or by software. The collection of all such neurons, laid out as they are in a human mind, would act indistinguishably from a human, at least on tests that don't probe within the confines of the skull (i.e. a PET scan would be a dead giveaway, but any standard behavioral test would be passed with flying colors).
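
To make that concrete, here is a minimal sketch of the two neuron abstractions just described, the simple threshold-sum variety and the weighted-sum-with-decay variety. All class names and parameter values are invented for illustration; this is not any real simulator's API.

# Minimal sketches of the two neuron models described above.
# Names and parameters are illustrative, not any real simulator's API.

class SimpleNeuron:
    """Fires when the summed input in the current window reaches a threshold."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.accumulated = 0.0

    def receive(self, signal):
        self.accumulated += signal

    def step(self):
        fired = self.accumulated >= self.threshold
        self.accumulated = 0.0  # reset the integration window
        return fired

class WeightedNeuron:
    """Weighted sum of inputs, with older signal strength decaying over time."""
    def __init__(self, weights, threshold=1.0, decay=0.9):
        self.weights = weights
        self.threshold = threshold
        self.decay = decay
        self.potential = 0.0

    def step(self, inputs):
        self.potential *= self.decay  # earlier signals fade
        self.potential += sum(w * x for w, x in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0  # fire and reset
            return True
        return False

n = WeightedNeuron(weights=[0.5, 0.5])
print([n.step([1.0, 1.0]) for _ in range(3)])  # [True, True, True]

Either function is computable, which is all the strong AI claim needs.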

Under this notion of strong AI, the assumption is made that, since it functionally acts like a human, it must subjectively experience everything that a human can experience, and in fact, if it were a copy of a real human, that person's consciousness would jump to the program or inorganic brain just as surely as it would jump to a duplicate.

Hence, we see the tie between zombies and the duplicates problem. It's hard to imagine a zombie in an atom-for-atom, electron-for-electron copy of a human being. Not without dualism, anyway. On the other hand, it's quite easy to imagine a zombie in a function-for-function copy of a human being, even if that copy is a piece of software running on a computer. And since the flow of time is all relative anyway, it's very easy to imagine a zombie in a function-for-function copy of a human being, running as a piece of "software" on a really big Analytical Engine of the Babbage variety.

...

But once we go down the functionalists' (or the subset of functionalists championing strong AI) road, we no longer have the luxury of saying that dualism must exist for the zombie to be logically possible. Now the functionalists must consider the possibility, and this makes them uncomfortable. Now it's their opinion versus mine, and I have intuition on my side. Furthermore, regardless of the physical possibility of zombies, the logical possibility now prevails. Thought experiments can no longer be declared invalid on general principle, but actually require analysis (heaven forbid!).

Now I got off on a bit of a side tangent, but I did have a point I wanted to make. It relates back to this notion of mixing physics and philosophy. You see, I got mixed up on the definition of a zombie, because I assumed, rather naturally in fact, that PET scans and fMRIs were obviously off limits. We were talking about whether anything that could pass all so-called Turing tests, in the behavioral, external sense, would by definition have to have qualia. In other words, we have two people, biologically human in every detail, except that one has had his brain surgically replaced with a computer running software. We can't tap on their heads and see which one sounds hollow, we can't use a strong electric field in the hopes of scrambling the computer's circuits, and we can't use an x-ray machine to figure it out. But we could run all the standard behavioral and sensory tests, testing whether each observes standard optical illusions, etc., etc.

You'll see that I struggled with philosophical zombies as well, since I entered the debate from a different starting point: functionalism. Under functionalism, a computer simulation of a human brain would be sentient. However, under a more strict interpretation of physicalism, which requires more than just functional equivalence, it is logically possible for two brains to be functionally equivalent, but only one to be sentient. It was from this angle that I approached philosophical zombies.

Imagine a brain simulated with silicon neurons (but otherwise physically structured the same as a biological brain). Physically, they are quite different, even though they might appear to function identically. Now here's the kicker: the silicon brain, by acting functionally identical to the biological brain (short of intrusive tests such as PET scans or fMRIs, etc.), would be the functionalist's equivalent of a philosophical zombie. Indistinguishable by function, yet non-sentient (well, this point can be argued, but I'm discussing the possibility, not the "fact"). But of course, physically, we could tell the two brains apart: one is predominantly silicon, while the other is predominantly carbon, hydrogen, oxygen, and nitrogen, etc.

For me, I'm not interested in going all the way down to the atomic level (not for this thought experiment; for the "duplication" problem, we must go all the way down...). I'm merely interested in whether something can be *functionally* identical to a human, yet not sentient. That would be a basis for defining intelligence in the absence of sentience. Sentience is being *assumed* by some people to be inseparable from intelligence, which is making it well nigh impossible for those people to have a rational discussion here, because they use the words synonymously. They *might* be intertwined, and you can even argue that they are *almost definitely* intertwined, but they are separate concepts and we need to be careful to separate them when discussing them.

However, there was another quirk that I got at in my litany: the idea that our actions are so shaped by past precedent that, were we to shut off our sentience somehow, we would be slow to show any symptoms. Rather than saying that my verbal report of seeing the color red is directly caused by that sentient experience, or that it is not caused by it, we can qualify the statement: the verbal report is influenced, but to a degree so small that it hardly seems to register. The concept isn't difficult to understand. Ben Goertzel had an interesting theory on a separate but related topic: free will. The "conscious" part of our brains tries its best to predict our actions, and it's right so often that it begins to believe that it's calling the shots, when at best it merely influences them slightly.

With sentience (or the portion code-named qualia), my idea was similar: our qualia do slightly influence our behaviors, such that, over a period of time, our consistently experiencing "red" when asked to give a verbal report eventually leads to our responding with the word "red". At some point, we begin to believe that our experiencing red (in the sentient sense) is what causes us to say "red", when in fact, it's largely caused by subconscious (non-sentient) functionality in the brain:

But this got me to thinking about the basic objection to zombies in the first place: Why would a zombie claim to see red, and in fact have a very strong conviction of having seen red, if he isn't in fact "seeing" red? The answer should have been obvious, but it actually took a fair amount of reading in Dennett's material to point it out.

You see, people like to attach meaning to things. And we like to think that the meaning we give things is actually the thing's "meaning". Of course, a computer doesn't know the meaning of things, other than what we tell it. We program a computer to think that such-and-such is really such-and-such, and not this-or-that. But the computer doesn't know the meaning.

Connectionist models of AI helped underscore the idea that the computer could learn the meaning, in that it could make a predictive model that's right most of the time, so that without being taught the meaning of things, it stumbles upon and learns the meaning. It's only "sure" that the meaning is correct, insofar as it's right most of the time.
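
As a toy illustration of that point (everything here, including the color encoding, is invented for the sketch): a perceptron-style learner is never told what "red" means; it only gets feedback on whether its guesses were right, yet it ends up with a predictive model that is correct most of the time.

# Toy connectionist learner: never told the "meaning" of red, only whether
# its guesses were right, it converges on a mostly-correct predictive model.

def train(examples, epochs=20, lr=0.1):
    weights = [0.0, 0.0, 0.0]  # one weight per RGB channel
    bias = 0.0
    for _ in range(epochs):
        for rgb, is_red in examples:
            guess = sum(w * c for w, c in zip(weights, rgb)) + bias > 0
            error = (1 if is_red else 0) - (1 if guess else 0)
            weights = [w + lr * error * c for w, c in zip(weights, rgb)]
            bias += lr * error
    return weights, bias

# (r, g, b) intensities in [0, 1], labeled "red" or not
examples = [((0.9, 0.1, 0.1), True), ((0.8, 0.2, 0.0), True),
            ((0.1, 0.9, 0.1), False), ((0.2, 0.3, 0.9), False)]
weights, bias = train(examples)  # the learned "meaning" of red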

People, it would seem, learn meanings in much the same way. ...

...

And so it is, I think, with qualia. Well, yes, yes, this much is obvious. We learn that the color we perceive as "red" is in fact called "red" through a learning process.

But I want to go a step further. We develop beliefs, ingrained by repetitive use and example, reinforced by correct behavior based on that belief, so that it becomes rather perfunctory. We don't have to defend the belief with rational arguments every time the belief is called up, whether subconsciously or consciously.

And I suggest that when we give a verbal report of red, it's largely just a perfunctory response. Sure, we actually do experience the redness as well, but that's just coincidence. That coincidence helps reinforce a behavior that's reinforced to hell and back. Take away the qualia, and for a while, we'd still say red, and still demand that our experience is real, and not illusory. The functional "dispositions" that Dennett describes would still be there, still disposing us to whatever emotional responses we might have to redness, etc.

But the redness isn't really there, and if there is no dualism involved, then eventually, negative feedback is going to start undoing those dispositions, unraveling those reflexive behaviors and dispositions. In a sense, the zombie would be virtually indistinguishable from a real human for the first few seconds, but slowly, whether it took minutes, hours, days, or weeks ..., the zombie would realize his or her predicament, and we'd be able to easily distinguish them, not just with PET scans, but simple verbal and sensory tests.
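
That unraveling story can be caricatured numerically. In the sketch below (the rates and the decay model are my own assumptions, not anything from Dennett), a verbal disposition is kept topped up while qualia coincide with reports, then erodes under negative feedback once the reinforcing signal is gone: near full strength at first, nearly gone much later.

# Caricature of the "unraveling" story above: a disposition reinforced
# while qualia are present decays slowly once they are switched off.
# All rates are assumptions made for illustration.

def simulate(steps_with_qualia=50, steps_without=200,
             reinforce=0.05, decay=0.02):
    disposition = 0.5  # strength of the ingrained "say red" habit
    history = []
    for t in range(steps_with_qualia + steps_without):
        if t < steps_with_qualia:
            disposition += reinforce * (1 - disposition)  # kept topped up
        else:
            disposition -= decay * disposition  # negative feedback erodes it
        history.append(disposition)
    return history

h = simulate()
print(f"just before shutoff: {h[49]:.2f}, long after: {h[-1]:.2f}")
# just before shutoff: 0.96, long after: 0.02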



#65 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 22 May 2006 - 06:58 AM

Sorry that I can't really "get down and dirty" in this conversation. Maybe next time.

Jaydfox

However, this strict sense of a philosophical zombie (the one referred to by Don and Dennett) isn't the only useful such zombie for philosophy experiments. Your version certainly has its uses, though it's quite far removed from the "standard" philosophical zombie, and people like Don and Dennett will laugh at it (hence the "THEY'RE ALL GONNA LAUGH AT YOU!" comments).


Quite right Jay, a correct philosophical zombie demands strict functional equivalence. However, I would also like to make clear that my antics are all in the spirit of fun and playfulness (somehow they seem to lose some of their effect online vs in the real world [lol] ).

Here's a knock-down refutation by Nigel Thomas:
Zombie Killer

If someone speaks falsely, they are either lying (i.e. intentionally stating things they disbelieve) or mistaken. However, since, ex hypothesi, my zombie twin is cognitively indiscernible from me, it cannot be lying when it claims to be conscious. Lying, after all, presupposes having an intention to lie, and if I do not have such an intention, neither does my cognitive doppelganger.

Furthermore, telling a lie surely involves cognitive mechanisms different from those involved in speaking sincerely. If, for example, the latter involves a mechanism whereby inner belief representations are converted into articulate form, lying, at a minimum, must either involve an extra or alternative mechanism whereby they are also negated, or it must apply the articulation mechanism to representations of certain disbelieved propositions (something I am certainly not doing when I claim to be conscious). In any case, my zombie twin cannot be both lying about being conscious and cognitively indistinguishable from me.

But suppose the zombie genuinely but mistakenly believes that it is conscious. Its claims will not be lies, and articulating them will involve exactly the same intentions and cognitive mechanisms that I employ in expressing my unmistaken belief. Here we must consider the mechanisms of belief formation. Do I and my zombie twin infer that we are conscious from our mutual observation of something that is reliably correlated with (or even sufficient for) consciousness in this world, but is not so correlated in the zombie world? Sometimes perhaps, but this had better not be the only way we know about our consciousness, because we could not then discover the correlation (or sufficiency). Conceivably consciousness (and the correlation) might gain its place in our conceptual repertoire as a non-observational term of some folk theory, but the zombiphile must surely reject this suggestion, because it leaves the door wide open to standard eliminativist moves (Churchland, 1979): i.e. to the possibility that consciousness, like phlogiston, just does not exist, that we might be zombies. Furthermore, given the notorious difficulty of integrating it into our scientific world view, consciousness would make an unusually appropriate target for such elimination. But if consciousness does not (or even might not) exist, if we might be zombies, then the zombiphile argument fails to show that functionalism might not fully explain us.

Thus zombiphiles normally (and plausibly) insist that we know of our own consciousness directly, non-inferentially. Even so, there must be some sort of cognitive process that takes me from the fact of my consciousness to my (true) belief that I am conscious. As my zombie twin is cognitively indiscernible from me, an indiscernible process, functioning in just the same way, must lead it from the fact of its non-consciousness to the equivalent mistaken belief. Given either consciousness or non-consciousness (and the same contextual circumstances: ex hypothesi, ceteris is entirely paribus) the process leads one to believe that one is conscious. It is like a stuck fuel gauge that reads FULL whether or not there is any gas in the tank.

Such a process, like such a gauge, is worse than useless: it can be positively misleading. If the process by which we come to believe that we are conscious can be like this, we can have no grounds for confidence that we ourselves are not zombies (unlike the empty car, there will be no behavioral evidence to indicate otherwise). But (as before) if we might be zombies the zombiphile argument has no bite. If mistaken zombies are possible, the whole motive for ever considering such beings is undermined.
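
Thomas's stuck-gauge analogy is easy to render in code, which makes the point vivid. A minimal sketch; the function names are mine, purely illustrative:

# The stuck fuel gauge, in code. A working gauge's output depends on the
# tank; the stuck gauge's output carries no information about it.

def working_gauge(fuel_level):
    return "FULL" if fuel_level > 0.95 else f"{fuel_level:.0%}"

def stuck_gauge(fuel_level):
    return "FULL"  # same reading whatever is in the tank

# The zombiphile's belief-forming process is the stuck gauge: given
# consciousness or non-consciousness, it yields the same belief.
def believes_conscious(actually_conscious):
    return True  # human and zombie twin report alike

print(stuck_gauge(0.0), believes_conscious(False))  # FULL True

A process whose output is independent of its input can be no evidence about that input, which is exactly the "worse than useless" complaint in the quoted passage.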



#66 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 May 2006 - 07:03 AM

Quite right Jay, a correct philosophical zombie demands strict functional equivalence.

Wait, wait, wait. Is it functional or physical? Because there is a difference.

#67 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 22 May 2006 - 07:17 AM

[huh]
Functional, unless you are trying to defend some form of identity theory.

#68 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 22 May 2006 - 08:53 AM

tomjones wrote:

I would have more compassion for the intelligent entities

-Jack

Suppose you discovered you were developing Alzheimer's disease. Suppose also that medical science was far from any effective treatment or cure that could ever benefit you. Would you care whether you were going to experience comfort or extreme discomfort in the advanced stages of the disease?

#69 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 22 May 2006 - 09:14 AM

jdfox wrote:

But the redness isn't really there, and if there is no dualism involved, then eventually, negative feedback is going to start undoing those dispositions, unraveling those reflexive behaviors and dispositions. In a sense, the zombie would be virtually indistinguishable from a real human for the first few seconds, but slowly, whether it took minutes, hours, days, or weeks ..., the zombie would realize his or her predicament, and we'd be able to easily distinguish them, not just with PET scans, but simple verbal and sensory tests.

Now consider the high-power example of extremely intense torment. Suppose sentience is suddenly turned off in the middle of being subject to extremely intense torment. The tormented person may, logically, continue to display the same intense aversion reactions to the hostile circumstances that evoke the state of extremely intense torment. However, if the tormented person has enough intelligence in operation to think about the fact that this torment is a sentient experience, then the sudden disconnection of the sentient component of the experience will very quickly become quite obvious to the person's intelligence. Even the aversion reaction may be quickly affected by the sudden shutdown of sentience if a significant object of the aversion reaction is the effect of the circumstances on sentience.

Edited by Clifford Greenblatt, 22 May 2006 - 09:48 AM.


#70 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 22 May 2006 - 09:47 AM

Jay

I'm merely interested in whether something can be *functionally* identical to a human, yet not sentient. That would be a basis for defining intelligence in the absense of sentience. Sentience is being *assumed* by some people to be inseparable from intelligence, which is making it well nigh impossible for those people to have a rational discussion here, because they use the words synonymously. They *might* be intertwined, and you can even argue that they are *almost definitely* intertwined, but they are separate concepts and we need to be careful to separate them when discussing them.


Cliff

Do you regard sentience as a certain kind of information flow or do you regard sentience as something which is intimate with certain kinds of information flows but which is not an information flow itself?


A functional system that (isolates)/("raises" to the level of consciousness) information flow is itself a pattern consisting of information. As is so often the case, the distinction between form and function blurs. (and this generalized concept would be different from Intentionality how exactly?)

The other area that needs to be addressed is how we are defining *information*. There is a tendency to see information as strictly a semantical affair. This is erroneous, imo, because sensory input also constitutes information flow -- to be assimilated by an intelligent system. So yes, for me sentience is a given in an advanced cybernetic entity.

But talk about assumptions Jay, you're so caught up in maintaining the Kantian distinctions, with all of their subjectivist flavor, that it's hard to imagine you ever entertaining a realist approach.

Is it the world that is permeating your mind, or is it your mind that is permeating the world?

Or do they meet somewhere in the middle?

#71 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 22 May 2006 - 10:01 AM

Suppose you discovered you were developing Alzheimer's disease. Suppose also that medical science was far from any effective treatment or cure that could ever benefit you. Would you care whether you were going to experience comfort or extreme discomfort in the advanced stages of the disease?


heh, I'd have an "accident" with a cryo team on standby, but that's not addressing your question, now is it? :)

BTW, no, I really wouldn't care.

#72 tomjones

  • Guest
  • 36 posts
  • 0
  • Location:Internet

Posted 22 May 2006 - 10:52 AM

Suppose you discovered you were developing Alzheimer's disease. Suppose also that medical science was far from any effective treatment or cure that could ever benefit you. Would you care whether you were going to experience comfort or extreme discomfort in the advanced stages of the disease?

Hmm... No, I wouldn't care...

-Jack

#73 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 22 May 2006 - 04:56 PM

BTW, no, I really wouldn't care.

Hmm... No, I wouldn't care...


These responses present three possibilities that I can see.

1. You think that an advanced Alzheimer's patient is not capable of suffering.

2. You do not care whether you are comfortable or extremely uncomfortable if you have nothing else to do but experience one or the other.

3. You do not care whether a person suffers, provided that no one else is affected by the suffering.

#74 tomjones

  • Guest
  • 36 posts
  • 0
  • Location:Internet

Posted 23 May 2006 - 02:42 AM

4. You do not care about future suffering, only present...

I don't care if I'm going to be comfortable or uncomfortable in the future, only now... I shall deal with future discomfort if and when the time comes...

-Jack

#75 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 23 May 2006 - 02:49 AM

5. *You* no longer exist.

Oblivious vegetative states have the same ontological status as livestock.

#76 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 23 May 2006 - 05:12 AM

Would an oblivious vegetative state even have sentience? I'm guessing by your answer that you think no. Otherwise, if such a being is capable of experiencing suffering, and if the transition is slow enough that at no time do you suddenly stop being you, then wouldn't you by definition experience that suffering? Isn't that something to be concerned about?

#77 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 23 May 2006 - 09:16 AM

I heard Ronald Reagan's son talk about his father in his final days. He was unaware of many things, but he continued to respond to affection from his son. A newborn baby has no language or reasoning ability, but does show a definite capability of experiencing severe discomfort. Should we not care what the newborn baby is experiencing? Even an adult with full mental faculties can be temporarily mentally crippled by a severe discomfort experience. It is rather difficult for most people to reason intelligently or focus on any responsibilities when experiencing extreme torment.

#78 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 23 May 2006 - 09:43 AM

I guess I just wonder about the nature of the circular argument the intelligent entities of this thought experiment might use to justify themselves, so as to compare it to the circularity of sentience.

#79 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 24 May 2006 - 04:47 PM

I guess I just wonder about the nature of the circular argument the intelligent entities of this thought experiment might use to justify themselves, so as to compare it to the circularity of sentience.

I do not think intelligent entities would have the problem of circular arguments, because intelligence can quine itself. The difficulty with sentience is that it is highly intimate with intelligence, but it also transcends intelligence. Intelligence can exist with or without sentience. Without intelligence, what would be left to do any circular arguing?
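
If "quine" here carries its programming sense, a quine is a program that reproduces its own source: self-reference without any outside help, which fits the point that intelligence can represent itself without vicious circularity. The classic two-line Python version (a textbook example, not anything from the thread):

# A classic Python quine: the two lines below print themselves exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)

Run it and the output is those two lines, character for character.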

#80 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 24 May 2006 - 07:05 PM

Without intelligence, what would be left to do any circular arguing?

Fixed action patterns, perhaps, which probably could use an intelligence upgrade so that the circularity is more interesting.

#81 Clifford Greenblatt

  • Topic Starter
  • Member
  • 355 posts
  • 4
  • Location:Owings Mills, MD

Posted 25 May 2006 - 09:42 AM

Having thought further on the matter, I partially see a way that philosophical zombies could be logically possible. Suppose population T consists of sentient persons. Suppose population Z consists of philosophical zombies. To a Z person, sentience means subjective sense perception and self-awareness. To a T person, sentience means something that is intimate with subjective sense perception and self-awareness, but which transcends them. The two populations could then physically communicate many things about sentience with each other without being able to discern which ones belong to population T or Z. Some members of population Z may find extreme torment abhorrent because a strong mental aversion mechanism is triggered. Members of population T may find extreme torment abhorrent because of its powerful transcendent effect. However, a T person could become highly frustrated with some Z persons when conversing about the significance of extreme sentient experience.

There could still be a way to distinguish between a T person and a Z person. Suppose a T person and a Z person both have a strong emotional attachment to a third person, who is totally paralysed and is in the last hour of life. Assume that none of the three are immortalists. Suppose both the T and Z persons are informed that the dying person is internally experiencing extreme torment during that last hour. The T person would likely be troubled by this news, due to the powerful transcendent association involved in extreme torment. The Z person would likely view any concern about the internal mental condition of the dying person as an irrational emotion, because the patient's internal torment is an isolated process that will never have any meaningful effect on others or on the dying person's state of health.

There is a likely complication in the above method of distinguishing a T person from a Z person. A T person could have been so disturbed by the transcendent power of his own sentience that he has developed a strong suppression mechanism to deny its existence. In this case, the T person could be mistaken for a Z person.

#82 7000

  • Guest
  • 172 posts
  • 0

Posted 02 August 2006 - 07:10 PM

It seems attention has shifted from the initial question. I think the question is: does intelligence have intrinsic value? The answer is simply yes, and the answer supports laws of nature that are intrinsic to our mind. Though it is rather unfortunate that this is where the key to artificial intelligence lies. In actual fact it is better written in its own codes and can be translated to a certain math. I read some of your posts, Mr Clifford; you are on the right path in searching for a certain knowledge, so hold tight to your thought.



