
Can software alone simulate “consciousness”?



#91 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 14 September 2005 - 04:40 PM

Hmm, upon a further re-reading of the Chalmers paper I cited, I see that where he and I differ is in his continual use of something that your run-of-the-mill physicalism explicitly disallows (except for MWI coherence, as I mentioned): the woulda-coulda-shoulda's of life.

If p, then q. If p happened to me (e.g., I was playing chess, with the board in a specific configuration, and it was 5:37:28 PM in a room with three windows, with sunlight streaming in at such-and-such an angle, and the room was 73.26 degrees Fahrenheit, and there was a glass of water with condensation in a certain pattern on it to my left, and the game pieces were in a certain precise orientation (e.g., maybe my knight on G4 was facing 32.6 degrees left of center, away from me, and was offset 1.27 mm away and 3.45 mm to the right within the square), etc., etc.), then I would do q (e.g., wipe my brow, grab the glass of water and take a sip, and then move my knight to F6 and declare, "Check!").

Chalmers asks: what if p hadn't happened to me? What if, say, p* had happened instead? For example, what if the knight on G4 had been facing 32.5 degrees left of center, or 30 degrees, or 10? What if it had been facing right? What if it had been ten degrees warmer in the room (which arguably would affect my comfort, and hence interfere with my concentration)? Then I would have done q*. In other words, I might still have moved my knight to F6, but I might have said "Check!" with more or less conviction, louder or softer, or perhaps with a more suspenseful pause. I might even have moved a different piece altogether, forgoing the check opportunity for a better position (or just missing the opportunity altogether). I might not have wiped my brow (had the room been colder), or I might have taken a gulp rather than a sip (had the room been warmer). The tactile interaction with the condensation pattern on the glass might even affect my actions.

In other words, at that instant in time, p->q, but p*->q*. For a given arbitrary set of physical processes in a rock for which p->q is true (an easy condition to satisfy), it is overwhelmingly unlikely that p*->q* also holds for any such p*. In fact, across the full spectrum of p*_i's, we would need p*_i->q*_i for every i, and this couldn't be done in a rock: there just isn't enough complexity.

A system which could replicate all the possible q*_i's, given all the possible p*_i's, would pass Chalmers's more stringent test. A lookup table would suffice in principle, but it would be larger than the universe, so it's not physically possible. On the other hand, a software representation of the neural networking in my brain might just suffice, and hence, on this basis, Chalmers (and I presume functionalists in general) would say that such a software-based neural network would be conscious, whereas the rock wouldn't.

However, this is the deadly trap Chalmers allows the functionalist to fall into. You see, in real life (under physicalism, minus MWI), p* doesn't happen. We could test an arbitrary number of p*_i's (by resetting the system to the original state and running it again with a different input), but doing so doesn't lead to combinatorial explosion: it leads to linear growth. Add 10 test cases of inputs, and your table only needs 10 extra outputs. On that view, then, consciousness is NOT the possibilities, the woulda-coulda-shoulda's. Instead, it's just outputs: just bitstrings of a certain complexity. This of course trivializes into ultra-panpsychism (not just that grossly functional objects, like thermostats, are conscious, but that every physical process is, including the random motions of heat).
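To make the lookup-table point concrete, here's a toy sketch (Python; the stand-in "brain" function, the inputs, and all the numbers are invented for illustration):

```python
# Toy illustration of the trace-vs-counterfactuals point above.
# Everything here is invented for the example.

def brain(p):
    """Stand-in for the real causal system: maps ANY input state p
    to an output q in one specific, structured way."""
    temperature, knight_angle = p
    drink = "gulp" if temperature > 75 else "sip"
    move = "Nf6+" if abs(knight_angle) < 45 else "Nf6"
    return (drink, move)

# A rock "implements" the mapping only as a recorded trace: a lookup
# table built from the finitely many inputs that actually occurred.
tested_inputs = [(73.26, 32.6)]
trace = {p: brain(p) for p in tested_inputs}

# Adding 10 more test cases grows the table by exactly 10 rows:
# linear growth, not combinatorial explosion.
more_inputs = [(73.26 + i, 32.6) for i in range(1, 11)]
trace.update({p: brain(p) for p in more_inputs})
assert len(trace) == 11

# On every input actually tested, the table and the system agree:
assert all(trace[p] == brain(p) for p in trace)

# But the table is silent on every untested p*, while brain(p*) is
# defined for all of them. Matching EVERY counterfactual would need a
# table over the whole (continuous) input space: larger than the universe.
```

On everything that actually happened, the table and the system are indistinguishable; they differ only over the counterfactuals, which is exactly the part a mere trace can't carry.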

That everything is conscious, and more richly conscious than our minds, trivializes our own existence. Why balk at killing a person, if the entire bitstring representing the rest of their life (assuming determinism, ignoring QM) naturally occurs a near-infinite number of times in nature, provided you slice it right and define the states correctly? That bag of blood and neurons that you "killed" was just one instance of a nearly ubiquitous consciousness, and that "murder" would be as wrong as stirring a cup of tea that might otherwise have instantiated the same complexity. The consciousness might not be directly contained in the cup of tea, depending on how much time the tea needs to replicate that complexity; assuming the cup isn't big enough, then we at least include the interactions of the tea with your GI tract, bloodstream, kidneys, urine, and trip through the sewer system into the ocean, until the complexity is sufficient. Given that the heat complexity of a cup of tea is orders of magnitude greater than that of the neural networking of a human brain, we correspondingly would need data from a time period orders of magnitude smaller, perhaps only days, hours, or minutes. Of course, in stirring the cup of tea, you might destroy this instance of this consciousness (which would be murder), but you might instantiate a different consciousness. On the other hand, if you murder a real person, that's all you did, right? Wrong. The person's body, both as an object with heat and as an object which will decompose (yet another physical process), will contain extreme complexity as well, so it's pretty much a wash: some indeterminate number of consciousnesses (including the one we would presumably attribute to the "person" killed) would be prevented, and some roughly equal number would be created.

No, this is far too absurd. Something more than bitstrings must be responsible for consciousness, if we are to assign any value to it. Either we drop functionalism and stop worrying about it (in which case, saying that a silicon brain must be conscious because it's functionally identical is an unfounded claim), or we keep functionalism and must face the sheer absurdity just described.

Without dropping functionalism, then, perhaps, as Chalmers points out, we must account for the woulda-coulda-shoulda's. But that's not functionalism in a deterministic universe. That's applying something more. Perhaps that something more is Cartesian dualism. Perhaps that something more is MWI coherence. I used to be a Copenhagen man, but MWI is seeming more and more likely, with coherence of extremely-close branches providing the "dualistic" nature. It's not confirmable, in the sense that the Copenhagen and MWI interpretations are equally valid empirically (here again, we're talking about the question of whether we're running on a Pentium or an Analytical Engine), but under Occam's razor, MWI is far more plausible (consciousness is no longer trivialized to meaninglessness, in that rocks aren't conscious, because they can't replicate the functionality in a deterministic universe with MWI). And it still allows for functionalism, just not in an objectively verifiable way. In effect, it makes our objective universe just one more "subjective" observer, viewing only its slice, its branching, within MWI.

Of course, this leads back to the question of a silicon brain. If we assume MWI is true, and we require functionalism to be true, then a rock can't be conscious, but perhaps software can be. Not by virtue of what the software does functionally within the objectively viewed universe, but because of the totality of MWI splits, in which the software performs functionally like an organic brain under all circumstances. Well, all ordinary circumstances. It won't react to the flu the same way a real brain will, and it doesn't functionally replicate a brain when a PET scan is run (i.e., the PET scan will be able to tell the difference). But as far as responding to the physical senses (which is our baseline for consciousness, not how it looks in a PET scan), the functionality is there, so it must be conscious.

Something for me to think about. I'm still not 100% convinced that software can be conscious under even MWI, but I'm more convinced than ever that it cannot be conscious without MWI (or some other version of physicalism where the woulda-coulda-shoulda's are taken into account). Once again, we see consciousness and quantum mechanics being linked in some way. Yet it's not necessarily that one is mysterious and so is the other, so they must be related. It's that analysis of one generally leads to analysis or consideration of the other. That correlation, not based on mystery but on function, is what keeps bringing QM to discussions of consciousness.

#92 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 14 September 2005 - 05:17 PM

By the way, looking at this article by Chalmers, I'd say I'm probably CPN, though CPP remains a possibility I'm open to. I mentioned above that software alone might be enough, given MWI, but that wouldn't be CC-; by virtue of requiring MWI (a physical characteristic of reality), it would be CP-.

To really clarify the positions in the vicinity, we have to distinguish three questions:

(1) What does it take to simulate our physical action?

(2) What does it take to evoke conscious awareness?

(3) What does it take to explain conscious awareness?

In answer to each question, one might say that (a) Computation alone is enough, (b) Physics is enough, but physical features beyond computation are required, or (c) Not even physics is enough. Call these positions C, P, and N. So we have a total of 27 positions, that one might label CCC, CPN, and so on.

Question (1) is the question Penrose is concerned with for most of the book, and the issue that separates B and C above. He argues for position P-- over C--. Descartes might have argued for N--, but few would embrace such a position these days.

Question (2) is the issue at the heart of Searle's Chinese room argument, and the issue that separates A from B and C above. Searle argues for -P- over -C-, and Penrose is clearly sympathetic with this position. Almost everyone would accept that a physical duplicate of me would "evoke" consciousness, so position -N- is not central here.

Question (3) is the central question about the explanation of consciousness (a question that much of my own work is concerned with). Penrose's positions A, B, and C are neutral on this question, but D is solely concerned with it; so in a sense, D is independent of the rest. Many advocates of AI might hold --C, some neurobiologists might hold --P, whereas my own position is --N.

...

One can have a lot of fun cataloging positions (Dennett is CCC; Searle may be CPP; Eccles is NNN; Penrose is PPP; I am CCN; some philosophers and neuroscientists are CPN or PPN; note that all these are "non-decreasing" in C->P->N, as we might expect), but this is enough for now...
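For concreteness, the combinatorics above are trivial to enumerate; a quick sketch (Python; the "non-decreasing" filter just encodes Chalmers's parenthetical remark):

```python
# Enumerate Chalmers's 3x3x3 space of positions on questions (1)-(3).
from itertools import product

answers = "CPN"  # Computation / Physics beyond computation / Not even physics
positions = ["".join(t) for t in product(answers, repeat=3)]
assert len(positions) == 27  # CCC, CCP, ..., NNN

# Chalmers notes that the positions people actually hold are
# "non-decreasing" along C -> P -> N:
rank = {"C": 0, "P": 1, "N": 2}
plausible = [p for p in positions
             if rank[p[0]] <= rank[p[1]] <= rank[p[2]]]
print(plausible)
# 10 of the 27: CCC, CCP, CCN, CPP, CPN, CNN, PPP, PPN, PNN, NNN
# Every example he names (CCC, CPP, NNN, PPP, CCN, CPN, PPN) is in here.
```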

Where does everybody here fall?

#93 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 15 September 2005 - 08:58 AM

Marc, I think you missed the point (as did Chalmers, as I tried to point out) when you consider the rock for only one finite time slice. If the rock is conscious for even one second, then whether the same physical system is conscious a second later is irrelevant. It stands as a mockery of computational consciousness. All you need is a larger system (did you read Jaron's paper? He used a meteor shower, if I recall), and over a larger time period there will be enough data for it to be conscious for that larger time period.

Chalmers points out this "error" in Searle's supposition of a system over a finite period, saying, in effect as you did, "Well, that system only implements the FSA for 7 minutes, but before or after that, it doesn't. HA! A human brain implements consciousness for one's whole life!" Except that a whole life is only about 100 years! That's just as finite a period of time. Furthermore, under physicalism, you could create a human being, which resets the clock to zero even though that person will believe he or she has been conscious since waking up that morning, hours before. If that person only lives seven minutes and then gets blown up, no one in the physicalist camp would seriously argue that the person wasn't conscious because their program only ran for seven minutes, or even seven seconds. So implementing a consciousness for even such a fleeting moment clearly "counts".

The only thing (the ONLY thing), then, that separates a particular "trace" of consciousness (as Chalmers himself called it) from a bitstring is that the "trace" was part of a larger program that could have been different in a very specific way, given different input. Or, as I called it, the woulda-coulda-shoulda's: "Oh, if only I would have done this! Oh, if only I could have done that! Oh, I should have done that!"

If the input had been different, then the output would have been different, in a very specific way. For the rock, if the input had been different, the output would also have been different, but not in the same specific way. Only in this sense can we actually pin down the causality: we need the woulda-coulda-shoulda's. Classical mechanics doesn't allow this, so functionalism is either dead or useless (i.e., dead) without QM. Ironic, since "weird" quantum effects (e.g., quantum-gravity collapse in the microtubules) might not be part of the computation of consciousness (as Don rightly points out), but QM is nonetheless the only saving grace for functionalism. At first I thought it would require MWI, but my understanding of QM is admittedly weak, and the Copenhagen interpretation might suffice (it may be enough that the probability wave encodes the proper probabilities of actions, i.e., "outputs", given a certain set of inputs).
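To sketch that last parenthetical (a toy illustration only; the names and probabilities are invented, and this is bookkeeping, not real QM):

```python
# The point: a quantum-style description physically encodes an output
# DISTRIBUTION for every input, actual or counterfactual, whereas a bare
# trace records only what happened. All values here are invented.
import random

# Deterministic trace: only the input that actually occurred has an output.
trace = {"p": "q"}

# Quantum-style description: every input carries output probabilities,
# including the counterfactual p* that was never presented.
state = {
    "p":  {"q": 0.90, "q_alt": 0.10},
    "p*": {"q*": 0.85, "q": 0.15},
}

def measure(state, p):
    """Sample an output for input p from the encoded distribution."""
    outputs, weights = zip(*state[p].items())
    return random.choices(outputs, weights=weights)[0]

print(measure(state, "p*"))  # defined even though p* never happened
# trace["p*"] would raise a KeyError: the bare trace has no counterfactuals.
```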

The flip side, which perhaps doesn't quite affect Chalmers's argument (he's a philosopher, not a physicist), is that functionalism is still "okay" if we allow that QM does in fact affect consciousness. Of course, this very thought is quite disconcerting to most physicalists, but I think this bias against QM is mostly just a reaction to the mysterians who would try to link consciousness to QM to "kill two birds with one stone". On the other hand, if QM is invoked by necessity, rather than as a given condition, then I think it's justified.


#94 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 15 September 2005 - 10:10 AM

While I like the idea of coding systems, they aren't processes: at best, they're physical structures (meaning lookup tables aren't allowed, and hence functionalism is false), or at worst, they're free-floating Platonic ideals (meaning what?).

Consciousness is a process. That's why Chalmers's example of the ABABABABA FSA failed. As Searle stated, a rock could implement the FSA with rules A->B, B->A. Chalmers said something to the effect of, "Aha! What if the FSA was really A->B, B->A, C->D, D->E, E->C?"

So what makes ABABABA conscious (generalizing, of course), is not the transitions from A to B and B to A, but the existence, "somewhere" out there in the world, of the CDE rules.

Say what? That's not functionalism. Functionalism says it would do the same thing in the same situation. Well, a C was never presented, so the existence of the CDE rules is irrelevant to whether the "trace" ABABABABA is conscious or not.

Unless we allow for the woulda-coulda-shoulda's. The counter-factuals. Assuming functionalism, then for consciousness to be even physically based, let alone computationally based, the counter-factuals must have some physical significance. (Alternatively, in an information-centric theory of physicalism, the counter-factuals must have some computational significance, a sort of QM for computation, i.e. coherence, superposition, interference, or something conceptually similar. The existence of the counter-factual must impress itself upon the computation somehow. Of course, if physics is being simulated, that's a detail we can never "know", and it's a metaphor which doesn't have much meaning to us...) Without QM, the counter-factuals cannot have physical significance under functionalism. I tend to like functionalism, despite its shortcomings, so this leads me to conclude that QM in some way is required for consciousness. Of course, we could drop QM, but then we'd have to drop functionalism. If the woulda-coulda-shoulda's can't physically interact with the computation, then the existence of the CDE rules, the existence of the coding system, is irrelevant.
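To pin down the FSA point in code (a sketch; the two rule sets are Searle's and Chalmers's from above, the rest of the framing is mine):

```python
# Searle's minimal FSA, and Chalmers's extension with the extra CDE rules.
rock_fsa     = {"A": "B", "B": "A"}
chalmers_fsa = {"A": "B", "B": "A", "C": "D", "D": "E", "E": "C"}

def run(fsa, start, steps):
    """Return the trace of states the FSA visits."""
    trace, current = [start], start
    for _ in range(steps):
        current = fsa[current]
        trace.append(current)
    return trace

# Starting from A, the two machines produce IDENTICAL traces, forever:
assert run(rock_fsa, "A", 8) == run(chalmers_fsa, "A", 8)
# ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'A']

# They differ only counterfactually, on a state that was never presented:
print(run(chalmers_fsa, "C", 4))  # ['C', 'D', 'E', 'C', 'D']
# run(rock_fsa, "C", 4) would raise a KeyError: the rock has no CDE rules.
```

If the C state never physically occurs, nothing in the ABABABA trace distinguishes the two machines; that's the whole problem.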

I'm curious which way Don would go. Would he give up functionalism, or would he admit a role for QM? I doubt he'll read this, so the world may never know...

#95 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 15 September 2005 - 10:18 AM

Heh, you said John Taylor, and I thought, "What does he have to do with this topic?" Ah, never mind...

You may be right here... functionalism needs the 'could have dones' (these are called counterfactuals) in order to make sense of the notion of causality. As I pointed out above, the difference between a rock and a brain is that the brain retains a stable coding system over time; a rock does not. But the MWI of QM is all that is required for functionalism to work. I see no need for a *direct* role for QM processes in the brain, even with functionalism.

While MWI would be sufficient, I'm not so sure that that doesn't qualify as a sort of "direct role". The very close branches of reality would have to cohere at least somewhat (some degree of superposition of states during the computation) for the effect to have "physical" significance. Otherwise, it may as well be as if MWI weren't true (e.g. Copenhagen, or perhaps no QM altogether). Of course, if it turns out to be empirically unverifiable, then I suppose it mightn't matter, practically speaking...

Jay,

You seem like a really, really smart guy who has a lot of new ideas bouncing around in his head. Could you be the next Eliezer? Hopefully the same kind of genius as Eliezer, but without the horrid personality of an Eliezer. However, my impression is that you're still pretty confused. Your ideas haven't had time to fully crystallize in your brain yet; they are bouncing around like particles in random Brownian motion. I may not be quite as smart, but I've thought about these things for much longer.

Er, thanks, I think... I've never met Eliezer, and I've heard good and bad about him, so I'll assume you meant you think I have the good qualities, and hope I don't have the bad... In which case, I hope so too...

#96 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 16 September 2005 - 07:01 PM

In keeping with a separate but parallel discussion, can we see the problem of qualia as one where, *if* qualia are cognitive categorizations of memory/experience, they then represent (as yet) undefined algorithmic logic with respect to sensory input?

To return to

AI = Idiot Savant

and

AGI = Artificial Consciousness

then *consciousness* would be defined as the ability to be *generally* observationally aware and able to interpolate information cross-categorically. This could include *self-observational* awareness (but might not automatically do so), recursive reinforced *awareness* (synthetic sensing), and autodidactic learning ability.

Humans presume the *conscious* to be individually self-aware, but this may be an example of human hubris (anthropomorphism) influencing the cognitive logic, and a distributed intelligence framework may be a pragmatic option. Such a Distributed Awareness is in keeping with an alternative level of awareness: it is not human; it is more a hive mind.

The point of making a human analog for consciousness is that we would interact (externally) with it as if it were individualized, but all AGI could interact internally as one entity. In fact, all we are doing is either limiting the AI's ability or fooling ourselves by considering every single individual instance of this mind a separate entity; but if it were good enough (and it would rapidly be so, I suspect), then we could be generally fooled into perceiving it as such.

It will be more interesting to observe whether the early developing AI bifurcates itself in order to create synthetic gender analogs, so as to promote its own *evolutionary development*, or tries to go it alone (the true Singularity) in the universe.

I suspect that the more we make it *human-like*, the more it will know loneliness and seek to develop compatible companionship; though as a hive mind it might never understand loneliness, since it couldn't experience it without a sense that it is missing another of its own kind.

So if qualia encrypt as language for humans, then qualia have environmentally determined systems of sorting. These are decipherable and describable mathematically. We can also apply what we learn from the structure of the human brain as we decrypt it; the neuro-processing structures of simpler species are probably easier to model initially from current study, and then, as we decrypt more of the human mind/brain, the more complex modeling will catch up.

Levels of consciousness and self-awareness:
A comparison and integration of various views
Alain MORIN
Mount Royal College, AB, Canada

The notion of “levels of consciousness” has been around for quite some time. More than a century ago, two of the most influential theorists in psychology were already examining this notion—Sigmund Freud (1905), with the unconscious, preconscious and conscious, and William James (1890), with the physical, mental and spiritual selves, and ego. Other related proposals pertaining to the concept of consciousness and its various possible degrees have been offered since then (see Armstrong, 1981; Block, 1995; Nagel, 1974; Natsoulas, 1978; Rosenthal, 1986). There has been a major resurgence of this issue in the scientific literature over the past five years. New terminology and models describing levels of consciousness are being rapidly introduced, e.g., reflective, primary, core, extended, recursive, and minimal consciousness.

http://cogprints.org...8/01/Levels.pdf


and

...most of these views can be parsimoniously integrated into a more general and already-existing theoretical framework, some models being easily assimilated by this structure, others adding subtle—and yet important—nuances to it. Current models reviewed here suggest that two dimensions of a superior form of consciousness, called “self-awareness”, are particularly important: time and complexity of self-information. That is, examining past and future aspects of the self and being capable of acquiring more conceptual (as opposed to perceptual) self-information indicate higher levels of self-directed thought.

To this, three additional variables shaping levels of self-awareness will be added: frequency of self-focus, amount (or accessibility) of self-related information, and accuracy of self-knowledge. Considerations about levels of consciousness in relation to mirror self-recognition and language will also be briefly discussed.


This cite is about human self-awareness, but in many respects it can be applied in parallel to the dilemma facing programmers intending to model consciousness. My point about modeling the hive mind ahead of individual awareness is that it is actually easier, given the structure and complexity of computation. It could be built on specific web operations that now share distributed processing power. It is logically consistent with the physiology of the *kind* of consciousness we are working with.

By measuring complexity through *levels of consciousness*, which may even possess quantum relationships of complexity (not the physics kind of *quanta* but the *qualia* kind, the kind that takes us from the lizard mind of the medulla oblongata to the cerebral level of sensory and conceptual awareness), we are skirting the pan-psyche argument, Jay, that you and Marc observe.

However, for nanotech to self-assemble, we come extremely close to affirming a pan-psyche for matter, and to influencing the type of mind that is the next dominant force of evolution. We invariably integrate with such a mind through our contributory interaction with it. We can see this individually, collectively (socially), or both.


Objections & Replies on Self Awareness
http://www.mulhauser...objections.html

Awareness and Understanding in Computer Programs
A Review of Shadows of the Mind by Roger Penrose
http://psyche.cs.mon...1-mccarthy.html

Levels of Consciousness
http://www.sci-con.o...s/20040802.html

#97 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 16 September 2005 - 07:55 PM

Lazarus Long:

To return to

AI = Idiot Savant

and

AGI = Artificial Consciousness

Heh, I was going to reply in the original thread, but luckily I saw you moved it here.

Unfortunately, I kicked the power cord, losing my reply, and have no time at the moment. We'll just have to pick this up later...

#98 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 18 September 2005 - 11:05 PM

I'll make a suggestion before we get too deep into a semantic debate over consciousness; how about I amend that concept to Artificial 'Awareness'.

We can debate to what extent it is 'aware', but even any sensing device can be considered at least minimally 'aware'. As we enhance the level of awareness, at some point we cross a threshold to 'self-awareness', and then I suspect the point about consciousness is moot.

#99 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 19 September 2005 - 01:14 PM

I'll make a suggestion before we get too deep into a semantic debate over consciousness; how about I amend that concept to Artificial 'Awareness'.

Fair enough.

#100 signifier

  • Guest
  • 79 posts
  • 0

Posted 22 September 2005 - 05:16 AM

Eh...

I see no reason why software couldn't be conscious, beyond the traditional chaff and straw of anthropocentrism and solipsism.

I have no idea what 'qualia' are, and I don't think anyone else here does either. I think we should stop using the term, or at least stop using the term like we're going to make some meaningful points with it.

To argue otherwise is to argue that there is something special, over and above software/computation, that makes human consciousness more than that of a rock.

There is indeed "something special," but it is nothing that software couldn't also possess. The something special is how all that data interacts.

#101 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 22 September 2005 - 03:38 PM

I like Yudkowsky's judgment.

Qualia is to intelligence as élan vital (the "vital force" once thought necessary to create life, before thorough science) is to the process of reproduction. Or: qualia is to intelligence as phlogiston is to fire (a substance once thought to enable combustion in air). It's an act of inventing some object out of an unknown cause, instead of checking for underlying causes.

Wikipedia puts it well in calling them an "obsolete theory".

#102 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 23 September 2005 - 12:47 AM

I think Chalmers put it best:

With experience, on the other hand, physical explanation of the functions is not in question. The key is instead the conceptual point that the explanation of functions does not suffice for the explanation of experience. This basic conceptual point is not something that further neuroscientific investigation will affect. In a similar way, experience is disanalogous to the élan vital. The vital spirit was put forward as an explanatory posit, in order to explain the relevant functions, and could therefore be discarded when those functions were explained without it. Experience is not an explanatory posit but an explanandum in its own right, and so is not a candidate for this sort of elimination.


http://consc.net/papers/facing.html
Section 5, The Extra Ingredient

#103 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 23 September 2005 - 12:28 PM

(Marc Geddes)

My own current theory is that qualia are representations of a relationship between two things: either the relationship between Self and World (consciousness about the external world), or between Current Self and Past Self (self-awareness). Clearly there *is* new information in such representations.


Marc, clearly, while you and I might differ on nuance, we are far more in accord on this issue than the others with us. One point, however, about this last claim.

I do not see the two conditions you describe:

SELF <--> World

and

Self (present) <---> Self (past).

as mutually exclusive; both could be correct, could coexist, and most likely do coexist in order to function. This is not necessarily a *bilateral relationship*, and to represent it as such may be a false *dichotomy* or overzealous reductionism.


The distinction you are making about qualia NOT being a substance is important, because the substance view is also the reason so many applications of qualia come out false, e.g., the erroneous quantum-mechanical arguments, IMHO.

I also concur that we are looking at a natural system of *cognitive processing*, and in this respect it is very important to return to the consciousness-as-software issue in a constructive manner. More importantly, I suspect, we should understand that what qualia become within that model are heretofore undescribed mathematical representations of *sensory data* (information): the basic building blocks of all sensory input for DNA-based life, which, when viewed as levels of complexity, contribute to forming consciousness but are not necessarily consciousness itself.

If we can unravel this basic natural-language encryption, we might be a lot closer to creating a synthetic analog that contributes to building a true Artificial Consciousness.

I made the distinction above about Artificial Awareness as opposed to Artificial Consciousness precisely because I came to understand what I was theorizing was a *pre*conscious condition, but one absolutely essential to achieve before going on to the next level.

Of course, I am presuming the *levels of consciousness* argument by that assumption, but it does seem a very practical assumption when applied.

#104 7000

  • Guest
  • 172 posts
  • 0

Posted 31 October 2005 - 01:35 PM

This is a question of time, but software alone can't simulate consciousness. There will be a need for it to experience something above, in the atmosphere.
7000.

#105 signifier

  • Guest
  • 79 posts
  • 0

Posted 14 November 2005 - 02:01 AM

I think we're all using too narrow a definition of consciousness... I don't think you have to experience something in the "atmosphere" in order to qualify as conscious.

#106 7000

  • Guest
  • 172 posts
  • 0

Posted 12 December 2005 - 02:52 PM

I have said a thing like this before... but your "software" will have to experience something X, which could be reduced to Y, and it will definitely take place in the sky, within the atmosphere. I have gotten the password for this, even though we will have to put it right. It is so simple when you know it, but very hard to arrive at. The technology of the world is almost there to fit into it, "the concept". In fact, science might have reached that stage! 7000.

#107 7000

  • Guest
  • 172 posts
  • 0

Posted 12 December 2005 - 02:58 PM

This is an opportunity to tell you that there is something real and visual, right in the sky within the atmosphere, that will result in consciousness in a cyborg. 7000.



