  LongeCity
              Advocacy & Research for Unlimited Lifespans





Can software alone simulate “consciousness”?


106 replies to this topic

#1 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 July 2005 - 08:12 PM


For that matter, what is "consciousness"?

Consider the following scenarios:

1)

For simplicity of analysis, assume there exists a computer that has one CPU, that follows in-order execution, that doesn’t use pipelining, and that can operate on only one location in memory at one time. However, this computer is really, really, really fast. Let’s also say that it has a huge amount of memory.

How fast? Well, let’s say it runs at 100 exa-Hertz, or 10^20 operations per second. As for memory, let’s say that it has an exa-byte of RAM, or 10^18 bytes of RAM.

Now, let’s say that we have a neural network designed to roughly approximate a brain scan of a human mind, let’s say my mind. This scan of the brain of Jay shall be named Jay-1.0. The scan consists of a node of data for every neuron in the brain. Physical location is irrelevant, since it is the connections and weights of such connections that truly define the relative spacing of neurons. So the node itself is mainly just a marker, perhaps containing information about the node’s state (activity level, how close it is to reaching its firing threshold, etc.). For simplicity, we’ll just ignore the metabolic requirements of the neurons, so brain cells won’t get tired from lack of glucose or oxygen, etc. This simulation won’t get hungry, but omitting metabolism really just removes an otherwise irrelevant burden.

Additional information will be stored in the form of interneural connections. Each of these will represent the link from one neuron to another (or to or from glial cells, I’m not up to speed on all my neurology). Information required for such an interneural connection might include the time it takes for a signal to propagate down the connection, the strength of the connection, etc. The details aren’t terribly important for the sake of discussion, so long as we can stipulate that the details take into account the best and most current knowledge we have available at such a high level.

Now, this computer will go through each neuron and interneural connection, one at a time, and using basic mathematical equations, determine when to change certain pieces of information stored in what is basically a large flat memory space. All the inherent structure and complexity is stored in the flat data file, and hence the computer is blissfully oblivious to such structure and complexity. The computer just sees bits and bytes and floating point numbers, and it performs basic math (including, if necessary, basic operations like sine, cosine, square root, etc., or even numerical integration, which is just repeated multiplies and adds). The timeslice used will need to be fairly fine-grained, at least 1,000 frames a second, but let’s push the envelope and go for 10,000 frames per second.
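The serial update loop described above can be sketched in a few lines. This is a toy illustration only; the class names, the simple threshold rule, and the frame-indexed delay queue are my own illustrative assumptions, not a claim about how Jay-1.0 would actually be encoded:

```python
from dataclasses import dataclass

# Illustrative node: mainly a marker plus state, as described above.
@dataclass
class Neuron:
    potential: float = 0.0   # how close it is to its firing threshold
    threshold: float = 1.0
    fired: bool = False

# Illustrative interneural connection: endpoints, strength, and the
# time (in timeslices) a signal takes to propagate down it.
@dataclass
class Connection:
    src: int
    dst: int
    weight: float
    delay_frames: int        # assumed >= 1

def step(neurons, connections, pending):
    """Advance the flat data by one timeslice (e.g. 1/10,000 s).

    `pending` maps frames-from-now to (dst, weight) deliveries still in
    flight, modeling propagation delay. Everything is done one element
    at a time, like the single in-order CPU in the thought experiment.
    """
    for dst, w in pending.pop(0, []):        # deliver signals due now
        neurons[dst].potential += w
    for n in neurons:                        # threshold check, one by one
        n.fired = n.potential >= n.threshold
        if n.fired:
            n.potential = 0.0
    for c in connections:                    # schedule outgoing signals
        if neurons[c.src].fired:
            pending.setdefault(c.delay_frames, []).append((c.dst, c.weight))
    return {k - 1: v for k, v in pending.items()}   # advance the clock

# Two-neuron demo: neuron 0 starts at threshold; its signal reaches
# neuron 1 one frame later.
neurons = [Neuron(potential=1.0), Neuron()]
conns = [Connection(src=0, dst=1, weight=1.0, delay_frames=1)]
pending = step(neurons, conns, {})
pending = step(neurons, conns, pending)
print(neurons[1].fired)
```

The point of the sketch is exactly the one made above: the machine only ever touches flat records and basic arithmetic; all the "structure" lives in how the data happens to be wired.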

This simulation should respond very approximately like a human, and any discrepancy would likely be unobservable due to the inherent complexity and randomness of human nature. So, does this simulation experience qualia? Is it “conscious”?

2)

Now we’ll allow for a slightly more realistic scenario. Now we have a computer with a million parallel processors, each capable of out-of-order execution, pipelining, and executing multiple instructions at once. The OSes for these processors use synchronization protocols to keep the memory and various caches in sync. Otherwise, the basic program of this “human” mind is the same. Hundreds of billions, perhaps trillions of neurons, and hundreds of trillions of interneural connections. We’ll call this program Jay-1.1.

Does this change at all whether we can consider that this software simulation experiences qualia? Why or why not?

3)

Now for a more elaborate setup. The codebase will be expanded, to include new data structures. In addition to neurons and interneural connections, we’ll have data structures to represent the actual synaptic gaps, including the individual vesicles, concentrations of various enzymes and ions, membrane potentials, etc., etc.

This new, more elaborate scenario will also add DNA and metabolism to the picture. Gene transcription and expression rates will be modulated according to known theory, and responses to glucose, oxygen, and other nutrients will be modeled. This will necessitate virtually feeding the brain in question.

We’ll call this program Jay-2.0.

We’ll also need a lot more hardware. Let’s say we’ve got a hundred million parallel processors, printed at 100 processors per die, for one million chips total. Processor speed is 1 petahertz, for a total of 10^23 operations per second (about a mole’s worth, coincidentally). Assuming an even finer timeslice resolution of 25,000 frames per second, this computer would obviously not be able to process the information in “real-time”, but that’s irrelevant to whether qualia are experienced, right? The virtual world this “mind” will be “experiencing” (or, more accurately, being fed simulated sensory data about) will follow the proper laws of physics, so “time” will flow at the appropriate rate.
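As a sanity check on these numbers, here is a back-of-envelope calculation. The processor figures are from the scenario above; the count of simulated state elements and the cost per element are purely my illustrative guesses, chosen only to show why real-time operation is out of reach:

```python
import math

# Hardware from the Jay-2.0 scenario above.
total_ops_per_sec = 100e6 * 1e15        # 10^8 processors at 1 petahertz each
assert math.isclose(total_ops_per_sec, 1e23)

frames_per_sim_second = 25_000          # timeslice resolution from the text
ops_budget_per_frame = total_ops_per_sec / frames_per_sim_second  # if real-time

# Illustrative guesses, not figures from the thread: ~10^17 simulated
# state elements (synapses, vesicles, ion concentrations) at ~1,000
# operations each per frame.
ops_needed_per_frame = 1e17 * 1e3

slowdown = ops_needed_per_frame / ops_budget_per_frame
print(round(slowdown))   # each simulated second takes this many real seconds
```

Under those (assumed) costs, the machine runs a few dozen times slower than real time, which is all the argument needs: "real time" is not on the table, but nothing about the computation itself changes.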

So, does this program experience qualia? It should be able to process the simulated electrochemical impulses traveling down the optic nerve, translate those impulses into the necessary output pattern of electrochemical impulses to push through memory and cognition filters, and tell the dispersed neural network that a cat is in its field of vision. A grey cat with white splotches and white “socked” feet. This stimulated the memory of “Walter”, a cat from the “real” Jay’s childhood. Jay-2.0 is confused, having thought Walter dead for over a year already.

But just because Jay-2.0 can process all this information, does that mean that this collection of 1’s and 0’s, being processed through 100 million CPUs, experienced qualia? Actually experienced them. Experienced the grey and white, experienced the feeling of confusion of seeing the dead cat alive? In what way were Jay-2.0’s experiences more vivid and “real” than those of a Sony hand-held video camera, which can also process photons into a grid arrangement of data that represents colors, etc.?

4)

In the next scenario, we’ll throw all our understanding of neurology out the window, because it’s probably wrong anyway. We’ll just do an atomic-level scan of my brain, and store every molecule, every atom, every electron, in its proper place and state. Then we’ll just run the most accurate chemistry/physics simulation available on the 10^27 or 10^28 atoms that comprise my head. We’ll do timeslices of picoseconds, or smaller if necessary. The computer to run this simulation will be the size of a large city, running a billion trillion parallel processors and an enormous amount of RAM. Each picosecond (or smaller) timeslice will be processed in a microsecond of real-time, so the simulation will run about a million (or more) times slower than reality, meaning it will take a year to simulate 30 seconds’ worth of time. Jay-3.0 won’t notice, of course.
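The timing claim above can be checked directly from the stated figures (picosecond slices, a microsecond of real time each):

```python
# Checking the Jay-3.0 timing claim: picosecond timeslices, each taking
# about a microsecond of real time to compute.
sim_seconds_per_slice = 1e-12
real_seconds_per_slice = 1e-6

slowdown = real_seconds_per_slice / sim_seconds_per_slice
print(round(slowdown))                      # a million times slower

seconds_per_year = 365 * 24 * 3600
years_for_30_sim_seconds = 30 * slowdown / seconds_per_year
print(round(years_for_30_sim_seconds, 2))   # just under one year
```

So the "year per 30 seconds" figure follows directly from the millionfold slowdown.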

Does this simulation experience qualia? Why? How? It’s just a billion trillion parallel processors going through and performing endless vector calculations and integrations on a huge array of floating point data. Where is the “process”? There are no actual laws of physics, no actual atoms, no actual electric fields, just a bunch of numbers, which are themselves just a bunch of 1's and 0's. How is this the same as the actual atoms?




Really, how or why would anyone think that software alone could experience qualia? How is the flipping of bits in a flat memory space ever going to even remotely be analogous to chemistry?

I do not discount that we will someday have the ability to upload. "Real" uploading, by which I mean uploading to an environment that preserves the ability to truly experience the world, will not just be a stupendously fast computer running software. It will require special hardware, hardware that does the analogous job of whatever it is within our biochemistry that allows us to experience qualia. Even then, preservation of “identity” is far from given, but that is another topic to be addressed in a separate thread.

#2 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 July 2005 - 08:34 PM

I should mention that there are several issues to consider here.

First of all, would any of these software programs even be able to pass as the real Jay, in an objective test? If not, the question isn't entirely valid.

Second, assuming that one or more of these programs could pass for me in an objective test, does that mean that these programs are "sentient" or "conscious"?

Going deeper, we get into the question of whether the actual experience of qualia can affect our behavior, or whether qualia are entirely "passive". If entirely passive, then a philosophical zombie is not a logical impossibility. Such a simulation might represent one such scenario, where the simulation performs all the active processes of consciousness but fails to provide the passive experience of qualia.

If qualia are not passive, then it raises the question of whether we could objectively tell whether qualia were present or absent in such a simulation. We can't use human behavior as the test: it is so complex that no adequate test could be devised, short of one that could manipulate MWI itself. If we subject the real Jay to a set of physical stimuli, we couldn't hope to reproduce the test, because I am never the same exact physical construction from one instant to the next. Each test would be a one-shot deal. Without reproducibility, we couldn't hope for an objective test which is 100% accurate. Of course, if we could manipulate MWI, and actually observe a billion reactions to the same set of stimuli by the same exact configuration of my particles, and then run the same stimuli through a simulation of me in that initial configuration, then we might hope to have a basis for an objective test. I don't foresee such a test being possible for centuries, if ever.

#3 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 July 2005 - 08:45 PM

Note: If your understanding of neurology is better than mine, which is very probable, please forgive my high school understanding of the specifics. The specifics aren't important for the thought experiment, as long as the specifics that are actually used in the programs are the correct and relevant specifics according to the best theories available today, or even available in the next 20 years.


#4 susmariosep

  • Guest
  • 1,137 posts
  • -1

Posted 22 July 2005 - 10:16 PM

Here is my simple idea.


Can software simulate consciousness?

Consider yourself, Jaydfox, in deep dreamless sleep, in a fainted state, under general anesthesia, in a knocked-out condition: are you then conscious or unconscious?

If you never come out of those states but still remain alive, do you still exist as a human or not?

When you do come out of the unconscious state, do you know of anything that happened to you while you were in that unconscious state and period?

It seems so simple to know what consciousness is, and your thread has this title: Can software alone simulate “consciousness”?

My answer is yes, if you are talking about software that has the sensory functions of a human, namely sight, hearing, and as many of the other senses as are present in man, plus memory storage, memory retrieval, and memory processing functions.

For me, such software in a computer is conscious; so we can say, and it really is so, that a computer can be and is conscious.

It can be more conscious or less conscious than the average conscious person, and it can be selectively conscious if programmed so, just like people have programmed themselves or been programmed by human and non-human agents to be selectively conscious.


Honestly, I can't see why you have to go so far, and at such length over three messages, to ask what consciousness is, when, if you examine yourself in all those states of unconsciousness that I mentioned, and examine yourself in your conscious moments, then certainly a machine can be and is conscious, even though crudely compared to a man; but a man can also be very crudely conscious compared to many a computer with the customized consciousness software.


Consciousness in man is nothing but the active functioning of his senses, all of them, and storage of memory data, and retrieval, and processing of the data.

When your senses are not functioning as when you are in general anesthesia, are you conscious or unconscious? Of course unconscious; and if you don't come out of that unconscious period then you are gone, even though you can still vegetate, or better and more correctly you are vegetating physiologically in all other aspects of your biology just to stay or to be kept alive -- but your existence as a human, namely, a conscious being, is gone gone gone.

Now, suppose your senses are working but your memory database is all wiped out, you have suffered total irrecoverable memory loss, complete irreparable amnesia, in which case you are indeed conscious but you have no identity.

That identity can be restored by feeding your empty memory with all the data of your identity as can be obtained from family and friends and all the social and civil registers of your identity, like membership in clubs, police dossiers, government records, school documents.


Briefly, if we program a computer with a software to do everything we can do when we are conscious, making it operate with senses and memory to get things done, then it is conscious and it can and does have an identity.


I used to write many messages about consciousness and identity and death and non-existence even without reading about consciousness as examined by philosophers and computer experts and neurologists and psychologists, and my conclusion is that we can clone ourselves already today with a computer, and we can restore ourselves from death with a computer.

And the sooner we get ourselves computerized and exist as computers the better for mankind, for then we can exist and exist repeatedly for indefinite duration.


You want to restore Einstein? Look up all the records we have of Einstein and produce a computer with all the senses and super senses that can detect its outer and inner environment, but loaded with the identity data of Einstein: there, you have restored Einstein and even a better version of the biological but demised Einstein, and he can do his work in astrophysics.

You ask me, what about his emotions and personality? Simple question deserving a simple answer: restore them also from all the remembrances people have of his emotions and personality, and make the computer act out his emotions and his personal quirks; there, you have an Einstein in person.

What is personality but identity plus all the impulsive, illogical behavioral peculiarities of man in his conscious moments? A computer can also act emotionally and pursue personal impulses, even without or against reason and logic.

Susma

#5 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 July 2005 - 10:35 PM

Susma, perhaps you missed my point. A video camera can functionally detect light and produce a data representation of that light. Are you suggesting that the camera can actually "see"? If you don't understand the question, I'm afraid it's pointless to try to answer the deeper questions I offered.

As for identity, you are oversimplifying. In my dreams, I often hold beliefs and perform actions that I would never hold or perform in the real world. It is as if I were a different person. But it is still "I" that is experiencing this other life, not some other person.

A new person, built from scratch with my memories and emotional response system and thought processes, etc., would not be "me". Oh sure, such a person might externally act like me, and be indistinguishable should we end up fighting over custody of my children. But that person wouldn't be "me".

I'm not sure what to make of people who don't agree. It's as if such persons deny their very experience of the world. That, or they just didn't understand the question.

#6 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 July 2005 - 11:11 PM

But, er, for what it's worth, sorry about the snap reply; I suppose I should let discussion follow its own course.

#7 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 23 July 2005 - 02:42 AM

Hi, Jay. I haven't read it yet, but the content in a recent essay by Ben Goertzel might correlate well with your inquiry. My apologies if it doesn't.

http://www.goertzel....QualiaNotes.htm

#8

  • Lurker
  • 1

Posted 23 July 2005 - 03:48 AM

One thing to take note of about the simulation of brains and experiencing qualia is that the simulation would either have to run at a rate which approximates a real brain, or the sensory streams would have to be slowed down to match the rate of the simulation. In other words, either the simulation runs at real time to match the sensory environment, or time has to be slowed to match the simulation.
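The rate-matching point can be sketched as a simple re-timing of the sensory stream. The function name and data format here are illustrative assumptions, not any real neurosimulation API:

```python
# If the simulation runs slower than real time, incoming sensory samples
# must be re-timed so they arrive at the brain's *simulated* clock rate.
def retime_sensory_stream(samples, sim_rate):
    """samples: list of (real_time_seconds, value) pairs.
    sim_rate: simulated seconds per real second (e.g. 0.5 = half speed).
    Returns the same stream expressed on the simulation's own clock."""
    return [(t * sim_rate, v) for t, v in samples]

stream = [(0.0, "photon burst"), (1.0, "photon burst"), (2.0, "cat enters view")]
# A half-speed simulation sees the same events, evenly spaced on its own
# clock, so from the inside nothing seems slowed down at all.
print(retime_sensory_stream(stream, 0.5))
```

This is why, in the scenarios earlier in the thread, the millionfold slowdown is claimed not to matter: the simulated world and the simulated senses slow down together.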

In my opinion, a simulation that is able to replicate the dynamic information pattern of a brain would have the same "consciousness" (whatever that is) as an organic brain.

#9 treonsverdery

  • Guest
  • 1,297 posts
  • 160
  • Location:where I am at

Posted 23 July 2005 - 05:03 AM

well, it might be encouraging to think of it this way, sentient or not a less than a million jaydfox simulations are quite capable of characterizing then directing the physiological growth programs of human brain tissue. Then growing brain tissue perhaps like a yogurt push up pop with consciousness is able to continually diagnose its "am I still alive" adequacy as it goes from zero pt to 100pt nonorganic matter. Like other humans though I have minimal standards. when I wake up, if I'm me, then I qualify as alive. That's minimal as I'm accustomed to the idea of having completely different thoughts when I go to sleep as when I wake. There's also what I have the urge to call the "cheap slut" or "fine thanks'" perspective. even as the pre-Borg like jaydfox pushup-pop brain grows n silicizes itself it risks asking consciousness if it's there as much as prior to the most recent modification, consciousness is "nice" about it, says "uh-huh" then the silicized entity edits such that [blip]


I had a better idea than this which is more cheerful but It might not be of my creation.

There are key differences between time n matter, they have different nonfinite shapes. keenly enough math "software alone" n the physical world have different combinatorial adjacency densities. any math object like a moving average or chaotic attractor can be simulated with say a million college students dog-earing or un-earing a page of a book, then using telephones to communicate the state of the entire project; that's functionally equivalent to the scoping nonfiniteness jaydfox describes. From a physical combinatorial perspective all of the possible mutual activities of just a few physical "atoms" like a cubic centimeter of a human brain is much much much bigger. Between these two nonfinities are different ways. The nature of those ways may have the opportunity to structure replies to the question "is there anyplace to live aside from one's head"
I guess I could try to be specific about these ways.
I will write again on this.

Just being like ordinary I think that the way companion animals emote, have motive, n do simple mental tasks suggests that the physical platform of consciousness is abundant. I say that thinking of the drake equation that suggests life of whatever kind is abundant on different planets from just one earth, the different physical structures of consciousness are multiple from turkey corwin the parrot to primates.

I've been told that we all lived as part of a unified head, that it was wonderful, might even be that way right now.

Edited by treonsverdery, 19 October 2006 - 04:26 AM.



#10 kevin

  • Member, Guardian
  • 2,778 posts
  • 50

Posted 23 July 2005 - 06:29 AM

Just being like ordinary I think that the way companion animals emote, have motive, n do simple mental tasks suggests that the physical platform of consciousness is abundant.


In fact I believe essential to reality itself... When atomic particles are attracted/repelled by each other, could they be said to have an elemental 'awareness', or be conscious of the other entity?

I have to agree with prometheus as well..

If one is simulating a brain, any consciousness which is derived from that simulation has qualia, no matter what the substrate or algorithms used to create the simulation. Think of consciousness as the ultimate multidimensional holograph that is created by the physical activity of information and hardware. If the resulting dynamic information pattern is the same, that is all that is required to create an instance of 'Jay'.

#11 Lazarus Long

  • Life Member, Guardian
  • 8,090 posts
  • 237
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 23 July 2005 - 11:33 AM

I have been away and now nice threads like this one crop up. Thanks Jay et al.

"If a tree falls in the forest and nobody hears it does it make any noise?"

Remember that little test question from Philosophy 101?

That is the basic claim of a requisite observer to impart validity, or *existence*, to the *qualia* of a phenomenon, but it is also predicated on a lack of scientific understanding. It would only beg the question to insert remote detection devices into the scenario, because all that does is extend the abilities of the observer; it does not alter the conditions of the premise.

Perhaps it should be called the Function versus Form of Consciousness problem.

In a way much of the claim in this issue seems to revolve around a similar conundrum. How much of the self is Me as observer of Me?

This is a kind of existential schizophrenia that we play with ourselves like a self correction test also called a conscience. ;))

Not exactly the same as consciousness right?

Or is it?

While I let that one sink in for a bit and study more of what you folks have said I want to address one thing treonsverdery said:

There are key differences between time n matter, they have different nonfinite shapes. keenly enough math "software alone" n the physical world have different combinatorial adjacency densities.


I would say not according to Einstein.

The whole point of combining Space/Time is that they are completely interdependent dimensional qualia. Space (hence matter) cannot exist without a temporal component and vice versa.

Edited by Lazarus Long, 23 July 2005 - 01:58 PM.


#12 Lazarus Long

  • Life Member, Guardian
  • 8,090 posts
  • 237
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 23 July 2005 - 02:09 PM

Another point of the sensory memory function issue is learning.

Time is change.

In fact it is a mathematical constant.

The *qualia* (variability) of that change is experienced as *self existence* or broader social historical (hence memetic) and cultural structural organization.

Information can be *organized* (even digitally) by individual or collective focus. Consider it a top/down-bottom/up dichotomy in this respect.

Nonetheless the aspect parallels social/individual dynamics.

Materialism encounters the duality of function versus design in a logical way that derives of evolution, which in turn derives of the competitive effectiveness of *form and function* for adaptive mutation.

Whether as a brain or the memory of a species (history) information is processed by certain structured dynamics. This may have a material relevance with respect to the physics of matter and for that discussion the focus is DNA and any other potential *smart molecular organization*, which someday might create true nanotech.

#13 susmariosep

  • Guest
  • 1,137 posts
  • -1

Posted 23 July 2005 - 09:22 PM

BRAVO!!!


Jaydfox says:

(jaydfox)
But, er, for what it's worth, sorry about the snap reply; I suppose I should let discussion follow its own course.


And Susma says:

Behold a true pure-bred and blue-blooded gentleman discussant. You are a credit to the leadership of ImmInst Org.


Susma

#14 susmariosep

  • Guest
  • 1,137 posts
  • -1

Posted 23 July 2005 - 09:47 PM

[b]Where's your doggerel English?[/b]


Proceed first to the text below prefixed with a line of asterisks (*******), then read the following afterward if you prefer, addressed to Treon.

Treon, here you are, after so much absence. We meet again. And Lazarus told me that you are from Iceland or is it Greenland and therefore your English is peculiar.

I asked you whether you were being doggerel-ish on purpose and were simulating some garbled plagiarism of my posts some months back, but you kept quiet.

So you can and do write intelligibly.

Tell me then, what was indeed your purpose in those two or three messages where you sound like a very clumsy text generator working on an input of my messages, my thoughts of religion?

But as before I am afraid that you will keep silent seemingly to have volatilized into thin air when put in face to face dialogue.


-------------

***********

well, it might be encouraging to think ...

snipped snipped snipped



Very interesting ideas expressed here...

Allow me to just suggest, as a man in the street, that now that we know how to produce machines, and have produced machines, that can perform many acts man can perform, it is certainly more productive to ask the technicians and engineers to produce better and better such machines, while theoreticians make more and more demands on them to add functions which will enable these machines to perform acts of man that earlier versions could not.

In that manner we will have a man better than man actually is: more intelligent, more civilized, more artistic, and, if you prefer, more emotional in those sentiments which the universal experience and values of mankind have concluded to be most desirable for all mankind of all times and climes.

Susma

#15 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 25 July 2005 - 01:47 AM

There are no actual laws of physics, no actual atoms, no actual electric fields, just a bunch of numbers, which are themselves just a bunch of 1's and 0's


Why does that matter? A function is a function: it receives input and has an output. To say that intelligence isn't a function implies that there is no cause and effect, i.e. the supernatural. There is more evidence pointing toward a physically possible function being the root of intelligence than otherwise.

What I'm implying is that our null hypothesis should be that there is a single "human intelligence algorithm" that describes the process of intelligence in humans, applicable to ANY input and ANY output, as long as they are FUNCTIONALLY equivalent to humans. Why would our null be based on the supernatural? It would be useless, although it is distantly possible.

Does that make sense, Jay? I'm curious to hear your response.
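A toy way to picture this functional-equivalence claim: two "substrates" with entirely different internals compute the same input-to-output mapping, so no behavioral test can distinguish them. The stimuli and responses below are of course illustrative inventions, not anything from the thread:

```python
# "Substrate" 1: a lookup table mapping stimulus to response.
def respond_lookup(stimulus):
    table = {"red light": "stop", "green light": "go"}
    return table.get(stimulus, "ignore")

# "Substrate" 2: branching logic, with completely different internals.
def respond_rules(stimulus):
    if stimulus == "red light":
        return "stop"
    if stimulus == "green light":
        return "go"
    return "ignore"

# Behaviorally, the two are indistinguishable on every tested input.
stimuli = ["red light", "green light", "loud noise"]
print(all(respond_lookup(s) == respond_rules(s) for s in stimuli))
```

Whether sameness of the input-output mapping entails sameness of experience is, of course, precisely what the rest of the thread disputes.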

#16 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area

Posted 25 July 2005 - 02:01 AM

What I'm implying is that our null hypothesis should be that there is a single "human intelligence algorithm" that describes the process of intelligence in humans, applicable to ANY input and ANY output, as long as they are FUNCTIONALLY equivalent to humans


So basically we should start the debate with the null hypothesis that you are right and other opinions are wrong. Specifically, we should assume that 0's and 1's are equivalent to reality, despite the utter lack of evidence for this, or even a credible hypothesis.

#17 treonsverdery

  • Guest
  • 1,297 posts
  • 160
  • Location:where I am at

Posted 25 July 2005 - 03:54 AM

Mermaid

Lazarus has me thinking about
um i might want to restudy that relativity thing: The Brane n String theory people to my vague perception are creating those theories as there's a beauty-impaired NP| not NPseam between the three spatial dimensions n time, but they might be doing the Brane n String thing as gravity is being noticeable. but that's just me covering my ass. I don't know how the contraction of space with velocity affects my statement.

notational differences: i don't know what I'm saying here but NP complete or not complete is a human way of saying a division between math that doos itself instantly n math that doos itself with, among various possibilities, iterations; computers iterate
I was writing that a CC of brain to my perception has more NP incomplete dynamically solved equations solved than any possible boolean digital computer where the iteration rate is directed with a currently described physics thing like integer atoms or atom pieces. My perception is that non integer math forms are more highly dense or shaped, that non integer math forms are um, uploading wayish.

describing ways: I will make a little effort to do what I mean at 3d space then describe that here.

different item
Then growing A brain tissue perhaps like a yogurt push up pop: might be fun

Edited by treonsverdery, 19 October 2006 - 04:27 AM.


#18 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 03:25 PM

As for making the assumption that pure numbers and information processing might experience qualia or have consciousness, it's a fun hippy way of saying "Whoa, dude, the universe is, like, all-knowing... It like, blows your mind, man..."

But in effect, it doesn't really say anything. I experience the world, and I am, according to all my experience, composed of atoms and molecules and chemical reactions and macromolecules and complex molecular structures all interacting in a perversely complex manner, with electrical and chemical processes being involved at the least, and possibly (but not necessarily) quantum level events as well. This is the only basis on which we can rationally assume that consciousness might be physically based, and it's a hell of a lot more than just "information processing". Concluding that information processing, completely removed from any ordinary substrate (e.g. perhaps running on an earth-sized "Difference Engine" of 19th century design), would have qualia, is about as useful as saying that everything, even rocks or protons, experiences qualia.

Absent a plausible theory on the subject, I assert that software alone cannot experience qualia, no matter how complex the software, no matter how "intelligently" and "consciously" that software may appear to interact with the outside world.

There are two main bases for such a conclusion, though by no means are these two bases exhaustive of the many arguments against software-based qualia.

The first basis for this conclusion is, if qualia are truly a physical "material", so to speak, that come into existence (or precipitate out of a dense vacuum "field", like the zero point energy field, but filled with qualia "particles" instead of electrons and photons, etc.), then any current simulation would not include these particles, since we don't know of their existence. But what if we did? Would it matter? Could we just throw in a bunch of 1's and 0's that we as programmers know are supposed to represent qualia particles, but as far as the computer is concerned are just more random data to keep track of? This goes back to the original question: how is a symbol equivalent to an object? How can a bunch of numbers that represent the position, spin direction, and velocity of an electron be exactly equivalent to a real electron?

A second, more interesting scenario is the question of whether qualia are "emergent" phenomena; in other words, whether there are no qualia particles at all. Just as a bunch of rubidium atoms, in close proximity and cooled to near zero kelvin, can act as one particle in a Bose-Einstein condensate, perhaps a certain arrangement of neurons and organic molecules, undergoing certain chemical and electrical reactions, can act as a quale of "redness" or "coldness" or "sadness".

But how would the software know this should happen? Would a quale emerge in a software simulation if the actual atoms and molecules and chemical and electrical reactions aren't occurring? One idea is that, without a proper understanding of the physics that might lead to such an event, the simulation would fail to simulate the quale. But even if our current understanding of physics were enough (either now or in the near future), would a simulation of a quale actually be experienced? What if the simulation were just a few bytes of data? Well, the word "yellow" is a few bytes of data, but does anyone here think that my computer experienced a quale of "yellowness" as I typed the word? Patently ridiculous. I assert too that just ascribing a few bytes of data at the right time and place during the simulation, in response to the simulated atoms lining up and reacting in a certain way, wouldn't actually make some consciousness in the computer experience yellowness. It's still just a bunch of ones and zeroes, and regardless of whatever ability the information processing capacity of the program might have to "verbally report" that it's alive and conscious and experiencing the world, we have no plausible reason to believe it. Well, other than our hippy desire to be at peace and be one with Gaia and treat rocks and protons with respect.

It is from this position, then, that I evaluate all claims about simulating consciousness in a computer. Some sort of hardware must be present to perform the non-software task of doing whatever it is that biochemistry does to give rise to qualia. It might be possible to do without an organic substrate... In fact, we may even be able to turn much of our raw information processing capabilities into software. But whatever portion of our brains gives us the capacity for qualia, to see, hear, feel, and even sense and "see" and "hear" our own thoughts, will need to remain on special hardware. Probably not basic transistors, but something capable of... whatever it is that either allows qualia to "emerge" (I hate that word, it's such a cop-out), or to precipitate or be created out of the void. And that's all assuming that qualia are physical in nature, which we have no credible theory yet to support. If qualia are non-physical, then... Well, that still doesn't rule out a change of substrate, but it certainly makes it nigh impossible to ponder at this time.

#19 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 04:19 PM

BTW, Nate, thanks for the link to Ben's article. Good reading, just finished it. I'm not sure how it affects the software simulations I proposed, but it did help put a new perspective on the qualia issue that I found in neither Dennett's nor Chalmers's writings, at least the few I've read from either.

#20 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 05:07 PM

But in effect, it doesn't really say anything. I experience the world, and I am, according to all my experience, composed of atoms and molecules and chemical reactions and macromolecules and complex molecular structures, all interacting in a perversely complex manner, with electrical and chemical processes involved at the least, and possibly (but not necessarily) quantum-level events as well. This is the only basis on which we can rationally assume that consciousness might be physically based, and it's a hell of a lot more than just "information processing".


Jay, all you are basically saying here is that consciousness is really, really, really complex, and therefore we can't simulate it, which is a fallacious argument. So it's complex; that just means it will be very difficult to simulate it.

I don't know how complex it truly is, and neither do you. Perhaps it will require the computer you mention in your first case, and perhaps one far beyond what you mention in your last. But saying that these magic qualia you speak of can only arise in the one way you already know them to have arisen is like saying people can never fly because we don't have wings.

#21 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area
  • NO

Posted 25 July 2005 - 05:48 PM

Jay, all you are basically saying here is that consciousness is really, really, really complex, and therefore we can't simulate it, which is a fallacious argument. So it's complex; that just means it will be very difficult to simulate it.


Actually, that wasn't what he was saying. The question is: can reality be represented perfectly by 0's and 1's? Is a computer simulation equivalent to reality, no matter how complex the simulation is? Complexity is not the issue here.

The set of possible computer simulations built on 0's and 1's is countably infinite (disregarding the limitation of obtaining matter to build the computer), but is everything in reality representable within this system?

#22 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 05:50 PM

Edit: Hmm, Osiris got a word in there before I could, so to be clear who I'm replying to:

(JustineRebo)
Jay, all you are basically saying here is that consciousness is really, really, really complex, and therefore we can't simulate it, which is a fallacious argument. So it's complex; that just means it will be very difficult to simulate it.

I don't know how complex it truly is, and neither do you.

No, I didn't just say it's really, really, really complex. I said it's complexity (of whatever degree) layered upon atoms, and upon actual physical processes. People like to believe that complexity "mystically" leads to "emergent" phenomena, and then they tease religious people. Hypocrites.

The information processing can in theory be done in software, because what is software but information and its processing? But the qualia are, as far as we can determine, complexity layered upon atoms, electrons, the electromagnetic force, and possibly but not necessarily upon various quantum phenomena. There is no reason even vaguely, let alone cogently, plausible to suggest that placing the same degree of complexity in the complete absence of the physical components would lead to qualia. To think otherwise is to be a mystic.


#23 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 06:20 PM

There is no reason...

...yet. I reserve the right to be proven wrong by future philosophers and/or scientists.

#24 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 07:01 PM

Hmm, I hate to throw this discussion a little off-topic, but I think it's a fun tangent that won't digress too far from the original question...

Lazarus Long brought up the old "If a tree falls in the forest and nobody hears it does it make any noise?" question that starts off so many discussions of philosophy. It's a wonderfully sinister question in that it's never terribly relevant to any other discussions, yet has the power to undermine them.

In a very simplistic sense, I experience the world, and from this subjective reality I build an objective reality, as per the description in Ben's blog entry (not the article that Nate Barna linked to, but the blog entry which that article referenced).

In my subjective reality, there are other agents (DonSpanton, for example, who hasn't replied here yet, or Osiris, or JustinRebo, or Susmariosep, etc.) who, according to my objective reality, are humans like me. Of course, according to my subjective reality, they are just the authors of a bunch of comments and PM's here at ImmInst. But in "my" objective reality, they are human beings with thoughts, feelings, and presumably, qualia.

But in my subjective reality, the only person that has qualia is me. And since my objective reality is only a subset, and probably not even a proper subset, of the "real world" (which itself doesn't have much meaning, since my objective reality is about as real as the world gets for me), who's to say whether or not anybody but me has qualia?

Now, in Susma's objective world (not his subjective world, but "his" objective world), perhaps the software programs I mentioned would have qualia. Perhaps in JustinRebo's objective world as well.

But presumably, there is an actual "real world" out there, and in that world, it doesn't matter whether I think a rock or a software program has qualia, or whether Justin or Susma does either. Either the rock or the software program has qualia, or it doesn't. No amount of mathematical complexity or "beauty" or "elegance" will change that fact in the "real world", though it might change that fact in various person's objective realities.

Thus the question of whether a tree which falls and nobody hears it. Does it make a sound? Well, it doesn't make a sound in anybody's subjective reality. But what about in a person's objective reality? And what about in the actual "real world"? Does it make sense to speak of the "real world" if we can't know anything more than our own "objective" realities?

For example, let's say we have a string of a million 1's and 0's. How does that string of 1's and 0's come to represent a picture of a dog? In the absence of someone to look at the picture and say, "Hey, that's a picture of a dog!", does that string of 1's and 0's represent a dog? Might it not be a sound file that represents a funky 22nd-century musical instrument? Or a file that needs to be decrypted and uncompressed to be a map of a buried treasure? What gives those 1's and 0's meaning? What makes those 1's and 0's a picture of a dog, and not the data structure of a 20,000-node neural network that is experiencing the quale of yellow?

In this sense, it is "we" who give those bits meaning. The bits, in and of themselves, have no "meaning", and hence couldn't possibly experience the quale of yellow.
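That decoder-dependence is easy to demonstrate concretely. In the sketch below (the bytes and variable names are my own illustration, not anything from this thread), the very same byte string is "a word", "pixels", or "audio samples" depending entirely on which decoder a program applies to it:

```python
import struct

# One string of bits; three incompatible "meanings", chosen by the decoder.
data = bytes([89, 101, 108, 108, 111, 119, 33, 33])

as_text = data.decode("ascii")         # "Yellow!!" -- if read as ASCII text
as_pixels = list(data)                 # eight grayscale pixel intensities
as_audio = struct.unpack("<4h", data)  # four signed 16-bit audio samples
```

Nothing in the bytes themselves selects among these readings; the interpretation lives entirely in the program (or person) doing the decoding.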

So in a sense, those bits of data in the Jay-3.0 simulation, or perhaps even in the Jay-2.0 or Jay-1.0 simulations, might experience qualia in the objective reality of some people here. But is that what matters? If a person believes a rock is experiencing qualia and should have rights and be protected, if in fact that person believes the rock is a supremely intelligent (if not quiet) alien who, if we destroy it, will never be able to teach us the meaning of life, should we respect that person's wish to not destroy the rock?

Should we respect a future society that gives human rights and protections to purely software-driven AI, when in fact it is patently absurd that such AIs deserve such respect and recognition and protection? This is why many religious groups are opposed to the idea, and somehow, because religious groups oppose giving rights to software AIs, that must mean that software AIs can be conscious, because anything a religious person believes must be mystical BS.

It's like having the KKK come support your effort to have racial preferences overturned. It taints your effort, even if the effort is justifiable and sound.

Yes, religious people think the idea of giving human rights to software AIs is patently absurd. Does the fact that they're religious make them wrong? Why can't a rational person, who even accepts the possibility that qualia and consciousness are entirely physical in nature, suggest that qualia and consciousness can't be "simulated" in software alone?

A world of people who believe that software AIs truly "experience" the world wouldn't be much different from a world in which people believe that the Pharaoh is divine, or that burning trees (with or without virgins tied to them) at the winter solstice causes the sun to come back. It makes it "real" in their personal objective worlds, but does it really make it "real"?

And yes, I'm heavily biasing this argument on the conclusion that software AIs can't truly "experience" the world, but we heavily bias our assumptions every day based on what can logically be concluded at the time. I could be just as wrong as the vitalists or the evolutionists. [tung]

#25 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 09:43 PM

To think otherwise is to be a mystic.


I dunno, your position seems quite mystical to me.

What seems most mystical to me is that you think a simulation of you could act like you in every way without having qualia, and that no matter what behavior any kind of AI demonstrates, it can't have qualia either. That's ridiculous. Until such a thing is demonstrated your position could be right, but once it is demonstrated (if it happens) you will have been proven wrong. Your rationale for a simulation of you, taken from a scan of your brain and displaying all of your kinds of behavior, still having no one behind the wheel doesn't make any sense. Do you think a scan of your brain would be transformed into a super-advanced A.L.I.C.E.?

Just what do you think it is that your qualia come from? Atoms bumping into each other? Some degree of haphazardness? Super-duper complexity? Granted, we don't entirely know the nature of consciousness, but all you seem to be doing is pointing to the unknown and saying "we don't know what that is, therefore it is impossible to ever simulate it".

The brain is an information processing machine made out of matter, just like computer chips are made of matter. In a computer chip you don't have 1s and 0s flying around. The ones and zeros are just representations of what is happening. What is really happening is that electrons are physically zooming around, and voltages build up that either are or are not capable of bridging gaps. Quantum randomness is taking its toll.

What is happening in your brain? Ions are being exchanged, causing voltages to build up that either are or are not capable of bridging gaps (capable = 1, not = 0). Quantum randomness is taking its toll.

This is information processing. It is just much more complex than any information processing we are currently capable of.
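The "voltage builds up and either bridges the gap or doesn't" picture described above can be written as a toy threshold unit in a few lines. This is a deliberately crude sketch (the constants, names, and input values are arbitrary illustrations, not a claim about real neurons):

```python
def step(voltage, input_current, threshold=1.0, leak=0.9):
    """One time step of a leaky threshold unit: integrate input, then fire or not."""
    voltage = voltage * leak + input_current
    if voltage >= threshold:
        return 0.0, 1   # gap bridged: output 1, voltage resets
    return voltage, 0   # gap not bridged: output 0, charge persists (leakily)

# Drive the unit with a constant input; it charges up and eventually fires.
v, spikes = 0.0, []
for current in [0.3, 0.3, 0.3, 0.3, 0.0, 0.0]:
    v, fired = step(v, current)
    spikes.append(fired)
# spikes is now [0, 0, 0, 1, 0, 0]
```

The point of the sketch is only that "capable or not of bridging gaps" is itself an information-processing description, equally at home in ions or in variables.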

Where do the magic qualia come from? Some sodium atom doesn't cross a plasma membrane when it was supposed to... BANG! Qualia?

Why is it that you think "qualia" can't be an output, just like an image on a computer screen is output? Just a type of output designed to monitor other kinds of output. If it isn't output, what is it?

I'm really having trouble understanding your rationale.

#26 DJS

  • Guest
  • 5,797 posts
  • 10

Posted 25 July 2005 - 09:51 PM

Hey Jay, I wrote some of this the other day, but never had a chance to put it up:

Don: Neurotransmitter diffusion rates, post-synaptic potentials, neuromodulators, and so on and so forth… There is no doubt that the complexity underlying the process of consciousness is mind-boggling. However, I've come to realize that your concerns go further than just the complexity displayed by our current biological substrate. For you, it is not just about the *information* or *information processing* capabilities exhibited by the substrate. What concerns you is this nagging suspicion that certain aspects of our consciousness simply cannot be *coded* for. From this baseline supposition, all functionalist accounts of consciousness will be inherently deficient. So instead you focus your attention on entirely speculative (can't even really say hypothetical yet, can we?) biochemical components of the substrate which could supply us with the phenomenal aspects of our consciousness.


I think one of the greatest challenges facing the creation of real AI is the need to take sensory data from the outside world and incorporate it into a coherent internal model. The reason, I would suggest, that humans have such an easy time with this is that evolution has supplied them with an already "hard-wired" substrate that predisposes them towards language acquisition, spatial perception, motor coordination, etc. There is no doubt that this will make creating *seed AI* difficult, but one should not operate under the assumption that AI is an all-or-nothing deal. After all, humans aren't created as fully functioning adults. They come into the world as helpless infants with minds ready and waiting to suck up as much information from the external world as they possibly can. The idea would be to build a nascent AI with the ability to acquire new information using its sensory equipment.

The mind is a four dimensional modeller of the external world. Objects and motion ARE, at a very basic level, coded for. I could go into a whole monologue about structures, schemas and metaphors. I could cite you numerous neuroscientists with the latest theories on how representation and abstraction occur. But somehow I don't think this would make any difference because you would simply respond back, "Yes, but does that really explain why we 'understand'?" The answer is that, yes, it does. Information processing is not information. Information processing is information being related to other information in a dynamic, functional manner. So, IMO, when it comes to conceptual elements of consciousness you are simply wrong. There is an abundance of information out there for anyone willing to look. For instance, check out ---> The Cerebral Code (free online!)
----------------------------------------
The area of our debates that has bothered me, however, was my inability to explain the phenomenal or *qualitative* aspects of subjective experience. So this weekend, while I was busy performing other mundane tasks, I really racked my brain trying to come up with a functionalist explanation for the notion of *qualia*. And to my surprise I did. Of course, although I haven't yet found it, I am sure that someone else has thought of this idea, but here goes...

(I wrote this while covered in paint [lol] )

The fact that we cannot define a "quale" in a meaningful, empirical way seems to imply that phenomenal experience is not representational in the traditional manner associated with a functionalist account of consciousness, but is instead indicative of a relational model. The particular wavelength represented as the quale of red is distinguished as a relationally defined member of the total color schema. IOW, a quale's subjective quality is not defined by its functional representation, but by its functional relationship to other qualia.


This line of reasoning fits in nicely with Dennett's attempt to destroy the notion that qualia exist in objective reality. If a quale's phenomenal properties can in some instances fluctuate, then this would seem to suggest that it is defined not by its own attributes, but by its relation to other qualia of a particular order.

There will never be the physicalist explanation you are looking for Jay because qualia are completely a product of subjective experience -- nothing more.

#27 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 10:04 PM

What seems most mystical is that you think a simulation of you could act like you in every way without having qualia. And you think that whatever behavior any kind of AI demonstrates that it can't have qualia either. That's ridiculous.


Not really. A video camera can capture a visual representation of the room, and if attached to some pathetically basic software (pathetically basic compared to the brain of an insect, at least), the camera could be used to pick out objects and identify colors in the room. Does the video camera/software combination have "qualia"? The idea is absurd, and yet there you have it: something without qualia that can pick out objects visually and identify colors. So why couldn't a simulation of me easily do the same without actually experiencing qualia?
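For what it's worth, the kind of color-identifying software described here really is trivial. A nearest-palette-color lookup along these lines (the palette and names are my own example, not any particular product) "identifies colors" with nothing anyone would plausibly call experience:

```python
# A small hypothetical palette; a real system would use a larger one.
PALETTE = {"red": (255, 0, 0), "green": (0, 255, 0),
           "blue": (0, 0, 255), "yellow": (255, 255, 0)}

def name_color(rgb):
    """Label an RGB pixel with the nearest palette name (squared distance)."""
    return min(PALETTE, key=lambda name: sum((a - b) ** 2
               for a, b in zip(PALETTE[name], rgb)))

print(name_color((250, 240, 10)))  # prints "yellow"
```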

And I admit that, if qualia have an effect on the physical world, such as affecting someone's decisions, then the simulation would not act exactly like me.

The problem is, according to MWI, two versions of me that start out identically wouldn't act exactly the same either. So we couldn't, even in theory, determine if the simulation failed to act exactly like me, if it acted very very closely like me. Yes, the absence of qualia might be a dead giveaway, if we but had the ability to view every path that MWI takes and create the appropriate statistical model of what my behavior would be like, and then compare that to the statistical behavior of the simulation after multiple runs. Unfortunately, such a test isn't even theoretically possible at this time, though I hold out hope it might be some day. At any rate, short of such a test, a simulation could act very very similarly, though not identically, and we'd never be able to tell the difference. Of course, this assumes that qualia have very little direct impact on our behavior, but this is fairly obvious when you consider how much of our behavior is dictated by A) our genes, B) our upbringing and everything we've experienced, C) our chemical and hormonal state, including any imbalances, etc. It seems obvious to me, given how much of our psychology we know to be governed by the laws of physics (and the higher order "laws" of chemistry and biology), that our qualia could have very little immediate impact on our actions. This doesn't rule out classical free will, it just limits the timescales it operates on. Anyway, that's a discussion we can get into in another thread...

#28 DJS

  • Guest
  • 5,797 posts
  • 10

Posted 25 July 2005 - 10:10 PM

What gives those 1's and 0's meaning? What makes those 1's and 0's a picture of a dog


The 1s and 0s must be related to the external world.

#29 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 10:13 PM

By?


#30 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 10:13 PM

Jay, how would your simulation respond to the question "Do you have qualia?"



