For that matter, what is "consciousness"?
Consider the following scenarios:
1)
For simplicity of analysis, assume there exists a computer that has one CPU, that follows in-order execution, that doesn’t use pipelining, and that can operate on only one location in memory at one time. However, this computer is really, really, really fast. Let’s also say that it has a huge amount of memory.
How fast? Well, let’s say it runs at 100 exa-Hertz, or 10^20 operations per second. As for memory, let’s say that it has an exa-byte of RAM, or 10^18 bytes of RAM.
Now, let’s say that we have a neural network designed to roughly approximate a brain scan of a human mind, let’s say my mind. This scan of the brain of Jay shall be named Jay-1.0. The scan consists of a node of data for every neuron in the brain. Physical location is irrelevant, since it is the connections and the weights of those connections that truly define the relative spacing of neurons. So the node itself is mainly just a marker, perhaps containing information about the node’s state (activity level, how close it is to reaching its firing threshold, etc.). For simplicity, we’ll just ignore the metabolic requirements of the neurons, so brain cells won’t get tired from lack of glucose or oxygen, etc. This simulation won’t get hungry, but dropping metabolism mostly just removes what would otherwise be a bookkeeping burden.
Additional information will be stored in the form of interneural connections. Each of these will represent the link from one neuron to another (or to or from glial cells, I’m not up to speed on all my neurology). Information required for such an interneural connection might include the time it takes for a signal to propagate down the connection, the strength of the connection, etc. The details aren’t terribly important for the sake of discussion, so long as we can stipulate that the details take into account the best and most current knowledge we have available at such a high level.
Now, this computer will go through each neuron and interneural connection, one at a time, and using basic mathematical equations, determine when to change certain pieces of information stored in what is basically a large flat memory space. All the inherent structure and complexity is stored in the flat data file, and hence the computer is blissfully oblivious to such structure and complexity. The computer just sees bits and bytes and floating point numbers, and it performs basic math (including, if necessary, basic operations like sine, cosine, square root, etc., or even numerical integration, which is just repeated multiplies and adds). The timeslice used will need to be fairly fine-grained, at least 1,000 frames per second, but let’s push the envelope and go for 10,000 frames per second.
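To make this concrete, here’s a toy sketch in Python of what such a flat, single-CPU update pass might look like. Every name, number, and equation in it is purely illustrative, a stand-in for the real neuron math, not a claim about it:

```python
# Hypothetical flat layout: one activity value per neuron, plus a list of
# (source, target, weight, delay_in_frames) connection records.
NUM_NEURONS = 8          # toy size; the real Jay-1.0 would need ~10^11
FRAME_RATE = 10_000      # timeslices per simulated second, as stipulated
THRESHOLD = 1.0          # toy firing threshold

state = [0.1 * i for i in range(NUM_NEURONS)]          # activity levels
connections = [(i, (i + 1) % NUM_NEURONS, 0.5, 1)      # toy ring topology
               for i in range(NUM_NEURONS)]

def step(state, connections):
    """One timeslice: visit every neuron and connection, one at a time,
    doing only basic arithmetic on a flat array. The CPU never 'sees' a
    brain, just numbers. Double-buffered: read old state, write new."""
    nxt = [0.0 if s >= THRESHOLD else min(THRESHOLD, s + 0.01)  # fire/leak
           for s in state]
    for src, dst, weight, _delay in connections:
        if state[src] >= THRESHOLD:      # source fired last frame
            nxt[dst] += weight           # propagate along the connection
    return nxt

state = step(state, connections)         # one frame of the simulation
```

The point of the sketch is the shape of the computation, not the math: a single loop over a flat buffer, oblivious to the structure it encodes.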
This simulation should respond very much like a human, and any discrepancy would likely be unobservable amid the inherent complexity and randomness of human behavior. So, does this simulation experience qualia? Is it “conscious”?
2)
Now we’ll allow for a slightly more realistic scenario. Now we have a computer with a million parallel processors, each capable of out-of-order execution, pipelining, and executing multiple instructions at once. The OSes for these processors are using synchronization protocols to keep the memory and various caches in sync. Otherwise, the basic program of this “human” mind is the same. Hundreds of billions, perhaps trillions of neurons, and hundreds of trillions of interneural connections. We’ll call this program Jay-1.1.
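A toy version of that frame-by-frame synchronization, with a handful of Python threads standing in for the million processors and a barrier standing in for the cache-coherence machinery (the sharding scheme here is my own illustration, nothing more):

```python
import threading

NUM_WORKERS = 4                # stand-in for the million processors
N = 1000                       # toy neuron count
state = [0.0] * N              # shared "flat" memory, as in scenario 1
nxt = [0.0] * N                # write buffer for the next timeslice
barrier = threading.Barrier(NUM_WORKERS)

def worker(wid, frames):
    # Each worker owns one contiguous shard of the neuron array.
    lo = wid * N // NUM_WORKERS
    hi = (wid + 1) * N // NUM_WORKERS
    for _ in range(frames):
        for i in range(lo, hi):
            nxt[i] = state[i] + 1.0   # stand-in for the real neuron math
        barrier.wait()                # all shards finished writing
        for i in range(lo, hi):
            state[i] = nxt[i]         # publish the new frame
        barrier.wait()                # all shards finished publishing

threads = [threading.Thread(target=worker, args=(w, 3))
           for w in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# after 3 frames, every neuron's value is 3.0
```

Since each index is owned by exactly one worker and the barriers separate the write and publish phases, the parallel run computes the same frames the single CPU would, just sliced up differently. Which is exactly why the question in the next paragraph is interesting.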
Does this change at all whether we can consider that this software simulation experiences qualia? Why or why not?
3)
Now for a more elaborate setup. The codebase will be expanded, to include new data structures. In addition to neurons and interneural connections, we’ll have data structures to represent the actual synaptic gaps, including the individual vesicles, concentrations of various enzymes and ions, membrane potentials, etc., etc.
This new, more elaborate, scenario will also add DNA and metabolism to the picture. Gene transcription and expression rates will be modulated according to known theory, and responses to glucose, oxygen, and other nutrients will be modeled. This will necessitate virtually feeding the brain in question.
We’ll call this program Jay-2.0.
We’ll also need a lot more hardware. Let’s say we’ve got a hundred million parallel processors, printed at 100 processors per die, for one million chips total. Processor speed is 1 petahertz, for a total of 10^23 operations per second (about a mole’s worth, coincidentally). Assuming an even finer timeslice resolution of 25,000 frames per second, this computer would obviously not be able to process the information in “real-time”, but that’s irrelevant to whether qualia are experienced, right? The virtual world this “mind” will be “experiencing” (or, more accurately, being fed simulated sensory data about) will follow the proper laws of physics, so “time” will flow at the appropriate rate.
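For what it’s worth, the arithmetic behind those figures checks out. Here’s the back-of-envelope in Python (the connection count is the rough “hundreds of trillions” figure from scenario 2, rounded down to 10^14; the real-time budget is what the machine would need, and fails, to keep up with):

```python
processors = 100_000_000              # 10^8 parallel processors
hz_per_processor = 10**15             # 1 petahertz each
total_ops_per_sec = processors * hz_per_processor   # 10^23, ~a mole's worth
assert total_ops_per_sec == 10**23

frames_per_sec = 25_000               # the finer timeslice resolution
ops_per_frame = total_ops_per_sec // frames_per_sec  # real-time budget

connections = 10**14                  # "hundreds of trillions", rounded down
ops_per_connection_per_frame = ops_per_frame // connections
print(ops_per_connection_per_frame)   # prints 40000
```

Forty thousand operations per connection per frame sounds generous, until you remember that each “connection” in Jay-2.0 now drags along vesicles, ion concentrations, membrane potentials, and gene expression, which is why real-time is off the table.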
So, does this program experience qualia? It should be able to process the simulated electrochemical impulses traveling down the optic nerve, translate those impulses into the necessary output pattern of electrochemical impulses to push through memory and cognition filters, and tell the dispersed neural network that a cat is in its field of vision. A grey cat with white splotches and white “socked” feet. This stimulates the memory of “Walter”, a cat from the “real” Jay’s childhood. Jay-2.0 is confused, having thought Walter dead for over a year already.
But just because Jay-2.0 can process all this information, does that mean that this collection of 1’s and 0’s, being processed through 100 million CPUs, experienced qualia? Actually experienced them. Experienced the grey and white, experienced the feeling of confusion of seeing the dead cat alive? In what way were Jay-2.0’s experiences more vivid and “real” than those of a Sony hand-held video camera, which can also process photons into a grid arrangement of data that represents colors, etc.?
4)
In the next scenario, we’ll throw all our understanding of neurology out the window, because it’s probably wrong anyway. We’ll just do an atomic-level scan of my brain, and store every molecule, every atom, every electron, in its proper place and state. Then we’ll just run the most accurate chemistry/physics simulation available on the 10^27 or 10^28 atoms that comprise my head. We’ll do timeslices of picoseconds, or smaller if it’s necessary. The computer to run this simulation will be the size of a large city, running a billion trillion parallel processors and an enormous amount of RAM. Each picosecond (or smaller) timeslice will be processed in a microsecond of real-time, so the simulation will run about a million (or more) times slower than reality, meaning it will take a year to simulate 30 seconds’ worth of time. Jay-3.0 won’t notice, of course.
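That timing claim is easy to verify from the figures just given:

```python
# Timing arithmetic for Jay-3.0, using the numbers above.
slice_sim_time = 1e-12        # one picosecond of simulated time per timeslice
slice_wall_time = 1e-6        # one microsecond of real time to compute it
slowdown = slice_wall_time / slice_sim_time
assert slowdown == 1e6        # a million times slower than reality

seconds_per_year = 365 * 24 * 3600
sim_seconds_per_year = seconds_per_year / slowdown
print(sim_seconds_per_year)   # prints 31.536
```

So one real year buys about 31.5 simulated seconds, the “30 seconds’ worth of time” cited above.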
Does this simulation experience qualia? Why? How? It’s just a billion trillion parallel processors going through and performing endless vector calculations and integrations on a huge array of floating point data. Where is the “process”? There are no actual laws of physics, no actual atoms, no actual electric fields, just a bunch of numbers, which are themselves just a bunch of 1's and 0's. How is this the same as the actual atoms?
Really, how or why would anyone think that software alone could experience qualia? How is the flipping of bits in a flat memory space ever going to even remotely be analogous to chemistry?
I do not discount that we will someday have the ability to upload. "Real" uploading, by which I mean uploading to an environment that preserves the ability to truly experience the world, will not just be a stupendously fast computer running software. It will require special hardware, hardware that does the analogous job of whatever it is within our biochemistry that allows us to experience qualia. Even then, preservation of “identity” is far from given, but that is another topic to be addressed in a separate thread.