
Can software alone simulate “consciousness”?



#61 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:33 PM

Depends. If they are purely software, then it wouldn't be any different than killing zombies or aliens in a video game.

It might be like Battlebots, only with robots that scream and seem to be in pain when attacked. If it's just software simulating such for the crowd's sick pleasure, I wouldn't have a problem with it. Well, I'd be worried for the audience, but I guess we all need our outlets, right?

#62 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:34 PM

Time to head home. So the "realtimeness" of this "chat" will have to come to an end...

#63 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:36 PM

Depends. If they are purely software, then it wouldn't be any different than killing zombies or aliens in a video game.


Don't you see this is the root of our disagreement?
It is a blanket assumption that you have not been able to back up. You cannot know this.

All you have said is that it isn't like the way I work and therefore can't have qualia.


#64 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 25 July 2005 - 11:44 PM

Camp2 (Don): Since we don't have a physical explanation for it, qualia is probably an illusion, but even if it does exist, it couldn't play any causal role in reality.

Camp3 (JustinRebo): Qualia exists and it is logically impossible for a complex information processing machine of any sort not to have qualia.


No, I am not saying that qualia is an illusion. I am saying that the meaning you are giving it is an illusion. Qualia ARE the way your mind represents phenomenal aspects of your reality. They are one and the same.

What I believe Justin is saying, and which I agree with, is that there are varying degrees of qualia based on varying degrees of cognition. It is not an all or nothing deal.

A good example of this would be individuals with blindsight. Often this phenomenon has been used to argue for the existence of qualia. But a more thorough investigation has demonstrated that individuals with blindsight do not possess the same ability to discriminate between objects visually as those with normal vision. Thus, people with blindsight are seeing, just not at a level sufficient to produce our level of phenomenal experience.

However... Don just threw me a curveball:

Explain how an advanced AI could identify colors without qualia?

Answer: It couldn't


If qualia is probably an illusion, then wouldn't it follow that qualia is never required for the identification of anything? And I thought qualia played no role in physical reality, yet the AI is required to have qualia in order to identify colors? A camera with only a simple information processing capacity can identify colors based on wavelength, but a complex information processing machine cannot do this unless it experiences qualia???



See above. Qualia are not an illusion; it is humanity's dualistic conception of them which I now believe is illusory.

Come on Mark. I can see you disagreeing with my monistic contention that information processing and qualia are one and the same, but surely you at least know this is the position I am advocating.

Edit: our level of phenomenal experience

#65 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 25 July 2005 - 11:49 PM

I'm hoping someone more eloquent than me can explain my viewpoint about this, because it seems so obvious but I'm having a hard time describing it.

However, I think Don was hitting a good point about the AI.

From the "qualia" viewpoint:

Explain how an advanced AI could "experience" colors without qualia?

Answer: It couldn't.


Basically we are getting at a more specific definition of qualia, other than "the feeling of experience" or whatever. For example:

My qualia have meaning in and of themselves, not because you give them meaning, but because I give myself meaning

So, you obtain qualia through sensory input. Also, you give meaning to these sensory inputs (which is kind of what defines the "qualia", right?).

And I think you are making a mistake by referring to an AI as a series of 1s and 0s, not that you are wrong, but that you are confusing yourself. An AI is a process that accepts inputs, just like you. It could give its own "qualia" meaning to those inputs, just like you.

oh, didn't realize the "realtime" part was just a second ago. lol

Oh and I'm really interested in understanding the whole concept of qualia, so try to help me get it.

#66 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area

Posted 26 July 2005 - 12:35 AM

I can see you disagreeing with my monistic contention that information processing and qualia are one and the same


Actually I do agree with that statement, but for different reasons than you do. I don't see time as existing independent of qualia. There are no processes without time. For these reasons I don't see my position as dualistic. I see your position as dualistic because the qualia don't play any role in the information processing; they are not logically required for the information processing to function.

So let me try again:

camp3 (JustinRebo): Any information processing machine which is capable of displaying behavior that cannot be distinguished from human behavior must have qualia, and this is absolutely independent of the design of the machine (i.e. substrate).

camp2 (Don): Qualia is an emergent property of complex information processing that plays no causal role. Although we currently have no hypothesis as to why qualia exists, it will be explained in the future as one and the same as complex physical information processing. Whether an organism or machine experiences qualia can be determined by its behavior indirectly through its information processing capabilities. Intentionality/self-awareness (in the information processing sense) are the criteria for determining if the organism/machine experiences qualia, or at least a degree of qualia worthy of ethical treatment.

camp1 (Osiris, Jay): Qualia may play a causal role in reality. The only qualia you can be absolutely sure of is your own. Others can be assumed to experience qualia based on their similarities to you. Extending this, since we are fairly confident that, say, a SimAnt simulation does not experience qualia, we assume that qualia is substrate dependent. The only difference between a SimAnt simulation on a conventional computer and a brain-scan simulation also running on a conventional computer is complexity. Since we believe that qualia is substrate dependent, if we assume that SimAnt is not experiencing qualia, then we will also assume that the brain-scan simulation is not. What the two simulations have in common is classically deterministic information processing.

I'd like to point out that even though I'm not convinced that complexity alone is sufficient for qualia, I would err on the side of caution when dealing with an AI that was behaviorally indistinguishable from a human, just as I err on the side of caution with animals, and am a vegetarian. Whatever any of us believe, none of us know the answers to any of these questions.

#67 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 26 July 2005 - 01:02 AM

camp2 (Don): Qualia is an emergent property of complex information processing that plays no causal role.


Qualia is not an emergent property; that would be dualistic. However, I can see why you would be confused by interacting with me, because originally I was arguing from an epiphenomenalist perspective. Jay and yourself have changed my position. [lol] I am now back to being firmly in the monist camp, which would mean that qualia ARE information processing. From this perspective, qualia (subjective "phenomenal" experience) are, objectively speaking, "complex information processing" that most certainly DOES have a causal role to play.

Although we currently have no hypothesis as to why qualia exists, it will be explained in the future as one and the same as complex physical information processing


hhmm, I do not believe that we will ever have a hypothesis, because qualia (and I'm using the term because of our discussion, but a true monist wouldn't even use such terminology) will never be something that can be objectively measured. But yes, I do agree that in the future it will be looked at as the same thing as information processing. As AI systems begin to display emotive qualities, as they become indistinguishable from MOSHs (Kurzweil's term for Mostly Original Substrate Humans), there will develop a movement to recognize them as sentient. It is only natural that, as humans, we empathize with other beings who display identical behavioral characteristics.

Whether an organism or machine experiences qualia can be determined by its behavior indirectly through its information processing capabilities. Intentionality/self-awareness (in the information processing sense) are the criteria for determining if the organism/machine experiences qualia, or at least a degree of qualia worthy of ethical treatment


Yes.

#68 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 26 July 2005 - 02:13 AM

plato.stanford.edu/entries/qualia/

Edited by treonsverdery, 19 October 2006 - 06:10 AM.


#69 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 26 July 2005 - 06:13 PM

The white supremacy remark was a bit low, even for you.

I did not mean it to be low. But I do see it as the same kind of me-centered logic. And indeed, you stated that you would have no trouble turning off (killing) what I would view as a sentient being.

I'd like to clear this matter up, in case anyone in this debate feels it was unresolved and there were bitter feelings. I harbored none, once Justin explained himself as I quote above.

I'd like to point out how I think we need to be viewing this.

Art has value. It has meaning. That meaning is given by us. Without us, the art is just more atoms and molecules, etc. It's basically raw material, about the same as a rock. We give it the meaning.

My switching off an AI would be like burning the original Mona Lisa. It has a LOT of worth, as far as many people are concerned. But that worth is not measurable on the same scale as a human's life. Its worth is not measured on the basis of consciousness, etc. It has economic and academic and artistic worth.

I wouldn't just willy-nilly turn off the AI in question, for no good reason, just like I wouldn't burn the Mona Lisa if given the opportunity. But if I did switch off the AI, I would not feel like I had killed a real sentient being. I would have destroyed something of value, but not of the type of value that we would use to compare me to a white supremacist.

Now we have copies of the Mona Lisa, so perhaps not all is lost. Another example I came up with is this: suppose J.K. Rowling mailed her manuscript for the 7th and final Harry Potter book, then died in an accident. That manuscript is one of a kind, and cannot be replaced. Then I steal it before it can be copied, and burn it.

Think of the outrage of tens, perhaps hundreds of millions of people! But it's not because I did something equivalent to killing a human being, killing a sentient mind. Yes, I destroyed something of value. But it wasn't a true "sentient being", not the type that experiences qualia, that derives its meaning from within, that is more than just "art".

Of course, software AIs comparable to human intelligence will probably run on desktop computers by the end of this century, so switching one off before we go to bed will be a common occurrence. We'll feel secure in doing this, because it's just software. The AIs that will eventually deserve and get the respect and rights of sentient minds will be the ones with hardware platforms that lift them up from being pure software simulations. But the software ones will be a dime a dozen, and we'll shut them off, delete them, reformat them, etc., at will, without compunction, without regret. You should get used to that idea now, and start helping to figure out just how much hardware is necessary before our future AIs really are sentient, conscious, experiencing the world with qualia, etc.

#70 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 26 July 2005 - 06:32 PM

Don: I can see why you would be confused by interacting with me ... I am now back to being firmly in the monist camp, which would mean that qualia ARE information processing.


[lol] (feel free to delete this post but I couldn't help it... ;) )

#71 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 26 July 2005 - 06:43 PM

I'd like people to write functional pathways to uploading.
I perceive talk about beingness-ability may be thought of as planning activity prior to action.

Just to be fun then I'm making a thread, Creating the billion year lifespan mind's body. If humans like the idea of ongoing lifespans then it is a kindness to create an item with an ongoing lifespan.
I'm thinking that the hyper rapid path to the creation of such a being is to add neural function to a plant like the King's Holly that is now 40,000 years old: genetically engineering it to make mammalian brain tissue. It is not personal immortality. It is the creation of an immortal being with the possibility of personality. They will be happy we created them. I think just a few scientists might accomplish that.

King's holly: imminst.org/forum/index.php?act=ST&f=48&t=513&s=
With neural tissue blobs: ahherald.com/images/news/2003/little_school_tree.jpg

Edited by treonsverdery, 19 October 2006 - 04:34 AM.


#72 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 26 July 2005 - 08:17 PM

Camp 1 (Jay, Osiris): The book doesn't experience the world.

Camp 2 (Don): Although we don't currently have a hypothesis for why the book experiences the world, it will be explained in the future as being emergent from complex information.

Camp 3 (Justin): The book experiences the world.

This is what this conversation effectively boiled down to. Thank you, gentlemen.

#73 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 26 July 2005 - 08:20 PM

If you insist on a physical process (and yet any physical process will do, which includes reading a book, but I digress...):

Camp 1 (Jay, Osiris): The 19th century Difference Engine doesn't experience the world.

Camp 2 (Don): Although we don't currently have a hypothesis for why the 19th century Difference Engine experiences the world, it will be explained in the future as being emergent from complex information.

Camp 3 (Justin): The 19th century Difference Engine experiences the world.

#74 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 26 July 2005 - 08:27 PM

umm, I don't think the book experiences the world. There is no dynamic information processing going on. This interpretation of what I have written just indicates that you haven't understood any of it.

You're right, the physical process reading the book experiences the world, in the case you mention that would be the person reading it.

#75 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 26 July 2005 - 08:34 PM

Or rather than having a person read it, we could have a spotlight that flashes across each row of 1's and 0's, lighting them up one at a time. A simple mechanical system could turn the pages after the spotlight has lit up every number on the page, allowing the next page to be lit up, one number at a time. That's a physical process.

By the way, I should correct myself with regards to the difference engine. I think the analytical engine was the more appropriate device. Programmable with punch cards, powered by a steam engine, possessing a "mill" that does the calculations, and a "store" that keeps track of numbers, e.g. intermediate results.

Edit: Added links to wikipedia entries on the analytical and difference engines

#76 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 26 July 2005 - 09:25 PM

Jay,

Will you please explain what you think qualia is, if you don't think it is some form of information processing through a physical process?

Or rather than having a person read it, we could have a spotlight that flashes across each row of 1's and 0's, lighting them up one at a time. A simple mechanical system could turn the pages after the spotlight has lit up every number on the page, allowing the next page to be lit up, one number at a time. That's a physical process.


This is a red herring. A light shining on pages? Come on.

#77 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 27 July 2005 - 12:36 AM

Kurzweil:

It is not at all my view that the simple recursive paradigm of Deep Blue is exemplary of how to build flexible intelligence in a machine. The pattern recognition paradigm of the human brain is that solutions emerge from the chaotic and unpredictable interplay of millions of simultaneous processes. And these pattern recognizers are themselves organized in elaborate and shifting hierarchies. In contrast to today’s computers, the human brain is massively parallel, combines digital and analog methods, and represents knowledge as highly distributed patterns encoded in trillions of neurotransmitter strengths.

A failure to understand that computing processes are capable of being—just like the human brain—chaotic, unpredictable, messy, tentative, and emergent is behind much of the criticism of the prospect of intelligent machines that we hear from Searle and other essentially materialist philosophers. Inevitably, Searle comes back to a criticism of “symbolic” computing: that orderly sequential symbolic processes cannot recreate true thinking. I think that’s true.

But that’s not the only way to build machines, or computers.

So-called computers (and part of the problem is the word “computer” because machines can do more than “compute”) are not limited to symbolic processing. Nonbiological entities can also use the emergent self-organizing paradigm, and indeed that will be one great trend over the next couple of decades, a trend well under way. Computers do not have to use only 0 and 1. They don’t have to be all digital. The human brain combines analog and digital techniques. For example, California Institute of Technology Professor Carver Mead and others have shown that machines can be built by combining digital and analog methods. Machines can be massively parallel. And machines can use chaotic emergent techniques just as the brain does.

My own background is in pattern recognition, and the primary computing techniques that I have used are not symbol manipulation, but rather self-organizing methods such as neural nets, Markov models, and evolutionary (sometimes called genetic) algorithms.

A machine that could really do what Searle describes in the Chinese Room would not be merely “manipulating symbols” because that approach doesn’t work. This is at the heart of the philosophical [sleight] of hand underlying the Chinese Room.



#78 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 27 July 2005 - 12:56 AM

How about we call this a truce guys? Obviously both of the positions we are espousing are more complex than the straw men we have constructed over the past day or two. There are brilliant philosophical minds on both sides of the aisle.

In a weird kind of way, I am reminded of going over to my cousin Chrissy's house when I was a child. Towards the end of the day, when it was time to leave, we would always find a way to manufacture a disagreement. In this way it was easier for us to part ways being upset with each other. I think that there are some parallels between this and our (all of our) behavior on this thread. After all, it's much easier to walk away from an exchange saying to yourself, "Ah, that guy thinks that books are conscious. How idiotic is that?" or "Ah, that guy believes in ghosts and goblins, he's crazy." When in actuality both of the positions being advanced are much more complex than that.

The human mind craves certainty. The curse of us thinkers is that our minds will never get what they want, and so we find ourselves tortured to a certain extent. The behavior we are displaying now is indicative of the human mind trying to suppress uncertainty.

So why don't we all just agree that the dialog we have engaged in was fruitful and that maybe we will pursue another exchange at some point in the future? Friends, right?

Okay, I'm out. [thumb]

Don

#79 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 27 July 2005 - 01:46 AM

So why don't we all just agree that the dialog we have engaged in was fruitful and that maybe we will pursue another exchange at some point in the future? Friends, right?


Yah, I think we all need to depart this conversation for a little while. Merry-go-rounds are fun though ;))

#80 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 27 July 2005 - 07:49 PM

I was about to post this in Osiris's thread, but in respect for his efforts to keep that thread fairly focussed, I'll cross-post it to here instead.

Justin posted this in the vegetarian thread:

All this talking about time slice resolutions is pretty meaningless. A computer running software is a physical process.

Justin, how does the computer simulate the processes in my brain? Surely there must be some virtual measure of time. If simulating at the atomic level, it has to simulate discrete increments in time. If simulating neurons, then it must simulate the delays and durations of nerve impulses. Time. Since we're talking about software, it doesn't just happen because of the laws of physics. The software has to run the atoms and their relative positions through a physics simulation subprogram, to determine the forces affecting each particle, then integrate those over a specific timeslice. That's not a physical process; that's an information process which doesn't even vaguely resemble the actual physical process. Oh, but the information process of numerical integration takes place on a computer, which processes the bits via a physical process.

I think we may have hit on why no one seems to understand my very real point. A computer doesn't simulate the electric force between two atoms. It takes a set of numbers, which represent the current position and velocity (and electron shell configuration, etc.), and then it runs a set of numerical calculations that determine the magnitude of the force between the two. Then the computer uses a timeslice, an arbitrary delta-time value, and integrates the current velocity and position of each particle, based on the force between them (force/mass = acceleration).

If the arbitrary timeslice is made smaller, the calculations become more accurate, but you A) introduce more timeslices into any given time interval, which slows the simulation, and B) add numerical inaccuracy due to rounding errors. Higher-order methods such as Runge-Kutta can greatly reduce the errors of simple Euler integration, allowing for larger timeslices, but when you're dealing with atomic/molecular simulations, the timeslices can only get so big.

But the point is, to get two particles to accelerate toward or away from each other, you have to run a bunch of calculations over multiple specified time intervals. The calculation methods aren't unique, so any of a great variety of methods will suffice. But this isn't a physical process. It's a numerical process.
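A minimal sketch of the kind of numerical process being described, in Python, assuming a toy one-dimensional pair of charged particles and a fixed Euler timeslice (every name and constant here is illustrative, not anyone's actual simulator):

# Toy illustration: the "simulation" is just arithmetic over a chosen
# timeslice dt; no physical force ever acts on anything.
K = 8.99e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, x1, x2):
    # Signed 1-D force on particle 1 due to particle 2.
    r = x1 - x2
    return K * q1 * q2 / (r * r) * (1.0 if r > 0 else -1.0)

def euler_step(p1, p2, dt):
    # Advance two (position, velocity, charge, mass) tuples by one timeslice.
    x1, v1, q1, m1 = p1
    x2, v2, q2, m2 = p2
    f = coulomb_force(q1, q2, x1, x2)
    a1, a2 = f / m1, -f / m2  # force/mass = acceleration
    return ((x1 + v1 * dt, v1 + a1 * dt, q1, m1),
            (x2 + v2 * dt, v2 + a2 * dt, q2, m2))

# Two like charges 1 m apart, stepped through a thousand 1-microsecond slices.
p1, p2 = (0.0, 0.0, 1e-6, 1e-3), (1.0, 0.0, 1e-6, 1e-3)
for _ in range(1000):
    p1, p2 = euler_step(p1, p2, dt=1e-6)

Shrink dt and the trajectory gets more accurate; pick a different integration method and the same trajectory falls out of entirely different arithmetic. The particles' paths are reproduced, but no electric field ever exists anywhere in the loop.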

And what about this "process"? Let's say I have 6 and 7, and I want to multiply them. The result will be 42. What is the correct "process" to multiply them?

One way is to set up a loop, start with an accumulator set to 0, and add 6 to it, 7 times. Another way is to look up 6*7 in a multiplication table. Another way is to write 6 as 110, and 7 as 111, and perform binary multiplication:

    110
  × 111
  -----
    110
   110
  110
 ------
 101010


OR, you could just say it equals 42, without ever providing an explanation or performing a calculation.
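To make the interchangeability concrete, here is a Python sketch of three of those processes; the function and table names are mine, and none of the three is privileged:

# Three arbitrary "processes" that all produce 42.
def mult_repeated_addition(a, b):
    acc = 0
    for _ in range(b):  # add a to an accumulator, b times
        acc += a
    return acc

MULT_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}  # lookup table

def mult_shift_and_add(a, b):
    # Binary long multiplication: add a shifted copy of a for each 1-bit of b.
    acc, shift = 0, 0
    while b:
        if b & 1:
            acc += a << shift
        b >>= 1
        shift += 1
    return acc

assert mult_repeated_addition(6, 7) == MULT_TABLE[(6, 7)] == mult_shift_and_add(6, 7) == 42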

So, we could start with an initial scan of my brain. Then we could have the computer spend countless computer cycles running all the physics equations and numerical integrations required to figure out the new position of every atom 1 nanosecond later. Then we could have the computer spend another countless set of computer cycles running all the physics equations and numerical integrations required to figure out the next new position of every atom at 2 nanoseconds. A billion such sets of calculations later, we have the state the brain is in 1 second after the initial brain scan, which by your logic should have "experienced" the world for 1 second's worth of subjective time.

However, why run all those physics simulations? Why not just write the initial state to the computer's memory, then write the state at 1 nanosecond, without any calculations? Then write the state at 2 nanoseconds, then at 3. We could just read in each timeslice, as stored in a huge database. A 4-dimensional database.

It becomes much like a movie. No calculations required. No simulation of the laws of physics required. No simulation of a physical process. Just display a movie of the process unfolding. But why even display it? Why doesn't the mere existence of the 4D database, either in computer memory or printed on the pages of 10^50 books, constitute the same information process? Why don't my books experience qualia? I must assume that Don's and Justin's positions require the books to experience qualia. If they don't, then they don't fundamentally understand my question about software.
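The simulate-versus-replay contrast can be sketched in a few lines of toy Python (physics_step is a stand-in for all the per-slice calculations above, nothing more):

def physics_step(state):
    return (state * 31 + 7) % 1000  # toy stand-in for the per-slice physics math

def run_by_simulation(initial, n_slices):
    state, trace = initial, []
    for _ in range(n_slices):
        state = physics_step(state)  # compute each state from the previous one
        trace.append(state)
    return trace

def run_by_playback(recorded):
    return list(recorded)  # no calculation: just read the stored "4D database"

recorded = run_by_simulation(0, 10)           # record the run once...
assert run_by_playback(recorded) == recorded  # ...replay it, bit-for-bit identical

If all that matters is the sequence of states left in memory, the two runs are indistinguishable after the fact, which is exactly the worry the books full of bits are meant to press.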

Have I lost anyone?

This is a red herring. A light shining on pages? Come on.

How is that any different than running the same bits through a few transistors? Come on! It's just the information, right? Information is all that matters?

#81 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 27 July 2005 - 08:18 PM

Heh, in thinking this over more, I've almost completely convinced myself that, short of static data experiencing qualia, no software-only program can experience qualia. Actually, I've even almost managed to convince myself that the books experience qualia, so strong have been Don's arguments.

My examples using the human brain are fairly straightforward. If you run a physics simulation of the highest accuracy possible with the currently known laws of physics, the brain simulation should basically do everything a real brain would do. Any failure to do so would prove the laws of physics as we know them wrong, or software incapable of experiencing/simulating qualia, at least through direct simulation of known physical substrates that do support qualia (i.e. my brain, and by logical extension, probably the brains of most or all other humans).

If it does act like a human brain, however, that doesn't mean it experiences qualia. Qualia could be epiphenomenal, meaning that a brain without qualia could functionally perform identically to one with qualia. Or, qualia could make so little difference in the actions the brain performs that no current method of observation could detect the difference in a statistically significant manner. So it could be qualia-less.

If it does experience qualia, then we have the more profound question of "why"? Is it because we ran all those bits through transistors? Is this just a fancy way of saying that transistors and neurons are both suitable substrates for experiencing qualia? If so, then the answer to my question "can software alone experience qualia" is an emphatic NO!!!!

What if we ran those bits through one of Babbage's analytical engines? You know, a stack of punch cards reaching from here to Alpha Centauri, run sequentially through the steam-driven mechanical engine. Does this setup experience qualia? It's the same exact software. Not similar. It can be the same exact software.

Here, I find myself of mixed opinion. It's not totally ridiculous, but it does stretch the imagination.

But really, there's no difference between this, and just writing all the bits in a set of books that spans the width of the known universe. I mean, if the exact physical process isn't important, so long as there is some physical process, then why not?

If the books experience qualia (not just information processing, but actually experience the redness of red, etc.), then I guess the answer to my question is a resounding YES! I've almost got myself convinced, but the basic outcome is that people, sentience in general, is valued on the same scale as art. And since the suggestion that my experience of the world is an illusion isn't tenable (that's more ridiculous than thinking the books "experience" the world), then I'm really stumped on where this goes. Obviously the "observer" in the books scenario isn't the same "observer" in the real Jay. Or is it? I mean, if the completely ridiculous idea that books can experience qualia is something we're seriously considering, then it makes me wonder if just taking a brain scan of me would be enough to prevent "me" from ever really ceasing to experience the world.

#82 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 27 July 2005 - 10:12 PM

Babbage's analytical engine could run an intelligent program, because it is an active process with input and output.

A book doesn't do anything at all, therefore it is not a process, let alone an intelligent one.

Also, I don't see why we need a computer to run a simulation of your brain down to the atomic level for the computer to run YOUR intelligence process; all it would need is your memories and a human intelligence algorithm.

#83 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 27 July 2005 - 11:57 PM

A book doesn't do anything at all, therefore it is not a process, let alone an intelligent one.

Why isn't a book a process? If I flash a spotlight across one letter at a time, such that one letter is lit up, and hence is just as "current" as the "current" bit, byte, or word of memory being "accessed" by the "physical process" inside a microchip, then how is that not a physical process? We can put all these words in quotes as much as we want, but seriously, someone explain how the current byte being operated on in a computer has any more "physical process" significance than the current letter lit up on a page in a book.

Hey, don't blame the messenger. If a computer transistor and a valve on an analytical engine can perform the analogous "physical process" task as a synaptic gap, then why doesn't the lighting up of a single letter on a page in a book qualify? Sheesh, people think I'm daft because I won't accept that a transistor is somehow "obviously" the exact equivalent of performing the job of a synaptic gap, and then they wuss out and say that a spotlight lighting up a letter on a page doesn't count.

Hey, the spotlight is the conservative scenario. I mean, if any and all "physical processes" are just as good as any other, then why not the random molecular motion of the heat in the books, assuming they're not at absolute zero? The ink that comprises each letter (or number, in my original example) will have a different molecular weight, and hence a different amount of oscillating momentum in response to the random kinetic motion, than the cellulose molecules that make up the pages. So every bit is being actively "processed" while the stack of books just sits there. If you want them processed in some order, just slam the top book of the stack, and let the compression wave travel at the speed of sound through the stack, compressing, and hence making "current", one page at a time.

I bore of this. Either you people understand computer science, or you don't. Information is not qualia. Qualia, if not epiphenomenal, depends on the physical processes that actually "process" the information, and those can't be simulated, precisely because they don't need to be simulated for the simulation to follow the same path. An actual electric field is not the same as a bunch of numbers and calculations that could be performed in arbitrary order, by arbitrary methods, at arbitrary levels of precision, and yet the simulation can make electric particles follow the same path as real particles in a real electric field. The simulation simulates the effect, but never the cause.

Software cannot and will never experience qualia. Shutting off a computer isn't a crime against a sentient being. It is at best destruction of private property, intellectual property, art, etc.

30 to 40 years from now, we'll have human-or-greater level intelligences running in software. We'll shut them off, tweak their programming, and restart them. Shut them off, tweak some more, and restart them. This is not equivalent to using a nanofactory to create a human being from a brain scan, then destroy it because it needs tweaking. It will be equivalent to what software programmers already do every day: run a program, debug it, shut it off, and tweak the code or data.

The logical extension is that anyone who "uploads" into a software only environment loses their right to life as much as if they just stuck a gun to their heads and pulled the trigger. The program that they become doesn't have human rights, though certainly the relatives might claim the property rights and thus "protect" the program. But their claim of protection is the same one used by relatives of a deceased author to protect the intellectual property rights. In effect, the software program becomes the greatest intellectual legacy of the person. But it's not alive, it has no "human rights", just a lot of property rights. Hopefully, the family will protect the program until such a day when the program can be transferred to a substrate thought to actually, functionally provide qualia. Must it be an organic substrate? Not necessarily. But it will use specially tuned hardware that uses the laws of physics, and not just the arbitrary flipping of gates, to provide experience.

Any scenario in which you grant sentience and human rights to pure software, you give those rights to books. This is basic information theory and computer science.

The physical processes of which I speak aren't the ones flipping the bits, because any physical process will do. I'm talking about the physical processes that move atoms, or send ions streaming across synaptic gaps. Those aren't simulated calculations, there are physical forces involved, and the analogous function must be performed physically, not in software.

#84 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 28 July 2005 - 12:57 AM

Once you stop clinging to the notion that software alone (and please, try to figure out what definition of software I'm talking about. I'm talking about the information, not the substrate, e.g. the Intel processor or analytical engine. If it's Turing computable, it's software, and any physical substrate will do: I'M JUST TALKING ABOUT THE FRICKING SOFTWARE!! CAN'T YOU PEOPLE READ!!!?)... ahem, let me try that again.

Once you stop clinging to the notion that software alone can experience qualia and be truly sentient, the way that a religious person stops clinging to the notion of an everlasting soul or afterlife or a God, then you can start to appreciate the really interesting problem of what substrates, when combined with what types of information processing, will actually experience qualia, actually be sentient. It's a rather freeing experience. Osiris and I have already moved on to the interesting, hard problem of substrate: what is it about actual physical processes, and not simulations thereof, that allows them to endow us with qualia, the ability to experience the redness of red?

And you're still chasing ghosts in the machine. Move on already. There's more interesting, real problems out there to ponder.

#85 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 28 July 2005 - 04:22 AM

I just went and looked up the simplest definition of qualia I could find because I want to be clear that the discussion of the experience of qualia is a red herring.

Experience is itself a form of qualia by definition.

http://dictionary.re...h?r=67&q=qualia

n. pl. qua·li·a (-lē-ə)
A property, such as whiteness, considered independently from things having the property.

[From Latin quāle, neuter of quālis, of what kind. See quality.]
Wiki: http://en.wikipedia.org/wiki/Qualia


First off, a book is a process, just not an intelligent one, simply because it is a conveyor of information. It is however a memetic process, and the interaction of author and reader continues to be a living dynamic so long as the work is read.

The issue of substrate matters, but only as the filter of experience, and you are getting awfully intimate with existentialism down this path.

The key qualia and why I introduced the *observer* is that of experience itself. And I posed history as a meme of qualia shared as culture that transcends individual experience and perception but most important was the aspect of observing the self.

Senses and physical experience are simply the observer externalizing perception (retrospection and inspection) but self examination is introspection and also critical.

Could software perform this function with any (or without any at all) substrate to operate on?

No.

Could there possibly be alternative substrates to biological ones?

I suspect yes.

Could the *power of an idea* exist such that it is capable of defining its own substrate?

Is that what DNA does?

Could other ideas self organize matter?

Here comes the other shoe drop... :))

Is the Will a qualia or a choice?

Is choice itself a qualia?

Is the exercise of the 'will to exist' a qualia or a process for *evaluating* qualia?

Information itself is neutral, and itself nothing more than a qualia, but value (also a qualia) is the result of a process that comes from observation, comparison, and choice.

Observation = Senses combined with Self Perception

Comparison = Rational and Emotional analysis

Choice = The exercise of the Will (step aside from the Free vs Determinism issue for a moment as choice exists regardless of being either) combined with the first two as learning and experience as existential memory.

Software is not merely information, nor is it only the process itself; it is a system of organizing information on any given substrate.

A while ago I suggested that we might be able think of DNA as a tangible example of a Platonic form for this very reason.

Is DNA merely the chemicals that compose it or is it an example of a self organizing program that is actually composed of its software?

Is the qualia of *being* (a living species) the result of the idea of DNA?

In the case of DNA it is impossible to completely divide qualia (its properties) from its form, in a way, because the result of the qualia of its software is the qualia of being alive. The chemicals of DNA are the substrates that the software doesn't just reside or operate on; they are what the software is composed of.

When I referred to the sporification aspect of reducing all experience to storable memory I was a priori granting that the memory of the person is not itself alive (or interactive) without an adequate substrate hence no experience of being alive under those conditions. It could be argued that this is also analogous to sleep or the *unconscious dilemma.*

However, I was saying that there exists an analogue of this behavior for life in bacterial genetics: life that can reduce itself to primary information, literally encapsulated as a kind of zip program that unzips itself when adequate elements to provide a substrate are available from basic matter, and it can do so over literally hundreds of millions of years.

In this very basic sense (not so complex as human-scale intelligence) bacteria can reduce themselves to a *genetic PROGRAM* and suspend *animation* until the substrate question is answered, and can do so out of some relatively common basic universal elements.

Edited by Lazarus Long, 17 September 2005 - 10:49 AM.


#86 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 28 July 2005 - 07:27 PM

Ok ok, maybe I'm slow, but why are we abandoning the possibility that software alone can experience qualia?

What's so special about "whiteness"? Do you have any clue what you are specifically referring to?

Whiteness is an abstract concept. What's so complicated???

#87 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 29 July 2005 - 01:14 AM

person: Could the *power of an idea* exist such that it is capable of defining its own substrate?
Well, the power of electromagnetism "exists such that it is capable of defining its own substrate".
Actual book-type physics resonance happens with physics. One imagines, say, using big lasers to heat up plasma a few light years away, then tuning the plasma to have the right thermalness to reflect n absorb such that it has a standing wave pattern. People have made these. A famous but more complicated item that creates its own substrate is the sun. Noting the big variety of possibilities, although I'm rejectful of integer combinatorics, that there is ideaness makes me think there is ideaness "capable of defining its own substrate". I use define here as make.

If we use define as describe, well, two urges.
Practical: I think a Jupiter-brainish thing might be capable of truly knowing, but that verges on cop-out.

topological
Two characters, Carol n Alice.
Carol Klein looks like this: [image]
Alice Tanker Truck looks like this: scd.mm-a1.yimg.com/image/173433235.jpg

Then put Carol nside Alice (fun relationship)

Now if Alice Tanker Truck's personal being has any relation at all to Carol Klein, then the topology of her being is a container that holds both nside n outside at the same time, plus has additional space.

Alice might have a chance at: Could the *power of an idea* exist such that it is capable of defining its own substrate

ok ok the truth is I saw a tanker truck on a nature path marked Klein n I wondered what shape it would "be" if it were carrying its namesake.

Edited by treonsverdery, 19 October 2006 - 04:44 AM.


#88 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 September 2005 - 04:27 PM

I found a great article that parallels many of my ideas and thought experiments (including ones I haven't formally presented here, for a variety of reasons ranging from space to lack of interest to hostility).

http://www.well.com/...ron/zombie.html

Almost everything up to part Three is great. Part Three itself, by its very nature of being a speculative attempt at building a bridge, is probably going to be hard to grasp for either zagnets or zombies, as Lanier labels them. I'm not as concerned with building the bridge yet, since I'm still working on the problem of experience itself.

I think my favorite point that he illustrates is the idea that any information that conceivably could be, may as well already be. Many of the information-theoretic functionalists seem to miss this point, in that any and every conscious existence that ever could be already has been, is, and will be, with no sense of a flow of time, except of course to those consciousnesses. While not logically impossible, it certainly makes life itself trivially meaningless. Why not kill yourself if you're poor? There's a version of you out there (theoretically or physically, but what's the difference from a pure informational point of view?) who's rich, so why bother suffering through this life?

Or do we already tacitly agree that our existence somehow is special, and that killing ourselves really is significant (since, from an information-theoretic standpoint, it's not: the information exists, and will always exist, and has always existed, and our consciousness does not go into oblivion, but gets relived an infinite number of "times", if such even makes sense, which of course it doesn't).

At some point, matter matters (beyond mere information), and experience matters, etc. That we could conceivably create a computer that could turn any sufficiently long string of random bits into a consciousness means that consciousness is everywhere, and in everything, from rocks (which have as rich an experiential existence as I do, at least to a countably infinite limit, and while some specific infinities are greater than others in artificial limiting cases, they're all mappable to each other) to meteor showers to hurricanes.

So why are people and our software programs special? Is it just because of Mother Nature and the intentional stance? It seems a stretch, but from such a view, information itself, that building block of existence, doesn't really exist. Complexity itself is meaningless. Emergence is just a code word for mysterious.

In the end, software alone can't give rise to consciousness, unless we want to credit my hypothetical enormous stack of books with consciousness, or rocks themselves (not just with "a" consciousness, but one as rich as ours, from the right point of view). If we want to avoid absurdities (more absurd than Cartesian dualism, if you really think about it), then we must admit that software alone does not have consciousness. Something in the hardware matters. What might that something be? If we knew that already, Dennett wouldn't be as popular as he is.

But we don't know yet, so Dennett and other zombies remain powerfully popular and influential. It's tempting for the anti-functionalist to say that the Hard Problem may never be solved, but just considering where the human race will be, intellectually and technologically, in the next century, let alone the next few millennia, and let alone in a million or a billion years, I think it's very unlikely that the Hard Problem won't be solved. It'll be solved, and when it is, we'll know where to draw our boundaries, so that we don't give human rights to toasters, or assign property rights to conscious, sentient non-human minds.

At any rate, I suspect with the influence the zombies have on philosophical and AI debates, human rights and property rights will become one and the same, with "degrees" based on the intentional stance (or some similar objective measure), such that human rights are just very specific and powerful forms of property rights, and that property rights (whether we call them "human" rights or "sentient" rights) of similar specificity and power will be assigned to AIs, such that, functionally speaking, they'll have human rights (even if they don't experience the world, and hence even if it wouldn't really be morally wrong to "torture", "subjugate", "enslave", or "kill" such programs).

In the meantime, at least until the Singularity, functional distinctions based on the intentional stance will seem to suffice. But with the Singularity rapidly approaching (20-50 years, depending on who's guessing), it's not a question we have the luxury of letting our great-grandchildren decide. We'll need an answer soon.

I don't know what sort of hardware will be needed, but at the least, I think it should be obvious that software alone won't do it (since what is software but a slice of Platonic space, or of a (near-)infinitely-dimensioned random noise field, for that matter? Indeed, software alone would seem to indicate that rocks have human rights, and hence miners and sculptors are some of the biggest mass murderers of all time...).

#89 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 13 September 2005 - 12:41 PM

Don't get me wrong: as Jaron points out, there's a certain "hippie-ish egalitarianism of it; that even a thermometer gets to be a 'little' conscious."

You, of course, realize and accept this. But I think that most fundamentalist functionalists (or, at least, the amateur ones) seem to miss this rather odd point. It's that everything is conscious. Panpsychism by definition becomes true. In a further sense, you realize that since the "slices" of space that are to be considered a particular complex physical system are arbitrary, and that unlike traditional "software", the computer itself doesn't have to physically exist in an identifiable or well-defined form (since it's an arbitrary slice), that yes, a rock is just as complex and conscious as us.

Think of all the heat in a rock. What is that? Complexity in its most complex form. Random kinetic energy. There's more complexity in the kinetic energies of all the atoms in a pebble than in an entire human brain viewed from the neuronal level of abstraction. By the mere definition of panpsychism, that rock has just as much consciousness as a human. In fact, perhaps even more. Because what sets a human apart from a rock isn't its complexity, it's the order!! Of course, too much order, and you're just a crystal, an automaton. Too little, and you're considered a rock, randomness. Go figure.

To accept panpsychism is worse than to accept dualism: it's giving up, admitting defeat, as Dennett would say. It's the explanatory gap in all its glory, redistributed across the spectrum of computational systems, to make it look like it was there by design, instead of concocted by a lack of imagination on our parts. It could be right, but if it is, it's sillier than dualism ever was, and that's a sad state for science to find itself in. But, as Jaron said:

David Chalmers argues that all action in the universe is at least a little computational, and the right computation gives rise to consciousness, so consciousness is everywhere, but in varying degrees. I like the hippie-ish egalitarianism of it; that even a thermometer gets to be a "little" conscious.



#90 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 14 September 2005 - 12:24 PM

Jay,

Random kinetic energy is high entropy and hence *low* complexity. A highly ordered state like a crystal is also *low* complexity. High complexity is mid-way between maximal order and maximal entropy. The complexity of the physics in a rock is far too low to support high level consciousness.

As for the heat in a rock, it is complex, "entropy" notwithstanding. You can compress an approximation of the heat, in that you can randomly generate an "equivalent" temperature of kinetic motions. But to generate the exact same set of kinetic motions, for each and every one of the 10^25 atoms in a pebble, would require about 10^25 * N bits of data, where N depends on your level of accuracy per particle (given a 3D velocity vector, N is probably at least 2^6 = 64 bits per particle, but potentially much, much more, plus a few bits for encoding the type of particle).

It's the same concept behind why random noise is the most incompressible form of data possible. You can compress it, in the sense that you can make a string of similar length and "apparent" randomness (via pseudorandom number generation), but it's not the same exact string. Random strings are virtually incompressible, period. The very incompressibility is the measure of the data's inherent "complexity", at least in a mathematical/computational sense. That's why a rock will always beat a brain for complexity, because the brain is ordered to some degree, i.e. less complex, more compressible.
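A quick illustration of that claim in Python, using zlib's general-purpose compressor as a crude stand-in for ideal compression (the sizes in the comments are approximate):

import os
import zlib

random_data = os.urandom(100_000)  # high-entropy bytes: the "heat in a rock"
ordered_data = bytes(100_000)      # all zeros: a maximally ordered "crystal"

print(len(zlib.compress(random_data)))   # ~100,000 bytes: barely shrinks at all
print(len(zlib.compress(ordered_data)))  # ~100 bytes: collapses almost entirely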

As for not being able to arbitrarily assign a finite state automaton to a rock, you're making the same classic mistake Chalmers did in Does A Rock Implement Every Finite-State Automaton?, by assuming that the computations performed by a rock don't qualify because A) they don't really do anything useful, and B) they couldn't do anything else, hence they're not "causing" the state transitions. The same might be true of a human mind in the absence of free will: the brain doesn't cause the events computed therein; the causation lies A) with past events, preceding the formation of the brain, and B) random input by quantum mechanics. Since most respectable neurologists discount B as a mechanism of consciousness (much like most respectable scientists discount dualism as the nature of consciousness), we're left with A. But how is that different from the latent heat in a rock?

For example, it's assumed that because we humans could do a lot of things (like play chess, or ride a bike, if given the proper input data/situation), then that's what makes our complex computations different from a rock's. But under physicalism, you can't play chess or ride a bike at any given moment, because there's one and only one input/situation. If the situation asks you to ride a bike, then all that data in your mind that theoretically "could" play chess if needed, actually can't, and hence it's irrelevant to the situation at hand. If viewed from the right "slice", a rock could be seen computing the moves necessary to win a particular endgame in chess. Of course, the rock can't compute the moves necessary for any other endgame, nor can it trigger the states that might correspond to the motor neuron firings needed to ride a bike, based on kinesthetic feedback. But, that's not the point. The input is fixed, so it only has to respond to the input, and generate the corresponding output.

It's funny how many people seem to miss this obvious fact. The complex sequence of all the complex "outputs" of the neurons of a brain is not dependent on the complex dynamics of neurons in the brain. Make a recording of the input and the output, and then play the input through a "computer" that just spits out the same complex sequence from the recording, and functionally, for that exact input, it's no different than a real person's brain (the same output was generated), and hence that recording is just as conscious.

You can't say, "Hey, that's cheating, if we had given the brain a different input, then it would have generated a different output!". Nope, that argument is completely empty! Why? Because under physicalism, in the original situation (the one that was recorded), the input that was recorded was the only input that could possibly have happened (barring coherence of MWI splits, but saying consciousness relies on MWI coherence, while fascinating, is not compatible with deterministic computation, and hence software is disqualified again!), and the output that was generated was the only output that could possibly have happened, and hence, that combination of input and output functionally defines the conscious states that were had during the interval of time in question.

Reproducing both input and output, under functionally-defined physicalism, can only mean that the same conscious states must be present when the recording is played back. Arguing anything else means that functionalism is false. Of course, I don't necessarily equate functionalism with physicalism 100%, so if one is willing to part with functionalism (as I am), then there's room to shoot down my argument and still hold physicalism to be true. Adhere to functionalism (this is aimed more at Don than you, Marc), and you have to accept this argument.

Hence, under functionalism, a complex bit string is consciousness. But complex bit strings are everywhere in nature, even in rocks. To argue otherwise is to argue that there is something special, over and above software/computation, that makes human consciousness more than just a rock.

Personally, I prefer the beauty of the MWI coherence possibility, and it leaves open the possibility of software using quantum events as random inputs to be conscious, by creating the same sort of coherence. Once functionalism is dropped (since admitting ultra-panpsychism is sillier than Cartesian dualism), there's still a whole slew of ideas under physicalism to be pursued, so the physicalists shouldn't be dismayed.

Actually, this isn't a surprising turn of events. Considering panpsychism, Cartesian dualism, etc., it makes one wonder whether consciousness is one of those things that, by its very nature, is inherently ineffable. Approach the problem from any paradigm (dualism, functionalism, representationalism, etc.), and push the consequences of that paradigm to their limit, and you end up with a contradiction and/or an absurdity (an absurdity being to intuition what a contradiction is to logic, I suppose). A reductio ad absurdum can be made from any direction, so that in the end, we end up right back where we started: this is my opinion, that is yours, we disagree, and each can prove the other's idea is absurd.

It reminds me of the question of whether the software on a computer could ever determine the true nature of the computer it's running on. How could a piece of software determine whether it's running on a Pentium or an Analytical Engine (Babbage)? Two software programs running on the same computer might argue with each other, one claiming the world in which they live is a Pentium, the other arguing that it's an Analytical Engine. Which is right?

Sometimes, that's how these debates feel to me.



