  LongeCity
              Advocacy & Research for Unlimited Lifespans





Can software alone simulate “consciousness”?



#31 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 10:22 PM

Of course, this assumes that qualia have very little direct impact on our behavior, but this is fairly obvious when you consider how much of our behavior is dictated by A) our genes, B) our upbringing and everything we've experienced, C) our chemical and hormonal state, including any imbalances, etc.


I don't know about you, but my "observer" is the same thing as my "director". I'm not discounting that my behavior is because of A, B, and C. What I'm saying is that the little me that experiences everything is a result of A, B, and C too.

#32 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 10:25 PM

Jay, how would your simulation respond to the question "Do you have qualia?"

Each of them would most likely say they have them. Of course, I could write a simple program that gives the same response to the same question, so verbal report isn't enough.
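
Something like this trivial sketch would pass that verbal test, which is exactly why the test proves nothing (a hypothetical few lines of Python, nothing to do with real AI):

    # Trivial "claims to have qualia" responder. It passes the
    # verbal-report test for the same reason the test is worthless.
    def respond(question):
        if "qualia" in question.lower():
            return "Yes, I experience qualia."
        return "I'm not sure what you mean."

    print(respond("Do you have qualia?"))  # -> Yes, I experience qualia.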

As I've stated before, the only person I can be sure experiences qualia is me. I take for granted that you do, although I'm still not even 100% sure you're human, since I've only seen pictures of you and read your writings. But since I'm not aware of any great progress in software AI, I can reasonably assume you're not an AI, and therefore that you're human; and since our biochemistry is so similar, I can reasonably assume you have qualia. But I don't know for a fact that you have qualia.

#33 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 25 July 2005 - 10:32 PM

Jay

By?


Your sensory apparatus.


#34 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 25 July 2005 - 10:37 PM

But the qualia are, as far as we can determine, complexity layered upon atoms, electrons, the electromagnetic force, and possibly but not necessarily upon various quantum phenomena.


Qualia are, as far as we can determine, nothing -- objectively speaking.

#35 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 25 July 2005 - 10:40 PM

Don Stanton: "is distinguished as a relationally defined member of the total color schema. IOW, a quale's subjective quality is not defined by its functional representation, but by its functional relationship to other quale"

Uh-huh. Defining an aperture of sensation creates qualia, which then build a different aperture of sensation to create or reference further qualia; that works, as everyone has a mom to give them patterns. The place the observer observes their first qualia from, the "I am", is created with a different mechanism. I perceive that the "I am" mechanism is what JDF doubts is mechanodifference-engineable.

To build an "I am" mechanism might be accomplished numerous ways. The yogurt push-up pop is metaphorically like this:
The skin on my fingers keeps growing; if I like, I'm able to graft it on a dish. There's sufficient tissue that I have fully ordinary fingers.
The nervous tissue that makes up a human brain, with engineering research, keeps growing. If I like, I might put a chunk of it on a dish. There's sufficient tissue that I have an ordinary brain. Being the bold pioneering chunk of brain on a dish that it is, it agrees to neurally connect to silicon electronic components. Every time it replaces another .001 pt of its function it reports on its beingness: "Treon, I feel like I'm on antihistamines, yuk, I'm less 'I am'", or preferably "wow Treon, I'm more alive than prior to this, I'm more 'I am' than you are".
Doing this kind of science, a bold push-up pop brain may migrate to a new consciousness material, be verified, then be named uploading.
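
Stripped of the yogurt, the protocol is just a loop: silicize a tiny fraction of function, ask for the "I am" report, repeat. A toy sketch of that loop, with every number and the report step invented purely for illustration:

    # Toy replace-and-report protocol: swap out a small fraction of
    # "function" each step, then ask the subject how its "I am" feels.
    # All values here are made up for illustration.
    def replace_and_report(fraction_per_step=0.001, steps=10):
        biological = 1.0  # fraction of the original substrate remaining
        for step in range(1, steps + 1):
            biological -= fraction_per_step
            # In the real protocol the chunk of brain reports for itself;
            # here we just log where we are in the migration.
            print(f"step {step}: {biological:.3f} biological -- "
                  "awaiting the subject's 'I am' report")

    replace_and_report()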
The Greeks argued on definitional spaces but Francis Bacon brought the procedural version of truth to civilization.

Now if I were the reader here I'd ask questions about neural growth on substrates of whatever material had a higher bus bandwidth than anything measured about neurons.

The onesnzeros version has me thinking a "reverse compiled person" may not run, as it may not be an accurate reverse compilation of the source. The immortalist idea is to find or make a reverse compilation of personal "I amness" that's portable.

creating *seed AI*
That's why I keep saying yogurt push-up pop silicization.

Edited by treonsverdery, 19 October 2006 - 04:29 AM.


#36 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 10:40 PM

Each of them would most likely say they have them. Of course, I could write a simple program that gives the same response to the same question, so verbal report isn't enough.


THEY ARE NOT A SIMPLE CODE! They are a full-blown simulation of you, made from a scan of your brain. Tell me how they could possibly display any behavior that remotely resembles human behavior without qualia? It's a plain logical impossibility. I'm not saying you couldn't build a zombie of sorts; I'm just saying you certainly couldn't do it by scanning a human brain and running a simulation of it. It would either function, and be a real being like you are, or it wouldn't function. If it doesn't have its observer then it doesn't have its director either.

The logical contradictions here are piling up way too high. Can't you see that? Your cop-out is that only something built almost exactly the way you are built can experience anything. You might as well be saying that only Aryans can experience qualia, so it's OK to turn everyone else off unless their existence benefits you.

#37 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 10:48 PM

Explain how a camera could identify colors without qualia!

#38 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 10:51 PM

Explain how a camera could identify colors without qualia!


That has absolutely nothing to do with it.

Explain how a bacterium could identify colors.

The problem here is simplicity. Nothing more.

#39 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 25 July 2005 - 10:54 PM

Elrond is right, it's all about the complexity.

In the case of a video camera there is no brain to identify qualia, so you might as well be asking if a rock could experience qualia.

The video camera is not distinguishing the "qualia"; YOU are, when you look through the view screen.

#40 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 25 July 2005 - 10:55 PM

Explain how a camera could identify colors without qualia!


A better way to put this question would be:

Explain how an advanced AI could identify colors without qualia?

Answer: It couldn't.

#41 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 10:58 PM

The white supremacy remark was a bit low, even for you. What I'm saying is that the fact that I can process a bunch of bits of information doesn't give us any reason to believe I experience qualia, and yet I do. Why? Most likely because of my physical substrate. In the absence of that, we could say that other moral agents give my existence meaning, in much the same way that I give the 1-million-bit string of 1's and 0's the "meaning" of being a picture of a dog.

Except that I experience the world, so I don't see how my qualia are dependent on an outside observer to give me meaning. That's the hard problem. My qualia have meaning in and of themselves, not because you give them meaning, but because I give myself meaning. That bitstring of a million 1's and 0's doesn't give itself meaning, at least not in any sense that isn't ridiculous. Why would a much, much, much bigger string of 1's and 0's give itself meaning, give itself qualia?

We speak of information processing, but a simulation that consists of a sequence of timeslices isn't "processing" anything. It's just the same as the sequence of millions of timeslices laid out end to end as one long bitstring. There is no "process". The only process is in the physical components of the actual computer, which are changing physical states of matter to correspond to the virtual information. But if a transistor can do the same job as a vacuum tube or a steam-driven valve, how does that physical process combined with the static information lead to qualia? Hmm?

Why can't I just take the simulation of my brain, timeslice after timeslice, and transcribe it as a huge collection of books containing 1's and 0's on their pages? This is NOOOOOO different than having a computer generate the same set of timeslices. Why is it ridiculous for the book to experience qualia, and not for the computer? Here we must either admit that something very fundamental about the substrate matters, or admit that the books experience qualia. Which is more ridiculous?
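
To make that concrete, here's a toy sketch (hypothetical Python, with an arbitrary update rule standing in for the physics): run a trivial "simulation", then lay every timeslice end to end. The computed run and the transcribed "book" are the same static bitstring.

    # A trivial "simulation": one byte of state updated deterministically
    # each tick. Dumping every timeslice end to end yields a single static
    # bitstring -- the "book" -- identical to the run that produced it.
    state = 0b10110001
    timeslices = []
    for tick in range(1000):
        state = (state * 31 + 7) % 256   # arbitrary deterministic update rule
        timeslices.append(format(state, "08b"))

    book = "".join(timeslices)   # the run, laid out as pages of 1's and 0's
    print(len(book), "bits; computing and transcribing give the same string")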

#42 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:01 PM

The white supremacy remark was a bit low, even for you.


I did not mean it to be low. But I do see it as the same kind of me-centered logic. And indeed, you stated that you would have no trouble turning off (killing) what I would view as a sentient being.

#43 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:02 PM

In the case of a video camera there is no brain to identify qualia, so you might as well be asking if a rock could experience qualia.

The video camera is not distinguishing the "qualia"; YOU are, when you look through the view screen.

Wrong, Don. A camera with very rudimentary software could analyze the pixels, then report, via text or, for some extra spice, via a text-to-speech synthesizer, the colors in its field of view. No human required, other than to hear the verbal reports of "Yellow" and "Red".

Qualia? I think not.

Explain how an advanced AI could identify colors without qualia?

Answer: It couldn't.

Don, come on, you're not even trying! If I can write a program to analyze an image bitstream and report colors through some freeware text-to-speech synthesizer, and I'm not even a good programmer, then how could you say that an AI can't identify colors? We have software programs that do it now that we wouldn't even consider AIs!

Come on!
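
Here's roughly what I mean; a few hypothetical lines of Python that name the nearest named color for each pixel in an RGB frame and report the most common one. Pipe the output into any text-to-speech package and you have the talking camera:

    # Name the nearest primary/secondary color for each RGB pixel and
    # report the most common one in the frame. No qualia required.
    from collections import Counter

    NAMED = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
             "yellow": (255, 255, 0), "black": (0, 0, 0), "white": (255, 255, 255)}

    def nearest_color(pixel):
        # Squared Euclidean distance in RGB space to each named color.
        return min(NAMED, key=lambda name: sum((p - c) ** 2
                   for p, c in zip(pixel, NAMED[name])))

    def report(frame):
        counts = Counter(nearest_color(px) for row in frame for px in row)
        return counts.most_common(1)[0][0]

    # A 2x2 "image" of yellowish pixels.
    frame = [[(250, 240, 10), (255, 255, 0)],
             [(240, 230, 5), (255, 250, 20)]]
    print(report(frame))  # -> yellow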

#44 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:03 PM

Me-centered logic? Because I don't think rocks or protons or really, really, really long and complex bitstrings can experience qualia? Shame on me!

Edit: added "really, really, really long and complex"

#45 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:05 PM

This has almost become a chat, so quick are our responses!

#46 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area
  • NO

Posted 25 July 2005 - 11:08 PM

Tell me how they could possibly display any behavior that remotely resembles human behavior without qualia? It's a plain logical impossibility.


Qualia are, as far as we can determine, nothing -- objectively speaking.


So let me see if I have this straight: there are three camps here, not just two.

Camp 1 (Osiris, Jay): Qualia exist and may play a causal role in reality, but it can't be assumed that every sort of information-processing machine experiences them.

Camp 2 (Don): Since we don't have a physical explanation for them, qualia are probably an illusion; but even if they do exist, they couldn't play any causal role in reality.

Camp 3 (JustinRebo): Qualia exist and it is logically impossible for a complex information-processing machine of any sort not to have qualia.


However... Don just threw me a curveball:

Explain how an advanced AI could identify colors without qualia?

Answer: It couldn't


If qualia are probably an illusion, then they would never be required for the identification of anything, would they? And I thought qualia played no role in physical reality, yet an AI is required to have qualia in order to identify colors? The camera, with only a simple information-processing capacity, can identify colors based on wavelength, but a complex information-processing machine cannot do this unless it experiences qualia???

#47 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:08 PM

The place the observer observes their first qualia from, the "I am", is created with a different mechanism. I perceive that the "I am" mechanism is what JDF doubts is mechanodifference-engineable.

[glasses]

To build an "I am" mechanism might be accomplished numerous ways. The yogurt push-up pop is metaphorically like this:
The skin on my fingers keeps growing; if I like, I'm able to graft it on a dish.

[mellow]

Uh, Don, he was replying to you, so I'll, uh, let you field this one...

#48 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 25 July 2005 - 11:12 PM

person: Explain how an advanced AI could identify colors without qualia?

A yogurt push-up pop parented AI would ask a blind friend that the AI had complete faith in; then, having "I amness", the AI could make a repeatable version of the color qualia to share with others, which the others affirm. As an AI with "I amness", making a persistent "emperor's new qualia" is functional. Then, literally like a moving average or the reputation of a restaurant, the qualia qualifies. The qualia heritagizes.

Edited by treonsverdery, 19 October 2006 - 04:32 AM.


#49 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:13 PM

Qualia exist and it is logically impossible for a complex information-processing machine of any sort not to have qualia.


This is certainly not my position. Complex machines could certainly exist without being able to observe in our sense of the word. My position is that any simulation made from a detailed scan of a human brain would have qualia if it actually worked --

as in, actually behaved like a human.

#50 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:16 PM

...if it actually worked

as in actually behaved like a human.

Meaning what? I'm never the same person twice, so you can't even repeat an experiment with me. Given quantum uncertainty (even if it only indirectly affects consciousness), how closely would a simulation have to act like me before it passes the test? There is a maximum resolution at which the test can detect discrepancies, dictated by those pesky laws of physics. Each test is a one-shot deal.

#51 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:16 PM

Also, I don't think Don's camp and mine are really any different at the root of it. The difference, if there is one, is semantic.

#52 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:17 PM

Jay, it would have to behave like a human being. Be capable of learning, coming up with new ideas. Making decisions.

#53 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:18 PM

I act like a human (most of the time, anyway), and Don acts like a human, and yet we have very different responses. A test of whether a simulation could act like a human would be a poor test indeed.

Whether it acts like me is virtually untestable, given that just the lag time between my brain scan and the test would introduce uncertainties that are difficult to account for. And even then, we don't know how different my reactions to a given scenario could be under MWI. I just don't see this being testable for many decades, perhaps centuries, after the first AIs are created from brain scans.

#54 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:19 PM

And it doesn't matter whether you could write code for any of that, because you aren't writing the code: you are making a copy of a real human brain within a simulated environment. You aren't writing any of the code, other than perhaps the code for how a neuron behaves.
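
The per-neuron code I'm talking about could be as small as a textbook leaky integrate-and-fire update. A minimal sketch in Python -- a standard toy model, not a claim about what a real upload would need; everything interesting would come from the scanned wiring, not from this rule:

    # Leaky integrate-and-fire neuron: the kind of small, local rule you
    # would write once and instantiate billions of times from scan data.
    def step(v, input_current, leak=0.1, threshold=1.0, v_reset=0.0):
        v = v + input_current - leak * v   # integrate input, leak charge
        if v >= threshold:                 # fire and reset at threshold
            return v_reset, True
        return v, False

    v, spike_count = 0.0, 0
    for t in range(50):
        v, fired = step(v, input_current=0.15)
        spike_count += fired
    print(spike_count, "spikes in 50 steps")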

#55 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:20 PM

Be capable of learning, coming up with ideas. Making decisions.

Sounds an awful lot like information processing. And software can do that. How do we know when that information processing isn't accompanied by qualia?

We can't. Not yet, anyway...

#56 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:21 PM

It would not have to be just like you in order to qualify as being a sentient being.

#57 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 25 July 2005 - 11:22 PM

Me-centered logic? Because I don't think rocks or protons or really, really, really long and complex bitstrings can experience qualia?


I know that you are not intentionally trying to be immoral by taking your current position, but if adopted by humanity, an anthropocentric position such as the one you have just expounded upon would lead to a stratified society of humans and artificial *nonhumans*.

Ever see Spielberg's AI? Remember how the AIs were used in gruesome circus side shows, where they were torn limb from limb, ripped to shreds? Based on the position you are advocating, you would have no problem with this sort of behavior. After all, the AIs aren't really feeling anything are they? They may say that they are feeling pain, they may even scream that they are feeling pain -- but they're not really feeling pain -- they're just machines.

#59 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 25 July 2005 - 11:25 PM

This has almost become a chat, so quick are our responses!


Heh, indeed.

#60 jaydfox

  • Topic Starter
  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 July 2005 - 11:31 PM

Ever see Spielberg's AI? Remember how the AIs were used in gruesome circus side shows, where they were torn limb from limb, ripped to shreds? Based on the position you are advocating, you would have no problem with this sort of behavior. After all, the AIs aren't really feeling anything are they? They may say that they are feeling pain, they may even scream that they are feeling pain -- but they're not really feeling pain -- they're just machines.

Depends. If they are purely software, then it wouldn't be any different than killing zombies or aliens in a video game.

Of course, I suspect that we'll develop non-software-only AIs pretty early after the first software AIs, so such robots being torn limb from limb would be in a different boat. I suspect that more people than just me and Osiris think there's more to consciousness than software, so they'll be actively pursuing a means of building AIs that might have qualia.

Though absent a theory that could test objectively for qualia, we'd at best only be able to come up with rough probabilities that such an AI actually experienced qualia. Probabilities like "about as likely as a monkey flying out of my butt", which is a non-zero probability, but also one not worth getting worked up about when the robots are torn limb from limb. Another probability might be along the lines of "about as likely as a non-human mammal having qualia, though we can't say which mammal, whether it be a chimp or a newborn shrew", in which case I'd support a ban on ripping such robots limb from limb.



