  LongeCity
              Advocacy & Research for Unlimited Lifespans





Deep Blue Thinking about Chess


10 replies to this topic

#1 Aegist

  • Guest Shane
  • 1,416 posts
  • 0
  • Location:Sydney, Australia

Posted 27 July 2007 - 05:58 AM


I only just noticed Joseph's signature for the first time (either I am unobservant or he recently changed it...) and it says something along the lines of:

"Saying that Deep Blue doesn't think about Chess is like saying an Airplane doesn't fly because it doesn't flap its wings. (Quoteing someone else)"

I disagree with that statement so very completely.

I think a better analogy would be to say that "Deep Blue thinks about chess in exactly the same way that a thermostat thinks about the temperature."

Just because a programmed piece of machinery or software is able to produce the right results does not, in any way, imply that there is a cognizant awareness of what it is doing. Flying is an easily defined concept, and it is not defined by the motion of the wings. Thinking is a much more complicated topic, but one thing is for sure: it is not defined by its output. The fact that Deep Blue can produce an output which beats human players does not mean that it is thinking. That would be treating "thinking" as a behavioural term, when I believe the most common (and the most scientific) understanding of thinking has nothing to do with behaviour.

I can sit very still and do nothing at all for many hours and be thinking the whole time. At the same time, you could sneak up behind me and scald me with hot water, and my reaction would be made without a moment's thought. My reaction would in fact be a reflex, occurring faster than a signal could travel from the scalded patch of skin to my brain and then back out to the parts which control the reaction.


Deep Blue thinks about chess in exactly the same way that a valve thinks about directing flow.

Deep Blue thinks about chess in exactly the same way that a calculator thinks about addition.

Deep Blue thinks about chess in exactly the same way that a cell thinks about reproduction.
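
To make the thermostat analogy concrete, here is a minimal Python sketch of the entirety of what a thermostat "knows" about temperature (a hypothetical illustration, not any real device's firmware):

    # A complete thermostat "mind": one comparison against a setpoint.
    # Hypothetical sketch; no real device's firmware is implied.
    def thermostat(reading_c, setpoint_c=21.0):
        """Return the only 'decision' a thermostat ever makes."""
        return "heat_on" if reading_c < setpoint_c else "heat_off"

    print(thermostat(18.5))  # heat_on
    print(thermostat(23.0))  # heat_off

The device reliably produces the right behaviour with no concept of warmth at all, which is the whole point of the analogy.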

#2 Zarrka

  • Guest
  • 226 posts
  • 0

Posted 27 July 2007 - 06:20 AM

This is where the argument for these kinds of "AI" stands at the moment... What is not interesting about these programs is that they play chess. What IS interesting is how they do it... if they do it through brute force of calculation, then it turns into a giant lookup sheet / calculator, and that's not interesting.

If it is doing it through some kind of heuristic methodology, then it COULD be interesting.

If it is doing it through some kind of modelling that follows the thought process of a real chess player, then it COULD be interesting.

I say "could be interesting" because the distinction between a brute imitation of thought and these programs possibly forming thoughts for themselves needs to be established. If it moves beyond mere imitation and can react to novel stimuli, then that is interesting.

But that's not how Deep Blue was programmed; Deep Blue, as far as I remember, was more of a giant lookup-table-esque program. Computational force alone is not going to cut it.
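
For reference, the standard middle ground between a raw lookup table and human-style modelling is heuristic tree search, which is roughly what Deep Blue actually did. Here is a minimal Python sketch of plain minimax; the toy game and evaluation function are invented purely for illustration, and Deep Blue's real search added alpha-beta pruning, custom hardware, and a far richer evaluation:

    # Minimal minimax: brute-force search guided by a heuristic at the
    # leaves. Illustrative sketch only, not Deep Blue's actual algorithm.
    def minimax(state, depth, maximizing, moves, evaluate):
        children = moves(state)
        if depth == 0 or not children:
            return evaluate(state)  # heuristic guess where search stops
        scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
                  for c in children]
        return max(scores) if maximizing else min(scores)

    # Toy game: a state is an integer, each side may add 1 or 2, and the
    # heuristic simply prefers larger numbers for the maximizer.
    best = max(minimax(m, 3, False, lambda s: [s + 1, s + 2], lambda s: s)
               for m in (1, 2))
    print(best)  # best score the maximizer can force, searching 3 plies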


#3 chubtoad

  • Life Member
  • 976 posts
  • 5
  • Location:Illinois

Posted 27 July 2007 - 07:49 AM

I don't think I believe in this idea of "cognizant awareness" (I think you are talking about the thing that many people call consciousness?). I have long taken the view that consciousness is just a useful category that the brain throws a number of brain processes into when you are thinking about thinking, and which is accompanied by some built-in delusions of possession and importance.

#4 Aegist

  • Topic Starter
  • Guest Shane
  • 1,416 posts
  • 0
  • Location:Sydney, Australia

Posted 27 July 2007 - 07:55 AM

I don't think I believe in this idea of "cognizant awareness" (I think you are talking about the thing that many people call consciousness?). I have long taken the view that consciousness is just a useful category that the brain throws a number of brain processes into when you are thinking about thinking, and which is accompanied by some built-in delusions of possession and importance.

To take that stance you would have to justify why a brain would put energy (more importantly, evolutionary steps) into producing what would otherwise seem to be pointless frills. I have trouble justifying why consciousness exists at all, given that it seems obvious to me that a machine could be designed which acts exactly like me but doesn't "experience" anything like colour, sound, hot, cold, etc. So if you think the brain actively dedicates brain processes to creating this illusion, why on earth would evolution bother adding that seemingly functionless function?

#5

  • Lurker
  • 0

Posted 27 July 2007 - 07:57 AM

> But that's not how Deep Blue was programmed; Deep Blue, as far as I remember, was more of a giant lookup-table-esque program. Computational force alone is not going to cut it.

I highly doubt it is a giant lookup table - there are just way too many combinations in chess. Maybe that would work for tic-tac-toe, but that's about it. A lot of thought goes into these sorts of programs. It's not just a matter of running a simple program on really fast hardware.
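
The back-of-the-envelope numbers (standard textbook estimates, not anything specific to Deep Blue) make the point: with roughly 35 legal moves per position and games of around 80 plies, the game tree holds on the order of 10^123 possible games, the so-called Shannon number, which no table could ever store:

    # Rough size of the chess game tree: ~35 legal moves per position,
    # games of ~80 plies. A standard textbook estimate, not a precise count.
    import math
    print(80 * math.log10(35))  # ~123.5, i.e. about 10^123 possible games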

#6 chubtoad

  • Life Member
  • 976 posts
  • 5
  • Location:Illinois

Posted 27 July 2007 - 09:26 AM

To take that stance you would have to justify why a brain would put energy (more importantly, evolutionary steps) into producing what would otherwise seem to be pointless frills.


People think something exists which they call consciousness, under which some types of thoughts are included and others are not, and they have a strong feeling about possessing it and its being important; but nobody can say what it really is. So the evolutionary work is already done under either of our views. Now the onus is on you to show that there is actually something behind this deep-seated intuition and that it is not simply a delusion. Then you are faced with what I think people mean by the "mind-body problem": why do I feel this pain, instead of just thinking I do and acting accordingly, and so on? But of course, if you act accordingly, that is good enough, so there can be no evolutionary pressure. So it seems unlikely that there is really any consciousness, or feeling, or "I", if the mere belief that there is one is good enough.

There is still an interesting question, the evolutionary one of why we have the delusion we call consciousness, which is the one you asked me. But this is like figuring out why people believe in God once you believe he doesn't exist, and it may be that, like the delusion of God, the delusion of consciousness is easiest to address with memetic arguments. Evidence for this is the difference between the traditional Eastern and Western beliefs about the existence of an "I" or "mind".

#7 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 27 July 2007 - 11:21 AM

Aegist, yeah, I just recently changed my sig (probably less than two weeks ago)...

I see exactly where you are coming from, and I understand perfectly why you think what you do; however, I am taking the quote in a slightly broader sense. It would be a more accurate description of my thoughts if I were to edit it like so:

"Saying that [a non-biological computer] doesn't think about Chess is like saying an Airplane doesn't fly because it doesn't flap its wings"

Further, what I mean is this... there isn't any robust and/or accurate definition of a thought, of what a "thought stream" is, or of what "consciousness" is. As far as we can tell, the brain's, and hence the mind's, processes are grounded in mechanistic interactions between their components... Now, I am using this quote to represent the idea that just because a machine isn't biological doesn't mean it cannot think; I am not limiting the object in question to Deep Blue.

As I discussed with Dimasok in the teleportation thread a week or two back, I believe slapping a label on things as conscious or not is the wrong way to approach the issue. It is the exact same issue as defining what life is: how do we set a definition for life? We can't measure "life-forces", so we set arbitrary constraints on the definition, such as being able to reproduce, being able to self-repair, possibly self-governed movement or decisions... who knows. We will never have a fundamental definition for it the way we do for matter or speed.

[optional_rant]
That being said, imagine yourself playing chess. You are focused mostly on the game, but you can look around you and converse with your opponent... now, you may feel a slight bit of nervousness, fear, excitement, or anger towards the situation; you might be thinking about your wrecked car in the back of your head; or you might be thinking about why you're thinking in the first place... but while making your moves on the board, you are using reason, logic, and your internal concept of cause and effect to play the game. All of those side-thoughts are completely unrelated to the game, yet they are lumped into the same definition of thinking when you try to quantify it within a machine... so you must remember that a machine will/can/might have the same abilities to solve chess as you, in which case it would be thinking, just not about wrecked cars, and it wouldn't have the capacity for emotions.
[/optional_rant]

If a program were designed to play chess using pattern recognition, facial-cue recognition, heuristics based on previous games played against humans, and an understanding of which moves would seem intimidating to the human player, and a parallel program were written simply to pick a move from a predetermined set based on what worked best last time (a basic genetic algorithm; see the sketch below)... how would one define and locate the mechanism that makes the former a thinker and the latter incapable of thought?
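
To be concrete about the latter, here is a toy Python version of the "what worked best last time" picker. The names and structure are invented purely for illustration, and a full genetic algorithm would evolve a whole population of strategies rather than one score table:

    # Toy "pick what worked best last time" player, as described above.
    # Hypothetical illustration; not a real chess engine.
    import random

    class MemoryPlayer:
        def __init__(self, repertoire):
            # One fitness score per move in the fixed, predetermined set.
            self.scores = {move: 0 for move in repertoire}

        def choose(self, mutation_rate=0.1):
            # Usually exploit the best-scoring move; occasionally "mutate"
            # by trying a random one so the repertoire keeps being tested.
            if random.random() < mutation_rate:
                return random.choice(list(self.scores))
            return max(self.scores, key=self.scores.get)

        def feedback(self, move, won):
            # Reinforce moves that preceded wins; penalize the rest.
            self.scores[move] += 1 if won else -1

    player = MemoryPlayer(["e4", "d4", "c4"])
    move = player.choose()
    player.feedback(move, won=True)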

I agree that I could never imagine how such a program would be capable of thought; however, it is impossible to prove that it hasn't attained at least a degree of thought.

While working on Sapphire (my personal AGI project) I have thought to myself: if/when I finish it, I will not have implemented a single "consciousness" function or "thought" function, but it will claim it is capable of such processes because it will be capable of reasoning as well as or better than I can. It will definitely be "aware" of its effects on reality, how it relates to its environment, and what sorts of mechanisms it needs to employ to solve particular problems.

I say this because I feel self-awareness is trivial to implement within a computer program: a simple input, an output, and a method of representing how the output affects the external environment and how it will affect future inputs... Now, this isn't the human brand of self-awareness, because it is far too simplistic, but it is no doubt self-awareness by all reasonable definitions.
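
A minimal Python sketch of that loop, assuming nothing beyond what was just described (a hypothetical illustration, not code from Sapphire or any real project):

    # "Self-awareness" in the minimal sense above: an input, an output,
    # and a record of how the agent's own outputs change its future inputs.
    # Hypothetical illustration; not from Sapphire or any real project.
    class SelfModelingAgent:
        def __init__(self):
            self.model = {}  # (observation, action) -> next observation

        def record(self, observation, action, next_observation):
            # Remember the effect this action had on the environment.
            self.model[(observation, action)] = next_observation

        def predict(self, observation, action):
            # "Awareness" here means predicting the effect of one's own
            # output on what will be perceived next.
            return self.model.get((observation, action))

    agent = SelfModelingAgent()
    agent.record("door_closed", "push", "door_open")
    print(agent.predict("door_closed", "push"))  # door_open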

So, in short, I think defining thought is analogous to trying to define life: you cannot define it without setting arbitrary constraints on the definition, at which point you have begun to abstract your definition beyond the amount of information that observing the mechanism can provide in the first place. If the operations of every single brain cell in a given human brain were emulated on a computer, the computer would have the same responses to input and would output the same information the original human brain would (assuming identical sensory mediums). So, now that the mind has been converted to a super-complex program, try to define the mechanism that provides that computer program with thought, and at the same time show why a program traversing a linked list is not thinking.

Because of these apparent limitations of objective observation, I have adopted the view that "thinking", "consciousness", and "life" must be measured directly by their capabilities rather than as discrete yes/no values. But we must remember that two given devices with the same capabilities may have completely different internal representations of each of those mechanisms.

"Deep Blue thinks about chess in exactly the same way that a thermostat thinks about the temperature."

Well, is this due to the fact that they are both deterministic devices? The human brain is deterministic as well, and it is worth noting that the statement fits perfectly within my theory too.

#8 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 27 July 2007 - 11:38 AM

After re-reading your post to make sure I didn't miss anything, I realized that the way I was describing how to measure a machine's capacity for thought (by its capabilities) may have sounded like I was referring to behaviour, as in the following quote. However, I meant it as the machine's capability to form an output based on the input, not what it actually outputs. So, once again, I agree... capacity for thought or intelligence cannot accurately be measured by a machine's output.

That would be treating "thinking" as a behavioural term, when I believe the most common (and the most scientific) understanding of thinking has nothing to do with behaviour



#9 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 27 July 2007 - 12:02 PM

Consciousness is such a problematic thing..

#10 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 27 July 2007 - 01:02 PM

Perhaps what Aegist is trying to say is that the analogies presented are not trying to convey thought, but rather the similarity in function between the objects being mentioned.


#11 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 27 July 2007 - 01:21 PM

Hmm, to try to sum it up: I do not believe there is a single type of "thought". I am sure there are different variations of how thoughts are mediated in lizards, primates, and fish... but they are clearly thinking, so why would the formats that evolved naturally be the only ones that can be classified as thought?

The way I see it, until someone can bridge the two worlds of objective observation and subjective experience, I will base my theories on what capabilities each machine has to manipulate data (completely independent of internal representation).



