Aegist, Yeah, I just recently changed my sig (probably less than two weeks ago)...
I see exactly where you're coming from, and I understand why you think what you do; however, I am taking the quote in a slightly broader sense. It would be a more accurate description of my thoughts if I edited it like so:
"Saying that [a non-biological computer] doesn't think about chess is like saying an airplane doesn't fly because it doesn't flap its wings."
Further, what I mean is this: there isn't any robust or accurate definition of a thought, of a "thought stream", or of what "consciousness" is. As far as we can tell, the brain's (and hence the mind's) processes are grounded in mechanistic interactions between their components. I am using this quote to represent the idea that just because a machine isn't biological doesn't mean it cannot think; I am not limiting the object in question to Deep Blue.
As I discussed with Dimasok in the teleportation thread a week or two back, I believe slapping a "conscious" or "not conscious" label on things is the wrong way to approach the issue. It is the exact same problem as defining what life is: how do we set a definition for life? We can't measure "life-forces", so we set arbitrary constraints on the definition, such as being able to reproduce, being able to self-repair, perhaps self-governed movement or decisions... who knows. We will never have a fundamental definition for it the way we do for matter or speed.
[optional_rant]
That being said, imagine yourself playing chess. You are focused mostly on the game, but you can look around and converse with your opponent. You may feel a slight bit of nervousness, fear, excitement, or anger toward the situation; you might be thinking about your wrecked car in the back of your head, or about why you're thinking in the first place. But while making your moves on the board, you are using reason, logic, and your internal concept of cause and effect to play the game. All of those side-thoughts are completely unrelated to the game, yet they get lumped into the same definition of thinking when you try to quantify it within a machine. So you must remember that a machine will/can/might have the same abilities to solve chess as you, in which case it would be thinking, just not about wrecked cars, and without the capacity for emotions.
[/optional_rant]
Suppose a program were designed to play chess using pattern recognition, facial-cue recognition, heuristics based on previous games played against humans, and an understanding of how to make moves that would seem intimidating to a human player; and suppose a parallel program were written to pick a move from a pre-determined set of moves based on what worked best last time (a basic genetic algorithm). How would one define and locate the mechanism that makes the former a thinker and the latter incapable of thought?
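To make the latter program concrete, here's a toy sketch in Python of the "pick what worked best last time, occasionally mutate" idea (the move names and fitness scheme are my own illustration, not a real chess engine):

```python
import random

# Hypothetical opening-move pool; a real engine would generate
# legal moves from the board state instead.
MOVES = ["e4", "d4", "c4", "Nf3"]

# Fitness table: how well each move has performed in past games.
fitness = {m: 0 for m in MOVES}

def pick_move(mutation_rate=0.1):
    """Pick the historically best move, occasionally 'mutating'
    (trying a random move) so the program keeps exploring."""
    if random.random() < mutation_rate:
        return random.choice(MOVES)
    return max(MOVES, key=lambda m: fitness[m])

def record_result(move, won):
    """Reinforce moves that led to wins, penalize the rest."""
    fitness[move] += 1 if won else -1

# Simulated feedback from a couple of past games:
record_result("e4", True)
record_result("d4", False)
print(pick_move(mutation_rate=0.0))  # -> e4
```

The point of the sketch is how little machinery is involved: a lookup table and a max. Whatever separates "thought" from "no thought" clearly isn't going to be found in the code's structure alone.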
I agree that I could never imagine how such a program would be capable of thought; however, it is impossible to prove that it hasn't attained at least some degree of thought.
While working on Sapphire (my personal AGI project), I have thought to myself: if/when I finish it, I will not have implemented a single "consciousness" function or "thought" function, yet it will claim it is capable of such processes because it will be capable of reasoning as well as, or better than, I can. It will definitely be "aware" of its effects on reality, of how it relates to its environment, and of what sorts of mechanisms it needs to employ to solve particular problems.
I say this because I feel self-awareness is trivial to implement within a computer program: a simple input, an output, and a representation of how that output affects an external environment and how it will, in turn, affect future inputs. This isn't the human brand of self-awareness, because it is far too simplistic, but it is self-awareness by any reasonable definition.
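A minimal sketch of that input → output → self-model loop, in Python (the class, the numeric "environment", and the toy decision rule are all my own illustrative choices):

```python
class SelfModelingAgent:
    """Toy agent that tracks how its own outputs change its
    environment and how that feeds back into future inputs."""

    def __init__(self):
        self.environment = 0   # external state the agent acts on
        self.self_model = []   # record of (action taken, effect observed)

    def step(self, sensed_input):
        # Decide an output from the input (trivially: push the value toward 10).
        action = 1 if sensed_input < 10 else -1
        before = self.environment
        self.environment += action              # act on the environment
        # Represent how the output changed the world:
        self.self_model.append((action, self.environment - before))
        return self.environment                 # the changed world is the next input

agent = SelfModelingAgent()
reading = agent.environment
for _ in range(3):
    reading = agent.step(reading)
print(agent.self_model)  # each entry: (what I did, what it changed)
```

After three steps the agent's self_model holds a record of its own effects on its world, which is exactly the bare-bones sense of "self-awareness" I mean above: too simple to be the human kind, but structurally the same loop.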
So, in short, I think defining thought is analogous to defining life: you cannot define it without setting arbitrary constraints, at which point you have abstracted your definition beyond what observing the mechanism can provide in the first place. If the operations of every single cell in a given human brain were emulated on a computer, the computer would have the same responses to input and would output the same information the original brain would (assuming identical sensory mediums). So, now that the mind has been converted to a super-complex program, try to define the mechanism that provides that program with thought while at the same time showing why a program traversing a linked list is not thinking.
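For contrast, here is the linked-list traversal I have in mind, about as purely mechanical as a program gets (a standard textbook version, in Python):

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def traverse(head):
    """Walk the list and collect each value: pure mechanism,
    no model of anything beyond the next pointer."""
    values = []
    node = head
    while node is not None:
        values.append(node.value)
        node = node.next
    return values

head = Node(1, Node(2, Node(3)))
print(traverse(head))  # -> [1, 2, 3]
```

Nobody would call this thinking, yet the emulated brain would, at bottom, be built from steps no different in kind from these pointer-follows.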
Because of these apparent limitations of objective observation, I have adopted the view that "thinking", "consciousness", and "life" must be measured directly by their capabilities rather than as discrete yes/no values. But we must remember that two devices with the same capabilities may have completely different representations of each of those mechanisms.
"Deep Blue thinks about chess in exactly the same way that a thermostat thinks about the temperature."
Well, is this because they are both deterministic devices? The human brain is deterministic as well, and it is worth noting that the statement fits perfectly within my view too.
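After all, the thermostat's entire "thought process" fits in a few lines; a bang-bang control sketch (setpoint and hysteresis values are arbitrary):

```python
def thermostat(temperature, setpoint=20.0, hysteresis=0.5):
    """The thermostat's whole 'mind': a deterministic comparison
    mapping a temperature reading to a heater command."""
    if temperature < setpoint - hysteresis:
        return "heat_on"
    if temperature > setpoint + hysteresis:
        return "heat_off"
    return "hold"

print(thermostat(18.0))  # -> heat_on
print(thermostat(22.0))  # -> heat_off
```

The quote works precisely because this mapping is so thin, but by the capability-based measure I'm arguing for, Deep Blue and the thermostat differ enormously in degree even if both are deterministic in kind.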