  LongeCity
              Advocacy & Research for Unlimited Lifespans





Turing test of six computer programs next Sunday!


19 replies to this topic

#1 Shoe

  • Guest, F@H
  • 135 posts
  • 1

Posted 05 October 2008 - 07:36 AM


Article in The Guardian:

No machine has yet passed the test devised by Turing, who helped to crack German military codes during the Second World War. But at 9am next Sunday, six computer programs - 'artificial conversational entities' - will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognised 'thinking' machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be 'conscious' - and if humans should have the 'right' to switch it off.


This is pretty exciting stuff! I really hope we'll get a detailed follow-up article on how it went.

Edited by Shoe, 05 October 2008 - 07:39 AM.


#2 Shoe

  • Topic Starter
  • Guest, F@H
  • 135 posts
  • 1

Posted 05 October 2008 - 07:45 AM

Ok, I may have gotten a bit carried away; I didn't read the whole article before posting. Ultra Hal will very probably not succeed. I hope the other programs are better.


#3 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 05 October 2008 - 12:59 PM

You must remember that most of these programs are simply programmed to respond correctly to certain combinations of words.
They will all fail, and even if one somehow does not, it still will not be intelligent; it is far from what we aim for.

In order to program life, you must start from the basic emotions and build on top of them.
After that you only have an animal; the next stage is to build its understanding, self-learning, and capacity for improvement.
That stage, in programs, will evolve much faster than it did in humans.
But the difference with stage 2 alone, without stage 1, is that you would have to feed the program information, and it would only ever give you back the correct pattern.

If you have stage 1 as well as stage 2, you will have a self-aware living being.

I am one of the people who claim emotions are mostly meaningless, but people must remember: without the basic emotions, nothing will ever act or see a reason to act.

#4 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 05 October 2008 - 03:43 PM

You must remember that most of these programs are simply programmed to respond correctly to certain combinations of words.
They will all fail, and even if one somehow does not, it still will not be intelligent; it is far from what we aim for.

In order to program life, you must start from the basic emotions and build on top of them.
After that you only have an animal; the next stage is to build its understanding, self-learning, and capacity for improvement.
That stage, in programs, will evolve much faster than it did in humans.
But the difference with stage 2 alone, without stage 1, is that you would have to feed the program information, and it would only ever give you back the correct pattern.

If you have stage 1 as well as stage 2, you will have a self-aware living being.

I am one of the people who claim emotions are mostly meaningless, but people must remember: without the basic emotions, nothing will ever act or see a reason to act.



I agree with winterbreeze.


I'm not really concerned with a computer passing the Turing test. I just want an intelligent computer, and the computers in the article are not intelligent; they're just programmed to pass the Turing test. I see it as a pointless endeavor, except that if some computer does pass it, it could raise some public interest, and that's always good, as long as neo-Luddites don't come screaming that HAL has arrived to bring about the apocalypse.

#5 nanostuff

  • Guest
  • 17 posts
  • 0

Posted 05 October 2008 - 10:14 PM

they're just programmed to pass the Turing test.


You can't program an AI to JUST pass a Turing test. To pass the Turing test, the AI must represent itself as a competent human and communicate in a competent manner. If the AI can communicate competently with people, it must necessarily also exhibit a competent thought process, which that communication requires.

If any of these things pass their implementation of the Turing test, it will mean either that the testers were incompetent or that the AI was competent. If the latter, that would be incredible.

#6 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 06 October 2008 - 02:18 AM

they're just programmed to pass the Turing test.


You can't program an AI to JUST pass a Turing test. To pass the Turing test, the AI must represent itself as a competent human and communicate in a competent manner. If the AI can communicate competently with people, it must necessarily also exhibit a competent thought process, which that communication requires.

If any of these things pass their implementation of the Turing test, it will mean either that the testers were incompetent or that the AI was competent. If the latter, that would be incredible.



Then we can be sure that these computers will fail, unless the testers are really incompetent.

#7 Neurosail

  • Life Member, F@H
  • 311 posts
  • 0
  • Location:Earth
  • NO

Posted 06 October 2008 - 03:17 AM

I don't think I could pass the Turing test. I just scored 30 out of 50 points on an autism-spectrum quotient. The machine/computer probably has a warmer, friendlier voice over the phone than I have. If I can't read a person's face, it is hard for me to guess what the other person is feeling. That is the test: a person goes in one room and a computer in another, and a person or persons ask questions over the phone and try to guess which room has the real person and which has the computer. Most people would choose the computer over me! ;)

#8 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 06 October 2008 - 04:39 AM

Jimmy, you have never spoken to a computer before.
And the test is done over text chat.

When you talk to a person, there is sense behind the words; there is the idea of the conversation behind them.

When a computer talks, especially one programmed specifically to pass the Turing test, you cannot get much out of it.

I must also remind you that the computer chatting with you has no real knowledge of any subject, so all you need to do is simply... ask it about details.

#9 Traclo

  • Guest, F@H
  • 101 posts
  • 3
  • Location:Ontario

Posted 06 October 2008 - 10:59 AM

Does anyone else find the HAL attempt in the article hilariously bad?
They had me second-guessing myself by putting the human first; I was thinking that maybe, just maybe, this was really a computer and I was simply underestimating the state of the art. Then they added the HAL response and all my doubts were cleared!
Not to insult the programmers on the HAL project, just to point out how far we really are from passing that darn Turing test...

(The degradation of HAL's spelling was what got me... I'm just picturing this thing melting down, typical movie style. 'DOES NOT COMPUTE')

#10 Shoe

  • Topic Starter
  • Guest, F@H
  • 135 posts
  • 1

Posted 06 October 2008 - 04:02 PM

Does anyone else find the HAL attempt in the article hilariously bad?


Yes. My enthusiasm went down by several degrees when I read the conversation between Hal and KW.

Edited by Shoe, 06 October 2008 - 04:02 PM.


#11 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,074 posts
  • 2,000
  • Location:Wausau, WI

Posted 06 October 2008 - 04:12 PM

I just wanted to reiterate that just because the computer might not have the same reasoning/language capability as a human doesn't mean there isn't some sort of thought process going on. It might be a very simple algorithm, but it represents a "thought pattern". If the computer can functionally carry on a conversation and fool humans, then it doesn't matter what is going on "behind the scenes" within the silicon. It is functionally a human conversationalist.

Think back to the Deep Blue match with Kasparov. The computer did not have any understanding of chess strategy; it was just a big brute-force decision tree, but that did not matter in the end. It was functionally at the level of a human grandmaster. Remember that Kasparov complained afterward that some human must have been assisting the computer, because the moves were eerily similar to what a human grandmaster would do.
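The "brute-force decision tree" idea is essentially minimax search: exhaustively expand every move sequence to some depth, score the leaves, and back the scores up the tree. A minimal sketch in Python, using an invented toy game rather than chess (the move list and evaluation function here are illustrative assumptions, not anything from Deep Blue):

```python
# Minimal sketch of brute-force game-tree (minimax) search, the core idea
# behind engines like Deep Blue. The toy "game" is hypothetical: players
# alternately add -1, +1, or +2 to a counter, and the maximizing player
# wants the final value to be high.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively search the game tree to a fixed depth."""
    if depth == 0:
        return evaluate(state)
    scores = [
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, evaluate)
        for m in moves(state)
    ]
    return max(scores) if maximizing else min(scores)

# Toy game definition (illustrative, not a real game engine).
moves = lambda s: [-1, 1, 2]       # legal moves from any state
apply_move = lambda s, m: s + m    # applying a move shifts the counter
evaluate = lambda s: s             # maximizer prefers larger counters

# Pick the first move that leads to the best depth-3 outcome.
best = max(
    moves(0),
    key=lambda m: minimax(apply_move(0, m), 3, False,
                          moves, apply_move, evaluate),
)
print(best)  # → 2
```

No chess knowledge appears anywhere in the search itself, which is exactly Mind's point: the "understanding" lives entirely in the evaluation function and the sheer depth of enumeration.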

#12 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 06 October 2008 - 04:26 PM

I just wanted to reiterate that just because the computer might not have the same reasoning/language capability as a human doesn't mean there isn't some sort of thought process going on. It might be a very simple algorithm, but it represents a "thought pattern". If the computer can functionally carry on a conversation and fool humans, then it doesn't matter what is going on "behind the scenes" within the silicon. It is functionally a human conversationalist.

Think back to the Deep Blue match with Kasparov. The computer did not have any understanding of chess strategy; it was just a big brute-force decision tree, but that did not matter in the end. It was functionally at the level of a human grandmaster. Remember that Kasparov complained afterward that some human must have been assisting the computer, because the moves were eerily similar to what a human grandmaster would do.


Your point is valid yet mistaken.
This is not intelligence; this is, as you said, brute force.
If we want computer intelligence, it requires understanding.

If we satisfy ourselves with a problem-solving computer alone, then however effective it is, and I do not deny that it is effective, there is still much to be done.

#13 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,074 posts
  • 2,000
  • Location:Wausau, WI

Posted 06 October 2008 - 04:45 PM

Your point is valid yet mistaken. This is not intelligence


It depends on your definition of intelligence. I view human intelligence as a huge compilation of complex algorithms, one of which IS brute-force calculation. Most of what the human brain does is pattern recognition. Computers do that too.

Unless you are saying there is something supernatural about intelligence, then computers will eventually achieve our level, even if it is bit-by-bit, algorithm-by-algorithm.
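The pattern-recognition point can be made concrete with about the simplest possible machine example, a one-nearest-neighbour classifier (the data points and labels below are invented purely for illustration):

```python
# Minimal sketch of machine pattern recognition: a 1-nearest-neighbour
# classifier. The "training data" is made up for illustration.

def classify(point, examples):
    """Label a point by its single nearest labelled example."""
    def dist2(a, b):
        # Squared Euclidean distance; good enough for ranking neighbours.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: dist2(point, ex[0]))
    return nearest[1]

# Hypothetical labelled observations: (feature vector, label).
examples = [((0.0, 0.0), "cold"), ((0.1, 0.2), "cold"),
            ((1.0, 1.0), "hot"), ((0.9, 1.1), "hot")]

print(classify((0.2, 0.1), examples))  # → cold
print(classify((0.8, 0.8), examples))  # → hot
```

Nothing here "understands" hot or cold; it just matches patterns against stored examples, yet from the outside it behaves like a competent judgement.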

Hans Moravec says it better (link):

(**remember, this was written in 1997...it is even more true today)

The Great Flood

Computers are universal machines, their potential extends uniformly over a boundless expanse of tasks. Human potentials, on the other hand, are strong in areas long important for survival, but weak in things far removed. Imagine a "landscape of human competence," having lowlands with labels like "arithmetic" and "rote memorization", foothills like "theorem proving" and "chess playing," and high mountain peaks labeled "locomotion," "hand-eye coordination" and "social interaction." We all live in the solid mountaintops, but it takes great effort to reach the rest of the terrain, and only a few of us work each patch.

Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks, but, at the present rate, those too will be submerged within another half century. I propose (Moravec 1998) that we build Arks as that day nears, and adopt a seafaring life! For now, though, we must rely on our representatives in the lowlands to tell us what water is really like.

Our representatives on the foothills of chess and theorem-proving report signs of intelligence. Why didn't we get similar reports decades before, from the lowlands, as computers surpassed humans in arithmetic and rote memorization? Actually, we did, at the time. Computers that calculated like thousands of mathematicians were hailed as "giant brains," and inspired the first generation of AI research. After all, the machines were doing something beyond any animal, that needed human intelligence, concentration and years of training. But it is hard to recapture that magic now. One reason is that computers' demonstrated stupidity in other areas biases our judgment. Another relates to our own ineptitude. We do arithmetic or keep records so painstakingly and externally, that the small mechanical steps in a long calculation are obvious, while the big picture often escapes us. Like Deep Blue's builders, we see the process too much from the inside to appreciate the subtlety that it may have on the outside. But there is a non-obviousness in snowstorms or tornadoes that emerge from the repetitive arithmetic of weather simulations, or in rippling tyrannosaur skin from movie animation calculations. We rarely call it intelligence, but "artificial reality" may be an even more profound concept than artificial intelligence (Moravec 1998).

The mental steps underlying good human chess playing and theorem proving are complex and hidden, putting a mechanical interpretation out of reach. Those who can follow the play naturally describe it instead in mentalistic language, using terms like strategy, understanding and creativity. When a machine manages to be simultaneously meaningful and surprising in the same rich way, it too compels a mentalistic interpretation. Of course, somewhere behind the scenes, there are programmers who, in principle, have a mechanical interpretation. But even for them, that interpretation loses its grip as the working program fills its memory with details too voluminous for them to grasp.

As the rising flood reaches more populated heights, machines will begin to do well in areas a greater number can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread. When the highest peaks are covered, there will be machines that can interact as intelligently as any human on any subject. The presence of minds in machines will then become self-evident.



#14 Traclo

  • Guest, F@H
  • 101 posts
  • 3
  • Location:Ontario

Posted 06 October 2008 - 05:52 PM

But isn't the point of the Turing test to differentiate between area-specific intelligence (Deep Blue) and general intelligence? I'm not suggesting that it is a perfect way to do so, but rather that there is in fact a difference between the intelligence being tested for and the intelligence you define, Mind.

In the analogy you used, the land is gradually being covered in water, but that isn't a good picture of the AI we have now. What we have now is focused on specific areas of expertise and remains useless in almost all other areas. Take Deep Blue and try to get it to do the Turing test: it will inevitably fail, because that is not what it was designed to do. My point is that the water analogy is misleading. Despite having machines covering the lowlands and approaching the bases of our mountains, the water itself is composed of many different, incompatible programs. Even if we had programs that could outperform humans in all tasks, unless those programs were unified into a single applicable program, I would still argue that the machines did not have true intelligence.

I think that is also what Winterbreeze meant...




Edit: When I say "your analogy", Mind, I just mean your use of his analogy to illustrate a point... not that you subscribe to the analogy exactly (I wouldn't want to presume).

Edited by Traclo, 06 October 2008 - 05:54 PM.


#15 rombus

  • Guest
  • 42 posts
  • 2
  • Location:California

Posted 12 October 2008 - 10:06 PM

The winner was Elbot. Here is a link to Elbot if you're interested in chatting with it. You must press its red button.

Elbot convinced 25% of the human judges that they were talking to another human, which is below the 30% threshold established for passing this version of the Turing test.

Here is the AP article: link
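For what it's worth, the arithmetic behind that result is simple. The judge counts below are an assumption taken from press coverage of the contest (reportedly 3 of 12 judges were fooled), and the 30% pass mark echoes Turing's 1950 prediction. A sketch:

```python
# Scoring sketch for the contest described above. Judge counts are
# assumptions drawn from press coverage (reportedly 3 of 12 fooled);
# the 30% pass mark traces back to Turing's 1950 prediction.

judges = 12   # human interrogators (assumed)
fooled = 3    # judges who mistook Elbot for a human (assumed)

success_rate = fooled / judges
threshold = 0.30  # pass mark used by the contest organisers

print(f"{success_rate:.0%}")  # → 25%
print("pass" if success_rate >= threshold else "fail")  # → fail
```

So Elbot came within a single judge of the pass mark: fooling one more would have given 4/12 ≈ 33%.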

#16 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 October 2008 - 11:01 PM

The winner was Elbot. Here is a link to Elbot if you're interested in chatting with it. You must press its red button.

Elbot convinced 25% of the human judges that they were talking to another human, which is below the 30% threshold established for passing this version of the Turing test.

Here is the AP article: link


Lol, did this Elbot fool anybody? They must have asked crackpots to take this test... All Elbot did was give evasive answers and try to steer the conversation where it wanted it to go... almost never a direct answer. It's very easy to spot that it's not a human, even if I didn't know I wasn't chatting with one.

#17 rombus

  • Guest
  • 42 posts
  • 2
  • Location:California

Posted 12 October 2008 - 11:23 PM

Lol, did this Elbot fool anybody? They must have asked crackpots to take this test... All Elbot did was give evasive answers and try to steer the conversation where it wanted it to go... almost never a direct answer. It's very easy to spot that it's not a human, even if I didn't know I wasn't chatting with one.


Looks like he fooled someone, which boggles the mind.

#18 rombus

  • Guest
  • 42 posts
  • 2
  • Location:California

Posted 13 October 2008 - 01:06 AM

Lol, did this Elbot fool anybody? They must have asked crackpots to take this test... All Elbot did was give evasive answers and try to steer the conversation where it wanted it to go... almost never a direct answer. It's very easy to spot that it's not a human, even if I didn't know I wasn't chatting with one.


Looks like he fooled someone, which boggles the mind.



I mistakenly referred to Elbot as a "he". However, upon asking its gender, it replied, "I don't think that's so important. With all the adapters they have today you can connect anything everywhere."

#19 Traclo

  • Guest, F@H
  • 101 posts
  • 3
  • Location:Ontario

Posted 13 October 2008 - 02:42 AM

Elbot convinced 25% of the human judges that they were talking to another human, which is below the 30% threshold established for passing this version of the Turing test.

It might just be a matter of personal opinion, but I have always heard that to pass, more than 50% of the judges would need to think it's a human, and it would have to score above 50% consistently. But I guess it's just a matter of stringency.

Does anyone know what percentage of humans pass the Turing test? I know it sounds funny, but I was just wondering what percentage are misinterpreted as machines (whether they actively try to fool the judges or not). I ask this because, to truly pass the test, a machine would need to achieve at least the success percentage that humans do.


#20 rombus

  • Guest
  • 42 posts
  • 2
  • Location:California

Posted 13 October 2008 - 04:07 AM

Elbot convinced 25% of the human judges that they were talking to another human, which is below the 30% threshold established for passing this version of the Turing test.

It might just be a matter of personal opinion, but I have always heard that to pass, more than 50% of the judges would need to think it's a human, and it would have to score above 50% consistently. But I guess it's just a matter of stringency.


That may very well be the case. I think the 30% figure is what they were aiming for in this particular contest, not an officially established percentage that everyone agrees upon.



