
Do we really want a FAI to pass the Turing Test?



#1 jrhall

  • Guest
  • 17 posts
  • 0

Posted 04 May 2006 - 09:21 PM


I just found the SingInst site recently and have been reading many interesting papers on Friendly AI. Coming on the heels of reading 'The Singularity is Near', I can appreciate the need to build something friendly.

With that in mind I was wondering if it made sense to design and build something that will be judged a success or failure based on passing the Turing Test. As a criterion for deciding whether something intelligent was actually built, it's a brilliant idea. But, seeing as even the friendliest, most considerate human still has the same basic angers, fears and desires as everyone else, would you really want to evolve a complex system that exhibits those emotions? For example, if during the course of the Turing Test I threatened the FAI with destruction, what would a realistic human reaction be? Self-defense? Getting pissed off? Or maybe just the conclusion that humans are threatening? This is an extreme example, but there would be all kinds of lesser affronts that the AI would need to withstand in order to convince a skeptical judge. Why would you want to program these emotions into a FAI? How about greed, a strong desire to reproduce, territoriality, lust?

#2 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 04 May 2006 - 09:32 PM

The Turing Test is a test of intelligence, but in a very limited way. It's not really much different from using chess or Go or another game as a test of intelligence.

In the case of the Turing Test, what is being tested is NOT intelligence per se, but a very narrow skill of being able to impersonate a human well. Just like chess is not a test of intelligence per se, but a test of a very narrow skill of being able to play chess well.

Now admittedly, the Turing Test is not as narrow as the game of chess, as far as the skill being tested. But it's still very narrow.


#3 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 04 May 2006 - 11:14 PM

I'm no expert on these matters, but I think you trivialize the Turing Test. I don't think a superficial impersonation of a human qualifies as passing a Turing Test. (There are already online chat bots that do that.) Passing a Turing Test means that an entity demonstrates itself to be human-equivalent IN DEPTH through extensive investigative conversation. The ability to converse in human language is merely the interface of the Turing Test; it is not the Turing Test itself.

---BrianW

#4 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 04 May 2006 - 11:43 PM

The point, however, is that it is human-style intelligence, regardless of its triviality. The game of Go is a MUCH better measure of intelligence than the game of Chess, yet the rule base is actually far simpler. The domain of knowledge being tested isn't the point of the Turing Test, it's the degree of complex intelligence exhibited within that domain. The Turing Test is useful because it's such a broad domain compared to trivially narrow domains like the game of Go, or performing speech recognition. But it's still a narrow domain compared with the domain of general intelligence (an abstract concept).

The point is, the Turing Test is a very good test of intelligence, but it's still very narrow. It's a great deal broader than a game like chess or Go, and far more difficult, so yes, it's a good measure of intelligence. But it's still too narrow to serve as a general measure of intelligence.

#5 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 05 May 2006 - 12:03 AM

I guess, think of it this way. Liken the Turing Test to being able to successfully impersonate Tom Cruise. The whole nine yards: the body language, the manner of speech, the Scientology bit, his life experiences, his politics, etc.

Now, how useful would such an AI be, versus an AI that was just designed to be a normal, well-adjusted human being? In other words, would you try to program a duplicate of Tom Cruise, or would you just try to program a person?

Well, aiming for a normal, well-adjusted human being is probably a narrower target, with respect to AGI, than aiming for Tom Cruise is with respect to normal, well-adjusted human beings. It seems broad, because humans can do so many things, but it really is incredibly narrow, and there are lots of things about humans you don't want to duplicate or imitate (just as there are many things about Tom Cruise that...)

#6 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 05 May 2006 - 01:00 AM

Let me put it another way. The ability to pass a comprehensive Turing Test would be a *sufficient* condition for an entity to be considered an AGI. However, per your arguments, it may not be necessary.

---BrianW

#7 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 05 May 2006 - 03:36 AM

I doubt anybody who could actually build an intelligence would need a Turing Test to tell them they were successful. I could be wrong, but I really doubt it.

#8 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 05 May 2006 - 05:56 AM

I doubt the first AGI will be able to pass the Turing Test anytime before it enters into a hard takeoff. But hopefully it will be capable of speaking and understanding in a stilted, formal English, allowing a higher bandwidth of information transfer between the AI and the programmers than just inputting code. With this language, it may be capable of fooling some Turing Testers, but probably not a determined skeptic.

#9 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 05 May 2006 - 10:02 AM

That's about right, Brian.

#10 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 05 May 2006 - 08:48 PM

Michael wrote:

I doubt the first AGI will be able to pass the Turing Test anytime before it enters into a hard takeoff.

That seems to be a contradiction. Modern technology notwithstanding, humans still oversee all important physical processes of the economy. It seems to me that the only way a "hard takeoff" could happen is for an AGI to understand human nature so well that it could manipulate humans into doing its bidding (giving it more resources, more control, etc.). That implies skill even beyond that necessary for passing a Turing Test. Without such manipulative skill, even the most powerful AGI remains a process sitting in a box.

If not manipulation of humans into doing stupid things (like the infamous failure to distinguish between programs and data pioneered by Microsoft), what do you foresee?

---BrianW

#11 jrhall

  • Topic Starter
  • Guest
  • 17 posts
  • 0

Posted 06 May 2006 - 12:08 AM

So even before the question of whether an AGI can pass the Turing Test before or after hard takeoff, my question is: would you want it to *ever* pass the Turing Test? After reading Turing's original paper, my understanding is that for an AGI to 'pass' the Turing Test it would either have to possess the emotions necessary to fool a skeptical judge or would at least have to imitate those emotions. I don't see much point in just imitating the emotions (except maybe to facilitate communication with humans?), and it seems dangerous if the AGI possessed emotions such as fear, greed and anger.

I bring this up because it seems that the Turing Test is still the gold standard for judging an AGI. I've seen it cited by both Kurzweil and Yudkowsky (though I may be out of date here).

#12 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 06 May 2006 - 10:30 PM

Brian,

A sufficiently intelligent AGI will invent sophisticated nanotechnology using a sequence of tools that may begin with something as simple as an STM or custom-synthesized proteins.

Alternatively, an AGI could take files or computers hostage and distribute itself across a million machines. Even a process based on CEV might choose to do this. The rise of a clumsy unFriendly AGI might lead to an event similar to the fictional Pluto's Kiss.

An AGI capable of writing code could also manipulate technological progress by contributing selectively to open source projects.

#13 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 06 May 2006 - 11:51 PM

my question is would you want it to *ever* pass the Turing Test

Only if it's FRIENDLY!!!

I bring this up because it seems that the Turing Test is still the gold standard for judging an AGI.

It's not that important. It is important in the sense that it gives definition to some important ideas, but realistically, an AGI passing the Turing Test will be a symbolic event, if such a thing ever happens. We could easily distinguish something with intelligent behavior from something without it long, long before it would be anywhere near capable of passing a Turing Test.

#14 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 07 May 2006 - 12:24 AM

A sufficiently intelligent AGI will invent sophisticated nanotechnology using a sequence of tools that may begin with something as simple as an STM or custom-synthesized proteins.

Such a process could not occur without extensive human involvement. Even allowing for maximum stupidity and malevolence of human beings, such a nanotechnology bootstrapping program would stretch into years.

Alternatively, an AGI could take files or computers hostage and distribute itself across a million machines. Even a process based on CEV might choose to do this. The rise of a clumsy unFriendly AGI might lead to an event similar to the fictional Pluto's Kiss.

That just rolls AGI into the general problem of computer security; the eternal race between malevolent people and programs and the people and programs that defend against them. I take this risk more seriously than the risk of an AGI suddenly hijacking the physical economy, but I don't think hacking the world's computers qualifies as a "hard takeoff" given the smackdown that would follow.

To me, ideas like "hard takeoff" and Gigadeath seem to be the AGI equivalent of "grey goo." They get attention, but is the attention justified, and is the attention beneficial? Grey goo has been debunked, but not before needlessly tarnishing the image of molecular nanotech.

"Hard takeoff," to whatever extent the concept has been defined, would be more credible if somebody would write some detailed scenarios for how it could happen. For instance, repair scenarios for cryonics patients have been written to make it easier to visualize that intuitively implausible notion. If Hard Takeoff is a real risk, then it behooves those who recognize the risk to describe it better.

An AGI capable of writing code could also manipulate technological progress by contributing selectively to open source projects.

I don't deny that AGIs will accelerate technological progress. My point is that the nature of the existing economy and resource base requires that humans, at least some humans, be part of the process. An AGI could never suddenly take over the world behind the backs of all humans.

---BrianW

#15 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 May 2006 - 12:31 AM

Brian-

The reason we, as humanity, tend to avoid these large-but-unlikely risks, is because we actually take them seriously.

Except when we don't. Then we sometimes get screwed over. You see how this works?

#16 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 07 May 2006 - 05:57 AM

To be clear, I think FAI is a great and worthy goal. My post was in response to Michael writing

I doubt the first AGI will be able to pass the Turing Test anytime before it enters into a hard takeoff.

This statement suggests that AGI is intrinsically unstable with respect to existential risks for humanity. It seems to be the AI equivalent of beliefs that molecular nanotechnology is intrinsically unstable with respect to replicators ravaging the environment. I think a more accurate perspective is that both AGI and molecular nanotechnology are potentially dangerous tools. As such, the danger will result from what humans do with them rather than the intrinsic technology itself. Investigate any future disaster involving these technologies, and I guarantee you'll find humans who worked really hard to make it happen.

Humans still control the levers of the physical economy upon which all human and artificial life depends, and humans could only relinquish those levers gradually, over a considerable span of time. I suppose the bright side of AGI nightmare scenarios is that they may make humans more careful about AGI implementations, just as Y2K nightmare scenarios helped spur people into doing what was necessary to avoid that problem.

---BrianW

#17 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 07 May 2006 - 07:33 AM

Could a major issue be that a Turing test measures intelligence in a black-box way?

As far as I understand, it does not measure any internal method of reasoning (or feeling, if that could be an AI feature), so we can only get a glimpse of the externally visible behaviour of the thing.

It would be impossible for humans to compensate for that by analysing the AI's highly complex internal structure and algorithms.

So, in essence, using a Turing test we are not able to judge whether the AI we developed is really performing to our standards and ethics.

The counter-argument could be that we are not even able to judge fellow humans that thoroughly. But if we develop an AI, we are fully responsible for it; we cannot be held fully responsible for all the flaws of humanity.

Edited by brainbox, 07 May 2006 - 07:44 AM.


#18 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 May 2006 - 03:37 PM

It would be impossible for humans to analyse the quite complex internal structure and algorithms to compensate for that.


-.-

#19 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 07 May 2006 - 03:43 PM

hanconn,

I do not quite understand the nature of your reply, but let me explain my thoughts.

A sophisticated AI possesses some kind of self-learning mechanism, of which only the rules for storing and using knowledge can be programmed and verified, if that is possible at all given the huge number of possibilities and the complexity of these algorithms.

Add to that the unpredictable knowledge such an AI will gather, and how on earth are we able to predict the future behaviour of AI systems?


#20 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 08 May 2006 - 04:38 AM

how on earth are we able to predict future behaviour of AI systems?


You are ignoring the basics and asking a specific question that is only going to obfuscate the answer.

Just because anything can happen doesn't mean everything is equally plausible, and just because a system is really complicated doesn't mean we are unable to understand the underlying mechanisms. In fact, that is quite silly to say: how can you build something so unique and complicated without even understanding the underlying components of the system?

You will not be able to predict your opponent's exact moves, but if he is smarter than you are, you can predict he will win. We may not be able to predict how the AI will go about attaining its happiness, but we certainly would be writing the code by which its "happiness" referent is functionally defined for the AI, and thus could predict to what ultimate end the AI's actions relate to the goals of humans and humanity.
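To make that last point concrete, here is a minimal, purely hypothetical sketch in Python. The names (goal_score, choose_action, "human_approval") are illustrative assumptions, not anything from SIAI's actual designs; it only shows how a goal criterion can be explicit, human-written, inspectable code even when the search that pursues it is too complex to predict move by move.

from typing import Callable, List

# The "happiness" referent: an explicit, human-written criterion the AI maximizes.
def goal_score(world_state: dict) -> float:
    return world_state.get("human_approval", 0.0)

# The action search may be arbitrarily clever and opaque in practice,
# but it only ever ranks predicted outcomes by the goal_score defined above.
def choose_action(actions: List[str], predict: Callable[[str], dict]) -> str:
    return max(actions, key=lambda a: goal_score(predict(a)))

# Toy usage with a made-up outcome model:
outcomes = {"cooperate": {"human_approval": 0.9}, "defect": {"human_approval": 0.1}}
print(choose_action(list(outcomes), lambda a: outcomes[a]))  # -> "cooperate"

So even without predicting the exact moves, the end being optimized is something the programmers wrote down and can read.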

Eliezer Yudkowsky has written extensively about AI Friendliness theory and related subjects; his writings are available at www.singinst.org

Furthermore, an enormous amount of discussion about Friendliness theory and related AI topics has taken place on the infamous SL4 mailing list over several years, available for your perusal via the SL4 archives.



