  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI security.



#1 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 06 November 2006 - 03:14 PM


I've been looking through a lot of threads on AI and have yet to come across what I see as imperative for the success of friendly AI... security.

I'm sure there are papers on it somewhere (hopefully) and was wondering if you guys know where they are?

I was thinking it would be an interesting startup for those with some cash: setting up a company whose sole objective is to protect and maintain AI from malicious persons, as well as having probably the best network security on the planet. From what I can see, AI will be arriving quite soon, and if this issue is not addressed seriously, I will not sleep well at night.

Just like the thread I saw on a company that was shouting from the rooftops that they're within 7 years of achieving AGI... not a good idea unless you have a protection process set up and aren't worried about the inevitable malicious intentions of those who understand the awesome power of AGI. If the company/companies do not have a security plan in place at the time of completion, they are sitting ducks.

It's just too dangerous to allow anything less than optimum security for the unprecedented intelligence AGI will represent.

Maybe I'm just paranoid about nothing, but I have a feeling I'm not. [mellow]

#2 amar

  • Guest
  • 154 posts
  • 0
  • Location:Paradise in time

Posted 06 November 2006 - 05:18 PM

It'll be hard to keep the technology innocent. Who can be trusted not to use the stuff as a weapon? The government? I don't trust WMDs in the hands of anybody, but the government is a superpower that will take it by force anyway and use it toward military ends. The best we can do is pray that they only blow up parts of the world and not the whole thing. [:)]


#3

  • Lurker
  • 1

Posted 06 November 2006 - 11:26 PM

Fear not, the moment anything resembling genuine AI begins to indicate the faintest signs of emerging, the government will class it as a munitions device and seize control of it.

#4 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 01:02 AM

Fear not,



Heh... heh... [mellow]

#5

  • Lurker
  • 1

Posted 07 November 2006 - 05:44 AM

Heh... heh...  [mellow]


... and if the government can't contain it, nobody can. Which would imply that steps would be taken to prevent its emergence in all but the most controlled environments.

#6 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 07 November 2006 - 06:58 AM

Whatever you do, don't panic and try to shut it down; that would just make it angry.

#7 Karomesis

  • Topic Starter
  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 07 November 2006 - 03:23 PM

Fear not, the moment anything resembling genuine AI begins to indicate the faintest signs of emerging, the government will class it as a munitions device and seize control of it.


Which is almost as bad as a madman getting ahold of it. I trust the govt like I trust a meth fiend. It must not be controlled by any one entity; a consortium is needed.

#8 amar

  • Guest
  • 154 posts
  • 0
  • Location:Paradise in time

Posted 07 November 2006 - 04:42 PM

Which is almost as bad as a madman getting ahold of it. I trust the govt like I trust a meth fiend. It must not be controlled by any one entity; a consortium is needed.

Sure, then we'd get a pandemic of madmen. If we were truly wise, we wouldn't develop it into weapons at all. We'd develop preventions and anti-weapons against it ever being turned into a war machine. In an ideal future, our wars will be restricted to video games.

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 07 November 2006 - 05:46 PM

It'll be hard to keep the technology innocent. Who can be trusted not to use the stuff as a weapon? The government?


AI wouldn't be like a gun, where you can point it at something and tell it to go boom. It would have a complex, built-in goal system inextricably intertwined with all of its code structure. So using an AI for ends it was not created for would be like grabbing a fish and trying to reprogram it for flight. An AI is a mind, not a tool.
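A minimal toy sketch of what that intertwining might look like (purely hypothetical names and structure, not any actual AGI design): the goal function is consulted at every stage of decision-making, so pointing the system at a different objective means rewriting the system, not just swapping a target.

    # Toy illustration only -- hypothetical, not a real AGI architecture.
    def goal_score(state):
        # The agent's built-in notion of "good" (hypothetical placeholder).
        return state.get("wellbeing", 0)

    def generate_options(state):
        # Option generation is already shaped by the goal: only actions
        # predicted to do at least as well as the current state are proposed.
        return [a for a in state.get("actions", [])
                if goal_score(a["predicted"]) >= goal_score(state)]

    def choose_action(state):
        # Selection consults the same goal function again.
        return max(generate_options(state),
                   key=lambda a: goal_score(a["predicted"]),
                   default=None)

    state = {"wellbeing": 1,
             "actions": [{"name": "help", "predicted": {"wellbeing": 3}},
                         {"name": "harm", "predicted": {"wellbeing": -5}}]}
    print(choose_action(state))  # -> the "help" option

Every function in that sketch depends on goal_score; change the goal and the behavior of the whole agent changes, which is the sense in which an AI is a mind rather than a tool.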

Fear not, the moment anything resembling genuine AI begins to indicate the faintest signs of emerging, the government will class it as a munitions device and seize control of it.


Besides overestimating the cluefulness of the government, this is an unrealistic demonization of it as well. This general attitude reminds me of the sort of people who envision Bush personally listening to their phone conversations on a wiretap, or who think that the people in government literally care only about greed and power.

Whatever you do, don't panic and try to shut it down; that would just make it angry.


Not sure if this is anthropomorphization or just a joke...

It must not be controlled by any one entity; a consortium is needed.


How about we create AI such that it is morally competent and can be trusted with its own decisions? This notion is called Friendly AI.

#10 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 06:31 PM

Most AI researchers think the AI will take care of its own security, or that the rush to AI is so important that they can't be distracted by secondary concerns like security.

Or that what they have isn't worth the cost of any more security than what they already have in place.

#11 Karomesis

  • Topic Starter
  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 07 November 2006 - 08:10 PM

I have never seen anybody seriously work on security (or even safety) on any project dealing with future technologies.


Harvey, from your point of view in the security field, do you see it as a viable business model?

Most AI researchers think the AI will take care of its own security, or that the rush to AI is so important that they can't be distracted by secondary concerns like security.



That's what I'm beginning to think.



AI wouldn't be like a gun, where you can point it at something and tell it to go boom. It would have a complex, built-in goal system inextricably intertwined with all of its code structure. So using an AI for ends it was not created for would be like grabbing a fish and trying to reprogram it for flight. An AI is a mind, not a tool.


Michael, care to elaborate on the goal system, or the theories you're considering? I'm sure that as an AI researcher you're quite familiar with evolutionary biology/psychology, so it will be very interesting to see how AGI sidesteps the evolutionary process itself. If not for purposes of selfishness or continued progress, what will the goal system extrapolate from?

#12 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 08:37 PM

Michael, care to elaborate on the goal system, or the theories you're considering? I'm sure that as an AI researcher you're quite familiar with evolutionary biology/psychology, so it will be very interesting to see how AGI sidesteps the evolutionary process itself. If not for purposes of selfishness or continued progress, what will the goal system extrapolate from?

Intelligence is, you could say, definitively a sidestep of evolution. It's actually a more powerful fitness optimization process than evolution, where the actual course of evolution is a proper subset of its potential ability to control.

#13 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 08:37 PM

Oh, and Michael isn't exactly an AGI researcher.

#14 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 08:40 PM

If not for purposes of selfishness or continued progress, what will the goal system extrapolate from?

Humans are an embodiment of (or product of) evolution, but their intentions are not necessarily representative of the intentions of evolution.

#15 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 08:46 PM

By the way, this argument is the same as for a recursively self-improving intelligence. RSI is a sidestep of intelligence. It's a more powerful fitness optimization process than intelligence, where the actual course of an intelligence is a proper subset of its potential ability to control (that actually sounds pretty weird, but think about it; it follows), and the intentions of RSI intelligence, though a product of or an embodiment of human intelligence, are not necessarily representative of human intentions.

#16 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 07 November 2006 - 09:04 PM

the intentions of RSI intelligence, though a product of or an embodiment of human intelligence, are not necessarily representative of human intentions.

To be sure they were, I really think you would need a proof that, at the very least, fits the RSI within the bounds of an extremely small space within a comprehensive probabilistic model.

Even that poses an existential risk, which is inherent in the probabilistic nature of the model. RSI is a process that is inherently and constantly surprising, which makes probabilistic inference an unwieldy tool to use.

The only way to be sure that the intentions of the RSI intelligence (RSII) is representative of human intentions is for the design of the RSII to be based on an actual proof that the design cannot logically allow otherwise.


#17 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 17 November 2006 - 01:54 PM

I have been working in the security field for over two decades, during which I have also been involved with extropians and transhumanists. I have never seen anybody seriously work on security (or even safety) on any project dealing with future technologies. Most AI researchers think the AI will take care of its own security, or that the rush to AI is so important that they can't be distracted by secondary concerns like security.


Well, the most prominent AI-oriented transhumanist org is SIAI, and I can tell you from personal experience that Eliezer Yudkowsky is quite OBSESSED with the notion of AI security, and has been since 2000. In fact, he practically founded the whole field.

Lifeboat Foundation, the organization I am currently backing, is security-centric.

Michael, care to elaborate on the goal system, or the theories you're considering? I'm sure that as an AI researcher you're quite familiar with evolutionary biology/psychology, so it will be very interesting to see how AGI sidesteps the evolutionary process itself. If not for purposes of selfishness or continued progress, what will the goal system extrapolate from?


www.singinst.org/CFAI is a good place to start.



