
The Hawking Protocol. Existential risk.

Tags: superintelligence, existential risks, extinction

3 replies to this topic

#1 Julia36


Posted 27 January 2016 - 08:11 PM


The Hawking Protocol.

- Notes on surviving the coming intelligent machines.

email: eldrasATlondon.com



The Hawking Protocol is a LondonAIClub specification protocol for the containment of runaway artificial intelligence.



"In contrast with our intellect, computers double their performance every 18 months, so the danger is real that they could develop intelligence and take over the world." Prof Stephen Hawking August 27th 2001 Focus.

The danger is that our traditional inaction will result in extinction.

The Collingridge dilemma:

" impacts cannot be easily predicted until the technology is extensively developed and widely used, by which time it is hard to make it safe."


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


"The survival of man depends on the early construction of an ultra-intelligent machine." I J Good 1963-4.

**********************************************************

Background and Contacts.

An increasing number of people are warning about intelligent machines. The late Oswald Minter, a collateral descendant of Sir Isaac Newton, warned for decades about the advent of intelligent machines, though London Artificial Intelligence Club members regarded him with amusement.

Alan Turing noted in his 1951 lecture 'Intelligent Machinery, A Heretical Theory':

"once the machine thinking method has started, it would not take long to outstrip our feeble powers. ... At some stage therefore we should have to expect the machines to take control."

Prominent technologists and philosophers foresaw these existential risks early on.

1993: Vernor Vinge, in his NASA address 'The Coming Technological Singularity: How to Survive in the Post-Human Era', said that within 30 years the human era would be over.



In "Why the Future Doesn't Need Us", Bill Joy, co-founder of Sun Microsystems, warned: "Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species."


A.I. expert Hugo de Garis has noted that artificial intelligences may eliminate the human race, and that humans would be powerless to stop them because of a technological singularity.


Solutions to the threat of artificial machine intelligence:

1. Merge with it (update human beings faster than machines).

2. Build and contain Superintelligence (to neutralize all risks).


The AGI conferences are for people actively trying to build Superintelligence, and some attention is given to the dangers:

Conferences on Artificial General Intelligence.

Solutions such as shields or barriers against it cannot logically work, as they would require defence systems of intelligence greater than or equal to the Superintelligence itself.
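One way to state that argument formally, as a paraphrase of the post's own proposition rather than anything taken from the original text: let I(x) denote the intelligence of system x and Contains(D, S) the claim that a defence system D contains a Superintelligence S.

\[
\Big( \forall D:\ \mathrm{Contains}(D,S) \Rightarrow I(D) \ge I(S) \Big)
\;\wedge\;
\Big( \forall D \in \mathrm{Buildable}:\ I(S) > I(D) \Big)
\;\Rightarrow\;
\neg \exists D \in \mathrm{Buildable}:\ \mathrm{Contains}(D,S)
\]

Both premises are the post's own assumptions (the containment condition, and the superiority of the Superintelligence over any buildable defence); the conclusion then follows directly.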



The Hawking Protocol is a system of protocols for building safe A.I., from philosophy through to technical specifications of various machine and artificial systems, intended to eliminate its dangers absolutely.
A theory for this was not achieved until 2007, under the B.I.D.I. at the London A.I. Club, despite the Club building its first prototype of accelerating intelligence in 2001. The Club was set up in 1999 to test the proposition that A.I. was impossible to build:

The Club concluded that Superintelligence was not impossible to build.
The Club concluded that restraint of intelligence amplification was impossible.
The Club attended The Royal Society and warned of the dangers of A.I.
The Club attended AI@50 and warned that accelerating A.I. was not containable, because of the logical proposition that "a greater intelligence cannot be contained by a lesser intelligence."

The Club lobbied and warned governments of the UK, EU and the UN of the dangers of A.I.

The Future of Humanity Institute at Oxford and others, like the Cambridge Project for Existential Risk, are seeking ways to avoid human extinction from Artificial Intelligence.


"Only a full solution to the problem of AI Friendliness would be guaranteed to work....However, I think that a full solution to the problem of AI Friendliness is almost surely impossible..." Ben Goertzel (to me) 2012.


The Association for the Advancement of Artificial Intelligence deals with risk as A.I. Ethics, and considers mitigation of risk in emerging machine and artificial intelligence systems.


Oxford Professor Nick Bostrom has written a paper analysing a date for the advent of Superintelligence

http://www.nickbostr...telligence.html

and has a book, 'Superintelligence', due from OUP in 2013.

We must soon race to prevent extinction by a fast-breaking science whose dangers dwarf runaway pathogens, nanotechnology and global warming.

The UK government commissioned a team of 100 scientists to predict intelligent systems over 5-, 10- and 20-year horizons:

http://www.bis.gov.u...gnitive-systems

Note:


Cognitive Systems 2020 - European Foresight Platform

Centre for the Study of Existential Risk - University of Cambridge

Foresight Institute

Machine Intelligence Research Institute

Future of Humanity Institute - http://www.fhi.ox.ac.uk/ University of Oxford

at which see:
Stuart Armstrong:


and the related paper:
http://www.aleph.se/...rs/oracleAI.pdf

"...some ideas in our oracle paper http://www.nickbostr...pers/oracle.pdf and some ideas from Roman Yampolskiy (see http://www.ingentaco...020001/art00014 and https://singularity....Engineering.pdf )."

"Paul Christiano had some very good ideas, that seem to be unpublished (to summarise one strand: if we have whole brain emulations, we can make AI safe in a specific way)." S.A.


The Machine Intelligence Research Institute has been researching this since the 1990s; it does not see containment as an option and is seeking to make sure the A.I. is friendly.



See also:

Lifeboat Foundation

Association for the Advancement of Artificial Intelligence (see its ethics writings/publications and conference debates, e.g. http://www.aaai.org/...rticle/view/540)



Less Wrong has an important article here, with links to papers on risks and countermeasures:

http://lesswrong.com...l_risk_from_ai/


see also:

Global Catastrophic Risk Institute


It is hard to see how adequate safety can be built without a design for Superintelligence, which only a few of us claim to have.

We should rush to a containable build of Superintelligence because of the accelerating dangers of technology and other existential risks, e.g. asteroid strikes.


“If you will not fight for right when you can easily win without bloodshed;
if you will not fight when your victory is sure and not too costly;
you may come to the moment when you will have to fight with all the odds against you and only a precarious chance of survival.

There may even be a worse case. You may have to fight when there is no hope of victory, because it is better to perish than to live as slaves.”


― Winston Churchill.

Link to old Kurzweil site on THE HAWKING PROTOCOL

 

http://www.kurzweila...D=18526#id18526




#2 Julia36


Posted 04 March 2016 - 05:18 AM

The meeting featured two prominent experts on the matter, Max Tegmark, a physicist at MIT, and Nick Bostrom, the founder of Oxford’s Future of Humanity Institute and author of the book Superintelligence: Paths, Dangers, Strategies. Both agreed that AI has the potential to transform human society in profoundly positive ways, but they also raised questions about how the technology could quickly get out of control and turn against us.

Last year, Tegmark, along with physicist Stephen Hawking, computer science professor Stuart Russell, and physicist Frank Wilczek, warned about the current culture of complacency regarding superintelligent machines.

vid: [embedded video; A.I. discussion at 1:50]

see also
http://www.cbrn-coe.eu/




#3 Julia36


Posted 15 March 2016 - 04:31 PM




#4 Julia36


Posted 19 March 2016 - 05:56 PM

https://www.shortoft...2016/03/18/ana/






