  LongeCity
              Advocacy & Research for Unlimited Lifespans





Superintelligence

Tags: a.i., superintelligence, nick bostrom, agi, strong a.i., runaway a.i.

6 replies to this topic

#1 Julia36

  • Guest
  • 2,267 posts
  • -11
  • Location:Reach far
  • NO

Posted 13 January 2013 - 09:47 AM


This article on wiki may get wiped which is a pity, so I'll post here as an opener for debate:
Superintelligence safe construction and containment is thought to be the only way Man can survive the Singularity.

======================================================================

SUPERINTELLIGENCE



A superintelligence, hyperintelligence or superhuman intelligence is a hypothetical entity which possesses intelligence surpassing that of any existing human being. Superintelligence may also refer to the specific form or degree of intelligence possessed by such an entity. The possibility of superhuman intelligence is frequently discussed in the context of artificial intelligence. Increasing natural intelligence through genetic engineering or brain-computer interfacing is a common motif in futurology and science fiction. Collective intelligence is often regarded as a pathway to superintelligence or as an existing realization of the phenomenon.
Definition

Superintelligence is defined as an “intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”[1] The definition does not specify the means by which superintelligence could be achieved: whether biological, technological, or some combination. Neither does it specify whether or not superintelligence requires self-consciousness or experience-driven perception.
The transhumanist movement distinguishes between “weak” and “strong” superintelligence. A weak superintelligence operates at the level of a human brain, but much faster. A strong superintelligence operates on a qualitatively superior level, much as a human brain is qualitatively superior to a dog's.[2]
In everyday language, profoundly gifted people or savants are sometimes called superintelligent, and clever search algorithms or the Semantic Web are sometimes described as superintelligent. While such outstanding people or machines have an advantage over average human brains, they do not qualify as superintelligences, since they lack superior general abilities in cognition and creativity. Likewise, the scientific community is a heterogeneous collection of individuals, not a singular entity, and cannot be called a superintelligence.
Realization

Within transhumanism, different currents disagree on how a superintelligence might be created. Roughly three paths are outlined:
  • A Strong AI, which can learn and improve itself, could after several self-improvements achieve superintelligence.[3]
  • Biological enhancements (selective breeding, genetic manipulation, or medical treatments) could over several generations induce superintelligence or other superhuman traits. This is banned, or at least strongly discouraged, in most societies.
  • Cybernetic enhancements could increase the capabilities of the human mind considerably, at least in terms of speed and memory. Technical realization of neural human–computer interfaces has begun in the field of prosthetics.[4] Genuine enhancement of a human brain remains unrealized.
Criticisms

The philosophical, cultural, and ethical implications of superintelligence are fervently debated inside and outside the transhumanist movement. There are several lines of critique against the aim of building a superintelligence.
Skeptics[who?] doubt that superintelligence is possible, believing that the processes inside a brain are too complex to fully understand and simulate in a technological device. The merger of human synapses with electronic devices is considered problematic, since the former is a slow but living organism and the latter a fast but rigid system. Advocates[who?] of transhumanism reply that the function of a brain is not so complex that it could never be understood, and that artificial intelligence is in any case not limited to simulating organic brains.
Other critics[who?] call it hubris to enhance humans. In particular, genetic enhancements may be outlawed as a form of eugenics. There is also fear that superintelligent beings would not benefit mankind but lead to its demise.[according to whom?] Even if advocates[who?] claim that a superintelligence is by definition of a better nature than ordinary humans, there is no guarantee that an attempt to create one will not instead produce a malevolent intelligence.
Another argument against enhancement is resentment of dependence on cybernetic implants, enhancing drugs, and the like. Transhumanists argue that an enhanced avant-garde will leave behind those who refuse to upgrade. Critics counter that, as a result, a rich elite will purchase higher-capacity minds and use them to suppress the lower social tiers. A non-technological version of this process is already visible in society: higher social tiers achieve higher degrees of education because they can more easily afford it.




#2 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 13 January 2013 - 03:50 PM

Biological enhancements can be used in association with cybernetic enhancements for maximum benefit. I don't have much trust in AI. And yes, it is true that this will lead to some people enhancing themselves at the expense of others, although these will not necessarily be the rich avant-garde; it will be the enlightened avant-garde.

#3 Julia36

  • Topic Starter
  • Guest
  • 2,267 posts
  • -11
  • Location:Reach far
  • NO

Posted 14 January 2013 - 09:37 PM

Biological enhancements can be used in association with cybernetic enhancements for maximum benefit. I don't have much trust in AI. And yes, it is true that this will lead to some people enhancing themselves at the expense of others, although these will not necessarily be the rich avant-garde; it will be the enlightened avant-garde.



Hi Marios,


1. We can enhance.

2. Trust in A.I. might be irrelevant: weak A.I. is already indispensable to civilization. It's in your mobile and your computer.

3. The 'enlightened avant-garde' is a great term.

The ramifications are so extreme I find sci-fi alone useful for trying to guess what's coming.

But as runaways (like net viruses or artificial 'Life') will become increasingly easy to build, only something that can monitor ALL of them and make them safe gives us a survival option.

That monitor must be massive in problem-solving capacity and must accelerate faster than the runaways can.



#4 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 15 January 2013 - 07:21 AM

Regarding monitoring of runaway net viruses, see here for some ideas:

http://www.cise.ufl....iles/websci.pdf

#5 Julia36

  • Topic Starter
  • Guest
  • 2,267 posts
  • -11
  • Location:Reach far
  • NO

Posted 15 January 2013 - 09:37 PM

Thanks Marios, the link is useful.

Gateway checks are a fundamentally good idea.

I don't know whether gateway checks are used generally in A.I., but they could become a net-wide evolutionary system for many node classes, much as Trojans are an infection system.
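To make the gateway-check idea concrete, here is a minimal, hypothetical sketch (not from the linked paper; the `Gateway` class and its checks are illustrative assumptions): every payload entering a node must pass all registered checks before it is accepted, with rejection as the default.

```python
# Hypothetical sketch of a "gateway check" at a node boundary:
# a payload is admitted only if every registered check passes.
from typing import Callable, List

Check = Callable[[bytes], bool]

class Gateway:
    def __init__(self) -> None:
        self.checks: List[Check] = []

    def register(self, check: Check) -> None:
        self.checks.append(check)

    def admit(self, payload: bytes) -> bool:
        # Default-deny: reject unless every check passes.
        return all(check(payload) for check in self.checks)

gate = Gateway()
gate.register(lambda p: len(p) < 1024)          # size limit
gate.register(lambda p: b"<script>" not in p)   # crude content filter

print(gate.admit(b"hello"))            # True: small and clean
print(gate.admit(b"<script>x" * 200))  # False: too large and filtered
```

The design point is that the containment logic lives at the boundary as part of the architecture, rather than being bolted on after a payload is already inside.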


Apt that it's out of Los Alamos!
I wonder how they contained the bomb.
at random.
gital.library.unt.edu/ark:/67531/metadc4675/

The ideas of biocontainment models are cute.

For idiosyncratic systems you need tailor-made stuff. IMO the containment will exist as a given within the architecture. A system that isn't thought to be 100% containable should not be launched, and shouldn't even be discussed.

I know this was an issue in defense circles.

BRRRRR!

Edited by stopgam, 15 January 2013 - 09:47 PM.


#6 ceridwen

  • Guest
  • 1,292 posts
  • 102

Member Away
  • Location:UK

Posted 15 February 2014 - 01:41 PM

Could superintelligence actually be used to help those with diminishing intelligence or neurodegenerative diseases, so that they can survive for longer?

#7 Julia36

  • Topic Starter
  • Guest
  • 2,267 posts
  • -11
  • Location:Reach far
  • NO

Posted 15 February 2014 - 05:07 PM

Yes. Pretty much instantly from when it succeeds.

It would have been here by 2000, but the US, UK, and Japanese governments halted A.I. research:
http://en.wikipedia.org/wiki/AI_winter

1973: UK chief scientist Lighthill got A.I. research halted.
1973–74: DARPA halted AI funding.
Japan was pressured to halt it as well.


Yup.

Machine intelligence miles more capable than humankind (accelerating-growth intelligence).

The problem was how to contain it. See the Extinction risks thread.

Growth in technology is not linear in time.


What took 100 years to discover 1000 years ago takes 10 minutes today.


On present trends, without a stand-alone build of Superintelligence, Machine Intelligence will aggregate into Superintelligence from 2022.

2017: Age of genetic medicine will dawn.

2022: Age of machine intelligence will dawn.

2025: Age of humanoid robots will dawn.

Most illnesses will be obsolete. Resurrection of the long dead will be possible.


This assumes we don't destroy ourselves.

My best judgement is that we will destroy ourselves, because mavericks, who are usually the pioneers ahead of the establishment, cannot be heard by governments.
E.g. the three bodies that have examined or are examining the risks of intelligent technology consult no mavericks at all:

1) Foresight Cognitive Systems

2) Future of Humanity Institute, Oxford, UK

3) Centre for the Study of Existential Risk, Cambridge, UK


The USA has one run by a maverick: the Singularity Institute.

More particularly, governments don't reply to our warnings, and we (I'm an A.I. maverick) are presumably regarded as crackpots.

"The United Kingdom's policy is not to reply to communications where the sender is regarded as a crank."

The main problem is that until you succeed in your field >>within the establishment<<, you won't have any influence at government level; governments are crisis-management institutions.

The Yanks are further ahead at risk-taking, but regard anyone outside the USA as irrelevant, a policy they held until Japan bombed their fleet and Hitler declared war on them in the same week.

If you have a bad illness, I can tell you with certainty:

1. It will not be here in 2020.

2. Fight. Fight with everything you have. Fight with the secret bits you never showed a soul. Fight in the morning when you wake and as you demand your sleep.
Fight in spite of your perceived weakness, and against every reason.
Fight beyond play, beyond politeness, beyond sanity, and beyond world war.



Fight and live!


Edited by Innocent, 15 February 2014 - 05:14 PM.






Also tagged with one or more of these keywords: a.i., superintelligence, nick bostrom, agi, strong a.i., runaway a.i.
