  LongeCity
              Advocacy & Research for Unlimited Lifespans





Viability of AGI for Life Extension & Singularity


249 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 04:13 AM


This topic is devoted to answering the question:

How important can AGI be to Life Extension and launching a Singularity?

I, with the help of Dr. Ben Goertzel, promise to answer every question!

HOWEVER, before you ask, please read:
Artificial General Intelligence and its Potential Role in the Singularity
A 43-minute video of Ben presenting AGI and the Singularity:
http://video.google....132223226741332

Also, please check this graph and explanation:
Approaches and Projected Time Frames in Reaching AGI

Also, please watch this 10-minute intro: http://www.novamente.net/video/

Also, here is a 55-minute overview of Novamente's AGI design (more technical):
http://video.google....581743856443641

Also, here is Ben talking about Novamente & our biomedical work at ImmInst's 2005 Conference:
http://video.google....505614870506496

Thanks!

#2

  • Lurker
  • 1

Posted 25 June 2006 - 05:18 AM

In the attached paper, Ben Goertzel (BG) states:

There is also reason to doubt whether it will be possible to make humans that are dramatically more intelligent than current humans while still retaining their fundamental human-ness, any more than a dog with a 180+ IQ would still be a dog in any fundamental sense.  Ultimately, the creation of intelligence-enhanced humans turns into the creation of artificial intelligences based on a biological rather than silicon-chip substrate, and using current humans as an initial seed.  Given the complex and in many ways suboptimal architecture of the human brain, it is not clear that the guidance of enhanced humans would necessarily be more reliable from the perspective of ordinary humans than the guidance of digital artificial intelligences.


We have numerous examples of the extraordinary power of human genius -- including Shakespeare, Einstein, Newton, Plato, Da Vinci and many others. In the case of Einstein, a histological analysis of his brain revealed an unusual number of glia -- cells traditionally associated with metabolic support but more recently discovered to participate directly in the formation and modulation of neural networks. Additionally, of the numerous single nucleotide polymorphisms being discovered daily, some appear to be associated with the heritability of intelligence. Given that we are beginning to understand some of the factors that influence various aspects of cognition, we may shortly be able to enhance the brains of people of average intelligence to the physiological status of individuals characterised as having "genius" level intelligence.

Does not the prospect of an increased number of intellectually extraordinary individuals who are dedicated to the biosciences commensurately increase the possibility of achieving escape velocity?


#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 05:30 AM

Sure... no doubt bioscience will eventually lead to more enhanced biological intelligence, which should lead to more biomedical discoveries. However, the more important question seems to be how much more advanced AGI will be than biologically rooted intelligence.

If one can see that AGI has near unbounded capacity for problem solving, then this is a compelling reason to focus effort toward this end.

#4 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 05:46 AM

The graph I have in mind looks like this: [image]

#5 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 25 June 2006 - 05:59 AM

Basically, any function of intelligence that is non-AGI is a liberal art.

#6

  • Lurker
  • 1

Posted 25 June 2006 - 06:03 AM

From the attached paper, BG talks about the simplicity of designing an artificial mind:

The first thing one has to realize is that, at its heart, intelligence isn’t such a complex thing – in fact the essence of intelligence is very simple.  A mind is a system for recognizing patterns in itself and in the world – nothing more and nothing less.  A mind learns to achieve its goals by recognizing patterns regarding which behaviors have helped it achieve similar goals in the past.

There you go – it’s simple – just plug in a supergoal to a pattern recognition engine, add in some sensors and actuators, and you’re done!  AGI is achieved!
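
As a purely illustrative sketch of that "supergoal plus pattern recognition engine" recipe (my own gloss, not Novamente's design; every name in it is hypothetical):

```python
import random
from collections import defaultdict

class PatternAgent:
    """A mind as "a system for recognizing patterns": it remembers which
    behaviors helped achieve its supergoal in similar situations."""

    def __init__(self, actions, supergoal):
        self.actions = actions
        self.supergoal = supergoal          # function: state -> reward
        self.memory = defaultdict(list)     # (situation, action) -> rewards

    def act(self, situation):
        # Choose the action whose remembered outcomes here were best;
        # explore at random 10% of the time.
        def expected(action):
            rewards = self.memory[(situation, action)]
            return sum(rewards) / len(rewards) if rewards else 0.0
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=expected)

    def learn(self, situation, action, new_state):
        # Credit the behavior with the supergoal's evaluation of the result.
        self.memory[(situation, action)].append(self.supergoal(new_state))

# Usage: the supergoal rewards reaching the state "goal".
agent = PatternAgent(["left", "right"],
                     supergoal=lambda s: 1.0 if s == "goal" else 0.0)
agent.learn("start", "right", "goal")   # "right" once led to the goal
print(agent.act("start"))               # now usually prints "right"
```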


That is indeed correct -- as much for the simplest of lifeforms as it is for humans -- and that is where the sheer magnitude and complexity of the ambitions behind this project are revealed. The premise behind NovaBaby, for example, is to simulate the mind of a one-year-old human. Ironically, at this developmental stage the human brain has more neurons than it will have when older, and it is producing synapses at a greater rate than at any other period of life. We are talking about a quadrillion synapses. Clearly the physiology of this process cannot yet be emulated, irrespective of how much it is simplified (there are numerous other, as yet unidentified, factors that modulate synaptic activity and neural network development and function).

Therefore one must rely on that magical word: "emergence". In its simplest definition, this means the manifestation of a level of complexity greater than can be predicted from the component parts. In the human brain, for instance, an individual neuron is capable of nothing more than a complex but defined set of outputs based on its inputs, yet from a vast collection of neurons human thought, creativity, etc., emerge.

In AI, the hope has been to build into a program a set of rules that enable self-programming. In this way a program is started, provided with a means to acquire data, and left to run and evolve into a more complex version of itself. It sounds sensationally simple in theory but continues to elude us in practice.
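
To make the idea concrete, here is a toy illustration -- purely hypothetical and vastly simpler than any real AI: a candidate "program" (here just polynomial coefficients) is repeatedly mutated, and variants that score better on the data it has acquired replace their parents.

```python
import random

def score(program, data):
    # Fitness: how well the candidate "program" (coefficients of a
    # polynomial) predicts the acquired data; higher is better.
    return -sum((y - sum(c * x**i for i, c in enumerate(program))) ** 2
                for x, y in data)

def evolve(data, generations=500):
    program = [0.0, 0.0, 0.0]                      # seed program
    for _ in range(generations):
        mutant = [c + random.gauss(0, 0.1) for c in program]
        if score(mutant, data) > score(program, data):
            program = mutant                       # keep the improvement
    return program

# Usage: "acquire" points from y = 2x + 1 and evolve toward them.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
print(evolve(data))   # coefficients drift toward [1.0, 2.0, 0.0]
```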

In what ways has Novamente solved the problem of implementing emergence and why has it not demonstrated its model of intelligence by emulating the cognition of simpler lifeforms?

#7 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 06:14 AM

The premise behind NovaBaby, for example, is to simulate the mind of a one-year-old human.

To be sure, we do pull from cognitive science but we are NOT trying to simulate a human mind...

#8 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 06:25 AM

Compare a crow to a jet: both fly, but in very different ways. In similar fashion, compare biological brains to AGIs: both think, but in quite different ways.

So, rather than neurons, Novamente's AGI has perception, action & feeling "nodes" which build up to specific objects and then to abstract concepts:

To quote: Novamente has a special mathematical knowledge representation that combines aspects of the brain’s neural network representation with aspects of formal logic and probability theory.
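
A minimal sketch of what such a hybrid representation could look like (hypothetical, inferred only from the quoted description, not from Novamente's code): nodes carry semantic-network-style links, and each link carries probability-flavored strength and confidence values.

```python
class Atom:
    """A node in a semantic-network-like graph whose links carry
    probability-flavored (strength, confidence) truth values."""

    def __init__(self, name):
        self.name = name
        self.links = {}    # target Atom -> (strength, confidence)

    def link(self, other, strength, confidence):
        self.links[other] = (strength, confidence)

# Perception-level nodes build up to an object, then an abstract concept.
white = Atom("percept:white")
furry = Atom("percept:furry")
bunny = Atom("object:bunny")
animal = Atom("concept:animal")

white.link(bunny, strength=0.6, confidence=0.7)
furry.link(bunny, strength=0.9, confidence=0.9)
bunny.link(animal, strength=0.99, confidence=0.95)

# Follow a percept upward to the concepts it supports.
for target, (s, c) in furry.links.items():
    print(furry.name, "->", target.name, s, c)
```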

#9 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 06:30 AM

In what ways has Novamente solved the problem of implementing emergence and why has it not demonstrated its model of intelligence by emulating the cognition of simpler lifeforms?

So, to answer your question more directly...

We have not tried to emulate simpler life forms because we are not trying to exactly simulate biological intelligence... rather, we are taking inspiration from nature and implementing aspects of it in a fresh "thinking machine" framework.

#10 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 06:36 AM

As to emergence, this is a slippery topic, much like consciousness.

The best answer I have is that we are implementing NovaBaby in a virtual environment where we are seeking incremental progress toward fairly specific problem solving. We give NovaBaby a task and it tries different ways to reach the goal. For example: "NovaBaby, find the bunny!" In so doing, it learns how to move and pick up the bunny. As it gets smarter, we give it more and more general problems, such as "NovaBaby, put the bunny behind the box." In so doing, NovaBaby learns abstract concepts like "behind".
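
As a hedged sketch of this kind of trial-and-reward training (hypothetical, not NovaBaby's implementation), here is a tiny grid world in which an agent learns which moves bring it to the bunny:

```python
import random

SIZE, BUNNY = 5, (4, 4)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
Q = {}  # (state, action) -> learned value

def step(state, action):
    dx, dy = ACTIONS[action]
    new = (min(max(state[0] + dx, 0), SIZE - 1),
           min(max(state[1] + dy, 0), SIZE - 1))
    return new, (1.0 if new == BUNNY else 0.0)   # reward: found the bunny

for _ in range(2000):                            # training episodes
    state = (0, 0)
    while state != BUNNY:
        if random.random() < 0.2:                # explore...
            action = random.choice(list(ACTIONS))
        else:                                    # ...or exploit what's learned
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        new, reward = step(state, action)
        best_next = max(Q.get((new, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + 0.5 * (reward + 0.9 * best_next - old)
        state = new
```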


#11

  • Lurker
  • 1

Posted 25 June 2006 - 06:41 AM

In what ways is Novamente's mathematical knowledge model approach superior to traditional representations of neural networks, i.e. having input, hidden (described by functions such as sigmoids or Gaussians), and output layers?
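
For reference, the "traditional" representation the question describes -- a net with an input layer, a sigmoidal hidden layer, and an output layer -- can be sketched in a few lines (the weights below are arbitrary examples):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # Input layer -> sigmoidal hidden layer -> linear output layer.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

# Usage: 2 inputs -> 3 hidden units -> 1 output.
print(forward([0.5, -1.0],
              [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]],
              [[0.5, -0.6, 0.9]]))
```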

#12 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 06:43 AM

In AI, the hope has been to build into a program a set of rules that enable self-programming. In this way a program is started, provided with a means to acquire data, and left to run and evolve into a more complex version of itself. It sounds sensationally simple in theory but continues to elude us in practice.

True, but no reason to think it can't be solved... :)

In fact, we think we have the solution!

#13 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 06:49 AM

In what ways is Novamente's mathematical knowledge model approach superior to traditional representations of neural networks, i.e. having input, hidden (described by functions such as sigmoids or Gaussians), and output layers?

Quote: "Regarding knowledge representation, we have chosen an intermediate-level atom network representation which somewhat resembles classic semantic networks but has dynamic aspects that are more similar to neural networks. This enables a breadth of cognitive dynamics, but in a way that utilizes drastically less memory and processing than a more low-level, neural network style approach. The details of the representation have been designed for compatibility with the system’s cognitive algorithms."
http://www.novamente...file/AAAI04.pdf

I must get some sleep now, but I plan to answer this last question more adequately tomorrow...

#14 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 25 June 2006 - 02:13 PM

Excellent, and very thought-provoking article. I'm going to have to give it some time to mentally digest, as I'm forcibly pulled away from the computer for the day. However, I had to fire off a comment on one thing before that happens.

There is also reason to doubt whether it will be possible to make humans that are dramatically more intelligent than current humans while still retaining their fundamental human-ness, any more than a dog with a 180+ IQ would still be a dog in any fundamental sense.


I think the one major example we have of a rapid increase in intelligence, humans, suggests it would still remain a dog. For all that we've managed, we're still very much apes when it comes to behaviour. Chimpanzees and humans have had a fair amount of evolutionary separation from their common ape ancestor. Yet even with the continued evolution along different paths, and the significant difference in intelligence possessed by humans, I think it's fairly obvious on close examination how similar the behaviour of our two species is. To be sure, our species has behavioural differences from chimpanzees. But for the most part, almost every trait that's important to the chimpanzee is maintained and mirrored in humans as well. So far, it seems to take rather huge intellectual leaps to modify even small amounts of instinct. And even that seems to come more as a tempering than an outright bypassing of aspects of apeness. It's really somewhat uncertain whether that happens as a result of intelligence itself, or by natural selection, particularly sexual selection, within the population at large. Admittedly, that would still make it a byproduct of intelligence. Significant, but were it true, then of little use for predicting the effects of rapid artificial change in intelligence or the creation of new intelligence.

#15 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 25 June 2006 - 08:33 PM

The graph above outlines my opinion on this matter quite nicely.

There is a very serious age related perspective to the argument and it is based on the cost/benefit ratio of long term versus short term solutions.

Folks that are in my age bracket, or even a few decades behind, need the biotech revolution possible in the short term in order to reach any long-term possibility AGI might provide. We can't get there from here otherwise. And yes, that discounts the current approach to cryo, because it mandates a whole set of alternatives that can at best only be assumed to be possible.

I won't address the Singularity debate, as I think people vastly underestimate what is involved and overestimate the timeline for such an occurrence.

Regarding the possibility that advanced AI offers as a kind of 24/7 partner in the quest for biotech solutions, I think we are underestimating the importance of programs like Novamente. With the help of the advanced computational power that is theoretically possible even in the short term, a lot more modeling of biotech solutions becomes probable.

However, the appeal of this idea is next to nil with the general public, and even less so the older they are. If arguments over abstractions like *escape velocity* are made the central focus, then we are basically "pissing in the wind", to be quite frank. These are ideas that are way too esoteric and will continue to be so for some time to come. If we are seeking outreach we do need different packaging, but that is not really the gist of this thread. We must learn both to package ideas in language that more people can understand and to do a far better job of sorting out long- and short-term objectives and staying on track.

Lastly, IMHO, AGI should not be analyzed without also emphasizing the importance of BCI, a technology that does appear to parallel AGI's rate of advancement. Once combined, AGI and BCI offer the potential to sidestep, or at least leapfrog, biotech alone as a means of going beyond today's more obvious restrictions on longevity. So these are two pragmatic ways that AGI in the shorter term will benefit our objectives; neither is exactly what you are promising, Bruce, though both are important and deserve to be a part of the discussion.

#16 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 09:05 PM

As this may not be known to most ImmInst members, it's worth highlighting a current connection between AGI and biomedical advancement... although via a more specialized (narrower) approach...

Biomind LLC uses Novamente technology: software based on machine learning algorithms integrated with biological databases. These algorithms learn "classification models" (nonlinear mathematical rules) which then explain experimental results.

For example, genes, gene combinations, gene ontologies, and protein families are highlighted if they appear more frequently in these classification models. Such results reveal the nonlinear interactions among genes (gene features) and yield diagnostic biomarkers.
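
A hedged sketch of that idea (my own reconstruction, not Biomind's code; the use of scikit-learn is an assumption of convenience): train many classification models on expression data and rank genes by how often they land among a model's most important features.

```python
from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def candidate_biomarkers(X, y, gene_names, runs=50, top_k=10):
    counts = Counter()
    for seed in range(runs):
        model = RandomForestClassifier(n_estimators=50, random_state=seed)
        model.fit(X, y)
        # Count the genes this model leans on most heavily.
        for i in np.argsort(model.feature_importances_)[-top_k:]:
            counts[gene_names[i]] += 1
    return counts.most_common(top_k)

# Usage with synthetic data: 40 samples, 100 "genes", 2 of them informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))
y = (X[:, 3] + X[:, 17] > 0).astype(int)
print(candidate_biomarkers(X, y, [f"gene_{i}" for i in range(100)]))
```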

So in other words, Novamente technology is currently being applied to solve biomedical problems!

Biomind has already applied its algorithms to gene expression (microarray) analysis, SNP and DNA association studies, and Gene Ontology enhancement. ArrayGenius, a Biomind product, is licensed to the NIH's National Institute of Allergy and Infectious Diseases and to the Centers for Disease Control and Prevention for bioinformatics data analysis.

We've not yet had a chance to focus on biomarkers for aging (e.g. CR data), but we've been actively trying to get our hands on this kind of data. In our experience, many scientists are reluctant to share data and can take a long time to convince of the viability of this approach...
http://www.biomind.com (Novamente's sister company)

#17 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 25 June 2006 - 09:45 PM

(hankconn)
How can we be sure that you won't kill everybody with this thing?


The problem, Hank, is: when in history has this argument against progress ever been valid, given that it is always predicated on maintaining ignorance of an *unknown*?

Giving in to fear of progress inevitably does more harm than good. It fails to prevent problems, and it usually weakens those who impose the restrictions on themselves beyond their ability to adapt to change and adversity.

The risks of every advanced tech of this level are great, but the potential benefits are even greater. What is more important is the development of safeguards and standards that allow the tech to go forward in a reasonable, transparent manner.

The truth is that none of us can stop these developments. All we can do is impose pressure on governments and interest groups, which will make these developments move underground into more clandestine environments. The results of that approach are almost guaranteed to be far more hazardous to the health of humanity, as it means development will be aggressively focused on weaponry rather than healing.

I suggest that we get a good grip on the tiger's tail, Hank, because we are all in for the ride of our lives.

#18 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 June 2006 - 09:52 PM

Hank, we are seeing a slow enough "takeoff" to allow simulation and experimentation to provide adequate information to help us guide the development of safe AGI.


Also, spending excessive time theorizing about safe AGI is itself dangerous when other groups are already working towards AGI. Novamente is implementing a smart blend of theory and application to ensure a safe Singularity.

#19 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 25 June 2006 - 10:41 PM

However, before you ask, please read:

Artificial General Intelligence and its Potential Role in the Singularity
Ben Goertzel, PhD, June 2006

Thanks!


That is a lot of pretty complex information; I need a week or so to try to digest all of that information.

I like that the Novamente architecture is displayed similarly to the way microprocessor companies display their processing technologies when first explaining how they are used; I can understand a lot of computer language thanks to my previous computer obsession.

Novamente:

http://www.agiri.org...es/image026.gif

AMD:

http://www.anandtech...aspx?i=2768&p=1

Can't wait until the first Novamente brand CPU is introduced.

#20 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 June 2006 - 11:19 PM

Laz:

If arguments over abstractions like *escape velocity* are made the central focus then we are basically "pissing in the wind" to be quite frank. These are ides that are way too esoteric and will continue to be so for some time to come. If we are seeking outreach we do need different packaging but that is not really the gist of this thread. We must learn to both package idea in language that more people can understand and also do a far better job of sorting out long and short term objectives and staying on track.

This thread isn't the right place to discuss this, but you just reminded me of something. In keeping with the physics concept of escape velocity, I'm reminded of the debates over rocketry in the early part of the last century. There were those who looked at the massive amounts of fuel required to lift a rocket and said we wouldn't get to space because the rocket would have to be overwhelmingly large. However, it was found that multi-stage rockets (two and three stages) could overcome this problem to a certain degree, by having the first stage lift the "payload" quite a ways up and impart a large fraction of the necessary velocity. Then the second stage would kick in while the rocket was already in flight and provide the final burn to make it to orbit. For actual escape velocity, a third stage might even be needed.

SENS is like the first stage. It just gets us moving. It won't get us to the moon, and probably not even to orbit (whatever the biological equivalents might be), but it gets us moving. It might add 15-30 years, depending on how successful it is. As long as the second stage is developed--and we'll have 15-30 years to come up with that second stage!--we'll make it. We might even need a third stage, but we'll have another 20-30 years, plenty of time.

Edit: not the "rebates over rocketry", of course!

#21 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 25 June 2006 - 11:24 PM

And what about those who might say that, even if SENS adds 20 years to lifespan, it'll take even longer, maybe 30-40 years, to significantly build on that?

Well, think about it. If SENS adds 20 years and it takes 40 years to make the next significant breakthrough (far-fetched, but...), then it still saves lives. What if SENS hadn't been developed? Forty years' worth of people would die before that next big breakthrough. With SENS, only 20 years' worth wouldn't make it. It still saves lives, even if it's not enough for escape velocity for everybody. Adding years, and especially decades, will mean some large number of people "make it", even if not everybody does. Ten years' worth of people who would otherwise have died is half a billion people! 20-30 years could mean a billion lives saved, or more!
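
A quick back-of-the-envelope check of those figures, assuming roughly 55 million deaths per year worldwide (that rate is my assumption):

```python
# Deaths postponed if a breakthrough buys everyone extra years,
# assuming ~55 million deaths per year worldwide (an assumption).
deaths_per_year = 55_000_000
for years_gained in (10, 20, 30):
    print(f"{years_gained} years gained -> "
          f"{years_gained * deaths_per_year / 1e9:.2f} billion deaths postponed")
# 10 years -> 0.55 billion, in line with "half a billion people".
```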

#22 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 12:26 AM

For example: "NovaBaby, find the bunny!" In so doing, it learns how to move and pick up the bunny.

This is very exciting! I can't wait for the Baby to put me out of my job already [thumb]

How can we be sure that you won't kill everybody with this thing?

Unless bio people make some rapid progress on aging, one would kill everybody by not trying to build this thing.

#23 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 26 June 2006 - 12:32 AM

How can we be sure that you won't kill everybody with this thing?

Unless bio people make some rapid progress on aging, one will probably kill everybody by not trying to build this thing.

However, if an unFriendly singularity comes to pass, we may end up preventing the births of trillions, even quadrillions, of humans. If we delay a Friendly singularity by a decade, we may end up preventing half a billion people from "making the cut". In my mind, the risk of an unFriendly singularity takes precedence over the risk of delaying a Friendly singularity. We must proceed cautiously.

The wild card, of course, is proceeding so cautiously that someone else sparks an unFriendly singularity before you can spark a Friendly one.

#24 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 26 June 2006 - 12:51 AM

Can we get some rough roadmaps with hoped-for dates for the releases of these AGI technologies? I have not read the whole article yet, so I might have missed it...

#25 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 26 June 2006 - 01:33 AM

Adam, we currently have 23 votes by AGIRI members on the timeline for when AI will surpass human-level intelligence. The average is around 2025. We also have a list of 19 projects currently working to build AGI.

We (Novamente) estimate that we can reach human-level AGI within six years working at full force (about a dozen programmers). We currently have two full-time and one part-time paid programmers, plus a few other intermittent/volunteer programmers.

#26 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 26 June 2006 - 01:53 AM

Hank: How do you define adequate information? What I'm asking is: what specific things are you taking into consideration when you decide "This probably won't kill us"?

Hank, I share your concern. One of the main reasons I decided to focus on AGI is to save the world from unfriendly AI. It's quite difficult to make practical decisions now about specific actions that will better ensure safety at some future point... but we have created an outline called Safety Guidelines for AGI & Singularity, which covers more than 20 topics such as AGI constraints (boxed, unboxed), training, ethics, and business risks (security, governmental, environmental, etc.). But most importantly, our main mission is as follows: Novamente's ultimate aim is to create an artificial general intelligence (AGI) software system that will help entities of lower levels of intelligence safely transcend to higher levels of intelligence.

#27 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 26 June 2006 - 02:14 AM

Here's a visual that just came to mind: [image]

#28 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 26 June 2006 - 02:23 AM

Some futurist thinkers with a less optimistic bent than Kurzweil have asked some difficult questions:  Couldn’t acceleration of technological and scientific advancement be dangerous – potentially posing an “existential risk” of annihilating humanity in toto?  Couldn’t a rogue AGI turn against us – perhaps taking inspiration from one of the numerous science fiction movies and novels with this theme?


Computer viruses exist and can destroy elaborate networks, causing catastrophic damage... we had better have some damn strong anti-virus programs!

I can't even understand the Core vs. K8 architecture...

#29 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 26 June 2006 - 06:58 AM

Computer viruses exist and can destroy elaborate networks, causing catastrophic damage... we had better have some damn strong anti-virus programs!


The comment about a virus in AI, and exponential growth on top of it, reminds me of System Shock.

"With only a few short years of evolution, they've been able to conquer this starship, mankind's mightiest creation. Where were we after forty years of evolution? What swamp were we swimming around in, single celled and mindless? What if SHODAN's creations are superior to us? What will they become in a million years, in ten million years? What's clear is that SHODAN shouldn't be allowed to play God. She's far too good at it."


I have to wonder, though, if viral susceptibility would be such a bad thing in the long run. Humans, certainly, have profited as a species from our susceptibility to parasitic infection. Organelles were definitely a good pickup along the evolutionary road. And if we're focusing on the mind, many forms of mental illness bring with them a predisposition for positive aspects whose strength might very well justify the continued presence of the malfunction. Just speculation, but it does drive home the point that the waters ahead are pretty murky.


#30 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 07:00 AM

one will probably kill everybody by not trying to build this thing

if an unFriendly singularity comes to pass, we may end preventing the births of trillions, even quadrillions of humans.

We would do the same by curing death. What difference does birth control or mass extinction make to the unborn?



