  LongeCity
              Advocacy & Research for Unlimited Lifespans





Viability of AGI for Life Extension & Singularity


249 replies to this topic

#31 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 07:21 AM

Here's a scenario: AI decides that humans are so wicked, fraught with suffering, stupid and dull in the scope of their consciousness that it values us like we value bacteria. It wipes everyone out painlessly and uses all available resources to support AIs who are infinitely kinder, happier, smarter and more intensely aware than we are. The only alternative is that the level of kindness, happiness, intelligence and consciousness of the AIs is never achieved. (As it is "quantitated" by Bruce's first graph)

(1) Is this outcome good?
(2) Given AGI, is it likely?

I would propose yes and yes, any takers?

#32 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 26 June 2006 - 08:21 AM

one will probably kill everybody by not trying to build this thing

if an unFriendly singularity comes to pass, we may end up preventing the births of trillions, even quadrillions of humans.

We would do the same by curing death. What difference does birth control or mass extinction make to the unborn?

Exactly. Therein lies a common ethical error that is a source of much hostility toward extended lifespans: The belief that potential existence of entities has precedence over the well-being of entities that do exist. Failed care hurts people. Failed procreation may hurt expectations, but not people.


#33 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 26 June 2006 - 10:55 AM

(1) Is this outcome good?
(2) Given AGI, is it likely?

I would propose yes and yes, any takers?


No and no. The failure scenario you propose is way too close to success to be probable. If you throw a dart at a dartboard, and fail to hit the bulls-eye, it's way more likely that your miss will be somewhere else on the dartboard besides immediately next to that bulls-eye.

A failed AGI may certainly value us just as much as we value bacteria, but will it do so because it "decides that humans are so wicked, fraught with suffering, stupid and dull in the scope of their consciousness" or just because the goal-state it's trying to manipulate reality into happens to fall slightly out of the (tiny) domain that contains self-determining humans with continuous identities? The latter is more probable.

It was mentioned earlier that birth control isn't the murder of minds-that-could-have-been. By the same token, killing a person for the benefit of a mind-that-could-be isn't acceptable. What matters is the people in the here and now, and securing their self-determination rights into the future.

Given an AI with the goal system you're talking about, what's to stop it from wanting to continuously recreate new beings based on the latest mindcraft techniques, wiping the old variants at every step of the way? We could be left with a huge universe of disjointed agents with no continuous personalities or memories.

It's better to have an AI do what you're talking about than filling the universe with dumb computronium or whatever, but in practice I think that if you can build an AI that fails in this highly complex way then you can probably just build an AI that doesn't fail at all.

#34 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 26 June 2006 - 12:51 PM

bgwowk:

John Schloendorn:

jaydfox:

John Schloendorn:

one will probably kill everybody by not trying to build this thing

if an unFriendly singularity comes to pass, we may end up preventing the births of trillions, even quadrillions of humans.

We would do the same by curing death. What difference does birth control or mass extinction make to the unborn?

Exactly. Therein lies a common ethical error that is a source of much hostility toward extended lifespans: The belief that potential existence of entities has precedence over the well-being of entities that do exist. Failed care hurts people. Failed procreation may hurt expectations, but not people.

I think both of you missed my point, but perhaps it's because I used the unborn, so you took creative license with my intent.

Suppose we don't cure aging, and humanity survives the next million years in much its present form (I know, completely ridiculous, but I'm appealing to the masses here). What happens? Trillions of people are born and die.

What happens if we cure aging and stop having children (to prevent overpopulation), a condition lasting for millions of years, as the masses apparently fear?

Nobody new is born, but trillions of human-years of life continue to take place on Earth.

What happens if an unFriendly AGI sparks a hostile hard takeoff? Humanity is wiped out or enslaved (probably the former; I threw in the latter for its appeal to the masses [not appeal as in they want it to happen, but appeal as in they think it might]), and we don't have any human-years of life to look forward to.

There's a huge difference between saying no new children will be born, but human life continues, and no new children will be born, because humanity is extinct.

I admit I made it sound like I was focussing on the unborn, but I hope it's clear I meant human life in any form, alive today or not. I only brought up the unborn because I don't seriously expect that people will stop having children once we've cured aging.

#35 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 26 June 2006 - 05:31 PM

I admit I made it sound like I was focussing on the unborn, but I hope it's clear I meant human life in any form, alive today or not. I only brought up the unborn because I don't seriously expect that people will stop having children once we've cured aging.

Okay, understood. Such misunderstandings can be avoided by framing the question as how to maximize the health and life expectancy of people now living. Do nothing, everybody will die in short order. Build life-extending technologies with species existential risks, and everybody may die in short order. The key point is that "everybody" means real people affected, not hypothetical people.

As a practical matter, I don't believe we really need to make this choice, because only a really stupid society would implement AGIs without checks and balances along the way to minimize risk of harm. Furthermore, even without checks and balances, humans are still an essential part of the industrial "food chain" needed to support and perpetuate the hardware AGIs need to exist, and will continue to be so for decades to come. With industry as it currently exists, no AGI could hurt the human economy without hurting itself.

#36 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 26 June 2006 - 06:33 PM

In line with Hank and others, it seems likely that AGI will outpace human augmentation in level of intelligence... thus the need for careful consideration of safety. Here's the idea I have in graph form:

Posted Image

#37 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 08:39 PM

What happens if an unFriendly AGI sparks a hostile hard takeoff? Humanity is wiped out

Not just that, it is also replaced with AIs that may have what we consider incredibly desirable attributes. I don't care at all if future people are AI or human, and would opt for the scenario with more total utility. As long as only the unborn are taken into consideration, I see nothing wrong with unconditional utilitarianism.

#38 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 08:43 PM

It's better to have an AI do what you're talking about than filling the universe with dumb computronium or whatever

I guess what I was suggesting is that unaugmented or moderately augmented humans might exactly qualify as "dumb computronium" in comparison with such an AI... To what degree this is so seems to depend mostly on the slope of Bruce's human enhancement curve... Is it more like the first, or more like the most recent graph?

#39 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 26 June 2006 - 11:40 PM

To what degree this is so seems to depend mostly on the slope of Bruce's human enhancement curve... Is it more like the first, or more like the most recent graph?

John, I haven't a clue how this will turn out... but I'm going to do everything I can to make things more like the most recent graph. As mentioned earlier... Novamente’s ultimate aim is to create an artificial general intelligence (AGI) software system that will help entities of lower levels of intelligence safely transcend to higher levels of intelligence.

#40 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 11:43 PM

It's relative to the utility function.

Yes. Since we do not perfectly match what my "utility function" (and other commonly held utility functions) value, I feel that replacing us with something that matches it much better has some desirability.

#41 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 26 June 2006 - 11:44 PM

Heh, Bruce, that may be reassuring to those of us who prefer not to be replaced ;-) I think what the graph looks like would indeed be up to the AI. This would again raise the question of how well one can determine the will of a recursively self-improving entity without pissing it off or otherwise being self-defeating...

#42 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 27 June 2006 - 07:11 AM

Novamente’s ultimate aim is to create an artificial general intelligence (AGI) software system that will help entities of lower levels of intelligence safely transcend to higher levels of intelligence.


That's pretty harsh for me to take...what if the machines consider me to have a pretty "low" level of intelligence? What will I have to do to upgrade? Does it require part of my brain to be a machine?

Would data be "Downloadable" or something to the point where my "intelligence" would be a factor of how much data I've downloaded? What you are saying, if I am hearing it correctly, is that you are producing a real "nootropic" effect, without a pill. Maybe nootropic is the wrong term..

#43 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 27 June 2006 - 07:16 AM

Heh, Bruce, that may be reassuring to those of us who prefer not to be replaced ;-)


I've referenced The Animatrix before (check these short animes out if you have not seen them)...your body, in one possible scenario, could be the energy source for the machines...if we humans do it "wrong."

#44 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 27 June 2006 - 06:56 PM

That's pretty harsh for me to take...what if the machines consider me to have a pretty "low" level of intelligence?

When you were born, your parents probably felt the same way about you, no? They saw potential enough to care for you. If we get the code right, we will create an AGI with a similar innate propensity to care for us.

What will I have to do to upgrade?

It will be a mix... genetics then nanotech, etc... if you want to be early, start saving, it'll be expensive.

Does it require part of my brain to be a machine?

You are already a biological robot and your mind is encapsulated inside an articulated machine.

We like being human because we are used to it. This comfort is superficial, for we spend virtually our entire lives evading knowledge of our inner workings... Perhaps what people are really afraid of is being in a hard-skinned, cold-bodied machine with little in the way of senses and sensuality to stimulate the mind within. But what if the cyberbody was a warm, energized, super-sensual morphing device of graceful complexity and beauty, inside and out? In this regard, the human form will probably come to be seen for the articulated clunker that it really is. [Pg 345 "Beyond Humanity"]

Would data be "Downloadable" or something to the point where my "intelligence" would be a factor of how much data I've downloaded?

In general, yes, but the key to intelligence is not how much data, but how optimally one uses the data.

What you are saying, if I am hearing it correctly, is that you are producing a real "nootropic" effect, without a pill. Maybe nootropic is the wrong term..

Correct.




#45 amar

  • Guest
  • 154 posts
  • 0
  • Location:Paradise in time

Posted 27 June 2006 - 07:07 PM

It's unfortunate that the military is one of the primary advocates of artificial intelligence, because the military hasn't been very intelligent, or at least it hasn't been very wise. Artificial intelligence should be developed so that it can neutralize any threats, but not become a threat itself. That should be the sole function of the military too, but unfortunately, judging by human history, the highest degree of military wisdom seems to echo the X-Man Wolverine's philosophy: "The best defense is a good offense," which is actually kinda stoopid. AGI will be subject to our own human intentions, especially at first, and I just hope that human will itself becomes radically more benevolent.

I have some doubts that the singularity will be accomplished within a few years by a small band of lowly funded programmers. They might be unrealistically optimistic. Human intelligence is extraordinarily complex. How far along is it? Can it recognize simple objects? Sure, it might be able to recognize and pick up a virtual bunny, but if it saw a video of a real bunny, would it be able to recognize that it's a bunny? It would have to be programmed to recognize many different shapes and forms, many different nuances of language, and many different nuances of action before it comes one whit close to full human intelligence. The singularity could be lifetimes away.

I get the feeling that unrealistic optimism abounds, but I'm still interested and intrigued because I know that a speck of real hope lies at the bottom of Pandora's box. How truly advanced is this Novamente project?

#46 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 27 June 2006 - 08:27 PM

I have some doubts that the singularity will be accomplished within a few years by a small band of lowly funded programmers. They might be unrealistically optimistic. Human intelligence is extraordinarily complex. How far along is it?

We are about 40% along... as mentioned earlier, we estimate it will take 6 years w/ a full-time staff (about a dozen programmers) to reach human-level AI.

It's worth mentioning that many individuals on the Novamente team have been working together on AGI for some time. WebMind, an AGI company founded by Ben Goertzel in 1998, at one point had more than 150 employees working to create a thinking machine. A casualty of the dotcom crash, WebMind went bankrupt in 2001. Ben writes about this in Waking Up from the Economy of Dreams -or- The Intricate and Peculiar Torture of Taking One’s Tech Company Bankrupt.

Can it recognize simple objects?

To be clear, NovaBaby can "recognize" tasks and then strive to complete them. Each time it tries to complete a task, it "recognizes" more successful ways to do so.

So, to answer your question more directly, our task for NovaBaby is to pick out a specified object within a group of objects or a landscape… then scale up the difficulty as NovaBaby grows in intelligence.

Sure it would be able to recognize and pick up a virtual bunny, but if it saw a video of a real bunny, would it be able to recognize that it's a bunny? It would have to be programmed to recognize many different shapes and forms, many different nuances of language, and many different nuances of action before it comes one wit close to full human intelligence.

Rather than create a program with millions of hard rules to recognize shapes, etc... Novamente uses what is called Self-Modifying Evolving Probabilistic Hypergraphs (SMEPH) to learn what a shape is, etc...

Here is one aspect of the idea which highlights the probabilistic part:

Posted Image

This is a natural language processing example, but even here you can see how the system gives each item a probability (.6, .95) rather than making it a hard rule... so imagine this approach multiplied millions of times over many different things... names, shapes, abstract concepts, etc... and you start to get a feel for how the Novamente system relies on probabilities rather than hard rules.
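To make the probability-versus-hard-rule idea a bit more concrete, here is a toy sketch in Python (purely illustrative, my own example rather than Novamente's actual SMEPH data structures): each relation carries a strength instead of being a hard rule, and new evidence revises that strength rather than rewriting the rule.

from dataclasses import dataclass

@dataclass
class Link:
    relation: str      # e.g. "isa", "part_of"
    source: str
    target: str
    strength: float    # probability-like truth value in [0, 1]
    count: float = 1.0 # how much evidence supports this link

    def update(self, observed: bool, weight: float = 1.0) -> None:
        # Revise the strength toward the new evidence (simple weighted average).
        total = self.count + weight
        hit = 1.0 if observed else 0.0
        self.strength = (self.strength * self.count + hit * weight) / total
        self.count = total

# A tiny knowledge base of uncertain links, mirroring the .6 / .95 example above.
knowledge = [
    Link("isa", "bunny_37", "rabbit", strength=0.6),
    Link("isa", "rabbit", "animal", strength=0.95),
]

# Naive chained inference: bunny_37 is an animal with strength roughly 0.6 * 0.95.
chained = knowledge[0].strength * knowledge[1].strength
print("P(bunny_37 isa animal) =", round(chained, 2))

# A new observation strengthens the uncertain link instead of adding a hard rule.
knowledge[0].update(observed=True)
print("updated P(bunny_37 isa rabbit) =", round(knowledge[0].strength, 2))

The point of the sketch is only the shape of the approach: everything is a weighted link that gets revised in the light of evidence, never a brittle if-then rule.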

How truly advanced is this Novamente project?

Because I see us as having the leading AGI design, I estimate Novamente to be at least a few years ahead of other AGI system-building projects.

#47 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 27 June 2006 - 08:44 PM

Here is the Novamente project plan. Sorry it is hard to read, but it was deliberately made that way. There are 70 tasks that need to be completed in a specific order for us to meet the 6-year target for human-level AI.

Posted Image

#48 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 28 June 2006 - 04:07 AM

Of course, in the present state of the financial markets, the idea of starting a company with a goal like this would get you laughed out of any conversation with any serious businessperson.  Creating a thinking machine, and then commercializing it?  Well, fine, but how are you going to make money while you’re creating the damn thing?


This is a good point. Being the first to develop such an advanced technology, without a way to profit from it during its development, is going to cause problems for many investors with little patience.

You would also need investors with a true appreciation and understanding of the many innovations brought to us by the advance of the computer. Jeez, what isn't computerized yet? It's really only a question of time before the type of AI that can be implanted in humans exists.

I would assume the greatest potential for investment for future advanced AI technologies would be engineers who understand the way computers actually work. If I understood enough about the way computers work, and how Novamente works, I would assume my best bet to gather capital would be Intel and AMD's work force; maybe spread Novamente around University engineering departments to interested students?

#49 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 28 June 2006 - 06:07 AM

I would assume the greatest potential for investment for future advanced AI technologies would be engineers who understand the way computers actually work.  If I understood enough about the way computers work, and how Novamente works, I would assume my best bet to gather capital would be Intel and AMD's work force; maybe spread Novamente around University engineering departments to interested students?


The big pitfall there can come from their very understanding. People with advanced study in computer science, as a rule, tend to not have much background in biology or the cognitive sciences. So quite often they wind up with an image of computers as nothing special, and the human brain as something magic. Not all the time, to be sure. But it's a viewpoint I've seen a surprising number of times.

#50 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 28 June 2006 - 06:04 PM

To highlight the connection between Novamente and biomedical work, Ben Goertzel gave his ImmInst 2005 Conf talk on a number of projects Novamente is working on to advance biomedicine... such as our work to understand the genetics of Parkinson's disease, our work to understand the differences between old and young brains, and our work on BioLiterate, an AI-enhanced biomedical text-mining application that helps scientists find relationships (direct and inferred facts, premises for inferred facts, and related articles) between chemicals, genes, pathways, proteins and agents.

Posted Image
Ben talking about Novamente & biomedical work at ImmInst's 2005 Conf:
http://video.google....505614870506496

Posted Image
Here's a 2 minute movie of BioLiterate (a Novamente product):
http://www.novamente...literate/movie/
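For readers who want a feel for what "finding relationships" in biomedical text means in practice, here is a deliberately simple Python sketch (hypothetical, far cruder than the actual BioLiterate product, with made-up entity names): scan sentences for known entities and record each co-occurring pair as a candidate relationship, keeping the sentence as evidence.

import itertools
import re

# Hypothetical mini-lexicon of biomedical entities and their types.
ENTITIES = {
    "alpha-synuclein": "protein",
    "PARK2": "gene",
    "dopamine": "chemical",
    "apoptosis": "pathway",
}

SENTENCES = [
    "Mutations in PARK2 are associated with altered dopamine metabolism.",
    "Aggregation of alpha-synuclein may trigger apoptosis in dopaminergic neurons.",
]

def find_entities(sentence):
    # Return the known entities mentioned in a sentence (case-insensitive match).
    return [(name, kind) for name, kind in ENTITIES.items()
            if re.search(re.escape(name), sentence, re.IGNORECASE)]

candidate_relations = []
for sentence in SENTENCES:
    mentioned = find_entities(sentence)
    # Every co-occurring pair becomes a candidate relationship to rank later.
    for (a, _), (b, _) in itertools.combinations(mentioned, 2):
        candidate_relations.append((a, b, sentence))

for a, b, evidence in candidate_relations:
    print(a, "<->", b, "| evidence:", evidence)

A real system would layer inference on top of these direct co-occurrences (the "inferred facts" and "premises" mentioned above), but the basic pipeline of entities, candidate links and supporting evidence is the same.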

#51 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 June 2006 - 06:28 AM

Thanks, Marc!

You may enjoy Hugo de Garis' debate with Penrose here. Hugo has recently joined the Novamente team.

#52

  • Lurker
  • 1

Posted 29 June 2006 - 07:19 AM

Posted Image
Here's a 2 minute movie of BioLiterate (a Novamente product):
http://www.novamente...literate/movie/


Is there a version of this product available for a trial, Bruce?

#53 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 June 2006 - 02:51 PM

No trial of BioLiterate... we are currently shopping it to the NIH. However, we'll be happy to demonstrate it to any company with an interest. One can fiddle with a non-working version of the product via our webpage here, which currently ONLY works in Internet Explorer: http://www.novamente.net/bioliterate (the tables break in Firefox and Opera).

Also, we have a public-web-access version of ArrayGenius OnDemand from BioMind (Novamente's sister company), which is a product for AI- and ontology-based microarray data analysis. One just needs to register for a password here first.

#54 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 June 2006 - 03:08 PM

By the way, when I say "sister company", I mean that many programmers who work at BioMind have also worked with Novamente... and that BioMind relies on Novamente software as used in ArrayGenius... which has been licensed to the NIH's National Institute of Allergy and Infectious Diseases and the Centers for Disease Control and Prevention for bioinformatics data analysis.

#55 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 June 2006 - 05:41 PM

For those who'd like to see an excellent overview of AGI, please see Peter Voss' presentation during Terasem's Nov. 2005 Colloquium on the Law of Transhuman Persons.

It's worth mentioning that while I have great respect for Voss' work, after comparison, I've found Novamente to have a greater depth of programmer talent and, more importantly, the most advanced approach and architecture for creating AGI currently.

Posted Image
http://video.google....407586383523968

ImmInst's Susan Fonseca-Klein and Sebastian Sethe (Caliban) both presented at this event as well.

#56 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 29 June 2006 - 06:05 PM

Bruce, congratulations on bringing Hugo on board as an advisor! Does Novamente plan to use evolvable hardware in the future, or does it currently?

I have read the Novamente literature you wrote but have yet to read the pdf on hypergraphs. So far my position is very skeptical - how can you guys break down the sub-parts so cleanly, and say for certain each one will take a certain number of months, and at the end of the process you'll have human-level AGI without a doubt? I would tend to think that this sort of extreme certainty would scare away investors. I admire the work you have put into it, though.

I've written a blog post mentioning this thread here. Feel free to comment on it if anything comes to mind.

Great videos you are linking! It's amazing how video-enriched the Internet has become in the last year or two.

#57 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 June 2006 - 07:27 PM

Thanks, Michael.

Does Novamente plan to use evolvable hardware in the future, or does it currently?

We are not currently using Prof. de Garis' evolvable hardware, but we do have plans to. FPGAs should provide considerable speed improvements in the evolutionary learning components of Novamente. Implementation would look like this, with Prof. de Garis' part in purple:

Posted Image
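To give a rough sense of why evolutionary learning is a natural target for hardware acceleration, here is a toy Python sketch (my own illustration, not Novamente code): nearly all of the runtime sits in the fitness-evaluation loop, which is exactly the step an FPGA co-processor could evaluate for a whole population in parallel.

import random

POP_SIZE, GENOME_LEN, GENERATIONS = 50, 32, 20
TARGET = [1] * GENOME_LEN  # toy target pattern

def fitness(genome):
    # The expensive inner step; in a real system this would evaluate a
    # candidate program or procedure, here it just counts matching bits.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # This map over the whole population is the part worth pushing into hardware.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP_SIZE // 2]
    children = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", GENOME_LEN)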

How can you guys break down the sub-parts so cleanly, and say for certain each one will take a certain number of months, and at the end of the process you'll have human-level AGI without a doubt? I would tend to think that this sort of extreme certainty would scare away investors. I admire the work you have put into it, though.

For an explanation of the "6 years to human-level AI" figure, please see Ben's reply here.

We are able to segment the project into discrete tasks because we've invested considerable time thus far and have a good understanding of how much more is needed. With that said, there are no guarantees; as Ben replied earlier, it's not 6 months and it's not 25 years either.

#58 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 June 2006 - 08:00 PM

By the way, Ben's newest book, which provides the fundamental underpinnings for his work on Novamente, has just been published and recently came online at Amazon. The book is currently less than $30... considerably less than many of his other, more technical works, and it's a fun read nonetheless.

Posted Image

Ben has also uploaded many of his books, papers and essays to his personal website as well.

#59 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 29 June 2006 - 10:08 PM

Hank, we are seeing a slow enough "takeoff" to allow for simulation and experimentation to provide adequate information to help us guide the development of safe AGI.


I'm trying to catch up slowly with current AI developments. My experience with it is limited to some Lisp and Prolog programming in the past, so I'm kind of a dinosaur where AI is concerned at the moment.

Anyway, having highlighted my weaknesses with the subject, I also have some strengths. Given my experience with system integration of various types of information and automation systems, I know that verification and validation is a cumbersome process even with current simple and predictable systems.

I have two questions. Forgive me if my approach seems sceptical; I consider myself a "positive sceptic", but sometimes I find it quite difficult to strike the right balance in English.

1/ How are you planning to verify that AGI self-learning algorithms are stable, i.e. that they converge towards some goal, which might not even be entirely clear at the beginning? I assume that some set of meta-level rules on top of the highest layer of logical or functional abstraction is required to apply some form of ethical restriction.

2/ The practical implication of 1 is: how are you going to validate a set of algorithms, or even an integrated system, against a predefined set of life scenarios or challenges for the AGI? Are you able to test the effectiveness of the rules of ethics you implemented, even if you don't know how your "child" will develop?

For both of these related issues, I assume it is essential that human enhancement develops in pace with AGI development, in order for us to be able to understand what goes on in this machine. "We" must be the machine to get to know its powers, limitations and pitfalls. Or at least we need to have enhanced abilities very near the capabilities of such a machine to be able to judge its actions (and thoughts).

Or, in the nearer future, we humans as we are may simply be incapable of understanding all the implications of self-learning algorithms. Sometimes we even fail to understand our current simple "linear" if-then-else systems. And I know the developers of those systems are no fools either. :)
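To make the first of these questions concrete, here is a toy Python sketch (purely illustrative, and nothing to do with Novamente's actual verification machinery) of the two checks it asks about: monitoring whether a self-modifying learner's performance is converging rather than oscillating, and vetoing any update that violates a meta-level constraint.

import random

def performance(params):
    # Stand-in for "distance to the goal": lower is better.
    return sum((p - 0.5) ** 2 for p in params)

def violates_constraints(params):
    # Meta-level rule layered on top of the learning process; here just a bounds check.
    return any(p < 0.0 or p > 1.0 for p in params)

params = [random.random() for _ in range(5)]
history = []

for step in range(200):
    candidate = [p + random.uniform(-0.05, 0.05) for p in params]
    if violates_constraints(candidate):
        continue  # refuse any update that leaves the allowed region
    if performance(candidate) < performance(params):
        params = candidate
    history.append(performance(params))

# Crude stability test: has improvement flattened out over the last window?
window = history[-20:]
converged = (max(window) - min(window)) < 1e-3
print("converged:", converged, "| final performance:", round(history[-1], 4))

Whether such external checks can keep up once the system rewrites its own learning machinery is, of course, exactly the open question raised above.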


#60 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 29 June 2006 - 10:41 PM

Novamente’s ultimate aim is to create an artificial general intelligence (AGI) software system that will help entities of lower levels of intelligence safely transcend to higher levels of intelligence.

Hmm, my way of reasoning in the post above is the complete opposite of this. I assume we need to be on top of AGI developments to be able to judge its validity… [:o]



