  LongeCity
              Advocacy & Research for Unlimited Lifespans





Viability of AGI for Life Extension & Singularity


249 replies to this topic

#211 attis

  • Guest
  • 67 posts
  • 0
  • Location:Earth

Posted 11 October 2006 - 03:02 AM

Actually, the non-Turing operators can be emulated on a Turing system, insofar as you can produce meaningful information on a Turing machine. So an AGI, if it were to be classified as such, would have a few such tricks up its sleeve. One way to encode it could be the use of fractals to handle particular forms of data and code. Basically, fractals in this context would allow only one particular way for a program or data to be encoded, so if it's wrong the first time it will tell you. Or, if it somehow passes the encoding process, it won't decode the same way, and it will fault at the location in the code or data where the flaw is.

It's something I've been considering for a while, so I'll have to keep researching it while I try to put a basic proposal together. o.O
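The "fault at the location of the flaw" behaviour described above is, in spirit, an error-detecting encoding. A minimal sketch in Python, using per-block checksums rather than fractals (the `encode`/`decode` helpers are purely illustrative, not taken from any actual AGI design):

```python
import hashlib

def encode(data: bytes, block: int = 4) -> list:
    """Split data into blocks, pairing each with a short checksum."""
    return [(data[i:i + block], hashlib.sha256(data[i:i + block]).digest()[:4])
            for i in range(0, len(data), block)]

def decode(blocks) -> bytes:
    """Reassemble the data, faulting at the first block whose checksum fails."""
    out = bytearray()
    for i, (chunk, tag) in enumerate(blocks):
        if hashlib.sha256(chunk).digest()[:4] != tag:
            raise ValueError(f"corruption detected in block {i}")
        out += chunk
    return bytes(out)

blocks = encode(b"meaningful information")
blocks[2] = (b"XXXX", blocks[2][1])   # tamper with one block
try:
    decode(blocks)
except ValueError as e:
    print(e)   # prints "corruption detected in block 2"
```

Decoding stops at the exact block where the data no longer matches its encoding, which is the property the post is after: corrupted input either fails to encode consistently or faults at decode time, at the location of the flaw.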

#212 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 11 October 2006 - 03:32 AM

Good to see a couple of Mac notebooks.. ;)


I was thinking the same thing...amidst all the techytalky...a friend!


#213 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 23 October 2006 - 02:03 AM

We hosted another workshop in the LA area (Costa Mesa, CA). Thanks to David Kekich (www.maxlife.org) for hosting the event at his place (photos):
http://www.agiri.org...t=ST&f=36&t=261

#214 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 28 October 2006 - 10:58 PM

Here's a July-06 conference presentation Ben gave w/ Ray Kurzweil:

Second Annual Geoethical Nanotechnology Workshop -- Dr. Goertzel
http://video.google....317178553527868

Attached Files

  • Attached File  ben.jpg   15.64KB   0 downloads


#215 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 October 2006 - 05:34 PM

George Dvorsky blogs about Ben and the Terasem event:
http://sentientdevel...ertzel-etc.html

#216 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 29 October 2006 - 05:36 PM

James Hughes recently posted an edited version of Ben's talk to IEET:
http://ieet.org/inde...e/csr200610281/

#217 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 29 October 2006 - 08:01 PM

"If your upload steals your wife, are you going to be happy?"

--Dr. Goertzel

LOL.

#218 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 29 October 2006 - 08:10 PM

Well, in this case, I would not worry too much. We are able to set priorities in the design of certain AI capabilities .... :)

#219 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 November 2006 - 04:56 AM

Dr. Ben Goertzel interviewed by RU Sirius on AGI, the Singularity, philosophy of
mind/emotion/immortality:

http://mondoglobo.net/neofiles/?p=78

#220 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 04 November 2006 - 05:40 PM

Dr. Ben Goertzel interviewed by RU Sirius on AGI, the Singularity, philosophy of
mind/emotion/immortality:

http://mondoglobo.net/neofiles/?p=78


I don't know if Boing Boing found this themselves or if someone pointed them to it. It is good exposure, though.

http://www.boingboin..._intellige.html

#221 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 07 November 2006 - 02:16 AM

It's definitely worth carefully reading what Hank has to say here, trying to understand it, and pointing out any possible weaknesses or holes.

Here are the facts: a medium-sized group of life extensionists, called Singularitarians, believes very strongly that AI is the technology that could make or break our entire future. That is, if AI succeeds, we get radical life extension very quickly; if AI fails, we all perish.

Wow! It sounds incredible and far-out! But really, it isn't, and there's a lot of careful and cautious thought behind this position.

#222 kgmax

  • Guest
  • 75 posts
  • 0

Posted 11 November 2006 - 03:00 AM

(5) We know that this positive feedback loop (let us label this, for understandable reasons, "recursive self-improvement", and let us label the event in which this mind achieves super-human intelligence, for specific reasons explained elsewhere, the "Singularity") will occur in either one or more humans or one or more AIs. We can distinguish between outcomes that are Friendly to humans and those that are Unfriendly to humans. Outcomes including the annihilation of humanity, the descent of humanity into some horrific hellish scenario, or increases in overall pain, suffering, or death relative to their current levels (or however it is that we would really want to define "bad outcomes", if we knew the actual consequences of making the definition in that particular way) would obviously be Unfriendly. Those that decrease overall pain, suffering, and death, give humanity a truly optimal utopia and nearly (and, depending on the laws of physics, possibly) infinite life spans in which to live, or however it is that we would really want to define "good outcomes", would obviously be Friendly scenarios.


On this particular point I would like to say that it could in effect be neither (as far as pure AGI goes, not intelligence amplification in a human). We have no way of knowing that it would interact with us in any meaningful way. A properly constructed AGI might go so far as to intercede in our building of other AGIs, but that is an assumption as well.

I am optimistic that if AI is created it will not be an extinction event. I am personally more worried about nanotech research going wrong.

Honestly... being a human... I am more worried about getting a better job soon and getting out of this one tonight to go home and see my woman... but I'm just a human :)

#223 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 February 2007 - 09:10 AM

IEET fellow, Jamais Cascio, blogged recently about Vernor Vinge & Ben Goertzel concerning Uplift Academy's "Good Ancestor Principle Workshop" in San Diego (Feb 4-5, 2007):

blog post: http://ieet.org/inde.../cascio20070208

snip:

"Instead, we ran right past the “human++” scenario right into the Singularity—and with Vernor Vinge in attendance, this is hardly surprising. (Not that Vinge is dead-certain that the Singularity is on its way; when he speaks next week at the Long Now seminar in San Francisco, he’ll be covering what change looks like in a world where a Singularity doesn’t happen.) This group of philosophers and writers really take the Singularity concept seriously, and not for Kurzweilian “let’s all get uploaded into Heaven 2.0” reasons. Their recurring question had a strong evolutionary theme: what niche is left for humans if machines become ascendant?

The conversation about the Singularity touched on more than science fiction stories, because of the attendance of Ben Goertzel, a cognitive science/computer science specialist who runs a company called ”Novamente”—a company with the express goal of creating the first Artificial General Intelligence (AGI). He has a working theory of how to do it, some early prototypes (that for now exist solely in virtual environments), and a small number of employees in the US and Brazil. He says that with the right funding, his team would be able to produce a working AGI system within ten years. With his current funding, it might take a bit longer."

#224 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 05 March 2007 - 04:31 AM

Reference: http://www.mail-arch...m/msg00393.html

9 people responded to the AGI+LE poll question (How much does life extension motivate your interest in AGI? - full question posted at the bottom of this email). I've taken a few sentences from each reply and posted them below. I've also compiled a listing of % interest in AGI as motivated by Life Extension. Where the % was not stated explicitly, I have taken the liberty of divining a guess. Please feel free to correct me. Also, feel free to reply to this thread with more / newer answers, etc.

For me, I was surprised to find how low the % was (28%). However, on reflection I can understand that I'm fairly obsessed with the idea of physical immortality as compared to most others ;-)

- Bruce

==Results:

25% Joel Pitt (explicit)
50% Stephen Reed (divined)
00% Bruce LaDuke (explicit)
25% Matt Mahoney (divined)
25% Stathis Papaioannou (divined)
75% Ben Quirk (divined)
25% Mark N. (explicit)
00% Patricia Manney (explicit)
25% Vishaka Datta (divined)
---
28% AVERAGE
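As a quick sanity check on the tally, the 28% figure can be reproduced directly from the listed responses (values exactly as given above, whether explicit or divined):

```python
# Percentages from the AGI+LE poll results posted above
responses = {
    "Joel Pitt": 25, "Stephen Reed": 50, "Bruce LaDuke": 0,
    "Matt Mahoney": 25, "Stathis Papaioannou": 25, "Ben Quirk": 75,
    "Mark N.": 25, "Patricia Manney": 0, "Vishaka Datta": 25,
}
average = sum(responses.values()) / len(responses)
print(f"{average:.0f}% AVERAGE")  # prints "28% AVERAGE" (250/9 ≈ 27.8)
```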

==Excerpts from replies to AGI+LE Poll

Joel Pitt said:

So my belief is that the singularity a) enables us to have longer/indefinite life spans with which to experience more. b) will allow us to experience so much more than our current human senses allow us. Of course I also think AGI is an amazing puzzle and will answer questions (and raise new ones) about self awareness, consciousness and intelligence. I also believe that humanity is currently heading towards collapse if some major changes don't happen soon - so if the singularity can help us survive I'm all for it! :) In summary I'd say life extension is only 25% of my interest in it.

--
Stephen Reed said:

Since the early 1970s I've had as my life goal participation in technologies that would lead either to Life Extension or to Artificial Intelligence, on the theory that if one of these is achieved, the other will follow within my extended lifetime. My confidence has grown over the years as others have taken up these goals and some, e.g. Kurzweil, have explored the connections between them.

--
Bruce LaDuke said:

My Life Extension motivation is 0% of the reason why I'm interested in AGI+Singularity. I'm interested in AGI+Singularity because I want to bring the knowledge creation process to AGI researchers. I believe that singularity is the realization of artificial knowledge creation.

--
Matt Mahoney said:

I don't know if I will live long enough to see the Singularity, but the more I think about it, the more I believe it is irrelevant. Once AGI can start improving itself, I think it will quickly advance as far beyond human intellect as humans are advanced over bacteria....

I believe the universe is simulated. I don't know why the simulation exists. Maybe there is an AGI working on some problem whose purpose we cannot understand. Maybe it is just experimenting with different universes for fun. Maybe there is no reason at all; the current universe is just one of an enumeration of all Turing machines.

---
Stathis Papaioannou said:

The important thing as far as survival goes is not that my memories are preserved or that aspects of my life can be repeated, but that I continue to have new experiences from here on, which experiences contain memories of me in their past and identify as being me. That is, if I had a choice between living for 200 years and living for 100 years repeated 10 times (so that I had no idea which cycle I was in), I would not hesitate to choose the 200 years. In block universe theories of time, the past and present are "always there", but this is no comfort at all if I can't expect future new experiences.

---
Ben Quirk said:

[Now] that I try to sit here and answer your question I find it extremely difficult to put into words. I keep erasing and rewriting what I've typed up... I think my interest [is] motivated [by] the fact that greater-than-human intelligence is our best shot at solving all those eternal questions such as what is reality, why does something exist instead of nothing, what is the nature of consciousness... I'm also extremely [in to] life extension and cognitive enhancement.

--
Mark N. said:

Life extension is about 25% of the reason I am interested in the Singularity. I do not want to live forever in a world like today's world. I am quite unhappy with the state of the world and this country, and it seems like every year I become more cynical. Who knows if this world as it is today is sustainable? My motivations are creating a sustainable and enjoyable world that everybody will like, and reducing the amount of suffering and problems that exist today.

As for what I would get personally out of this? It would be nice to party again without destroying brain cells :). But in all seriousness, I am not too concerned about personally being alive in a post-singularity world. The concern lies with it actually happening.

---
Patricia (PJ) Manney said:

I'm interested in AGI+Singularity because I acknowledge that it is a possible, if not probable, direction that the future is headed towards. I believe society needs to discuss the ramifications as much as possible or else be caught unawares.

My interest in Life Extension, perhaps oddly to you, has nothing to do with my interest in AGI. I see them as two different objectives, even if they end up being related, i.e., AGI solves the 'problem' of life extension, either through medical means or uploading or whatever. I think Life Extension could very well happen without AGI and vice versa.

--
Vishaka Datta said:

I am interested in the AGI and singularity because I want to take mankind closer to playing God.....creating a whole new race of sentient beings is the best way..

==AGI Motivation / Life Extension? (POLL QUESTION)

In June 2006, I started a topic called "Viability of AGI for Life Extension & Singularity" which grew to 252 posts. Lively discussion, including updates on Novamente here:
http://www.imminst.o...ST&f=11&t=11197

Along these lines, I was wondering the general motivation / attitude of [singularity] list subscribers toward AGI as it relates to Life Extension. If interested, please answer:

My Life Extension motivation is...
- 100% of the reason why I'm interested in AGI+Singularity
- somewhere between 0 and 100%

AND / OR

I'm interested in AGI+Singularity because I...
- find AGI an interesting puzzle
- want to save the world
- want to ____

--
Bruce Klein - http://www.novamente.net/bruce_blog

#225 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 05 March 2007 - 05:19 AM

If I were interested in AGI, it would be >99% for life extension. It seems to me that the removal of involuntary death is a prerequisite for most outcomes worth wanting.

#226 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 11 March 2007 - 11:01 PM

posted at [singularity] discussion list by Ben Goertzel:

If you have 2.5 minutes or so to spare, my 13-year-old son Zebulon has made another Singularity-focused mini-movie:

http://www.zebradill...inkularity.html

This one is not as deep as RoboTurtle II, his 14-minute Singularity-meets-Elvis epic from a year ago or so... but his animation technique has improved over time, and this one is more visually hilarious (the visually amusing part comes about halfway through...)

-- Ben

#227 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 12 March 2007 - 01:49 AM

Pretty bizarre stuff going on there... but very funny!

Great job Zeb [lol]

#228 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 17 March 2007 - 08:06 PM

Developments at Novamente: Pejman Makhfi (Silicon Valley technology veteran) and Dave Gobel (pioneer of 3-d virtual environments and CEO of MPrize) join our Advisory. Also, Novamente LLC donates 1% of its stock to the Methuselah Foundation!


Ben Goertzel and I will host the grand opening of Novamente's Second Life headquarters on Mar 22 @ 6pm Pacific time.

I think Second Life (or a similar platform) will become the "browser" for the metaverse. For more on this idea, see the Metaverse Roadmap: http://metaverseroadmap.org

The metaverse is the next incarnation of the internet and the opening of a new informational dimension to physical space. It is a permanent new space that incorporates all previous informational dimensions (text, etc.) of physical space and goes increasingly beyond it, an immense reservoir of information that is constantly being updated, a platform for easy and intimate contact with others, a place whose future is very bright and hard to predict in its specifics, but less so in its general trends.

Also, I think ImmInst's online social network will eventually merge itself into the metaverse. The web will become more 3-D... as such, Novamente aims to become a major provider of "Digital Twins" (DT):

I don't know if that's true, but when I talk to my digital twin (the virtual person that represents me on the net), I know that our machines are becoming more a part of us every day, so pretty soon we won't see them as separate from us. As the futurist Ray Kurzweil said even back in 20C (The Age of Spiritual Machines), humans and machines are merging in a seamless union. As seamless as my slickskin bike racing suit, I think.

More on DT: http://www.accelerat...ureheroes1.html

#229 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 20 March 2007 - 01:23 AM

Looks amazing...I didn't know you had a son but he's pretty damn talented...go BRUCE! [lol]

#230 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 20 March 2007 - 01:34 AM

I have a question: what role will emotion play in AGI, and with us, during a working Singularity? Will we have nuanced emotions like those in the Hedonistic Imperative? Or will it be more computational in nature? I think emotions even more intricate than human ones should be worked into an AGI.

Also, if AGI happens first, will that determine the outcome of the Singularity? Or would the Singularity happening first determine the outcome of AGI?

I'm having problems remembering stuff so it's hard to read seriously complex articles... so bear with my rather simplistic questions.

Bruce you and Ben are doing amazing, groundbreaking, stuff and you should be congratulated for novamente.

Edited by dfowler, 20 March 2007 - 10:19 PM.


#231 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 20 March 2007 - 10:09 PM

Given that we are beginning to understand some of the factors that influence various aspects of cognition we may shortly be able to enhance the human brain for people of average intelligence to achieve the physiological status of individuals characterised as having "genius" level intelligence.


This is what I'm hoping for, because I feel my intelligence definitely should be enhanced... hopefully AGI will get me there.

#232 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 12 April 2007 - 08:39 AM

Update from Ben....

Hi all –

Ben Goertzel here, with some minor news: I have recently written two articles for kurzweilai.net, and the first one has just appeared!

It’s called “ARTIFICIAL GENERAL INTELLIGENCE: NOW IS THE TIME”, and can be viewed at

http://www.kurzweila...es/art0701.html

It touches a bit on Novamente, and long-term applications thereof (artificial scientists, digital twins and such) but is mainly pretty general and high-level.

The second article focuses more specifically on the Novamente approach to AGI and should appear on kurzweilai.net fairly soon.

The theme of the first article may be inferred from the title: I make a case, familiar in essence to those who know me, that AGI at the human level or beyond could be achieved in a relatively brief period of time (say, 5-7 years … possibly less) with a serious, intensive, concerted effort by the right people.....

More: http://www.novamente.net/blog/?p=5

#233 eyu100

  • Guest
  • 9 posts
  • 0

Posted 12 April 2007 - 03:31 PM

What progress has been made since the beginning of this year?

#234 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 05 June 2007 - 09:05 PM

Thanks for the question, eyu100... Novamente has recently partnered w/ Electric Sheep to develop backend AI for Second Life (more soon), and Ben Goertzel, Novamente CEO, gave a Google TechTalk last week.

#235 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 21 June 2007 - 05:14 PM

In a strangely ironic event, Firefox crashed while I was typing up a comment on Novamente's new business model. To sum up:

I am incredibly excited about Novamente's new direction. I tried the Second Life interface and was disappointed with it. However, after finding this article on Kurzweil's website, I became lost daydreaming about the many applications of Novamente's technology for a more open metaverse.

I think that Novamente's partnering with Electric Sheep is a great move. I don't know how many AGI architects read this forum, and while I know many of the luminaries in the field have disagreements on design, I believe all the more strongly that Novamente is the company positioned to grasp the holy grail of artificial intelligence programming.

I don't have any money, but I'd like to help in any way I can. Novamente is not the next Google, just as Google isn't the next Microsoft. But the more I follow the Novamente team, the more I see that Novamente is a good bet to become just as pervasive.

An unrelated question for Bruce or Ben: Has the Novamente team thought about partnering with professor Luis von Ahn of Carnegie Mellon? The training applications for AI of his programs (ESPGame, Phetch, Peekaboom, ...) are obvious. I don't know how far along the team is in implementing the capabilities necessary to play these games, but after seeing the fetch demonstration, I know you're working in that direction.

#236 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 31 July 2007 - 08:25 AM

Thanks for the reply and suggestion, modelcadet.

We are impressed w/ Luis' work as well, and have sent an outreach email.

As an update, Ben spoke at TV07 on panel w/ Marvin Minsky and Second Life's Philip Rosedale. Ben has written a fairly comprehensive Novamente update here on our focus of AGI for virtual worlds.

#237 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 August 2007 - 03:21 PM

I've created a poll "When will AI surpass human-level intelligence?"
http://www.novamente...index.php/?p=54

Shoot me an email (bruce -at- novamente.net) if you wish to participate!

#238 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 11 August 2007 - 12:25 AM

I've created a poll "When will AI surpass human-level intelligence?"
http://www.novamente...index.php/?p=54

Shoot me an email (bruce -at- novamente.net) if you wish to participate!


Wow, 2030-2050 seems to be the consensus so far, doesn't it?

#239 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 16 August 2007 - 03:48 PM

Yeah... updated again, and 2030-2050 is the consensus. I'm asking people from within my circle... most are futurists.


#240 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 18 August 2007 - 03:49 AM

Perhaps this could be done once a year? I'm sure there would be plenty of people willing to vote annually (myself included); that way we can see a progression of people's optimism.

I think it would be nice to see how people's viewpoints change over the course of at least one decade. I know some have claimed that your question is incorrectly built, but I think it serves its purpose just fine; it will (given enough time) allow us to see the stagnation or advancement of the field. I realized today that I've been telling my parents that I believe this sort of intelligence should come into existence within 25 years, but that was over two years ago, so I should actually be saying 23 years now.

How many years in a row are people willing to stick to numbers like 25 years?



