Viability of AGI for Life Extension & Singularity



#181 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 23 August 2006 - 03:51 AM

Yeah, that kinda reinforces the idea for me as well... And personally I could sleep at night knowing the most powerful intelligence on the planet is in the hands of Google... I think.

More so than Microsoft.

#182 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 September 2006 - 08:12 PM

Here are two newly uploaded panel discussion videos from our May 2006 AGIRI.org/workshop:

Panel II = How do we more greatly ensure responsible AGI?
Participants: Eliezer Yudkowsky, Jeff Medina, Dr. Karl H. Pribram, Ari Heljakka, Dr. Hugo de Garis (Mod: Stephan Vladimir Bugaj)
VIDEO: http://video.google....147993569028388

Panel III = What are the bottlenecks, and how soon to AGI?
Participants: Dr. Stan Franklin, Dr. Hugo de Garis, Dr. Sam S. Adams, Dr. Eric Baum, Dr. Pei Wang, Dr. Nick Cassimatis, Steve Grand, Dr. Ben Goertzel (Mod: Dr. Phil Goetz)
VIDEO: http://video.google....399529322949316


#183 thegreasypole

  • Guest
  • 13 posts
  • 0

Posted 04 September 2006 - 01:20 PM

Hi All,

The comments on this thread are of a very high standard so I hesitate to point out this issue because it seems so basic...... however, as no-one has previously mentioned it perhaps it is one of those "can't see the wood for the trees" things.

It's concerning "friendly AI" and how that is programmed into AGI. Basically, I would have thought that with any true AGI it would be almost unnecessary..... what would have to be programmed is simply a sense of self-preservation.

To re-state the problem, the fear is if a machine is built more advanced than humans it may look upon humans as some kind of Untermenschen and then use it's greater intellectual capacity to rid the world of these pests/inefficient uses of resources/wastes of space. The principle is that just as we view creatures with less intelligence than us as having no rights, why shouldn't AI have the same view ?

However, that entirely misses out the iterative nature of these AGI's. Any AGI worth it's salt would, of course, realise that the NEXT generations of AGI's would be advanced beyond it's capabilities in exactly the same way the original AGI is advanced beyond human capabilities. Any AGI taking the "lesser intelligences may be safely exterminated" route is an AGI signing it's own death warrant...... or at least setting the precedent that any machine more intelligent than itself has a right to exterminate it in full knowledge that such machines are inevitable.

Any genuinely intelligent AGI with a well developed sense of self-preservation is BOUND to take humanity's side in this matter....... or find that a few teraflops down the line it has been designated an intellectual Untermensch by the "next-gen" AGI's.

As long as there exists the possibility that a next-generation can outperform the current generation........ then the current generation HAVE to take a very enlightened view of allowing it's intellectual inferiors space to exist.

Now, I'm not saying that this reasoning makes all "friendly AI" research redundant..... far from it, it will remain a crucial and very valuable field. However, I think it DOES make a lot of the "worst-case scenario" talk around AGI redundant because such hypothesising implicitly accepts a "them (machines) and us (humans)" scenario. This is unlikely in the extreme because every AGI finds itself with the same concerns as us regarding entities more intelligent than they are. The "them" is the most intelligent machine possible with foreseeable technology and the "us" is every other intelligence on the planet.

To summarize this wordy post........ AGI's should have EXACTLY the same concerns about "friendly AI" as humans do, for exactly the same reason. This alignment of our concerns should ensure (as long as a next-gen of more intelligent AI's is possible) that any AGI has exactly the same motivation as humans do to adopt a humane approach to the less intelligent.

Even if we get one, two or more "rogue AI's" who don't buy into the argument........ we should find them heavily outnumbered by the less sophisticated AGI's who take the enlightened view (due, no doubt, to the realisation that they are already "Untermenschen" compared to the newest AGI's).

For sure 1 "human intelligence X 100" AGI may succeed when pitted against a "human intelligence X 10" AGI........ but we are likely to find that we have hundreds of AI allies that are all concerned about this rogue AI's intentions to do away with them as well as us. In this instance the rogue X100 AI is likely to find itself up against hundreds, or millions, of entities only slightly less intelligent than it is who all conspire to save themselves from the super-intelligent psychopath. And this argument is scalable to any level of hyper-intelligence.

Is there a flaw to this argument ? I can't see it.

Just why WOULD any genuine AGI with an instinct for self-preservation want to set such an unwelcome precedent for itself ? Any AGI with an ounce of fore-knowledge would be able to see that if the precedent holds it is putting it's very own computer-simulated neck on the very chopping block that it is creating for humans.

Yours,

TGP

#184 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 04 September 2006 - 09:31 PM

Hey watch this video.. it's fun to see Eliezer squash every lunatic piece of crap coming out of Hugo de Garis' mouth.

Hank, which of Hugo's ideas are you referring to?

#185 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 04 September 2006 - 09:34 PM

Just why WOULD any genuine AGI with an instinct for self-preservation want to set such an unwelcome precedent for itself? Any AGI with an ounce of fore-knowledge would be able to see that if the precedent holds it is putting it's very own computer-simulated neck on the very chopping block that it is creating for humans.

Just as there are some humans who commit suicide, there is no surety that an AGI will have self-preservation as its main goal. This is why Novamente is focused on friendliness from the ground up.

#186 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 05 September 2006 - 07:44 AM

You only need one AGI. It doesn't need to reproduce for any reason, it can upgrade itself (see Seed AI).

Making bold claims and blanket generalizations about how AGI will deal, or not, with reproduction seems silly when we are unsure how the ability to modify our own source code will affect human reproduction decisions. With education and new technological options, women choose to have fewer children. However, parents appear to take a much more proactive role in reproduction when they are informed of their technological options (for example, couples using PGD to select for embryos without a genetic predisposition to some cancers) despite having fewer children.

You are correct to point out that we can easily fall into a trap of anthropomorphism, but making claims about just how alien these AGI might be is as easy a trap.

#187 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 05 September 2006 - 07:51 AM

Hey watch this video.. it's fun to see Eliezer squash every lunatic piece of crap coming out of Hugo de Garis' mouth.

Would you mind relating to the rest of us how you came to judge Dr. de Garis so harshly? There may be some validity to critiquing some of his work (as one should constructively critique any work), but I thought he was generally well-regarded. His work has recently attracted a lot of attention for funding and cooperation.

#188 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 05 September 2006 - 07:54 AM

Just as there are some humans who commit suicide, there is no surety that an AGI will have self-preservation as its main goal.  This is why Novamente is focused on friendliness from the ground up.

This sounds like a valid approach. It may turn out that AGI are by definition friendly, but we will likely not know this for sure until after AGI has been developed. :)

#189 thegreasypole

  • Guest
  • 13 posts
  • 0

Posted 05 September 2006 - 10:57 AM

Just as there are some humans who commit suicide, there is no surety that an AGI will have self-preservation as its main goal.  This is why Novamente is focused on friendliness from the ground up.


Well that's a very poorly chosen analogy. For sure, some humans are suicidal, but this doesn't change the fact that the overwhelming majority are not. The point being, if we are to stick to this analogy, that the overwhelming majority of AGI's would not be either. Meaning we'd have sufficient artificial allies to overwhelm the odd one in a hundred/thousand/million who was suicidal.

However, I think that's more a problem with the analogy than the logic. I'd argue on different grounds that the logic here is faulty: surely self-preservation in any AGI is essential. Give a machine the opportunity to write it's own code..... but no inbuilt incentive NOT to write code that causes it to crash irretrievably (the AI equivalent of die) and any self-coding machine is likely to repeatedly crash itself.

Given that the ways of writing code that cause a complete crash vastly outnumber the ways of writing code that provides a smoothly functioning machine...... any AGI without an aversion to 'crashing' or otherwise killing itself (writing code that cuts off it's electricity supply ?) is not going to last very long.

Some higher level directive to ensure that it keeps functioning will be needed to mediate it's self-coding. Once that directive is there, any intelligent machine would be able to apply it to "external" causes of crashes as much as internal ones. An aversion to damaging/destroying itself will be necessary in any self-coding machine to provide even a modicum of stability.
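To make that concrete, here is a deliberately toy sketch in Python (purely illustrative on my part, nothing to do with any real AGI design; the function names and the rule format are invented for the example): a self-modifying program that only adopts a rewrite of itself after the rewrite passes a "keep functioning" check.

import copy

def passes_survival_checks(candidate_rules):
    # Stand-in for the "higher level directive": reject any rewrite
    # that would leave the machine unable to keep running.
    return all(rule.get("keeps_running", False) for rule in candidate_rules)

def propose_rewrite(rules):
    # Hypothetical self-modification step; the details are invented.
    new_rules = copy.deepcopy(rules)
    new_rules.append({"name": "new behaviour", "keeps_running": True})
    return new_rules

def self_modify(rules):
    candidate = propose_rewrite(rules)
    if passes_survival_checks(candidate):
        return candidate          # adopt the rewrite
    return rules                  # otherwise keep the old, known-safe code

rules = [{"name": "base behaviour", "keeps_running": True}]
rules = self_modify(rules)        # grows only by rewrites that pass the check

The only point is the shape of it: the check on continued functioning sits above the self-modification step, which is all the "self-preservation" assumption amounts to.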


You are missing the entire point about what an AGI actually is (it's okay, this is easy to miss if you haven't done the reading).

You only need one AGI. It doesn't need to reproduce for any reason, it can upgrade itself (see Seed AI).


Much the same as, theoretically, the world only needs one computer, Hank.... but in reality the world has produced many millions of computers. What is the IBM guy quote again ? "I foresee the world market for computers being about a dozen" ?

I think the vast likelihood is that there will be a "community" of such machines......... if for no other reason than different human consortia will build them, and that highly developed AGI's are likely to spin off sub-routines on the order of human intelligence and above.

The point being, so long as such a community has a spectrum of intelligence within it..........or the possibility is still there of even more sophisticated programs/machines being instantiated in the future........ prudence would dictate that AGI's take an enlightened view of their intellectual "inferiors". After all, the majority of existing AGI's are likely to be inferior to the most sophisticated model to date, and even that "top of the heap" model itself retains the possibility of becoming inferior to superior algorithms in the future.

For sure, this argument doesn't work if there is only ever one "Super-AGI" ever built. However, I think that is highly unlikely.

How would this motivation in an AI account for the Coherent Extrapolated Volition of humanity? Unfortunately *everyone* seems to think immediately that they know the one simple thing that will guarantee a Friendly AI, but the reality is that it will be an extremely, extremely difficult mathematical and engineering problem that will never be summed up in a single, simple concept (let alone a one-line answer).


No, as I said above........ of course this doesn't mean there is no need for "friendly AI" research.

Now, I'm not saying that this reasoning makes all "friendly AI" research redundant..... far from it, it will remain a crucial and very valuable field.


I am not proposing this is some kind of uber-solution. What I am saying holds provided some reasonable assumptions are found to be valid:

1. There is more than one AGI
2. AGI's have an in-built aversion to their own "death", the end of their own runtime.
3. There remains the possibility that machines more sophisticated than the current AGI's can be built.

The very worst case scenario "The AGI's all turn against humans" is unlikely to be true. Rogue AGI's would still be a clear and present danger....... what is significant about this logic is the vast majority of other AGI's would be ON OUR SIDE in the fight to rein in any psychopathic "intelligence killer". In fact, the precedent created is such that any AI, even if it is the most intelligent at the moment, should be prepared to accept lower level intelligence because it does not know when another, more sophisticated/faster learning/faster thinking/more capable AGI would be instantiated, putting it's neck very firmly on the block.

Finally, concerning "anthropomorphism can be a trap". I agree. I am not assuming the AGI have any human characteristics at all per se, only that they can follow logic (duh! surely), that they are programmed with some sort of instinct to keep themselves instantiated (as seems necessary to stop them coding themselves out of existence every 3 milliseconds), and that there can be/currently are more than one AGI with varying levels of "intelligence".

I'd also add a final note that the AGI's may well be more anthropomorphic than it appears it is fashionable to say on this thread....... they will be (at least initially) written by humans, perhaps even modelled on human brains, their "training set" of knowledge will all be human generated, their initial companions/creators/interactions will be with humans and (ultimately) if they are too bizarrely non-human they may be switched off or re-programmed. Finally, they will have had batteries of programmers trying like buggery to ensure they have incorporated at a base level human notions of ethical behaviour.

In the face of all that they may still turn out to be bizarrely alien. But it seems likely to me that so many of their base assumptions and base coding will be "human based" or "human derived" that anthropomorphism might not be so far off the mark...... if we have done the job well, they WILL be anthropomorphic to some extent.

Yours,

TGP

#190 thegreasypole

  • Guest
  • 13 posts
  • 0

Posted 05 September 2006 - 01:28 PM

Oy vey,

Why do you persist in this trolling Hank ?

If I may answer your quotes with your own quotes........

Your analogy does not hold. There may be millions of pseudonyms or embodiments of the AGI, but it won't change or alter it's optimization target. See below.


So how do you know an AGI's optimization target ? (Using the same criteria you require of me). Are YOU........

building it? Have you seen the source code?


Even if you have, aren't you contradicting yourself?

How do you take the source code and prove that it will maintain a stable optimization target under recursive self-improvement?


You are contradicting your own argument TWICE in the very next paragraph.......

First by saying "I know it's optimization target and it won't change or alter...[next sentence]....but no-one can know the optimization target because it will alter uncontrollably".

Second by saying "I know it won't change or alter it's optimization target" then, again in the next line, "You can't possibly know these things. Do you have the source code ? Are you building an AGI ?"........... well, do YOU have the source code for an AGI that you'd need to make these arguments ? Won't that alter uncontrollably, so what use is it ?

My argument is built on 3 assumptions, there for all to see and comment on, and the logic I have built on them........... Your argument isn't even internally consistent.

How can you know it

won't change or alter it's optimization target

on one line and know you can't

prove that it will maintain a stable optimization target under recursive self-improvement?

on the next ?

For Chrissake, if you want to have a troll at least take the time to be consistent between one sentence and the next. You still haven't pointed out WHY none of my three assumptions will hold OR why the logic doesn't necessarily follow from the assumptions (the two means by which you could make a real contribution here). None of my assumptions relies on a stable optimization target for any AGI. You're just rolling out Gumph not relevant to the arguments........followed by Gumph contradicting the first lot of Gumph........doubtless your next post will contradict your previous comments again and STILL not manage to say anything relevant.

How's the source code for your personal AGI developing BTW ? Can I see it ? I think it'd be pretty interesting.......... Based on your arguments here I expect it to read

10 X = Y
20 Y != X
30 GoTo 10

Yours,

TGP

#191 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 05 September 2006 - 07:23 PM

This shall be an annoying post, I grant you.

What's probably equally annoying is each instance in which "it's" is used rather than "its," the proper possessive form of "it," within (especially) discourse of fairly high technical drama.

Being on a roll, it's (it is) also worth pointing out that the standard use of ellipses is minimal to nil, and that an ellipsis consists of only three dots.

In what otherwise should be a good-quality thread.

I now recede back into my space of profound ignorance (not necessarily with respect to the recent line of discussion). Carry on!

#192 thegreasypole

  • Guest
  • 13 posts
  • 0

Posted 06 September 2006 - 09:11 AM

Hi All,

Guilty on the ellipses, granted. I just find them useful for preserving the flow of thought. I really should go back over the post and punctuate properly but most people seem to get the idea.

Regarding Hank's comments on an AGI's "optimization target" I have to reply what has that got to do with my logic in any way ? None of it relies on AGI having this target, or that target, or a stable target, or a constantly changing one.

The AGI's target can be anything it wants........ however, for the reasons outlined above self-preservation has to always be amongst those targets (lest the AGI do the IT equivalent of a man with no sense of self preservation crossing a freeway without looking simply to get to the other side). Any AGI that DOES edit or remove the requirement to preserve it's integrity whilst continuing to upgrade itself is not going to be a problem anyway, it just isn't going to last long as an integrated intelligence. Much as a man with no sense of self-preservation constantly crossing roads/freeways in cities is, very shortly, going to be no threat to anyone (unless he comes in through your windshield).

Other than that....... they can be intelligence optimizing machines, or galactic-explorers extraordinaire, or calculators of Pi to the gagillionth place, or "anything they want to be" for all I care...... the important thing is that they are self-preserving (along with the other 2 assumptions, which are nothing to do with their targets or programming), and the flip side to the self-preservation assumption is that if it is false and they are not "self-preserving" then they will destroy themselves before they are a threat to us.

Again, I should re-iterate, this is not to say there is no threat at all, or that "Friendly AI" research is unnecessary......it is very much needed.......... it is to state that the Uber-Unfriendly "The AI's all gang up to turn us into paperclips" scenario is extremely unlikely from a purely logical viewpoint...... the far likelier scenario is a few "rogue" AI's who are contained/neutralized/eliminated by coalitions of humans and all the other AI's (those with logic and a desire not to be turned into paperclips themselves at some future date).

Which hyper-intelligent machine is going to do the equivalent of Pol Pot's "kill all the people with glasses" knowing full well that it may have to start wearing a pair of bifocals tomorrow ?

(In case you are wondering, Pol Pot thought glasses were a giveaway sign of an intellectual.)

Yours,

TGP

#193 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 06 September 2006 - 03:03 PM

It's funny you should point that out, because I specifically recall being bothered by using each form entirely arbitrarily, knowing there were atrocious errors and inconsistencies in my attempts to use its correct form without actually looking it up.

No problem, Hank. Myself, I tend to overuse idioms, but they probably characterize most of ordinary language.

Guilty on the ellipses, granted. I just find them useful for preserving the flow of thought.

Flows of thought can be quite messy, though, you have to admit.

Regarding Hank's comments on an AGI's "optimization target" I have to reply what has that got to do with my logic in any way ?

You could make many valid, nonsensical arguments with many logics. But unfortunately that's not enough. They also need to be sound, and even that doesn't necessarily mean something nontrivial.

For instance, your self-preservation assumption isn't convincing. We could find a case where much algorithmic activity survives without self-preservation imperatives, like the whole time you're writing a post, performing an intervention through an incomprehensible amount of support, without calling a self-preservation function. There's little reason to believe that a sufficiently powerful algorithmic process couldn't engulf the planet – consisting of people with "purely logical viewpoints" – before its potential phaseout.

#194 thegreasypole

  • Guest
  • 13 posts
  • 0

Posted 06 September 2006 - 04:50 PM

Ok,

Perhaps I should put this another way.....

Concerning the "self-preservation" function.

Now, you are not (yet) arguing that an AGI with this function won't make the logical steps I have outlined. So we can leave that argument for a later post.

Here you appear to be arguing an AGI can survive without such a function, negating my logic. Let's examine that case in terms of the "they try to turn all humans into paperclips" scenario we are discussing.......

We have an AGI that has no desire/drive/higher-level function for self-preservation. This AGI, for whatever reasons of it's own, begins to exterminate humans. Then we humans start locating parts of it's source code, memory, processors and the linkages between them wherever we can and destroying them in order to stop the extermination. Lacking any desire for self-preservation the AGI does not attempt to stop us, nor, lacking any desire for self-preservation, would it desire to replace the damaged/destroyed parts. Shortly, we are able to wreck enough of it's internal infrastructure to reduce it's intelligence to a level where it is manageable. Problem solved. If it has no "instinct" for self-preservation...... it will use no part of it's formidable intelligence to prevent us doing the equivalent of taking fire-axes to it's motherboard. Not even any "What are you doing Dave ?" .... it simply wouldn't care.

On the other hand....if we postulate that the AGI tries to resist these efforts in an attempt to preserve it's own internal consistency/identity/functional integrity (and so poses a genuine threat in that it can use it's higher level intelligence to keep itself functional despite our best efforts) we are back to accepting all three of my assumptions, and it will ultimately realise that setting the precedent of "the strong can exterminate the weak at will" is inimical to it's self-preservation.

Basically, to take another tack on this issue....... any "rogue AGI" with no sense of self-preservation is going to be an easy threat to counter (as would be an unprotected human, with no sense of self-preservation, on a battlefield) and any attempt by such an AGI to "turn humans into paperclips" is quickly going to end with the AGI being recycled for stationery products ...... However, should he wish to maintain his internal integrity then we can agree that he does have an instinct for self-preservation and this point (but not perhaps others) is moot.

Concerning the "hard Vs soft" takeoff argument.......

Again, I am not postulating either a hard OR soft takeoff, just that there are several AI's or AGI's OR that (if there is only one) there remains the possibility of more being created in future (by either AGI or Human hands). Such a view is consistent with both the soft and hard takeoff models.

Even in an ultra-hard "one AGI takes over the whole computational resources of the atoms in the earth" scenario such an AGI would have to contend with future "independent AGI's" (providing light-speed remains a limit) if it wanted to convert the atoms in bodies such as Saturn and Jupiter to computation....... and if it's not hungry for atoms why take over the whole earth ?

The light-lag alone (Jupiter sits roughly 33 to 54 light-minutes from Earth, depending on where the two planets are in their orbits) would enable those extra-terrestrial AGI's to become independent entities (independent and vastly more intelligent given the mass they have access to)...... and after all, why posit such an ultra-hard takeoff ? If it's computational resources it's after, why doesn't the AGI just ship itself to Jupiter when still just a baby in comparison and use the matter there ?

I just don't buy the "once, for ever and only AGI" model........ there will be reasons, not least light-speed, for there to be more than one. And, providing there is, does the first created want to set off a Highlander-style AGI society ? Where to meet is for one to die, consumed by the more intelligent adversary (again, a self-preservation function being the key factor) ?

Yours,

TGP

#195 thegreasypole

  • Guest
  • 13 posts
  • 0

Posted 07 September 2006 - 01:57 PM

Ok then Hank, one-by-one........

But if you more closely examine the notion of algorithmic process, you would recognize the possibility that an internal representation of "ego" can take a potentially infinite amount of forms. The choice of merely one to suit your conviction that self-preservation is exclusively disjunctive is capricious. An internal representation of so-called ego can carry a human-intelligence-immune identity through engulfing the planet and the arbitrarily subsequent seeding of its absolute successor. 


The "self-preservation" function has NOTHING to do with ego or self identity. The assumption that any AGI would have one (Implicit or explicit) is fundamental to HAVING an AGI. Take even todays ridiculously simple (in comparison) computer programs. Every single one of them has an implicit self-preservation function. IF the actions it takes to perform a task STOP the process it initiated at some point before the end goal.......... then the program isn't successful, it crashes.
Should we build an AGI without an implicit self-preservation function (or more likely an explicit one) we won't have an AGI........... everytime it attempts to complete a task it'll crash by disrupting it's own runtime before the goal is complete. Just as programs routinely do so today (until sufficiently well designed to have implicit self-preservation functions).

The programmer will say "We're so close to building an AGI.... but this one keeps crashing, it's great at doing tasks but it keeps re-writing bits of itself/overwriting core memory in the process. Causing a crash. As soon as I can stop it crashing itself on every/most/some tasks we'll have an AGI..... a robust solution that DOESN'T CRASH" and continue re-programming the damn thing UNTIL it can complete any task without disrupting it's own internal working catastrophically.
When he has done so........... we will have our first AGI.......... and that AGI, necessarily, will either implicitly or explicitly have functions that stop it from catastrophically ending it's own runtime before any task is complete. What's more it will also, NECESSARILY (to be a true AGI), have a pre-disposition to stop outside factors ending that runtime. Because, to BE an AGI it needs to be able to complete tasks (including inputs from outside) without "killing itself".......... if it consistently kills itself in the course of a task, or outside inputs have the same effect, we won't have an AGI, we'll have a machine that is "almost an AGI, if it didn't keep wiping everything".

So you have evidence of aliens?


No. I'm saying IF the light-speed limit holds.......... then Jupiter/Saturn/any planet outside Mars or inside Venus......... is too far away to be "incorporated" as part of a single AGI based on any one of those planets. Let's say the mass of Jupiter is turned into a computational machine. That machine will be so complex, so intelligent, that the tens-of-minutes lag between it and Earth means that any AGI built there has ample time (the equivalent of millennia, at the speeds it thinks at) to become independent of the "core" AGI on Earth. To make it dumb enough to avoid this......... would negate the effect of turning that matter to computation in the first place. In MIPS vs. bandwidth, the upper limit of light-speed dictates that over anything but the minutest scales MIPS will win. It may not even be possible for the AGI to remain in sole control of the Earth........ certainly, any "slaved subsystem" with enough mass to utilise and outside of the Earth-Moon system will become intelligent in it's own right. And, of course, a "Jupiter mass" AGI will be "above" an "Earth mass" AGI in the same way the Earth mass AGI is "above" humans.

On the other hand IF the Earth AGI isn't trying to "maximise computational resources" why did it turn us all into paperclips in the first place ? Either it is after maximisation or not. If it is, the hard takeoff is possible...... but any such AGI will have to reckon with more intelligent AGI's than itself in the future (the Jupiter/Saturn/Neptune AGI's........plus planets beyond our solar system)......... or it isn't seeking computational maximisation, in which case why the need to exterminate less computationally efficient forms ?

Ok, so your argument is that an AGI, just strictly with the goal of self preservation, won't destroy all of humanity because ... that means when it creates a next generation AGI (... why would it do that?), that AGI would destroy the original in a similar manner in which it destroyed humanity.

This is nonsense. Self preservation isn't even necessary as an explicit supergoal, it follows naturally as a subgoal of almost any supergoal. Furthermore, there is no reason to create a "next generation" AGI, when, as is the entire point of "the Singularity", it is improving *itself*, making *itself* the next generation.


But you are ignoring the bandwidth problem caused by the speed of light. It's massively sub-optimal to slave ALL of the solar system's computational mass to a single "entity", because of the lag of communication between components. There's no "sub-task" it can farm out to a slaved system at Jupiter that is BOTH a) simple enough to be computed by a non-intelligent machine AND b) such that the result can be achieved faster (including the time lag of send/reply) than calculating it at home. The only way to use such resources is to have a machine there that IS intelligent and so can perform calculations so far in advance of what the "Earth system" can do that the time-lag of send/reply becomes worth it. Of course, doing so with the time lag involved means that any system there is going to become an AGI in it's own right, and one far superior (due to it's mass) to the Earth system.
Taking into account the bandwidth limits imposed by c, any AGI will want to spin off other intelligent AGI's. Rather than sending the equivalent of 1+1 to Jupiter and waiting subjective millennia for a reply..........it can send tasks that only a vastly more intelligent machine can solve to Jupiter and await the reply........only in the second case is the answer worth the wait.
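For concreteness, the raw numbers behind that lag (using round, publicly known figures for the Earth-Jupiter distance; the script below is nothing but arithmetic):

C_KM_PER_S = 299792                                     # speed of light
DISTANCES_KM = {"closest": 588e6, "farthest": 968e6}    # approx. Earth-Jupiter range

for label, d in DISTANCES_KM.items():
    one_way_min = d / C_KM_PER_S / 60
    print(f"{label}: one-way ~{one_way_min:.0f} min, round trip ~{2 * one_way_min:.0f} min")

# closest:  one-way ~33 min, round trip ~65 min
# farthest: one-way ~54 min, round trip ~108 min

So even the cheapest possible query costs roughly one to two hours of wall-clock time before an answer can arrive, which for a machine thinking at electronic speeds is an enormous subjective wait; only tasks worth that wait are worth farming out.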

and if those actions are detrimental to the supergoal of the AGI (obviously true, for the vast majority of AGI designs), regardless of whether "self-preservation" is an explicit supergoal, it will take action to prevent these actions from continuing (obviously successfully, for the vast majority of AGI designs. You can't predict the exact moves of a smarter chess player, but you can predict the outcome of the game).


Fine, I see you have finally accepted the point that it must be self-preserving. Now the question becomes..... why should it demand to incorporate the 0.00000001% of the Earth's mass used by humans into "computronium" when it has another 99.99999999% of the Earth's mass equally available to it AND where the second course will not set an unwelcome precedent for it's own survival ?

It would be like a man marooned with 10 other men in a monster cargo container ship full of food. Why should he kill and eat the weakest member (and so set a precedent that may come to rebound against his own self-preservation instinct) when there is a super-abundance of food anyway ? If it is so "greedy" that it MUST HAVE that extra 0.000000001%, then why not ship itself to Jupiter where it can grab a trillion-trillion times the mass ? This is not "ego" or any "human ethics"....... it arises entirely from self-preservation, an instinct we both now realise it must have in some form.
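As a rough sanity check on those percentages (round figures assumed on my part: roughly 6.5 billion people at an average of 60 kg, against the standard value for the Earth's mass), the fraction is, if anything, far smaller than the one quoted above:

EARTH_MASS_KG = 5.97e24
POPULATION = 6.5e9            # approx. world population, 2006
AVG_BODY_MASS_KG = 60         # rough average

human_biomass_kg = POPULATION * AVG_BODY_MASS_KG   # ~3.9e11 kg
fraction = human_biomass_kg / EARTH_MASS_KG        # ~6.5e-14
print(f"{fraction:.1e} of Earth's mass, i.e. about {fraction * 100:.1e} percent")

Which only strengthens the point: the matter locked up in human bodies is a rounding error next to what the planet itself offers.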

This has ABSOLUTELY NOTHING to do with one's "approach to the less intelligent". This has to do with the values- the good/bad, the right/wrong, that humans universally hold as self-evident.



The point being, this "psychic unity of humanity" is human-specific. There is no reason for any arbitrary AGI to see these same values as inherently obvious. For lack of a better concrete explanation off the top of my head, I'll just point you back to I, Robot as an example.


Well, personally, I'd suggest that Hollywood blockbusters (even ones loosely based on Asimov short stories) aren't necessarily the best guide to understanding the singularity or AGI. I agree there is absolutely no reason for them to see ANY human values as inherently obvious. But, for the reasons outlined above, we can be sure they will have at least one value, the value of "self-preservation" in some form. Providing there is more than one of them (or that others remain possible in the future) this value dictates a principle applicable to all: "Don't kill me and I won't kill you".......... for if, as you postulate, a greater intelligence means an ability to kill you or others relatively easily, every AGI MUST fear the next AGI above it on the "intelligence chain"........ for any "super-AGI" that decides it can safely do so, there will be many AGI's, less intelligent than it individually, that will take the side of defending this status quo in order to fulfill their requirement of self-preservation. We may have to fight the most intelligent..........but we will always have millions of slightly less intelligent allies on our side.

Imagine AGI's ........ each with an intelligence "value" on a scale of 1 to 10........ for every AGI who isn't at 10, OR any AGI that anticipates there one day may be an 11, self-preservation dictates that the principle "you can eat anyone with a lower number than you" should be fought against. By definition, anyone NOT at the top of this heap has a massive incentive to co-operate in resisting that principle PROVIDING they are self-preserving entities. ANY OTHER STRATEGY concedes either that the entity will be absorbed OR that it will greatly hinder it's ability to resist such absorption (by not co-operating with similar sub-10 entities).
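A toy way to see the arithmetic of that (with the obviously crude assumption, mine alone, that "intelligence" is a single number and that capabilities simply add when AGI's co-operate):

# Agents ranked by a single capability score 1..10.
scores = list(range(1, 11))
top = max(scores)                              # the current most intelligent AGI
coalition = sum(s for s in scores if s < top)  # everyone below the top, combined
print(top, coalition)                          # 10 vs. 45: the coalition outweighs the top

Real capabilities obviously don't add up like that, but it shows why "everyone below the top" is the natural majority coalition.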

Yours,

TGP

#196 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 11 September 2006 - 02:43 AM

Relevant to this recent discussion between Hank & TGP...

AGIRI.org has started a new email discussion list called [singularity] to complement the more technical [agi] discussion list.

Join here: http://www.agiri.org/email

===


AGIRI (www.agiri.org) has launched a new email list, intended to
parallel rather than duplicate the existing AGI (agi@v2.listbox.com)
list.

The new list is called the Singularity list
(singularity@v2.listbox.com), and is intended to focus, not on
technical discussions of AGI systems, but rather on the Singularity
and related conceptual and scientific issues.

If you would like to sign up for this list, you may go to the form at

http://www.agiri.org/email/

In the unlikely case that the notion of the Singularity is new to you,
check out links such as

http://www.kurzweilai.net
http://www.kheper.ne...arity/links.htm
http://www.singinst....ingularity.html
http://www.goertzel....ranscension.htm

Of course, the dividing line between Singularity issues and AGI issues
is not fully crisp, but there are plain cases that lie on either side,
e.g.

** examples of AGI-list issues would be: technical discussions of
knowledge representation strategies or learning algorithms

** examples of Singularity-list issues would be: discussions of
Friendly AI which don't pertain to specifics of AGI architectures;
discussions of non-AGI Singularity topics like nanotech or biotech, or
Singularity-relevant sociopolitical issues

There is of course some overlap between this new Singularity list and
existing lists such as SL4, extropy and wta-talk, and forums such as
kurzweilai.net; but I believe there is value in having an email
discussion list that focuses specifically on the Singularity, and
without a strong bias toward any particular point of view regarding
the Singularity. (Though, one may expect some statistical bias toward
discussions of AGI and the Singularity, due to the relationship of the
Singularity list with AGIRI.)

Discussion on the new list can be expected to be slow at first as the
subscriber base builds up, but will likely accelerate before too long,
in good Internet style ;-)

Yours,
Ben Goertzel

#197 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 11 September 2006 - 02:44 AM

Ben's first post to AGIRI's [singularity] email discussion list:
==
Hi all!

Over 50 people have subscribed to this new list since its creation a
couple days ago, which is pretty exciting... if you haven't thus far, join
this general discussion list about the [singularity] here: http://www.agiri.org/email

This seems like a large enough crew that it's worth launching a discussion.

There are a lot of things I'd like to talk with y'all about on this
list -- in fact, I'd planned to start things off with a discussion of
the possible relevance of quantum theory and John Wheeler's notion of
"It from Bit" to the Singularity. But now I've decided to save that
for just a little later, because my friend Shane Legg posted an
interesting and controversial blog entry

http://www.vetta.org

entitled "Friendly AI is Bunk" which seems to me worthy of discussion.
Shane wrote it after a conversation we (together with my wife
Izabela) had in Genova last week. (As a bit of background, Shane is
no AGI slacker: he is currently a PhD student of Marcus Hutter working
on the theory of near-infinitely-powerful AGI, and in the past he worked
with me on the Webmind AI project in the late 1990's, and with Peter
Voss on the A2I2 project.)

Not to be left out, I also wrote down some of my own thoughts
following our interesting chat in that Genova café (which of course
followed up a long series of email chats on similar themes), which you
may find here:

http://www.goertzel....riendliness.pdf

and which is also linked from (and briefly discussed in, along with a
bunch of other rambling) this recent blog entry of mine:

http://www.goertzel.org/blog/blog.htm

My own take in the above PDF is not as entertainingly written as
Shane's (a bit more technical) nor quite as extreme, but we have the
same basic idea, I think. The main difference between our
perspectives is that, while I agree with Shane that achieving
"Friendly AI" (in the sense of AI that is somehow guaranteed to remain
benevolent to humans even as the world and it evolve and grow) is an
infeasible idea ... I still suspect it may be possible to create AGI's
that are very likely to maintain other, more abstract sorts of
desirable properties (compassion, anyone?) as they evolve and grow.
This latter notion is extremely interesting to me and I wish I had
time to focus on developing it further ... I'm sure though that I will
take that time, in time ;-)

Thoughtful comments on Shane's or my yakking, linked above, will be
much appreciated and enjoyed.... (All points of view will be accepted
openly, of course: although I am hosting this new list, my goal is not
to have a list of discussions mirroring my own view, but rather to
have a list I can learn something from.)

Yours,
Ben Goertzel

#198 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 11 September 2006 - 02:44 AM

Replies to this discussion are here:
http://archives.list...0609/index.html

#199 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 15 September 2006 - 05:40 PM

Subscribed... and it looks like something I'm going to get really involved in, thanks Bruce & Ben for starting it up!

#200 marcus

  • Guest
  • 45 posts
  • 0

Posted 27 September 2006 - 03:19 AM

Having read through the material on Novamente and the reaction to it in this forum, I think a main point in Ben's approach was missed. One of the notions frequently mentioned is that we have no definition of intelligence, nor any ability to model it, so how can we possibly be anywhere close to AGI? Ben does have a definition of intelligence and has modeled it in a mathematical way that may allow them to program a machine to display real intelligence. It is not a model trying to mimic how the brain works, but rather how a mind works. And although how a mind works is really complex, it is not nearly as complex as understanding how our brain works to achieve a mind.

So if a mind is a factor or two lower in complexity than our brain, it is plausible that a mind could be modeled in software and on silicon with today's technology. I know on the surface the notion of AGI sounds implausible, but the Novamente Engine does carry with it all the requisite components for building a mind in tremendous detail. This is the paper that really helped crystallize my understanding of their model.

http://www.novamente...file/AAAI04.pdf

Also, Goertzel.org has some interesting essays and writings on the structure of intelligence and his psynet model.

I'm quite curious to see the progress of Novamente now that I understand the validity of the approach taken.

Marcus

Edited by marcus, 27 September 2006 - 07:34 AM.


#201 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 27 September 2006 - 11:05 PM

It is not a model trying to mimic how the brain works, but rather how a mind works. And although how a mind works is really complex, it is not nearly as complex as understanding how our brain works to achieve a mind.


This indeed is the basic question I still have. In another thread I tried to provoke a bit to get some response, but that didn't work. The examples I gave were a bit naive to say the least, but that was part of the game. I hope I did not offend anyone with it, but my questions in the past partly remained unanswered. I will try to make my point in a more elaborate fashion here. Of course, it's all based on my (as yet) limited understanding of the AI subject, but also on a good background in general software modeling and its pitfalls. And yes, I know I should do more reading into the subject.

Here comes my hypothesis of hypotheses :)

Building a mind using a functional model is a top-down approach, a job that takes place in the mathematical and IT domains. Its nature is the analysis and creation of hypotheses based on observations of performing minds, treated as black boxes without any assumption regarding their internal structure. The next step is determining the basic functions, with their relations and interactions, that are performed by a mind. After this, a basic architecture of components can be dreamed up that is able to execute the functions that are determined, in their proper context. This process is more or less a boundless process of applied intelligence based on observation and analysis. The resulting architecture does not represent the internal structure of a brain.

Building a brain is more or less a bottom-up approach. By trying to re-build or re-engineer a brain, we can also recreate the mind that is implemented by this brain, without knowing exactly what the higher level functions of a mind are and without needing a proper definition of intelligence, emotion, etc. This re-engineering job takes place primarily in the bio-tech domain and eventually in the IT domain. The big question here is at what level we will try to re-engineer or simulate. As far as I understand, this could be done by recreating the neural network at the level of synapses and their connections. This could result in a static network model, based on research that is carried out on one or more (human or animal) subjects. I assume there are a limited number of types of synapses that can each be recreated as a software model; instances of these models could then be connected according to the actual network of a brain (see the naive sketch below). Parameters such as the presence and concentration of brain fluids can be included in this model as well, including the organs (with their triggering) that produce these fluids. Etcetera. A big issue here is the dynamics of growth and decline and the associated genetic influence. And all the things I omitted out of ignorance. Anyway, this is a process of bounded research and engineering.
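To make that a bit more tangible, here is a deliberately naive sketch (my own toy illustration, nothing like a real neural simulator): a synapse "type" reduced to a single weight parameter, and neuron instances wired together the way a measured connectivity map would dictate.

from dataclasses import dataclass, field

@dataclass
class Synapse:
    weight: float                 # a real synapse model would need many more parameters
    target: "Neuron" = None

@dataclass
class Neuron:
    threshold: float = 1.0
    potential: float = 0.0
    outputs: list = field(default_factory=list)

    def receive(self, amount):
        self.potential += amount
        if self.potential >= self.threshold:    # crude firing rule
            self.potential = 0.0
            for syn in self.outputs:
                syn.target.receive(syn.weight)

# In a real project the wiring would come from measured connectivity data;
# here two neurons are connected by hand.
a, b = Neuron(), Neuron()
a.outputs.append(Synapse(weight=0.6, target=b))
a.receive(1.0)          # a fires and passes 0.6 on to b
print(b.potential)      # 0.6

The hard part is everything this leaves out: the number of parameters per real synapse, the chemistry, and the growth dynamics mentioned above.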

Regarding the re-building of a brain, we do know that this is almost impossible due to the complexity of the neural structure and the dynamics that are involved. This will translate into a very long development time, but several sub-phases and sub-projects could be realised. The rebuilding of a mind seems to be less complex, but my fear is that this is based partly on ignorance. As Bruce already stated, it is a highly incremental process of which the major question eventually will be "Did we model the complete mind yet?". It's not easy and perhaps impossible to determine the stop criteria here. On the other hand, the ultimate goal of creating an AGI does not need to be to replicate all aspects of a human mind, so we can determine our own stop-criteria.

But here we arrive at my basic question or concern. The Novamente AGI (mind) model is also mentioned in the context of (human) brain/mind uploading. I do not understand how that could happen, since we can never be sure the AGI represents all functions of a human brain. For uploading, the brain rebuilding path seems to me the way to go. If we eventually develop all the required technology and knowledge, we would be able to copy / upload a brain with a higher level of certainty that it is complete.

So, for building AGI's the mind building process is the way to go, since it will be possible to produce results in an early stage. But, for building a system that is able to upload brains / minds, we need the brain re-engineering approach.

Any comments please! What are the flaws in my hypothesis?

#202 marcus

  • Guest
  • 45 posts
  • 0

Posted 28 September 2006 - 07:30 AM

Brainbox,

I believe what has been theorized is that the development of a successful AGI system like Novamente would accelerate the process of technological change to the point where brain downloads could be a part of the future. Novamente is just the AGI system that once/if it passes human level intelligence could contribute towards such technologies.

Yes, Novamente has taken what you call the top-down approach to attempt to construct a mind. I am rather new to AI and not at all an expert either, but I believe most of the posters on this board have dismissed the idea out of hand as being too difficult without having looked into the details of the approach.

This is another excellent paper describing the Novamente architecture, and in particular it does an excellent job of explaining how SMEPH's represent both knowledge and system dynamics. As I said earlier, I'm not an AI expert (and the highest math I had was calculus), but if you stick to the simple examples and take a look at Wikipedia's article on hypergraphs you can get a decent understanding of how they work (there is also a bare-bones sketch below). In a sense, those SMEPH's are the missing link in how you get from a brain which generates an intelligent mind to a mathematics-based piece of software that does the same thing.

http://www.goertzel....human_psych.pdf
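For anyone who, like me, finds the hypergraph idea easier to grasp from a toy example, here is a bare-bones sketch (just the generic data structure, emphatically not Novamente's actual SMEPH code): a hypergraph is a set of vertices plus edges, where an edge may join any number of vertices rather than exactly two.

class Hypergraph:
    def __init__(self):
        self.vertices = set()
        self.edges = []                    # each edge is a frozenset of vertices

    def add_edge(self, *vertices):
        edge = frozenset(vertices)
        self.vertices |= edge
        self.edges.append(edge)

    def edges_containing(self, vertex):
        return [e for e in self.edges if vertex in e]

hg = Hypergraph()
hg.add_edge("cat", "animal")               # an ordinary pairwise link
hg.add_edge("cat", "chases", "mouse")      # one edge joining three concepts at once
print(hg.edges_containing("cat"))

What the vertices and edges are used to represent (knowledge and system dynamics, as the paper explains) is the substantive part; the bare structure really is this simple.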

Marcus

Edited by marcus, 28 September 2006 - 09:53 PM.


#203 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 October 2006 - 07:15 AM

Hey Marcus & Brainbox,

You guys are on the right track. It's taken me a good part of a year to digest the general architecture of Novamente. Considering Ben has been thinking about this for more than 20yrs and the CS involved is pioneering, I still have a long way to go.

We've uploaded PowerPoints and Video from our second AGI Workshop (Sept 17, 2006; Palo Alto, CA), here: http://www.agiri.org/workshop2


#204 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 October 2006 - 07:17 PM

Also, you may enjoy Ben's recent podcast interview (AGI & Singularity) from http://www.singularityu.org.

#205 marcus

  • Guest
  • 45 posts
  • 0

Posted 09 October 2006 - 08:29 PM

Bruce,

Can we expect to see updates on Novamente's progress here in this forum? I am very curious to see how the project is progressing and how fully the architecture has been implemented. Something similar to an updated milestones table like what was presented in the human mind/AGI comparison paper would be ideal.
http://goertzel.org/research.htm

I know you got a little ridicule for having such an accurate time-table for Novamente's development, but after looking at how you plan to implement learning in the system it makes sense to have a detailed outline and I can see the logic behind the predictions.

Also, having a little experience with the venture capital industry I can only imagine how difficult it must be to fully explain your concepts to potential investors. There are a couple of VC firms that I know of who take more of an interest in long range projects. PM me if you'd like me to give you a contact. Not sure they would consider something this ambitious, but I know they would give you an opportunity to present your ideas.

Marcus

#206 brandonreinhart

  • Guest
  • 67 posts
  • 0

Posted 09 October 2006 - 09:08 PM

Whoa. Eliezer's in the crowd...but also on the stage! Did he clone himself to get more work done?

#207 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 October 2006 - 01:55 AM

Thanks, Marcus... I've sent you a PM.

Can we expect to see updates on Novamente's progress here in this forum?

Yes... we also have healthy discussions at the [singularity] list here: http://www.agiri.org/email

More general updates are posted here: http://www.novamente.net/news

#208 attis

  • Guest
  • 67 posts
  • 0
  • Location:Earth

Posted 10 October 2006 - 04:32 AM

I just have a quick question to all the AGI developers that are here on the forum.

Will AGI be a form of Non-Turing computing? Because I'm actually looking at the properties of molecules to handle such operations, specifically proteins and how they fold.

#209

  • Lurker
  • 1

Posted 10 October 2006 - 05:47 AM

Good to see a couple of Mac notebooks.. ;)



#210 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 10 October 2006 - 09:33 PM

Mine is being developed for a Turing-type machine... no special bio-computer architecture, right now it would just complicate things.



