  LongeCity
              Advocacy & Research for Unlimited Lifespans





Eliezer Yudkowsky's prediction...


35 replies to this topic

#1 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 15 January 2004 - 09:21 PM


A small article on Eliezer Yudkowsky and his efforts to make The Singularity happen:

http://sfgate.com/cg...LVG1J459UE1.DTL

A quote:

Some scientists project that the "Singularity" -- a kind of artificial intelligence -- will happen by 2100; some, within the next 25 years. The Singularitarians believe it can happen -- it must happen -- within a decade. Yudkowsky says there is "a 2 percent chance" that artificial intelligence can save the world from eventual social and ecological collapse...



I, like Eliezer, also believe that it is possible to build a superior AI. However, with all the articles that I've read about it, I'd say that it can only be done if you have the tools to build 3D circuitry, just like the brain's. But this sort of tool is not available yet, is it?

I can't imagine that a conscious, superior AI can run on a present-day CPU. Present-day CPUs are nothing more than advanced adding machines. They are 2D... processing instruction after instruction in a linear fashion. Then there's the fact that, even within 10 years, CPUs will still have less computational capacity than the human brain (Kurzweil predicts we will have CPUs this fast by 2020).
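Just to put rough numbers on that gap, here's a back-of-the-envelope sketch. The figures are the commonly cited ballpark estimates (around 10^11 neurons, ~10^3 connections each, a few hundred updates per second), so treat it as illustration rather than measurement:

```python
# Back-of-the-envelope comparison of brain vs. CPU raw throughput.
# All figures are commonly cited ballpark estimates, not measurements.
neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e3    # order-of-magnitude connection count
ops_per_synapse_per_s = 200  # rough effective update rate
brain_ops = neurons * synapses_per_neuron * ops_per_synapse_per_s  # ~2e16 ops/s

cpu_ops = 3e9                # a ~3 GHz single-core CPU, roughly one op per cycle

print(f"brain ~ {brain_ops:.0e} ops/s")
print(f"cpu   ~ {cpu_ops:.0e} ops/s")
print(f"ratio ~ {brain_ops / cpu_ops:.0e}x")  # several million times
```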

So how is it possible that the folks over at singinst.org think that it can and must happen within a decade? What have they read that I have not read? What do they know that I (or 'we', if you will) do not?

And then there's the 2% estimate of superior AI being able to save the world. Why 2%? Does this mean Eliezer thinks that even The Singularity might fail to save earth from certain doom?

Thoughts, anyone?

#2 outlawpoet

  • Guest
  • 140 posts
  • 0

Posted 15 January 2004 - 09:54 PM

This is a really really bad article. Please don't believe nearly anything you read in the newspaper, electronic or otherwise.

Eliezer probably was referring to the difficulty of Friendly AI rather than the probability that a successful AI could uplift us.

Linear computation is likely to be sufficient for artificial intelligence, and the theory over at SIAI is that constructing a 'seed AI' will allow the project to be done before human-equivalent machines are available. Also, note that Kurzweil is not a computer scientist. He's an enthusiast who's just following the predictions of Moore's so-called Law.

The theory, as I understand it, is that a Seed AI will require substantially fewer computational resources, but will be able to self-improve. Such a system will rapidly increase in intelligence, allowing progress to be accelerated significantly. Once the AI has physical manipulators, human progress is essentially disrupted, as this new intelligence starts to make its own progress.

The reason Eliezer says it has to happen in a decade is the risks involved. The longer it takes AI to get here, the more advanced other technologies become. Some wag on SL4 called it 'the race between superweapons and superintelligence'. If it takes much longer than a decade, Eliezer is afraid we'll have blown ourselves to dust, been eaten by goo, or been consumed by an engineered biopocalypse. I'm not certain how right he is. But the risks DO keep increasing.


#3 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 January 2004 - 09:55 PM

I was unsuccessful in contacting Danielle at any of the following addresses:

chronfeedback@sfchronicle.com
danielle@browneandmiller.com
danielle.e@charmed.zzn.com

Does anyone know her email address, perchance?

#4 Jay the Avenger

  • Topic Starter
  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 15 January 2004 - 10:06 PM

Outlawpoet:

Why is it a bad article?

I thought Kurzweil was a computer scientist. He studied at MIT, didn't he?


Bruce:

Why are you trying to contact her?

The first mailaddy should work, as it is at the top of the article. Did your mail bounce back? Perhaps their server is down at the moment.

#5 Jay the Avenger

  • Topic Starter
  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 15 January 2004 - 10:08 PM

The Hotmail account danielleegan@hotmail.com exists. Try it out!

#6 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 January 2004 - 11:12 PM

Thanks Jay.

hmm.. email didn't work.

#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 16 January 2004 - 04:07 AM

Outlawpoet basically said it; 2D computing is fine, 3D computing would probably involve logic gates and much of the same architecture as 2D stuff, just packed into the third dimension. No difference. There is no magical computer that "possesses biological characteristics" that will make AI much easier. It's primarily a software problem, not a hardware one.

The "2%" thing is probably the worse misquote in the article, and I'm *guessing* he was referring to the chance that a FAI would bail us out of some type of economic/social collapse, *rather* than an existential disaster like nuclear war or nanoplague. I really have to stretch myself to even be able to interpret that one. Even in the context of the article it seems meaningless, just another random quote thrown in there in an attempt to shock.

As for the "within the decade thing", yes, the availablity of nanotechnology, along with other stuff, could lead to a good chance of world destruction in the near future. But nanotechnology would also make programming AI radically easier (it would make the creation of Friendly AI and unFriendly AI both easier.) It seems like the world is oriented towards two big "attractors"; either we become smarter, as a species, and get better at protecting ourselves from disaster, OR we nuke ourselves to dust. If the chance of nuking ourselves is 0.1% per year, and it stays constant at that, but our intelligence never increases and we can never push the probability lower than that, then after 1000 years we would probably wipe ourselves out. Another possibility is setting ourselves way back but not dying out completely, in which case it would take a geological eyeblink for us to get back in the same position again.

We (Singularitarians) *really really* would like for humanity's average intelligence and kindness to go up in the next decade (which could be greatly assisted by the creation of any type of self-improving intelligence, although AI seems easier to do first), although of course wishes are not deeds. Singularitarians don't have any agreement on *how likely* this is, but many of us seem to agree that there is a high enough possibility that pursuing the Singularity is worth doing (cost-effective). Even whispering that AI could be feasible within the next decade is probably shocking enough to make it into a Singularity-bashing article, and so it made it in. Most folks would put the likelihood of AI being developed in the next decade at less than one in a billion, I would wager.

#8 Jay the Avenger

  • Topic Starter
  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 16 January 2004 - 10:46 AM

So Michael, do you think that a Seed AI running on a 2D CPU can be conscious of itself?

I myself was under the impression that you need massive parallel computing for that, like in the brain. Since there's no CEO neuron that runs the whole machine, I figured that consciousness, by definition, can only exist if it is spread out over more than one neuron (or technological equivalent).

#9 outlawpoet

  • Guest
  • 140 posts
  • 0

Posted 16 January 2004 - 05:44 PM

Kurzweil does have a technical background. He developed some interesting computer programs and is famous for his synths and reading machines.

However, Kurzweil for some reason does not approach his predictions in a technical manner. He's not projecting known semiconductor trends, he's not positing possible substrates for computing and nanotechnology. All he's doing is projecting Moore's so-called Law out into infinity. He's projecting like an enthusiast, with no technical details or support whatsoever. He's done no math. If you're the smartest guy in the world, with ten degrees, if you don't do the math, it's not science.

I don't really see the connection between parallel substrates and 2D vs. 3D computing. If for whatever reason intelligences can only be run on parallel substrates, you can just hook a crapload of serial processors together, or even simulate the parallel structure on a single serial processor. And while there may be no CEO neuron in the brain, there isn't any evidence that an AI can't be built with a supergoal system and a direct hierarchy of modules.
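As a toy illustration of what "simulate the parallel structure on a single serial processor" could look like (the network here is made up for the example, not a claim about any real AI design):

```python
# Toy illustration: a "parallel" network stepped on a serial processor.
# The network is hypothetical; nothing here describes a real AI architecture.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                 # toy "neurons"
weights = rng.normal(0.0, 0.05, (n, n))  # all-to-all connection strengths
state = rng.random(n)

def step(state):
    # Every unit is computed from the *previous* state vector, so the result
    # is identical to all units updating simultaneously -- the serial CPU just
    # grinds through the sum one multiply-add at a time.
    return np.tanh(weights @ state)

for _ in range(10):
    state = step(state)

print(state[:5])
```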

The article is bad because it's full of crap. It's mostly about Eliezer, with lots of quotes about how weird he is. There is no real examination of the underlying issues, or even of Eliezer's reasoning. It's just scare quotes and dumb questions. What will the AI look like? Give me a break.

#10 outlawpoet

  • Guest
  • 140 posts
  • 0

Posted 16 January 2004 - 05:44 PM

arg. negativity!

#11 Thomas

  • Guest
  • 129 posts
  • 0

Posted 16 January 2004 - 06:59 PM

I have no doubt that an aFriendly AI is possible within a decade. 2D or 3D chips make no difference. All that matters is the amount of computing per second.

The so-called software problem for an aFriendly AI nearly doesn't exist. What we have to do is a *world* simulation combined with some EA (evolutionary algorithm) or other improving algorithm. Physical simulation and iterative enhancement of a molecule can find any desirable molecule. A playing-card simulation can yield a perfect poker player ... and so on.
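Here is a minimal sketch of that "simulation plus improving algorithm" pattern; the fitness function is a made-up stand-in for the physical or card-game simulation, so take it as illustration of the loop only:

```python
# Toy version of "a simulation plus an improving algorithm": a minimal
# evolutionary loop. The fitness function is a placeholder for a physics
# or card-game simulation; the point is the optimisation pattern, not the task.
import random

def simulate(candidate):
    # Placeholder "world simulation": higher is better, best possible is 0.
    return -sum(x * x for x in candidate)

def mutate(candidate, scale=0.1):
    return [x + random.gauss(0, scale) for x in candidate]

population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
for generation in range(200):
    population.sort(key=simulate, reverse=True)   # best candidates first
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(max(simulate(c) for c in population))       # creeps toward 0 as designs improve
```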

We are already heading toward this. Pratt & Whitney makes ever-better jet engines, Folding@Home will give us many drugs, myself is in sorts (or any other digital algorithm) ... and this will not stop soon.

Soon, the aFriendly AI - a bit stupid, but very effective - will be all around us. Is this enough to ignite the Singularity? Sure it is. It would be safer, however, if we had some Friendliness at hand in 10 years' time. Or sooner.

#12 Jay the Avenger

  • Topic Starter
  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 16 January 2004 - 08:18 PM

Kurzweil does have a technical background. He developed some interesting computer programs and is famous for his synths and reading machines.

However, Kurzweil for some reason does not approach his predictions in a technical manner. He's not projecting known semiconductor trends, he's not positing possible substrates for computing and nanotechnology. All he's doing is projecting Moore's so-called Law out into infinity. He's projecting like an enthusiast, with no technical details or support whatsoever. He's done no math. If you're the smartest guy in the world, with ten degrees, if you don't do the math, it's not science.


He's done his math. Keep in mind that he's trying hard to get the public informed. You can't throw around mathematical stuff in your articles if you're trying to appeal to the public.

It seems his track record is also pretty impressive. He deserves more credit than you're giving him, I think. But it's okay with me that you have your own opinion.

The article is bad because it's full of crap. It's mostly about Eliezer, with lots of quotes about how weird he is. There is no real examination of the underlying issues, or even of Eliezer's reasoning. It's just scare quotes and dumb questions. What will the AI look like? Give me a break.


Good point. The article probably looks a little oddballish to anyone who doesn't know about AI/Singularities.

#13 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 17 January 2004 - 02:44 AM

just a think

Fourier transforms mimic any frequenciform thing with a nonfinite sum
wavelets are a mathematical idea that mimic any frequenciform thing with different equations that use a different quantity of iterations or events

Between these two there may be a nonfinite number of representational sum forms

A nonfinite number of representational sum forms may, or may not have a stats type curve, or better, a thing like a periodic table that is an aggregate of stats curves with areas of fine or fuzzy predictability

just as we might know chemically that Li is a fine K mimic, or K an Li mimic we do know they are different

To this human this is a cause to think flat digital circuitry differs at identity with my brain. I favor the singularity so this thing I say is

there will be variance between consciousness vehicles. I think we will think of a thing vaster than computers, less than brains, as well as a thing vaster than computers or brains. these things will differ the theory ovary artificial sentience awes me

A different version of difference is string typing like this
Basic Ideas of Superstring Theory

A string's space-time history is described by functions Xm(s,t) which describe how the string's two-dimensional "world sheet," represented by coordinates (s,t), is mapped into space-time Xm. There are also functions defined on the two-dimensional world-sheet that describe other degrees of freedom, such as those associated with supersymmetry and gauge symmetries.

Surprisingly, classical string theory dynamics is described by a conformally invariant 2D quantum field theory.

Thag think bits r bits like on futurity commercials
(Roughly, conformal invariance is symmetry under a change of length scale.) What distinguishes one-dimensional strings from higher dimensional analogs is the fact that this 2D theory is renormalizable (no bad short-distance infinities).

Thag think computational equivalence


By contrast, objects with p dimensions, called "p-branes," have a (p+1)-dimensional world volume theory. For p > 1, those theories are non-renormalizable.

Thag dig this think nonequivalent math shape do nonequivalent math thing
This is the feature that gives strings a special status, even though, as we will discuss later, higher-dimensional p-branes do occur in superstring theory.

Another source of insight into non-perturbative properties of superstring theory has arisen from the study of a special class of p-branes called Dirichlet p-branes (or D-branes for short). The name derives from the boundary conditions assigned to the ends of open strings. The usual open strings of the type I theory satisfy a condition (Neumann boundary condition) that ensures that no momentum flows on or off the end of a string. However, T duality implies the existence of dual open strings with specified positions (Dirichlet boundary conditions) in the dimensions that are T-transformed. More generally, in type II theories, one can consider open strings with specified positions for the end-points in some of the dimensions, which implies that they are forced to end on a preferred surface. At first sight this appears to break the relativistic invariance of the theory, which is paradoxical.

Thag think computability by non-renormalizable brane differs from renormalizable brane
The resolution of the paradox is that strings end on a p-dimensional dynamical object -- a D-brane. D-branes had been studied for a number of years, but their significance was explained by Polchinski only recently[7]
The importance of D-branes stems from the fact that they make it possible to study the excitations of the brane using the renormalizable 2D quantum field theory of the open string instead of the non-renormalizable world-volume theory of the D-brane itself. In this way it becomes possible to compute non-perturbative phenomena using perturbative methods.

Thag know simulation talk well. Think string theorists clever to find use make tool with difference between renormalizable simulation n different form.
Many (but not all) of the previously identified p-branes are D-branes.

Thag like http://www.theory.ca...s/string14.html from the human imminstmorals



the article

Like most transhumanists, he is Caucasian

Why, I'm trying the white skin thing on this body right now. Faceplates. There are many, many competent, creative Asian scientists.
I've read that Chinese science fiction magazines pass through many dozens of hands, as the futurist appetite is huge while the purse is still growing to catch up.

China is a huge growth area as futurity ideas go. They will have 100 million online by 2007 or earlier.
What is the way to bring their creativity to hyper lifespans n singularity thrills?

Edited by treonsverdery, 17 January 2004 - 03:30 AM.


#14 John Doe

  • Guest
  • 291 posts
  • 0

Posted 17 January 2004 - 03:49 AM

Another possibility is setting ourselves way back but not dying out completely, in which case it would take a geological eyeblink for us to get back in the same position again.


I would not be surprised if, after hard takeoff and calculating the odds, a Friendly AI told us that, given the world's imminent dangers, this was the most (counterintuitively) moral action it could take. Imagine Skynet sprinkling the world with nukes, not because of a self-preservation instinct, but because of perfect altruism.

The "2%" thing is probably the worse misquote in the article, and I'm *guessing* he was referring to the chance that a FAI would bail us out of some type of economic/social collapse, *rather* than an existential disaster like nuclear war or nanoplague.


This conflicts with EY's earlier statement that he expects to succeed with a large error margin for safety. Given that EY's optimism and enthusiasm for self-modification are points upon which I would disagree the most, I am glad to hear more honest accounts of our odds for survival.

#15 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 17 January 2004 - 05:46 AM

I would not be surprised if, after hard takeoff and calculating the odds, a Friendly AI told us that, given the world's imminent dangers, this was the most (counterintuitively) moral action it could take. Imagine Skynet sprinkling the world with nukes, not because of a self-preservation instinct, but because of perfect altruism.


I dunno. I'm not sure why you "would not be surprised". Would you be surprised if a genuinely kind person, the likes of which humanity has not yet seen, spontaneously decided to release nerve gas in a preschool, or something? I certainly would. I would also be quite surprised if that person, given self-modification ability, ended up on an enhancement trajectory that culminated in genocide. If it did happen, I would chalk that up to a technical error due to the enhancement process, or a wirehead error, or a subgoal-stomp error, or something else out of the ordinary - not the sincere will of the enhancee.

This would especially apply if I had known the enhancee beforehand, and they promised me that their intentions were entirely benevolent, and that they would proceed on the enhancement curve as safely and cautiously as possible, employing whatever precautions or safeguards it would take - whether that be slowing the enhancement rate to a crawl, spinning off comrades to help monitor one another, engaging in extra "wisdom tournaments" to model the thresholds of moral breakdown, or even reverting back to a human or humanlike being and saying "sorry, the intelligence space above humanity is too dangerous, all my models suggest that intelligence enhancement past a certain point results in genocidal thoughts."

But let's say that an initially altruistic AI did end up "logically concluding" that everyone should die. This would have further philosophical and moral implications. It would mean that some "cognitive-moral attractor" in the "memespace" of transhumanly intelligent beings possesses an *incredible* force of "drag", enough to entirely catch an altruistic AI or IAee by surprise, rapidly transforming an enthusiastically helpful, altruistic person into a genocidal one before they could get help.

Perhaps they "didn't want to get help" because killing everyone was the "logical conclusion"? If killing everyone really is the logical conclusion, then human altruists have been in "denial" throughout history, a FAI or sufficiently kind uploadee would also be in "denial", and chances are that they would self-modify into a superintelligence in "denial" as well. The alternative is that *all* beings enhanced above a certain level of intelligence go genocidal, and the second that I (or some other altruist, or a FAI) noticed that, I predict that we'd do everything in our power to prevent that from happening.

A central question that would affect this answer is "which is stronger, a transhuman's ability to rewrite and maintain its own source code, or attractors in the morality space that force convergence regardless of the will of the transhuman itself?" (Convergence in this case meaning: in 1000 different parallel universes, recursive self-improvers ranging from the extremely Friendly to the utterly oblivious are launched, and in 999 of these universes that RSIer ended up killing everyone including itself, and in only 1 did it *not* kill everyone, because that's the roll of the dice.) My guess is that "convergence" towards genocide among AIs *does* exist, because most physically possible AIs don't have goal systems complex enough to value sentient life, and an ascent in their power would probably result in the transformation of reachable material (i.e., humans and Earth) into utility-structures that the AI values; BUT, when it comes to reasonably well-programmed FAIs, I would probably rather have them spark the Singularity than me, because I'd probably consider them to be more trustworthy than me. Without rationalization or a self-opaque goal system, Friendly AIs could be transhumanly trustworthy.

"Imagine if a Friendly AI decided to destroy the world one day" is certainly a scary thought, but we can generalize it even further to yield a thought practically equivalent to the first; "what if ANY sufficiently intelligent being decided to destroy the world one day?" These are the kind of questions, of course, that we'd want to pass over to a FAI to let it consider... and if even after considering them, and taking the self-enhancement path cautiously, it STILL ends up killing everyone, then I'd probably just blame that on the fact that we live in an inherently mean universe, that seems to mysteriously force sufficiently intelligent entities to become suicidal. :)

I'd also like to mention a nitpick about your wording; you say "after a hard takeoff the AI might decide X", but I visualize a FAI continuing to reinforce its altruistic goal system *throughout* the hard takeoff, with transhuman strategies like wisdom tournaments, AI shadowselves; you know, CFAI-type stuff, but radically extended and improved. If the AI leaped first (by enhancing its intelligence), and neglected to *look* (through morality modeling, AI shadowselves, etc.), then I think that would fit a pretty standard class of failure. The spookiest question, that I agree is worth considering at length, is, "even if we get *everything* right, is there a chance we'll still be wiped out?" And I hope that a FAI would consider that question at length as well.

This conflicts with EY's earlier statement that he expects to succeed with a large error margin for safety. Given that EY's optimism and enthusiasm for self-modification are points upon which I would disagree the most, I am glad to hear more honest accounts of our odds for survival.


Ah, slight misunderstanding here. My *guess* of the correct interpretation of the original (mis)quote was "even if nanowar, plague, UFAI, and all the other horrible potential disasters are *"fated" not to happen*, there's *still* a small (~2%) chance that ecological or social collapse is "fated" to happen, in which case FAI could help avoid that". Seems like a pretty silly quote to me, doesn't it to you as well? :) So hard to interpret, so not-said-by-Eliezer (he doesn't even like openly stating the probability for *anything*, last time I checked), so odd-attempt-at-a-shock-effect-ish.

Who said Eliezer was optimistic or enthusiastic about self-modification? Didn't you read the grim, pessimistic analyses in CFAI's policy section? *All of CFAI* is tuned towards answering the question, "what architectural features would an AI need to make it so that the potential idiocy or mistakes of the programmers do not result in critical failure?", which again implies pessimism. Eliezer acknowledges that he has not yet finished solving even the *foundational* problems underlying Friendliness, and I would judge that to mean that he would feel doubtful about his chances of success if, say, someone held a gun to his head over the next few years, forcing him to build an AI based on the knowledge he currently has.

But in the end, what is Eliezer's *personal* estimation of the real likelihood of success? I have no idea - he probably doesn't have one, because as the "activism vs. futurism" page says, the point is not whether the likelihood of success is 20% or 80%, but whether we can influence that probability by 5% or 10%. I'm pretty sure that he would be equally cautious, worried, and paranoid whether he estimated the likelihood of success to be 99% or 1%, simply because the stakes are so high. I sure would be, that's for sure, yep yep. [glasses]


Oh yes, and while I have your attention (heh)... do you agree with this following statement made by Eli?

"The most critical theoretical problems in Friendliness are nonobvious, silent, catastrophic, and not inherently fun for humans to argue about; they tend to be structural properties of a computational process rather than anything analogous to human moral disputes."

#16 John Doe

  • Guest
  • 291 posts
  • 0

Posted 17 January 2004 - 09:16 AM

Perhaps they "didn't want to get help" because killing everyone was the "logical conclusion"?  If killing everyone really is the logical conclusion, then human altruists have been in "denial" throughout history, a FAI or sufficiently kind uploadee would also be in "denial", and chances are that they would self-modify into a superintelligence in "denial" as well.  The alternative is that *all* beings enhanced above a certain level of intelligence go genocidal, and the second that I (or some other altruist, or a FAI) noticed that, I predict that we'd do everything in our power to prevent that from happening.


I think you may have misunderstood me. One would not want to prevent the FAI, at least I would not want to do so, because I would trust that the AI is right. If the odds of the FAI gaining control, solving problems, and becoming sysop are sufficiently threatened by the prospect of imminent global catastrophe over which the FAI does not yet have control (if Indians are having success with unfriendly AIs and California businessmen who hate the Foresight Institute are recklessly building assemblers), and the FAI concludes that the odds that wiping out much of humanity to try again in a couple of centuries will succeed are sufficiently high, we have to "bite the bullet". This is only a question of the least-worst option. Of course, the odds of the odds being so delicately balanced (as opposed to catastrophe or hard takeoff happening relatively uninhibited) are extremely low, and therefore I would disagree about FAIs being attracted to any "genocide space" without having any reason or rationale for doing so -- that sounds to me like emergence or mysticism. That said, however, if after hard takeoff the FAI kindly informs us that we have about thirty minutes to live before ICBMs wipe out our country, I am not sure that I would be surprised, and I might even resign myself to feeling hope about the future of our species and fortunate that we built an FAI intelligent enough to save it.

I'd also like to mention a nitpick about your wording; you say "after a hard takeoff the AI might decide X", but I visualize a FAI continuing to reinforce its altruistic goal system *throughout* the hard takeoff, with transhuman strategies like wisdom tournaments, AI shadowselves; you know, CFAI-type stuff, but radically extended and improved.  If the AI leaped first (by enhancing its intelligence), and neglected to *look* (through morality modeling, AI shadowselves, etc.), then I think that would fit a pretty standard class of failure.  The spookiest question, that I agree is worth considering at length, is, "even if we get *everything* right, is there a chance we'll still be wiped out?"  And I hope that a FAI would consider that question at length as well.


I am not sure that what I said contradicts the notion that moral and intellectual enhancements happen simultaneously but I thank you for reminding me. My point was that after moral enhancement, once a FAI morally matures, the FAI will eventually need to make decisions and take action.

Ah, slight misunderstanding here.  My *guess* of the correct interpretation of the original (mis)quote was "even if nanowar, plague, UFAI, and all the other horrible potential disasters are *"fated" not to happen*, there's *still* a small (~2%) chance that ecological or social collapse is "fated" to happen, in which case FAI could help avoid that".  Seems like a pretty silly quote to me, doesn't it to you as well?  :)  So hard to interpret, so not-said-by-Eliezer (he doesn't even like openly stating the probability for *anything*, last time I checked), so odd-attempt-at-a-shock-effect-ish.


I am not that interested in what EY truly meant by the statement. Let's consider for a moment the hypothetical world in which he truly meant that the odds of FAI success were 2%. Is this figure so ridiculous? I do not think so, although EY probably does.

Who said Eliezer was optimistic or enthusiastic about self-modification? Didn't you read the grim, pessimistic analyses in CFAI's policy section? *All of CFAI* is tuned towards answering the question, "what architectural features would an AI need to make it so that the potential idiocy or mistakes of the programmers do not result in critical failure?", which again implies pessimism. Eliezer acknowledges that he has not yet finished solving even the *foundational* problems underlying Friendliness, and I would judge that to mean that he would feel doubtful about his chances of success if, say, someone held a gun to his head over the next few years, forcing him to build an AI based on the knowledge he currently has.



From CFAI:

Q3.4:  Do all these safeguards mean you think that there are huge problems ahead?
Actually, I hope to win cleanly, safely, and without coming anywhere near the boundaries of the first set of safety margins. There's a limit to how much effort is needed to implement Friendly AI. Looking back, we should be able to say that we never came close to losing and that the issue was never in doubt. The Singularity may be a great human event, but the Singularity isn't a drama; only in Hollywood is the bomb disarmed with three seconds left on the clock. In real life, if you expect to win by the skin of your teeth, you probably won't win at all.


The comment about Hollywood applies to EY's optimism as well as to the improbability of the situation I discuss above.

"The most critical theoretical problems in Friendliness are nonobvious, silent, catastrophic, and not inherently fun for humans to argue about; they tend to be structural properties of a computational process rather than anything analogous to human moral disputes."


I do not pretend to be as intelligent or knowledgeable as EY, nor to perfectly understand his ideas, but this sentence suggests to me that the most critical problems of Friendliness are related to computational or hardware-level problems as opposed to moral and ethical questions or even abstract goal structures. This does not seem true or intuitive to me at all -- but EY has studied FAI much more than I have, so I am probably misunderstanding. The following CFAI quote (with which I strongly agree) shows that EY thinks intelligence and moral problems are largely substrate/computer independent, and so I am probably misunderstanding the above:

I personally believe that strong transhumanity is an inevitable consequence of pouring enough processing power into any halfway decent general intelligence.



#17 John Doe

  • Guest
  • 291 posts
  • 0

Posted 17 January 2004 - 09:26 AM

Good to see the SIAI getting some press.

If our fellow human beings continue to pour billions of dollars into things other than the SIAI and the Methuselah Mouse Prize, and we all die of a catastrophe or aging while robots crawl around Mars and Microsoft succeeds with the latest gaming console, I am going to be very amused.

#18 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 17 January 2004 - 10:17 AM

Michael, you have an extremely pessimistic view of humanity, in that you seem not to have any faith, for lack of a better term, in humanity's ability to self-correct. Just because we have tendencies towards self-destructive behaviors doesn't mean that we don't have the ability to be altruistic, as you so often say. Why does FAI and the Singularity have to be the sole savior that will rescue us from our assumed 'self destructive demise?' It would seem to me that through neuroscience and biotech we can self-correct as well. Do you really believe that we're doomed? Or is it that you think we have an innate suicidal instinct? If we are merely a bunch of primitive, irresponsible, barely sentient beings with self-destruction on our minds, then how would anything have gotten done in the first place? And if you can convince me that this is the case, how can I help out... being a practical luddite in comparison to people like you and EY? [huh] I seriously would like to contribute to FAI and the Singularity in some way.

#19 John Doe

  • Guest
  • 291 posts
  • 0

Posted 17 January 2004 - 11:00 AM

Michael, you have an extremely pessimistic view of humanity, in that you seem not to have any faith, for lack of a better term, in humanity's ability to self-correct. Just because we have tendencies towards self-destructive behaviors doesn't mean that we don't have the ability to be altruistic, as you so often say. Why does FAI and the Singularity have to be the sole savior that will rescue us from our assumed 'self destructive demise?' It would seem to me that through neuroscience and biotech we can self-correct as well. Do you really believe that we're doomed? Or is it that you think we have an innate suicidal instinct? If we are merely a bunch of primitive, irresponsible, barely sentient beings with self-destruction on our minds, then how would anything have gotten done in the first place? And if you can convince me that this is the case, how can I help out... being a practical luddite in comparison to people like you and EY? [huh] I seriously would like to contribute to FAI and the Singularity in some way.


I agree with MA. Consider this trend:

Does new technology ultimately make us more or less vulnerable?
A friend of mine, Yale economist Martin Shubik, says an important way to think about the world is to draw a curve of the number of people 10 determined men can kill before they are put down themselves, and how that has varied over time. His claim is that it wasn't very many for a long time, and now it's going up. In that sense, it's not just the US. All the world is getting less safe.


http://www.wired.com...2/marshall.html

Bill Joy adds a dramatic analogy:

So what we're talking about here is whether we're going to give these kinds of people illimitable power. It's hard to think about what that would mean, but what I think about it is it's like Flight 990. Remember where the pilot probably crashed the plane? Imagine if everybody on the plane is a pilot and has a button to crash the plane. How many people are going to be pilots with you on the plane before you're not willing to get on anymore? Imagine the whole planet is full of pilots. How does that make you feel? It really is, I think, a return to fate as we knew it in the ancient world. We've lived over the last 2500 years with roughly one-on-one, or one-on-a-few violence. Now we have this possibility of one-to-many contagions, which enables genocide, extinction or worse committed by individuals in a way we've never faced before.


http://technetcast.d...l?stream_id=258

#20 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 17 January 2004 - 06:02 PM

To nonhuman species, humans are the equivalent of an FAI. What are we doing with it?

Choose a few favorite species on earth, then be friendly.

Bees
These hive-minded creatures form their behavior with the weather as a guide. Humans might place little lamps that carry advance weather reporting; the bees might do the correlation.

Then came the part that astonished the researchers. Each day they doubled the distance from dish to hive. The flight path's length followed a simple arithmetic progression. After several days the swarm no longer waited for its scouts to return with news of the latest coordinates. Instead, when experimenters arrived to set down the sugar water, they found the bees had preceded them. Like multiple transistors crowded on the chip of a pocket calculator, the massed bees had predicted the next step in a mathematical series.

http://futurepositiv....net/2003/06/17


Mice, squirrels, rabbits
The biggest thing with mice is staying warm.
Vast hyperintelligent beings that we are, we might engineer a tuber that, when nibbled, rapidly liquefies as juice and then, with drying weather, becomes a high-R-value mouse house. Tiny particles that adhere to the mice propagate the tuber wherever mice go.

Mice, squirrels, rabbits, even trees will benefit if we shift the enzymatic chemistry of tree leaves, straw, or just the bacteria that live on them. When leaves arrive on the ground, rather than being brown they might be full of enzymatically produced sugars; this brings these animals cellulose that is pre-digested to edible starches and sugars. The tree benefits as the leaf minerals rapidly move to living tissue, plus the animals add NPK to the tree when they poop.

Dolphins
There are tens of millions of dolphins or more. Humans, with our astonishing xerodigital branching "hands", are able to grow chirp chirp chirp [tasty snacks]. The floating seaweed sargassum will be engineered to have higher protein plus lipid values than fish. As a floating seaweed it grows anywhere. Just a little human FAI activity will make possible a two or even three orders of magnitude increase in the quantity of dolphins.

Humans
Unable to choose between quantity, quality, Pythagorean abstraction, mammalian social benefits or whatever, I say we have it all with neural economics.
Behavioral or neural economics is the study of the psychology or psychiatry of economic maintenance, change, and direction.
New Scientist reports on a researcher, Zak (New Scientist vol 178, issue 2394, 10 May 2003, page 32).

Zak finds that oxytocin quantities in humans account for 97 per cent of the variation in trust across 41 countries, which ranges from 5 per cent in Brazil to 65 per cent in Norway.

Many game-theory behavior divergences that cause higher mutual wins are guided by nonrational trust.

The link between national trust measures and economic measures is strong.

Oxytocin is a maternal nurturance hormone. Oxytocin is increased with sex, also population density, things that feel nice, gentle vibration, northern latitude, breastfeeding, telephone usage, stable law.

Two thoughts: the invention of FAIs creates trust, a big win.
I also think that neural economics is a fine way to guess at, and also engineer, a favorable earth population. Happiness article at New Scientist vol 180, issue 2415, 04 October 2003, page 40.

France, with 75 per cent of its energy from nuclear, is sustainable. At France's population density the US is then at 1.03 billion. At Denmark's population density the US is a nation of 1.18 billion. Denmark has two top spots on two different global happiness charts. Puerto Rico, near the top on the NS happiness chart, hints that a 4.1 billion population fits the US.

275 Mn   - current US population, density 75 per square mile
1.03 Bn  - US at France's density, 282 per square mile
1.18 Bn  - US at Denmark's density, 322 (near the top of a NS happiness chart)
2.53 Bn  - US at Japan's density, 869
4.1 Bn   - US at Puerto Rico's density, 1120 (near the top of a NS happiness chart)
135.6 Mn - US at New Zealand's density, 37

19.4 Bn  - US at Seattle's density, 5288

Seattle is a nice town, adequate anyway: huge green areas, nice people, much single-family housing. If, as a quirk of neuroeconomics, humans lived as a layer on the earth with the Seattle density, not clumpy as we do now, the population of earth is then 432 billion. If you think San Francisco is an adequate place to live, then an earth population of 1.5 trillion, 25 times the population now, is adequate. I favor a population of 200 billion with the density of Seattle, plus half the earth to wild plants, animals, protists, and the like.
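A quick way to check the density arithmetic above (the land-area figure is the one implied by 275 Mn at 75 per square mile, and the densities are the figures in the table, so this is just illustration):

```python
# Rough check of the "US filled at density X" arithmetic.
# Land area is implied by the post's own numbers (275 Mn people at 75/sq mi);
# densities are the post's figures, so treat everything here as illustration.
US_LAND_SQ_MI = 275e6 / 75    # ~3.67 million square miles

densities = {
    "New Zealand": 37,
    "the 2004 US": 75,
    "France": 282,
    "Denmark": 322,
    "Puerto Rico": 1120,
    "Seattle": 5288,
}

for place, per_sq_mi in densities.items():
    population = per_sq_mi * US_LAND_SQ_MI
    print(f"US filled at the density of {place}: {population / 1e9:.2f} billion")
```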

Does the reader like this population? If an FAI was able to have humans choose voluntarily to live as a layer over half the earth, is this a win?

BTW the dolphins have it made. Human visual material area is just a tall building tall, with a surface a third that of the oceans. If rather humans go aquatic, the Seattle density gives various hundreds of trillions.

Oxytocin

#21 NickH

  • Guest
  • 22 posts
  • 0

Posted 18 January 2004 - 09:31 PM

I am not that interested in what EY truly meant by the statement. Let's consider for a moment the hypothetical world in which he truly meant that the odds of FAI success were 2%. Is this figure so ridiculous? I do not think so, although EY probably does.


It's not ridiculous, and I think Eliezer would agree. It must be noted, however, that humans are awfully bad at accurately estimating the probability of real-world events, so, as Michael mentioned, Eliezer doesn't give probabilities anymore. Basically there's a significant chance SIAI's FAI project isn't the first AI; after that there's a significant chance something screws up. We live in interesting times; our future existence is in no way guaranteed. If you talk to Eliezer, or examine the issues discussed inside CFAI, you'll see Eliezer is a professional paranoid rather than an optimist -- he, like us, wants to *actually* succeed, rather than just think he will.


From CFAI:

Q3.4:  Do all these safeguards mean you think that there are huge problems ahead?
Actually, I hope to win cleanly, safely, and without coming anywhere near the boundaries of the first set of safety margins. There's a limit to how much effort is needed to implement Friendly AI. Looking back, we should be able to say that we never came close to losing and that the issue was never in doubt. The Singularity may be a great human event, but the Singularity isn't a drama; only in Hollywood is the bomb disarmed with three seconds left on the clock. In real life, if you expect to win by the skin of your teeth, you probably won't win at all.


The comment about Hollywood applies to EY's optimism as well as to the improbability of the situation I discuss above.


This is not optimism; this description assumes success: "I hope to win cleanly", "Looking back, ...". The idea is that if we do our very best, look at our plan and say "we'll just win by the skin of our teeth", odds are an unknown unknown will come along and we won't win at all. However, *if* we do win, it's probably because we solved most of the important problems, creating a robust design, and the unknown unknowns weren't fatal. In this case it simply seems unlikely we'd be *just* near the margin of success. It's the difference between what you can know before an FAI launches, and what you'd expect after a *successful* launch. The Hollywood reference suggests that drama, rather than rationality, is perhaps the support for "saving the world by the skin of our teeth" scenarios, i.e. plans which look shaky but "are the best we can do given the time constraints", rather than plans in which no stone is left unturned: it seems like there's no reason the first safeguards will fail, but we've got two independent sets after that anyway. Massive overkill.

I do not pretend to be as intelligent or knowledgeable as EY, nor to perfectly understand his ideas, but this sentence suggests to me that the most critical problems of Friendliness are related to computational or hardware-level problems as opposed to moral and ethical questions or even abstract goal structures. This does not seem true or intuitive to me at all -- but EY has studied FAI much more than I have, so I am probably misunderstanding. The following CFAI quote (with which I strongly agree) shows that EY thinks intelligence and moral problems are largely substrate/computer independent, and so I am probably misunderstanding the above:


Moralities are the output of a complex physical process implemented in human brains. The critical problem is trying to create a new physical system (i.e. an AI) with the capabilities all (neurologically normal) humans already share, the ability to argue and reason about moral beliefs, but animals (and computers) lack. The particular surface things humans argue about aren't as significant (although they are important to help start things off): we're not working out the answers to all important moral questions beforehand and somehow transferring them to an AI (although interim answers *do* help), but creating an AI that is actually able to understand and answer the questions itself. This is a technical effort of mind engineering, based in cognitive science, evolutionary psychology, etc. This is fundamentally different from the moral arguments humans partake in. FAI is engineering, not argument.

#22 John Doe

  • Guest
  • 291 posts
  • 0

Posted 19 January 2004 - 04:13 PM

It's not ridiculous, and I think Eliezer would agree. It must be noted, however, that humans are awfully bad at accurately estimating the probability of real-world events, so, as Michael mentioned, Eliezer doesn't give probabilities anymore. Basically there's a significant chance SIAI's FAI project isn't the first AI; after that there's a significant chance something screws up. We live in interesting times; our future existence is in no way guaranteed. If you talk to Eliezer, or examine the issues discussed inside CFAI, you'll see Eliezer is a professional paranoid rather than an optimist -- he, like us, wants to *actually* succeed, rather than just think he will.


Statements such as "I hope to win cleanly, safely, and without coming anywhere near the boundaries of the first set of safety margins," suggest that EY would not agree.

This is not optimism; this description assumes success: "I hope to win cleanly", "Looking back, ...". The idea is that if we do our very best, look at our plan and say "we'll just win by the skin of our teeth", odds are an unknown unknown will come along and we won't win at all. However, *if* we do win, it's probably because we solved most of the important problems, creating a robust design, and the unknown unknowns weren't fatal. In this case it simply seems unlikely we'd be *just* near the margin of success. It's the difference between what you can know before an FAI launches, and what you'd expect after a *successful* launch. The Hollywood reference suggests that drama, rather than rationality, is perhaps the support for "saving the world by the skin of our teeth" scenarios, i.e. plans which look shaky but "are the best we can do given the time constraints", rather than plans in which no stone is left unturned: it seems like there's no reason the first safeguards will fail, but we've got two independent sets after that anyway. Massive overkill.


How is assuming success not optimistic?

#23 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 January 2004 - 04:02 PM

Statements such as "I hope to win cleanly, safely, and without coming anywhere near the boundaries of the first set of safety margins," suggest that EY would not agree.


In the context of CFAI, it seems Eliezer meant that statement in a precautionary rather than predictive sense; "if we hope to win by the skin of our teeth, we probably won't win at all" type of thing. He *hopes* to win cleanly - he does not *expect* to win cleanly. As often happens in real-world situations, the probability of success will rest on various factors subject to varying degrees of influence; "can we find genius programmers?", "can we get millions of dollars?", "can we get the Friendliness architecture right?" and so on.

How is assuming success not optimistic?


Assuming success *is* obtained, we want to have won cleanly and completely, not by the skin of our teeth. This suggests adopting a certain type of professional attitude beforehand - cautious paranoia - as opposed to "assuming success" unconditionally. Through engaging in a thought experiment where we assume success and mention that we would want *clear* and *thorough* success, we can boost our paranoia.

#24 NickH

  • Guest
  • 22 posts
  • 0

Posted 20 January 2004 - 09:26 PM

Statements such as "I hope to win cleanly, safely, and without coming anywhere near the boundaries of the first set of safety margins," suggest that EY would not agree.


< NickHay> from the imminst Singularity forum: 'Statements such as "I hope to win cleanly, safely, and without coming anywhere near the boundaries of the first set of safety margins," suggest that EY would not agree [that the probability of FAI success could be very low].' Would you like to give a one-line correction to this for me to post?
< Eliezer> Hope is not a probability estimate. And the opinions of Eliezer in 2001 should remain in 2001, where they belong.

If you'd prefer specific CFAI quotes:

"A programmer who feels zero anxiety is, of course, very far from perfect! A perfect Friendly AI causes no anxiety in the programmers; or rather, the Friendly AI is not the justified cause of any anxiety. A Friendship programmer would still have a professionally paranoid awareness of the risks, even if all the evidence so far has been such as to disconfirm the risks." (footnote 1 in section 1)

"In my capacity as a professional paranoid, I expect everything to go wrong; in fact, I expect everything to go wrong simultaneously; and furthermore, I expect something totally unexpected to come along and trash everything else. Professional paranoia is an art form that consists of acknowledging the intrinsic undesirability of every risk, including necessary risks." (the latter half of Q3.4 you left unquoted)

How is assuming success not optimistic?


It's not optimistic because it's subjunctive. *If* we succeed, this is what we expect to happen. *If* we don't, something else happens. No mention of the probability of success.

#25 John Doe

  • Guest
  • 291 posts
  • 0

Posted 21 January 2004 - 09:31 PM

It's not optimistic because it's subjunctive.  *If* we succeed, this is what we expect to happen.  *If* we don't, something else happens.  No mention of the probability of success.


Thank you for the clarifications. :)

#26 NickH

  • Guest
  • 22 posts
  • 0

Posted 22 January 2004 - 10:26 AM

Thank you for the clarifications.  :)


;)

#27 Eliezer

  • Guest
  • 4 posts
  • 0

Posted 15 February 2004 - 05:00 AM

The "2%" quote wasn't me, and you need not bother interpreting it as anything I could or would have said. I think it was Michael Vassar, but I'm not sure.

#28 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 February 2004 - 05:03 AM

Heh, I'll check with Mike V. and see if *he's* (inadvertently) the one responsible for this nonsense being quoted... Would this have been something he let slip at TV03, or somewhere else?

#29 Eliezer

  • Guest
  • 4 posts
  • 0

Posted 15 February 2004 - 05:31 AM

It doesn't really matter, does it? Next time this happens, someone ping me on IRC.


#30 dcube

  • Guest
  • 5 posts
  • 0

Posted 22 March 2004 - 08:41 PM

It might be nonsense but it was from a TAPED interview with Yudkowsky, Anissimov and Vassar... M2 refers to Vassar

M2: I think the chance of saving the world is less than 1%.

E: I think it’s 2%.

For the record, the original article was reduced, and one of the big chops was in the areas depicting Eliezer's humorous side. I got the info on high SAT scores at the TV2003 conference, so I asked him about that. He didn't offer this stuff up. And he was very wary about being singled out. I let him know when both articles came out. During that taped conversation some things were discussed that certain people didn't want mentioned in print. I respected that. Eliezer didn't make any requests about deleting any of his quotes. Yeah, most articles are highly reductivist. But part of the problem with many XHs I met is this desire to make it all seem very pretty, happy rah-rah, mainstream, and then be amazed at being dumbed down. That's bullshit hypocrisy and it won't advance any causes or new recruits, pro or anti. As for the shock fear-mongering, I got a lot of the ooh-scary stuff straight from Eliezer's site but checked expiry dates with him and cut that stuff out. If I'd gone ahead and cited some of his earlier stuff anyway, the piece would be a helluva lot more shocking. Hmm. Even geniuses have learning curves.



