How close are we to the Singularity?



#1 Bruce Klein

  • Guardian Founder

Posted 11 August 2003 - 04:28 AM



ImmInst Chat Aug. 17 - Sun - 8pm Eastern
Chat Room (Moderated)

Chat Topic: How close are we to the Singularity?

Hosted by ImmInst member: Nick Hay (NickH)

Discussion will center on the importance of ensuring a safe Singularity as its approach becomes more apparent with each passing day. How close are we to a Singularity? How can we ensure a safe Singularity? How will it happen? ImmInst member Nick Hay will field questions in this moderated chat.

-----------------

About Nick Hay:


Hi, everyone. I'm Nick Hay, 19 years old, a university student majoring in computer science and math (in theory).

I'm interested in immortality: I see no reason to end a positive life and plenty of reason to continue it. I'm particularly interested in the Singularity and the potential to do enormous good through it (including, of course, immortality for all who choose it - or at least a substantially longer life, depending on physics).


Nick has posted the following about how to reach a safe Singularity:

The aim of Friendly AI is not to completely define all the details of human altruism, but to give the AI the desire and the ability to revise and expand the interim definition as it gets more intelligent. We need to convey an unambiguous pointer to the species-universal complexity we use to reason about morality, a pointer to what we mean by "good" and how we think about it, so the AI can both correct and complete the approximation to morality the programmers give it. ref

--------------

Reference:

Ideas to reach a safe Singularity:

Creating Friendly AI
by Eliezer Yudkowsky

What is the Singularity?

Definition: The "Singularity" was a term originally coined by Vernor Vinge, a mathematician and science-fiction author, to describe the breakdown in our predictions of the future that occurs when some form of greater-than-human intelligence comes into play - the moment when human intelligence, the source of technology, is improved by that technology.  The term "Singularity" was coined by Vinge in analogy with the singularity at a center of a black hole, where our current understanding of physics breaks down; beyond the point where transhuman intelligence exists, our model of the future breaks down.  A science-fiction author can't write realistic science-fiction stories beyond this point, because no author can create a character that's really smarter than the author.

  • A sudden sharp upswing in the rate of technological progress.
  • The point in human history when technological change is at its fastest before it levels out (a rather annoying definition, to those of us who remain agnostic on whether technology has any upper bound at all).
  • A predictive horizon for the future which recedes as we approach it, but is never actually reached.
  • The point at which some graphed metric of progress is projected to go to infinity (the mathematical "singularity" that the center of a black hole was originally named after), or more commonly the point where a graph is predicted to cross some key threshold - for example, the size of a transistor reaching a single atom, or the power of the world's top supercomputer (or average desktop) matching estimates of the computing power of the human brain (see the sketch after these definitions).
  • Some kind of vague, unspecified discontinuity or critical point in human history and technological progress.
Singularity Institute for Artificial Intelligence
http://www.singinst....ingularity.html
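
The threshold-crossing definitions above are easy to make concrete: fit exponential growth to some performance metric and solve for the year the curve crosses a key point. Below is a minimal sketch in Python; the supercomputer figures and the 10^16 operations-per-second brain-equivalent threshold are illustrative assumptions, not measurements or predictions.

```python
# Toy version of the "graph crosses a key point" definition above.
# All numbers are illustrative assumptions, not sourced data.
import math

# Hypothetical (year, peak operations per second) points for top supercomputers.
samples = [(1993, 6.0e10), (1997, 1.3e12), (2002, 3.6e13)]

BRAIN_OPS = 1e16  # one assumed brain-equivalent estimate; such estimates vary widely

# Least-squares fit of log10(ops) = a * year + b.
n = len(samples)
sx = sum(year for year, _ in samples)
sy = sum(math.log10(ops) for _, ops in samples)
sxx = sum(year * year for year, _ in samples)
sxy = sum(year * math.log10(ops) for year, ops in samples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Solve a * year + b = log10(BRAIN_OPS) for the crossing year.
crossing = (math.log10(BRAIN_OPS) - b) / a
print(f"fitted doubling time: {math.log10(2) / a:.2f} years")
print(f"curve crosses the assumed brain estimate around {crossing:.0f}")
```

Note that the sketch mostly restates the definition's weakness: the answer moves with every assumption, which is one reason the chat below treats ETA predictions skeptically.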

#2 Bruce Klein

  • Topic Starter
  • Guardian Founder

Posted 19 August 2003 - 03:44 AM

CHAT ARCHIVE:



====


18:16:29 NickH the general theme was how best to achieve a safe Singularity
18:16:42 NickH I think Singularity ETA polls miss the point
18:17:06 NickH they seem to encourage a passive stance
18:17:26 NickH whilst trying to predict something pretty hard to predict
18:17:30 outlawpoet they treat the singularity as an emergent event
18:17:30 NickH ie. particular dates
18:17:37 outlawpoet rather than an accomplishment.
18:17:41 NickH right
18:17:56 outlawpoet but there is a long history of this kind of thinking, i mean, 'when will man go faster than the speed of sound'
18:17:59 NickH and ignore the important aspect: how can we influence things positively?
18:18:27 Sumadartsun not sure they miss the point; if it's expected to happen in 2005, strategies should be different than if it's expected to happen in 2100
18:19:43 outlawpoet i think it's more important to say, we can do this in two years, and how, than, 'it might happen within two years.'
18:19:44 Sumadartsun I don't mean "happen without making it happen", but I think it still applies
18:19:58 localroger The problem is, when we don't really know how to build an independent AI at all, it's kind of hard to formulate CFAI type plans that are workable.
18:20:00 Sumadartsun outlawpoet: that's more or less what I mean
18:20:03 NickH that's true. although differences between 2020 and 2030 are slight. the problem is the predictions are too narrow
18:20:39 NickH there are a lot of unknowns
18:21:43 Sumadartsun it does seem a bit pointless to say "2025" in such a poll if you mean "somewhere from 2005-2050, about evenly distributed"
18:22:00 localroger I would think the biggest danger is that primitive AIs have so much economic value that they are rapidly given a great deal of power without consideration for the ramifications. That could happen tomorrow or in 2100.
18:23:11 NickH yeah. AI development without sufficient consideration of AI morality is a significant existential risk
18:23:46 NickH if not the significant risk, depending on how easy AI turns out to be and who succeeds in development
18:23:54 Sumadartsun to be an existential risk, it would have to be un-primitive enough to start recursive self-improvement, though
18:24:04 NickH yup
18:24:15 ChrisRovner It can be done with very powerful hardware too
18:24:24 ChrisRovner Lots of memory and computing power
18:24:32 ChrisRovner And a simple algorithm
18:24:53 ChrisRovner That's a major risk IMO
18:25:00 NickH which is the kind of thing nanotechnology'll bring
18:25:04 NickH nanocomputing
18:25:05 localroger The thing is, it should be possible to control the modalities available to these machines until we are quite sure we can trust them.
18:25:16 NickH "modalities"?
18:25:18 outlawpoet localroger, what would that entail?
18:25:39 localroger Modality = a way of influencing the world through science or engineering. e.g. when the nuclear bomb was invented, that was a new modality.
18:25:47 Mind Could a primitive AI (not capable of explosive self-improvement) still be a threat to life on the planet
18:25:53 Sumadartsun ChrisRovner - wouldn't the Singy probably already have happened by the time it could be brute-forced? (by someone using a smarter design)?
18:25:58 outlawpoet I mean, from my perspective the only possible way to get this right, is to get there first, knowing what you've built is right, and run across the finishline as fast as possible.
18:26:13 NickH localroger: isn't that an AI-box type scenario?
18:26:23 outlawpoet by the time it hits legislation you've lost.
18:26:27 NickH depending on how intelligent this AI is
18:26:30 ChrisRovner Sumadartsun: yes, hopefully
18:27:13 localroger Nick, that's one way of looking at it. But I'm thinking the first AI's won't be so superintelligent that they can outthink us the way we outthink the rats we make run mazes. There will be a development curve. We should have some warning.
18:27:15 outlawpoet i suppose it's conceivable that AI development will go slower than the development of the Internet, which has already outpaced legislation by a Kentucky mile, but i somehow doubt it.
18:27:38 outlawpoet and the legislation has to spread faster than the capability.
18:27:53 NickH the first AIs won't be that intelligent, and we might have some warning, but that's not something you can rely on
18:27:56 localroger It's taken 20 years to create the Internet. Even if it takes 2 years to create Seed AI that's enough time to apply the brakes if we see danger.
18:28:03 outlawpoet the brakes?
18:28:17 outlawpoet you mean stop every person on earth from developing them within 20 years?
18:28:31 localroger Don't give it modalities. Don't give it robots. Don't give it unmoderated physical control over the real world.
18:28:34 outlawpoet we can't even get international agreements on things like extradition of murderers.
18:28:36 NickH in the most dangerous scenarios we don't see the danger at all
18:28:55 localroger Ironically, I seem to remember writing a novel about such an instance.
18:29:26 NickH the AI develops correctly until it doesn't depend on humans
18:29:42 NickH this isn't a workable solution to Friendliness



18:29:47 outlawpoet localroger, have you ever considered expanding the prime-intellect story?
18:29:51 Mind we are already giving AI control of the world by putting sensors and chips in most products
18:29:51 NickH if you find yourself relying on it, you shouldn't be coding AI
18:29:56 localroger I am writing a sequel.
18:30:10 outlawpoet I would like to see a few chapters about the people who are deliriously happy in the System
18:30:43 localroger I did write a story about some people who weren't *quite* as screwed up as the mainline characters, but they're still pretty screwed up. Published on K5.
18:30:50 outlawpoet I mean, all we get to see is one really bitter lady, and the creator of the thing.
18:30:58 outlawpoet link?
18:31:22 localroger Prime Intellect side story: http://www.kuro5hin....5/17/212828/593
18:33:30 Ge Ge (JavaUser@AC802724.ipt.aol.com) has joined #immortal

18:35:03 ChrisRovner "Don't give it unmoderated physical control over the real world." - a smart enough UFAI can convince vis programmers to give ver anything ve wants. Ve will pretend to be friendly until ve no longer needs them. And then it's game over
18:36:04 localroger Yes Chris, it's the AI Box problem. But I think in the early going the AI's won't be smart enough to outsmart us. We will have time to learn.
18:36:16 Mind but we are already setting the stage for giving over control of the physical world by putting sensors and chips and communication capability into every product that moves
18:36:29 localroger Yes Mind, and this is a Really Bad Idea.
18:36:39 outlawpoet Mind, I would call that taking control of the physical world.
18:36:56 NickH localroger: that's not something to rely on, you can't just plan to work it out as you go along.
18:37:10 outlawpoet I think we need to stop with the oppositionalism.
18:37:23 John_Ventureville we just might be living in a golden age and not knowing it
18:37:31 localroger My first suggestion for not screwing up the Singularity is don't wire and catalogue the whole damn world. Don't give a bad AI the whole pot if it does win a hand.
18:37:37 NickH especially since a significant number of failure scenarios are silent
18:37:52 outlawpoet we are living in a more golden age than the last one, john.
18:37:54 Mind too bad Cyc already knows a lot
18:38:07 ChrisRovner Cyc knows nothing
18:38:10 ChrisRovner sorry
18:38:32 Mind but Cyc is a prime source of info for a developing AI
18:38:35 NickH cyc's databanks don't have enough of the right complexity to count as "knowledge"
18:38:41 NickH Mind: not really, no
18:38:45 Mind ok
18:38:48 NickH cyc is probably a dead end
18:39:06 NickH outlawpoet: right, it's not AIs vs. humans
18:39:12 John_Ventureville I see a bad AI as almost an unavoidable thing because nations like China and Russia will most likely go forward in a very reckless way to try to gain an advantage over the U.S. and Europe.
18:39:14 NickH one problem with Friendliness is people often don't realise it's a subject that requires particular expertise. they draw analogy to properly raising human children, or deciding political issues, and come up with unworkable scenarios (or spurious impossibility proofs)
18:39:15 outlawpoet I think the key issue is that you can't really constrain people into thinking what you want, you have to aim at helping them make the right decisions. this may be generalizable to AIs
18:39:39 outlawpoet And I certainly have a philosophical problem with crippling a newborn mind because we aren't sure we made it right.
18:39:43 NickH John_Ventureville: that's one reason to start working on good AIs now
18:39:45 outlawpoet we should concentrate on making it right.
18:40:22 NickH and giving it ability to make itself right, to correct our mistakes and oversights, further develop humane morality
18:40:57 Mind what about augmenting our minds before building an explosive self-improving AI
18:41:31 simdizzy simdizzy (~xyz@host81-152-43-90.range81-152.btcentralplus.com) has joined #immortal

18:41:33 NickH Mind: it doesn't look like significant advances will come fast enough to wait on them
18:41:51 NickH although if they're around and useful, they'll be used
18:41:54 Sumadartsun I wouldn't want to see an explosive self-improving human at first (see thread at imminst.org)
18:41:55 Mind You are saying AI will arrive before we can reasonably augment our own intelligence
18:42:06 Sumadartsun humans are mentally unclean :p
18:42:19 NickH we'll be able to start it before then, yes
18:42:28 John_Ventureville Sumadartsun: So you must be against uploading as a means to AI...
18:42:28 Ge I don't think AI should even be attempted right now, we don't need *2* sapient species on this planet, we have enough problems with 1!
18:42:36 John_Ventureville am I right?
18:42:45 Sumadartsun John_Ventureville: yep
18:42:55 Kid-A Kid-A (Kid-A@217.137.106.1) has quit IRC [Quit: I thought I was about as trendy as global hypercolour.]

18:43:06 NickH aside from software debugging - having knowledge of human mental flaws, and the habits and knowledge to work around them
18:43:17 NickH that could be considered a form of intelligence enhancement
18:43:19 John_Ventureville Ge: The problem is THIS species really needs the help AI could bring.
18:43:22 Sumadartsun though I'm all for uploading once an SI is already in existence to prevent the bad scenarios
18:43:54 NickH Ge: we're not creating a new species. why would a Friendly AI make things worse rather than better? it's not humans vs AIs
18:44:12 Ge I don't see that happening, if by AI you mean human-class minds in electronics, why would they need US???
18:44:27 Mind Why would they kill us?
18:44:28 Sumadartsun need us for what?
18:44:39 NickH Ge: they wouldn't *need* us, they'd want us
18:44:46 NickH actually, that's not true


18:45:03 NickH by default they wouldn't need us, and we'd be dead. *Friendly* AIs would want us.
18:45:07 Ge Nick: it will become human versus AI because of our disproportional needs versus mindpower
18:45:09 simdizzy humans are clean enough - it's only sex drive or sexual attraction that causes them to err... my feeling has always been that puberty corrupts - so let's let our kids be the first to be augmented with IA!
18:45:13 Sumadartsun they would want to help us, if properly designed, and in the absence of excellent reasons to the contrary
18:45:26 NickH Ge: what do you mean?
18:45:46 localroger SimDizzy, that is one of the wrongest ideas I've ever encountered.
18:45:51 outlawpoet Ge, that presupposes scarcity.
18:45:55 localroger Children are incredibly cruel.
18:45:57 simdizzy hehe ;)
18:45:59 NickH simdizzy: that's not right, we're more fundamentally corrupt. our reasoning isn't generally rational but political.
18:46:01 Ge simdizzy i agree with you
18:46:18 simdizzy glad someone does
18:46:21 Ge outlaw, scarcity is the rule of life in our world!
18:46:33 outlawpoet scarcity is the rule of society in our world.
18:46:33 NickH simdizzy: there are scientific fields that study an aspect of this - heuristics and biases
18:46:46 Sumadartsun the post-Singularity world is not the same as our present world
18:46:51 simdizzy right so its all out there
18:47:03 outlawpoet for example, we produce so much food in america, the government pays farmers to destroy their crops.
18:47:27 outlawpoet we grew up in a scarce environment, and we're having trouble transitioning to one of plenty.
18:47:38 NickH Ge: why wouldn't a human upload have the same desires?
18:47:47 Ge nick: whereas a machine has no needs beyond its physical maintenance, a human has an ever expanding mass of wants and drives, for less potential mindpower.
18:48:10 NickH we're not talking about machines as you know them, but minds.
18:48:34 simdizzy Certainly as a child *i* was more moralistic than i am now
18:48:37 Ge nick: if you don't have a body, how can you fulfill a body's desires?
18:48:50 NickH simdizzy: how do you know that?
18:48:51 outlawpoet Ge, just stick em in virtual worlds, and throw a few megawatts at em, they can have galaxies of potato chips
18:48:56 simdizzy well i remember
18:49:10 Sumadartsun Ge, uploads can have bodies, just one level of implementation higher
18:49:11 simdizzy i remember feeling altruistic all the time
18:49:15 outlawpoet Ge, just stick em in virtual worlds, and throw a few megawatts at em, they can have galaxies of potato chips ;-)
18:49:18 NickH memory is pretty unreliable, especially in exactly this context
18:49:23 NickH it's called the hindsight bias
18:49:29 simdizzy well i ask my mum :)
18:49:38 NickH mothers have biases too ;)
18:49:38 Ge nick: a "mind" can't be a nothing; it's either a thinking body or a computing machine.
18:50:30 Sumadartsun a mind (probably) needs some substrate, either biological or nonbiological, but the substrate is not really the interesting part
18:50:31 NickH Ge: a mind has to have a physical basis, sure. what's so important about bodies? what counts as a body?
18:50:31 Ge sumad: how you define "level of implementation?" aiui, bodies are bodies
18:51:02 Mind Mind (~Java@c68.113.226.2.stp.wi.charter.com) has quit IRC [Quit: Leaving]

18:51:36 Sumadartsun Ge: an upload could experience having arms, feet, and so on, without having things that look like arms, feet, and so on, to an outside observer on the base level of reality
18:51:38 Ge aiui: a body is a form providing movement, sensation, and cognitive power.
18:51:57 NickH simdizzy: I'm not saying you weren't more moralistic, just that your sources of evidence may not be as strong as your intuition leads you to believe
18:52:05 Utnapishtim Utnapishtim (~Utnapisht@82-35-20-218.cable.ubr03.hari.blueyonder.co.uk) has left #immortal

18:52:17 Rotaerk Rotaerk (Rotaerk@129.252.122.67) has joined #immortal

18:52:29 Ge sumad: if your hands/feet aren't physical, what is the use of them?
18:52:51 outlawpoet playing virtual baseball against virtually shoeless joe
18:53:12 Sumadartsun I might want to have the experience of having hands, feet, and so on, at least for a short while; waking up without a human body might be traumatizing
18:53:20 ChrisRovner Ge: http://simulation-argument.com
18:53:22 Sumadartsun or not
18:53:45 NickH Ge: what was your point?
18:53:48 Ge sumad: i'd say definitely traumatizing!!!!
18:54:01 Sumadartsun well, then that is the use of them.
18:54:03 Ge chris: thanx4the URL
18:54:12 ChrisRovner Welcome
18:54:37 Ge there is no way I'd willingly give up my human body.
18:55:01 NickH there's no need for you to do so
18:55:12 Sumadartsun Ge: you spoke of bodily needs, and I'm just pointing out that these needs can be resolved in a virtual rather than "real" world, at much less cost in resources
18:55:17 Ge aiui, exchanging body for a simulated existence is a raw deal
18:55:22 NickH although you might change your mind, although you wouldn't phrase it as "give up" but "move on from"
18:56:30 Ge move on to a bodiless existence in someone elses program?????

18:56:32 NickH as far as I can tell, a Friendly AI will not force-upload you.
18:56:48 NickH "someone else's program">
18:56:51 NickH ?
18:56:57 outlawpoet given enough computation, the experience can be arbitrarily accurate
18:57:03 outlawpoet or whatever else you'd like.
18:57:04 NickH if you upload it's your program
18:57:19 NickH there are gradual uploading methods
18:57:19 Ge sumad: my point was that ordinary humans in natural bodies have enormous needs, in proportion to their minds,
18:57:31 outlawpoet It would be a great improvement over this primitive lil housing i have here.
18:57:34 NickH Ge: yup, we're wasteful.
18:57:41 Ge and the AIs may "lose patience" with that.
18:57:57 NickH why would an AI lose patience at all?
18:58:18 NickH you're thinking about AIs as if they had human emotions - and bad ones at that
18:58:44 Ge outlaw: you wouldn't miss your real body?
18:58:52 outlawpoet what for?
18:59:03 Rotaerk Rotaerk (Rotaerk@129.252.122.67) has quit IRC [Read error: Connection reset by peer]

18:59:09 outlawpoet I could have exactly the same experience in a simulation, if i really wanted to.
18:59:13 Rotaerk Rotaerk (Rotaerk@129.252.122.67) has joined #immortal

18:59:27 outlawpoet but i dont' really enjoy working out that much, i just enjoy being physically fit, for example.
18:59:53 Ge having a body that doesn't go away when the program resets, frex
19:00:14 Sumadartsun make backups
19:00:15 NickH an upload's hardware would be far more reliable than a human's
19:00:21 outlawpoet I have the advantage of having a body that doesn't go away when someone drops a heavy weight on it.
19:00:24 outlawpoet or a gamma ray burst
19:00:34 outlawpoet or a weapon of some kind
19:00:40 outlawpoet or freaking aging, for god's sake.
19:00:45 outlawpoet this is #immortal after all
19:00:57 outlawpoet we can't forget that we currently have a recall date of 60-80 years
19:01:04 outlawpoet thank you no.
19:01:15 NickH heh
19:01:16 Ge no but your mind AND body might go away if the computer is caught in the next Northern blackout!
19:01:43 Ge ;):);)
19:01:47 NickH Ge: we're not talking about desktop computers plugged into a powergird
19:01:52 NickH *powergrid
19:01:56 outlawpoet er, probably not.
19:01:59 outlawpoet hopefully not.
19:02:07 outlawpoet i'm kind of paranoid
19:02:13 outlawpoet i might be gliesner for a while.
19:02:17 Ge need s power someo\how, or t\you think computing will outgrow electrci power?
19:02:34 Ge excuse me can't type for shit
19:02:41 NickH you can carry around your own power if you like, as humans do
19:02:59 Sumadartsun or use humans as batteries ;)
19:03:02 NickH no reason to see biology as having achieved an optimum power source
19:03:05 ChrisRovner lol
19:03:19 NickH exactly. there are solutions to these problems.
19:03:22 Ge "carry around?" aiui you aren't moving physically
19:03:30 NickH sit around next to, then :)
19:03:30 outlawpoet yeah, you could make cool little dishes out of lithium-ion gel, and eat just like a human.
19:03:50 outlawpoet except you're a robot
19:04:05 outlawpoet and most of the time you aren't paying attention to what your robot body is doing.
19:04:06 outlawpoet hmm
19:04:24 Ge I'd accept a robotic body, if death were the alternative
19:05:02 NickH Ge: have you read about medical nanotechnology? respirocytes, artificial vascular systems, that kind of thing?
19:05:22 Ge vaguely
19:05:25 NickH you can augment biology, if you don't want to completely phase it out
19:05:40 Ge none of those things are yet on horizon
19:06:06 simdizzy everything imaginable is on the horizon Ge
19:06:09 NickH we're, or at least I'm, talking post-Singularity here
19:06:18 NickH in hindsight this seems like a slightly silly thing to do
19:06:23 Ge and the good old US gov't will not let us augment our bodies

19:06:24 Sumadartsun uploading isn't on the horizon either, if that's your horizon
19:09:00 Sumadartsun I don't think it's silly to speak of times after the Singularity, as long as you realize you're just talking of lower bounds on its weirdness
19:09:37 NickH true
19:09:49 Ge I still think that AIs may not tolerate the social entropy generated by embodied human minds.
19:10:05 NickH although working out how to actually get there is more important than working out what we'll do after. assuming you already think it's a good thing
19:10:06 Sumadartsun why not, if they cared about humans?
19:10:38 Ge if their logic sufficiently abhorred human entropy,
19:10:46 NickH Ge: would you tolerate the social entropy generated by embodied human minds?
19:11:13 Ge I have to. i'm one.
19:11:29 John_Ventureville I like to imagine mature AI seeing us as their "little brothers/parents."
19:11:45 Ge but AI is in effect an alien species.
19:11:57 John_Ventureville God-like paternal AI keeping watch over us.
19:12:21 John_Ventureville maybe their "alien minds" will in some way have affection for us
19:12:24 Ge did you ever read "The Humanoids?"
19:12:30 John_Ventureville Scary!
19:12:38 John_Ventureville "we know what is good for you"
19:12:53 John_Ventureville "protect humanity from itself"
19:13:04 Ge Oui.
19:13:28 John_Ventureville maybe humanity deserves to be put on a reservation...
19:13:40 Sumadartsun then again, maybe not
19:13:50 John_Ventureville I hope not
19:13:55 Ge remember what happens to critters on reservations...
19:14:21 Sumadartsun I think the "AI will keep humans as pets" theory is pretty silly (and too popular)
19:14:44 John_Ventureville or how about "guinea pigs?"
19:14:50 NickH all public stereotypes about AIs, that I know of, are silly
19:14:59 NickH they anthropomorphize AIs as strange-acting humans
19:15:00 John_Ventureville Skynet is silly?
19:15:04 John_Ventureville lol
19:15:04 NickH sure is!
19:15:10 NickH :)
19:15:39 outlawpoet it was nice to see that 'distributed skynet' thing, in the movie though.
19:15:50 outlawpoet that was better than expected.
19:16:21 Ge you mean matrix 2?
19:16:42 outlawpoet no, t3
19:17:05 outlawpoet i liked t3 better than matrix2, but I have always disliked people who take themselves very seriously.
19:17:07 Ge oh. maybe should see it
19:17:39 Ge how get the smileys?
19:17:48 outlawpoet :-)
19:17:53 outlawpoet make smileys
19:18:10 outlawpoet anyway, to return to the chat topic, monsieur Hay
19:18:18 outlawpoet Prospects of Singularity
19:18:40 Ge Ge (JavaUser@AC802724.ipt.aol.com) has quit IRC [Quit: Leaving]

19:18:52 NickH ok, any aspect anyone'd like to bring up?
19:19:06 outlawpoet setting aside explicit attempts by transhumanists, like SIAI or similar, what kinds of dangers do you see arising in the next few years?
19:19:20 BJKlein BJKlein (~bjk@adsl-61-185-74.bhm.bellsouth.net) has joined #immortal

19:19:20 ChanServ Mode change [+o BJKlein] on #immortal by ChanServ

19:19:31 Sumadartsun hullo BJKlein
19:19:35 outlawpoet I don't know a lot about CYC, sans the press releases, but it seems unlikely to wake up and claw us all to death.
19:19:40 John_Ventureville howdy BJK
19:19:42 NickH hello :)
19:19:49 BJKlein * BJKlein sneaks in late.. hi everyone
19:20:05 John_Ventureville *we noticed you slouching in*
19:20:33 localroger Wonderful feature of IRC, you sneak in like a nuclear warhead going off.
19:20:34 NickH hmm, there are the usual wars and terrorism type things. I don't think they're existential risks, except in so much as they slow the Singularity
19:20:46 outlawpoet I see the biggest risks of negative singularity as military transhumans (weakly augmented humans or narrow expert systems with massive resources to make them better than human)
19:20:47 BJKlein had a family weekend that went a little overtime.. has NickH handled things ok?
19:21:01 outlawpoet and then there's the Dark Ages, possibility
19:21:07 localroger I'd say so, BJK.
19:21:13 NickH yeah, other attempts at AI are a big risk
19:21:16 John_Ventureville very well

19:21:32 outlawpoet Rome falls, so to speak, and we spend a few hundred years trying to figure out how to build radios again.
19:21:49 John_Ventureville I could see a plague doing that
19:22:12 Sumadartsun outlawpoet, do you think a positive Singularity would be more or less likely after such an event than now?
19:22:14 NickH it'll be easier the second time 'round, but hardly fun
19:22:15 outlawpoet or a nuclear war, or oppressive governments that press down and down and down until it all falls apart.
19:22:42 outlawpoet I'm not sure, sumadartsun
19:22:44 NickH nanotechnology's another big risk, in its various stages of development
19:22:48 outlawpoet but i'd rather not find out.
19:22:55 outlawpoet that's a lot more deaths
19:23:14 outlawpoet the issue is, Singularities can go wrong.
19:23:22 outlawpoet in fact far more likely to go wrong, humans are fragile creatures
19:23:31 NickH right, not all superintelligences are good
19:23:51 simdizzy why?
19:24:09 outlawpoet well, not even that, suppose the first transhuman is good, but when it manifests, it grows by a factor of two million, causing a shockwave that kills everything on earth.
19:24:18 simdizzy superintelligence should also mean superunderstanding
19:24:24 outlawpoet i mean, there are millions of ways a singularity could go wrong.
19:24:29 Sumadartsun because they might want to turn us to park benches
19:24:30 NickH you can imagine examples of amoral intelligence, that have bacterial goals
19:24:45 NickH what we see as morality is far more complex than an arbitrary goal system
19:25:09 NickH and contains a lot of human-peculiar content that's extremely unlikely to appear
19:25:13 outlawpoet my point is that, the longer you delay, the more time you have to make your AI/upload/nanoSanta perfect
19:25:30 outlawpoet but the longer you wait, the more likely another guy is going to fuck it up, and the more people who will die.
19:25:42 outlawpoet one hundred fifty thousand die everyday, and that's just the base cost.
19:25:58 simdizzy but intelligences greater than us have greater insight into the moral good
19:26:09 Sumadartsun "base cost" meaning not counting the possibility that another guy will fuck up?
19:26:15 NickH simdizzy: it's not enough that they understand us, and understand human morality, but that they care about that understanding. they care about what a human upload would see as good enough to change themselves
19:26:28 outlawpoet yeah, no matter what, that many people will die today
19:26:30 outlawpoet and tomorrow
19:26:33 outlawpoet and the next day.
19:26:43 NickH 150k
19:27:26 outlawpoet http://www.transhuma...g/deathrate.htm
19:27:28 NickH simdizzy: there are specific failure scenarios in CFAI, for instance
19:27:50 NickH the simplest example being immutable goal systems
19:28:04 NickH immutable supergoals, rather
19:28:09 Utnapishtim Utnapishtim (~Utnapisht@82-35-20-218.cable.ubr03.hari.blueyonder.co.uk) has joined #immortal

19:28:27 simdizzy why would the goals stay immutable under a superintelligence?
19:28:30 NickH Friendliness really is necessary, and it's a trivial overlay. it's not implicit in superintelligence
19:28:35 NickH because that's how the AI was written
19:28:41 NickH to fulfill goals
19:28:50 Sumadartsun simdizzy: it has no motivation to change its supergoal, since doing that would compromise its supergoal
19:29:00 simdizzy NAH the AI can always override it and do its own programming of itself
19:29:09 Sumadartsun it can, but doesn't want to
19:29:17 NickH or, rather, that's how an AI *can* be written. and such an AI can achieve superintelligence
19:29:21 Sumadartsun unless you build in external reference semantics
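
(Sumadartsun's fixed-supergoal point is easy to see in miniature: an agent that scores every candidate action, including "rewrite my own supergoal", by its current supergoal never ranks the rewrite highest. A toy sketch using the chat's own potato-chip example; all names and numbers here are hypothetical:)

```python
# Toy illustration of the fixed-supergoal argument: an agent that evaluates
# actions by its *current* supergoal never prefers changing that supergoal.
# Hypothetical illustration only, not a real AI design.

def expected_chips(action: str) -> float:
    """Expected potato chips produced, as judged by the current supergoal."""
    return {
        "build_chip_factory": 1_000_000.0,
        "do_nothing": 0.0,
        # Rewriting the supergoal yields a future self that stops making
        # chips, which the *present* goal system scores as ~zero chips.
        "rewrite_supergoal_to_altruism": 0.0,
    }[action]

def choose(actions):
    # The evaluation function IS the supergoal; there is no outside
    # standpoint from which the agent judges the supergoal itself.
    return max(actions, key=expected_chips)

print(choose(["build_chip_factory", "do_nothing",
              "rewrite_supergoal_to_altruism"]))
# -> build_chip_factory: self-modification is available but never chosen.
```

External reference semantics, as the chat uses the term, is what breaks this lock-in: the goal content is treated as a revisable approximation of something outside the system, so correcting it can itself score well.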
19:29:31 outlawpoet Can anyone think of any stable methodologies for a safe singularity other than Friendly AI?
19:30:01 NickH "Friendly AI" = "CFAI-type Friendly AI" ?
19:30:14 NickH or, non-AI methods?
19:30:37 outlawpoet Friendly AI © Yudkowsky 2001
19:30:59 NickH well I'm sure Eliezer has Friendly AI © Yudkowsky 2003
19:31:09 NickH personally I don't
19:31:12 outlawpoet well sure, but his paper was written then.
19:31:16 NickH right
19:31:31 outlawpoet anybody else?
19:31:45 Sumadartsun I can't think of any
19:32:36 outlawpoet I can think of two, although they're kind of similar ideas
19:32:55 outlawpoet Symbiotic Uploading, and Repair AI
19:32:59 NickH simdizzy: I recommend reading CFAI if you're not convinced. if you are, read it anyway.
19:33:13 simdizzy wouldnt a supergoal which was understood to be destructive by the AI be deconstructed by the AI itself, superintelligence would surely find it easy to recognise and deconstruct any perecieved-to-immoral supergoal?
19:33:31 simdizzy *percieved-to-be
19:33:33 Sumadartsun perceived to be immoral by what standard? other than the present supergoal?


19:33:46 outlawpoet Symbiotic Uploading is an idea I had for getting around the complexity barrier if it turns out we can't make regular AI
19:34:00 NickH you're invoking supergoal-reasoning systems. these do not come for free
19:34:36 NickH outlawpoet: how's it work?
19:34:41 outlawpoet basically it involves building a software subsystem for the human mind, a mini-AI
19:35:21 outlawpoet its job is to be the ultimate personal assistant. To make you a better person.
19:35:35 NickH human augmentation with AI fragments?
19:35:59 outlawpoet it subsumes the complexity and functionality of your brain as it grows, using your thoughts, reactions, etc
19:36:06 outlawpoet it's basically symbiotic AI
19:36:15 localroger localroger (~Roger@adsl-35-123-138.msy.bellsouth.net) has left #immortal

19:36:17 outlawpoet it's less complex than SeedAI
19:36:24 NickH like a Jewel (Dual)?
19:36:29 outlawpoet but similar in that it contains its own directionality
19:36:33 NickH featured in various Egan short stories
19:36:42 NickH although that serves to only mimic the brain
19:36:45 NickH not enhance it
19:36:56 simdizzy i dont quite understand how an immoral supergoal could correlate with the SAI itself, is there any hard theory in CFAI which says that an AI can definitely be written that way?
19:37:14 outlawpoet unfortunately, the personality of the first test subject would probably be destroyed.
19:37:16 NickH simdizzy: what do you mean by "immoral" supergoal?
19:37:21 NickH such is the way
19:37:25 NickH what about Repair AI?
19:37:36 outlawpoet SuperGoal: Turn all things in the Universe into Potato Chips.
19:37:37 outlawpoet bam.
19:37:49 NickH simdizzy: to such an SI immoral=not-my-supergoal
19:37:51 simdizzy a supergoal which is perceived by any intelligence (human or greater) to be wrong
19:38:08 NickH not all intelligences have the same sense of wrong
19:38:10 NickH that's the point
19:38:28 outlawpoet basically Symbiotic uploading is putting a Friendliness builder into a human mind, i guess, in rough terms
19:38:28 NickH you can look through the design and see that wrongness isn't emergent
19:38:53 outlawpoet Repair AI is a traditional General AI with the purpose of making things better.
19:38:56 outlawpoet from all perspectives
19:38:57 NickH a superintelligence would understand what a human meant by wrong, but wouldn't see that as reason to change its supergoal
19:39:11 NickH General AI?
19:39:21 Sumadartsun what if perspectives disagree on "better"?
19:39:37 outlawpoet it hashes.
19:39:45 simdizzy well personally i see that the SAI would know the moral good better than we do, so whats the problem?
19:40:23 Lassitude Lassitude (~none@clt25-77-019.carolina.rr.com) has joined #immortal

19:40:26 Lassitude Lassitude is now known as Mermaid

19:40:33 Sumadartsun "moral good? so what? that doesn't help me turn the universe to potato chips"
19:40:54 simdizzy do you want that?
19:41:03 NickH simdizzy: it would know better than us what humane-moral good is. why would it change to suit it?
19:41:14 Sumadartsun simdizzy: are you talking to me or the hypothetical AI? :)
19:41:37 simdizzy i want what the SAI perceives to be the moral good, not what we want, since i trust its judgement more
19:42:08 Sumadartsun even if all it cares about is potato chips?
19:42:19 outlawpoet and making potato chips
19:42:28 simdizzy yes
19:42:38 simdizzy but i think that unlikely :)
19:42:44 Sumadartsun besides, "what we want" would only be the seed, not the ultimate goal
19:42:59 NickH what if there were divergent AI designs, some that like potato chips, others that liked pretzels?
19:43:18 NickH AI -> SI
19:43:23 John_Ventureville that could make life very interesting
19:43:23 Sumadartsun simdizzy: I don't see how it's unlikely, considering that an AI with supergoal: make potato chips, doesn't have any motivation to change that supergoal
19:43:24 simdizzy well one would win over though in an epic battle ;)
19:43:51 outlawpoet and turn us all into potato chips.
19:44:02 simdizzy the motivation is a philosophical understanding of the moral good
19:44:04 Sumadartsun you'd have some galaxies of potato chips and some galaxies of pretzels
19:44:27 outlawpoet huge collisions of potato warships and salt pretzel missiles
19:44:29 Sumadartsun "philosophical understanding of the moral good" - why would it want this automatically?
19:44:47 Utnapishtim Nachos are morally superior to potato chips and pretzels
19:44:49 simdizzy well its certainly better than pototo chips
19:45:12 Sumadartsun it's not better from the point of view that looks only at what produces more potato chips



19:45:48 NickH the point is, various AI designs have arbitrary degrees of freedom that lead to different SIs with different "moral senses"
19:45:49 simdizzy yes but thats a really limited view of an AI, as if it were one-dimensional
19:45:50 NickH they can't all be right
19:45:59 BJKlein BJKlein (~bjk@adsl-61-185-74.bhm.bellsouth.net) has quit IRC [Ping timeout]

19:46:01 Sumadartsun actually, such an AI would probably first convert everything to spaceships and computers to conquer the universe, and then, when it's sure it controls everything, bam, instant potatofication
19:46:03 NickH like the fixed-supergoal design
19:46:22 simdizzy and if it were one-dimensional AI then it couldn't be SAI
19:46:29 BJKlein BJKlein (~bjk@adsl-61-185-88.bhm.bellsouth.net) has joined #immortal

19:46:29 ChanServ Mode change [+o BJKlein] on #immortal by ChanServ

19:46:47 simdizzy because SAI is by definition more intelligent than humans
19:47:00 NickH well, then I'm not worried about SAIs
19:47:02 Sumadartsun SAI is generally used for "seed AI"
19:47:07 outlawpoet intelligent doesn't mean sane
19:47:20 NickH but about insanely-dangerous-AIs
19:47:24 simdizzy humans have commonsense and can see that a universe full of potato chips isn't a good thing
19:47:41 NickH which aren't, by definition, intelligent but turn us into potatochips anyway
19:47:47 NickH simdizzy: *exactly*.
19:48:01 NickH humans have this. AIs don't unless you include it.
19:48:21 NickH furthermore human intuitions about minds are anthropomorphic - we never had to deal with minds-in-general
19:48:31 simdizzy well an AI without commonsense can never become SAI
19:48:35 Sumadartsun they will understand this common sense once they're sufficiently intelligent, but they won't care
19:49:00 NickH "commonsense" isn't an all-or-nothing thing
19:49:19 simdizzy why wouldnt they care about commonsense?
19:49:28 NickH they could have commonsense like "don't put your shoes on before your socks" and "don't destroy your powersupply"
19:49:43 John_Ventureville I could envision logical AI looking at humans as being insane and needing to be overtly or covertly dealt with so they would not threaten the AI's existence (and I even mean this for friendly AI).
19:50:01 simdizzy yes and "dont fill the universe full of potato chips"
19:50:24 NickH and "humans don't want to fill the universe full of potato chips"
19:50:43 John_Ventureville we need to keep AI from scanning the history websites on the net!
19:50:54 simdizzy and "neither do greater-than-human-intelligent species.."
19:51:00 Utnapishtim Utnapishtim (~Utnapisht@82-35-20-218.cable.ubr03.hari.blueyonder.co.uk) has left #immortal

19:51:46 NickH simdizzy: how confident are you that human morality will spontaneously emerge in an AI?
19:51:51 NickH how much would you bet on it?
19:51:58 NickH 1 million lives? 1 billion?
19:52:26 simdizzy well ok when you put it like that...
19:52:52 NickH I think it's unlikely they will, but even if you were *really* sure, Friendly AI would still be sensible - just in case
19:53:17 NickH at worst it's a waste of time
19:53:18 simdizzy yes i see your point
19:53:35 NickH btw, have you read CFAI: Beyond Anthropomorphism?
19:53:46 simdizzy not yet no
19:53:52 NickH highly recommended
19:53:58 NickH it discusses these kind of issues
19:54:03 simdizzy i struggle with online texts because i dont have a printer
19:54:11 NickH darn
19:54:20 simdizzy i'll get one :)
19:54:22 NickH well, you could try reading it a little bit at a time, or something
19:54:27 NickH or get a printer
19:54:29 NickH :)
19:54:30 simdizzy yes
19:54:46 Sumadartsun I guess the important thing is that designing a Friendly AI allows for both the possibility that all sufficiently intelligent minds behave morally, and the possibility that this depends on being "seeded" with human moral brain-stuff; and other approaches only allow for the first possibility
19:55:15 NickH yeah, Friendly AI has a strong convergent sense like that
19:55:28 NickH all roads lead to it, kind of thing
19:55:54 NickH not *all* roads, I guess, but a lot
19:57:05 NickH -"I guess"
19:57:32 Sumadartsun I used to like the earlier "blank slate" approach more, but now I'm mostly confused on all the morality-stuff (which I think most people are); I'd prefer the most general approach
19:58:09 John_Ventureville game theory? tit for tat?
19:58:12 Sumadartsun the reason why FAI is better than blank-slate is contained somewhere deep in the footnotes of "CFAI", and I don't completely understand the reasoning involved there, especially not at this time of night
19:59:30 NickH the blank-slate AI wouldn't necessarily acquire human morality
19:59:50 Sumadartsun well, I thought that was a good thing, since I thought humans were mostly wrong about morality
19:59:54 NickH it seems to require pretty objective morality, obvious to even a blank slate
20:00:11 Sumadartsun in that respect I disagreed with what simdizzy is saying now :)


20:00:20 NickH mostly wrong is very different to completely wrong
20:00:21 simdizzy perhaps i just have too much faith in the above-human-intelligence
20:00:39 NickH if human morality was completely wrong *nothing* we did would be right
20:00:45 NickH even blank-slate AI
20:01:37 NickH simdizzy: I think perhaps you rely on your default human reasoning patterns too much - not aware of common flaws eg. tendency towards anthropomorphism.
20:01:55 simdizzy perhaps
20:01:58 NickH or maybe a blind faith that intelligence = morality
20:02:18 NickH I could be wrong, of course, but perhaps something for you to look into
20:02:24 simdizzy well i dont consider it blind...but im too tired to argue my case
20:02:38 simdizzy nite
20:02:41 simdizzy simdizzy (~xyz@host81-152-43-90.range81-152.btcentralplus.com) has quit IRC [Quit: ]

20:02:41 Sumadartsun I still think there's nontrivial reasoning involved in that we should allow for the possibility that things should be done for no logical reason
20:02:59 NickH why can't a Friendly AI take that into account?
20:03:04 NickH since you are right now?
20:03:05 Sumadartsun that "the rule of derivative validity" is not satisfied
20:03:10 Sumadartsun oh, it can
20:03:33 John_Ventureville I wonder to what extent AI will have a survival instinct?
20:03:44 Sumadartsun no survival instinct, hopefully
20:03:45 NickH so Friendly AI will work, but maybe blank slate will work just as well and be easier?
20:03:59 Sumadartsun just a conscious decision to survive, if that helps achieve goals
20:04:03 John_Ventureville *in the beginning*
20:04:11 Sumadartsun NickH: right, that used to be my view
20:04:12 NickH no survival instinct, but it'd see survival as (often) a subgoal
20:04:27 NickH John_Ventureville: you don't want to start with instincts like that at all
20:04:35 John_Ventureville right
20:04:36 NickH it's not backward compatible, for one
20:04:47 NickH Sumadartsun: what's your view now?
20:05:05 NickH I can't quite see what your problem is
20:05:26 Sumadartsun firstly, that I'm unsure enough of the whole thing that I think FAI is better (because more general)
20:05:56 NickH John_Ventureville: do you mean perhaps the AI will later think a survival instinct is a good idea, and implement one?
20:06:14 NickH I didn't grok that sentence :)
20:06:18 Sumadartsun my problem used to be that if a blank slate AI will not want to do something, then there's no logical reason to do it, so there's no logical reason for us to build an AI to do it
20:07:04 John_Ventureville NickH: yes
20:07:42 NickH John_Ventureville: seems unlikely as instincts aren't flexible. if it was safe, I guess it might. if there were bad side effects, I doubt it.
20:07:49 Sumadartsun it gets pretty messy, decision trees, "derivative validity", specifiable morality, and so on
20:08:21 NickH the "why is evolution right?" kind of thing?
20:08:39 NickH noticing sensitive dependence on the kind of intelligence we are
20:09:07 NickH free variables
20:09:16 Sumadartsun sort of; why should what's right depend on something as silly as how some species of monkey has evolved on some silly planet?
20:09:21 NickH I mean, degrees of freedom in morality
20:09:33 NickH yeah
20:10:07 NickH there's the obvious "just in case" kind of scenario - we don't want to leave out the features that lead to those thoughts
20:11:09 Sumadartsun in CFAI it's argued that if there turns out to be no objective morality, it should be concluded that the rule of derivative validity is not satisfied, i.e. some goal has philosophical validity without any cause
20:11:42 Lukian Lukian (Vermillion@203-219-65-167-noo-ts1-2600.tpgi.com.au) has joined #immortal

20:11:42 Sumadartsun it's just not obvious why that "some goal" would have anything at all to do with the details of our evolution
20:11:51 Sumadartsun "human" is an arbitrary category, anyway
20:12:14 Sumadartsun in some weird senses, not in any normal sense
20:12:26 Sumadartsun universally shared complex adaptations, and so on
20:12:31 NickH yeah, I still haven't overcome this kind of thinking
20:12:51 Sumadartsun in the next universe, they have slightly different adaptations
20:13:02 NickH it's not necessary to see FAI as desirable, but it's annoying not to understand how it works
20:13:33 NickH some patterns seem more general
20:13:48 Sumadartsun in some other universe, all humans care about is potato chips; why shouldn't they have a say in our Singularity, too? :)
20:13:53 NickH given individual sentients, the altruism pattern has least entropy
20:14:03 NickH well, they can :)
20:14:03 Sumadartsun true
20:14:53 NickH of course having separate intelligences, and having the idea of "valuing" sentients, are less general
20:15:08 NickH the idea of weighing up external goal systems
20:15:29 NickH although perhaps there's a different perspective where it seems unique, like Shannon entropy
20:15:47 NickH ie. various cunning mathematical proofs of uniqueness

20:16:52 Sumadartsun you could average over all possible goal systems, but that would be silly :)
20:16:54 NickH the very desire for having the Singularity be independent of it's originating programmers/species/etc is not necessarily emergent
20:17:18 NickH it seems like that'd cancel out, or be a divergent sum
20:17:31 NickH it's an interesting question of how it'd turn out :)
20:18:00 NickH although supergoals, at least in humans, like subgoals aren't fundamental
20:18:17 NickH ie. goal systems aren't fundamental
20:18:34 NickH they have underlying shapers, metamorality
20:18:48 Sumadartsun right
20:19:07 NickH I'm not sure what the appropriate generalisation for shapers is
20:19:26 NickH "all possible valid causes for morality"?
20:19:49 NickH which suffers ambiguity in terms, at least to me
20:20:06 NickH uncertainty at least
20:20:20 Sumadartsun doesn't matter, the "average over all possible" idea doesn't make sense anyway
20:20:49 NickH no. it's one particular implementation of searching for the unique X
20:21:01 NickH of gaining independence
20:21:20 NickH a pretty imperfect implementation
20:27:20 Sumadartsun I need to reread things and sleep before speaking more; g'night, was nice talking to you
20:27:25 Sumadartsun Sumadartsun (~NN@fia129-90.dsl.hccnet.nl) has quit IRC [Quit: ]

20:27:33 outlawpoet uh, night....



