  LongeCity
              Advocacy & Research for Unlimited Lifespans


A Singularitarian's View On Immortalism


3 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 14 March 2004 - 05:11 AM


Chat Topic: A Singularitarian's View On Immortalism
Gordon Worley, Singularitarian and volunteer for the Singularity Institute for Artificial Intelligence, joins ImmInst to discuss his views and projections for Immortalism.

Chat Time: Sunday Mar 21, 2004 @ 8 PM Eastern
Chat Room: http://www.imminst.org/chat
or, Server: irc.lucifer.com - Port: 6667 - #immortal

Posted Image

Gordon will finish a BS in computer science from the University of Central Florida this spring and begin a PhD program in computer science this fall. His most recent projects include SL4 chat master and Mac GPG Project founder and administrator emeritus. Future projects include research in quantum information theory and a wiki farm at kiwiwiki.com.

HomePage: http://homepage.mac....bird/about.html

#2 Gordon Worley

  • Guest
  • 2 posts
  • 0
  • Location:Orlando, FL

Posted 15 March 2004 - 01:00 AM

Here are some links that might be relevant (Bruce asked me to post some):

* Singularitarian Principles
* Shock Level 4
* SIAI

#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 22 March 2004 - 06:30 AM

Gordon Worley - Mar 21 @ 8 PM Eastern - Chat Log
John_Ventureville: hello, Utnap!
John_Ventureville: FuturQ, I read the first several pages
ChrisRovner has joined the channel
John_Ventureville: very interesting, but it appears to be fairly old
news to me
John_Ventureville: I don't know what the conclusion will bring
millenthine has joined the channel
John_Ventureville: which you seem to indicate will be very shocking
FutureQ: It's well investigated so see what you think later
John_Ventureville: ok
8:00 PM
Gordon: hi everyone
John_Ventureville: hello
Gordon: I guess I'm doing the chat thing, although I don't know where
Bruce is
John_Ventureville: probably seeing a movie with his wife
John_Ventureville: lol
mporter has quit the server saying:
Gordon: well, anyway, I've got no set agenda, so feel free to ask me
anything and everything about the Singularity and how it relates to
immortalism
Ocsrazor: Hi Gordon, what is your role with SIAI
Ocsrazor: ?
John_Mcc has joined the channel
mporter has joined the channel
FutureQ: hi G
Gordon: well, currently I run the SL4 chats, although those are not
necessarily specifically tied to SIAI, there's not much difference
between SIAI the organization and the people in the Singularitarian
community
Randolfe has joined the channel
Ocsrazor: where would you put yourself as far as philosophy of
singulatarian?
John_Mcc: Hm... where's BJ? Is he out tonight?
Gordon: I also do various odd things: help people with their writing,
talk about the Singularity with people, etc.
John_Ventureville: My classic question to you is...., when do you see
the singularity happening, and why that date?
John_Ventureville: *take your time answering*
Gordon: hmm, philosophy of singularitarian? can you explain your
question a little more, Ocsrazor
John_Ventureville: : )
Jonesey: my classic question is, aren't we already in a singularity
relative to the hundreds of thousands of yrs of our history as a
species?
Loren has joined the channel
8:05 PM
John_Ventureville: Jonesey, we are in the "pre-singularity" singularity.
John_Ventureville: lol
Gordon: John: if it happens at all, it'll happen sometime between now
and whenever we destroy life on Earth
ct has joined the channel
Ocsrazor: G I meant do you see a hard singularity with a very sharp
takeoff or a more slow gradual progression
FutureQ: Gordon, do you think that the Singularity will arrive before
the right wing luddites take over, or that even the tech to bring it
about will be allowed if the Bush admin wins this election?
John_Ventureville: Gordon, I want a hard date!!
John_Ventureville: c'mon!
John_Ventureville: ; )
Randolfe: What book or web site can most quickly give you a good
overview of the issues involved with singularity?
John_Mcc: I'd propose that a major milestone of the singularity would
be the achievement of IQ 100 by an AI. Everything after that will be
hard to measure.
Gordon: JV: sorry, don't have a hard date; right now I worry it won't
happen at all, which certainly lights a fire under your butt!
Gordon: Jonesey: no
Gordon: Ocs: hard takeoff
John_Ventureville: Gordon, when you get your Phd in computer science,
will you go into the artificial intelligence field?
Gordon: FutureQ: politics could certainly slow us down, but I doubt
anything short of serious financial depression or destruction of modern
technology would seriously set us back
Ocsrazor: G: what type of system do you foresee as the likely progenitor?
John_Ventureville: Gordon, I think our ancient human drives to compete
between "tribes" will make sure the singularity happens.
Gordon: Randolfe: I'd suggest starting with sl4.org; you can find lots
of good stuff from there; or try singinst.org, if you want something
from an official organization
John_Ventureville: The U.S. would not stand for China or Russia getting
there first!
Gordon: JV: no, probably not, since oddly I'm not really all that
interested in AI programming
8:10 PM
Gordon: used to be, but now that I realize the size of the problem of
creating artificial general intelligence, my interest has waned (at
least for now)
John_Mcc: US vs Russia vs China? I think not... how about IBM vs
Microsoft? much more likely.
LazLo has joined the channel
Gordon: Ocs: I don't really know what's going to happen after the
Singularity, but if we get it right, it will be something like hugging
fluffy bunnies forever :-)
FutureQ: I found a document today that is well investigated and it
regards the backers of the current luddites and it scares the hell
outta me:
http://www.yuricarep...om/Dominionism/
TheDespoilingOfAmerica.htm#_ednref1 Take care to delineate the
laissez-faire tool of these guys from Libertarians when reading this.
Gordon: (although at the same time not at all like that, so it's really
a bad, if humorous, analogy)
Loren: Gordon, is SIAI still working on creating the Flare programming
language or something like it?
Gordon: Loren: no, at least not publicly
Gordon: Flare never really got popular and SIAI decided to just let it
die
John_Ventureville: I think those fluffy bunnies are going to have razor
sharp teeth and have us for lunch if we're not careful!
Loren: Is SIAI doing any software development work at all, or is it all
just planning at this stage?
Gordon: JV: well, yes, that's why I'm a big supporter for Eliezer
Yudkowsky's theory of Friendly AI
Randolfe: JV, why do those who talk about the singularity always assume
they are going to "have us for lunch" or have evil intentions?
Gordon: Loren: just planning, AFAIK, although there has been some
non-Singularity-related software development
cyborg01 has joined the channel
John_Ventureville: because humans have trouble trusting that which has
great power over them
Ocsrazor: Hi Gordon, I work on studying some of the largest biological
neural networks ever studied; it bothers me there is so little
communication between neuroscientists and computer scientists. Any
movement by SIAI in that direction?
FutureQ: I don't wanna hug fluffy bunnies forever. I want problems to
solve. I don't want a friendly AI making life too easy and preventing
me from exceeding it.
Gordon: Randolfe: because it's far easier to write an AI that turns
the Universe into paperclips than one that produces the Apotheosis
8:15 PM
Gordon: FutureQ: as I said, it's just a humorous analogy; you will
probably be able to do whatever you want so long as it doesn't hurt
anyone else (of course we'll have to figure out how to handle disputes,
but that's not something I've developed a good answer for)
FutureQ: Gordon, the job of a F
MRAmes is now known as MRAmes-afk
Randolfe: Why is "evil" or "destructive" AI easier to produce than
morally neutral or even "good" AI?
FutureQ: crap
Loren: FutureQ: Turning the entire known universe into intelligent
matter is a pretty big & exciting problem to solve -- I'm sure that
will keep even a Singularity busy for a long time.
FutureQ: The job of an FAI would be to prevent us from harm, yes?
John_Ventureville: Loren, you mean "computronium"?
Gordon: Ocs: agreed; I try to read lots of cog sci stuff to supplement
my formal comp sci education; it's interesting information you need for
better living as a human, regardless of other uses, so I think it's
worth reading about for anyone
Jonesey: how about friendly humans?
NorthLite: 'friendly humans' hahaha
Jonesey: now that would be a really cool thing instead of our constant
squabbling
Gordon: FutureQ: well, if you ask it to
Jonesey: i know northlite, a ridiculous fantasy. but i can dream..!
NorthLite: true
John_Ventureville: there are friendly humans within the jungle we live
in which has the veneer of civilization
John_Ventureville: : )
Gordon: (Randolfe, I'm coming to you) friendly humans and Friendly
humans are possible, although I don't know how popular that option will
be (and the extent to which something is Friendly and still human I
don't know)
FutureQ: Gordon, then would it allow a human to augment to be greater
than it in ability? I see this as anathema to its goal if it is to
protect me from myself and others from me and me from them.
Loren: Gordon: What are the short-term goals of SIAI? Get more
volunteers, raise more money, do more planning, or what?
Randolfe: I think each person projects onto AI their own concepts as to
the innate goodness or evilness of humanity in general. I find most
humans to be really very good and decent.
8:20 PM
John_Ventureville: Randolfe, I largely agree with you
Gordon: Randolfe: imagine the space of all possible artificial
intelligences. very few are what we would call good, in that they
preserve life and are nice to us. most of them are going to be ones
that do boring, bad things, like turn the Universe into paperclips
John_Ventureville: but it's too bad Eliezer is not here to shed some
insights
Randolfe: JV, that makes us optimists???
John_Ventureville: Eli would say we are trying to envision A.I. in the
human mold, which is a fatally flawed thing to do
Gordon: Randolfe: that's why the theory of Friendly AI focuses on
metamorality, developing a system that will produce good behavior
regardless of its starting point
Jonesey: wow, how do u even define good?
Randolfe: Gordon, you have mentioned turning "the Universe into
paperclips". This image must come from some sci-fi horror story or
something. Where does the "paper clip" imagery come from?
Jonesey: e.g. the president wants to ban gay marriage because it is evil
FutureQ: what is "good behavior" changes with time and fashion.
Gordon: FutureQ: this is an issue that has been debated and it seems
now that certainly a Sysop or whatever else is keeping things running
smoothly will have to be able to outsmart everyone else, although maybe
not
weirdnrg has joined the channel
Gordon: Loren: short-term goals are to raise money, more planning, and
find programmers (a project that isn't going so well)
Jonesey: 40 yrs ago interracial marriage was banned in much of the
country, and 160 yrs ago blacks were not even allowed to marry each
other in much of the country.
Jonesey: how the heck is the AI supposed to keep up with all that stuff
cyborg01: Does the sysop take away people's freewill
weirdnrg: Gordon, you need the inspirators with the right vision & the
tool to connect them on their terms?
weirdnrg: :)
Loren: Gordon: What kind of programmers will SIAI be needing?
Gordon: Randolfe: the paperclip stuff is just a common example that
popped up at some point in Singularity discussions; AFAIK it's just
because paperclips are a boring, everyday thing that is familiar to
English speakers (although the annoying Microsoft paperclip may have
had something to do with it, but I don't know for sure)
8:25 PM
weirdnrg: I don't think it's money, but rather connecting those who
manage large sums of money & connecting them to the vision & the tool
weirdnrg: to bring forth change
weirdnrg: I mean raising money directly
Gordon: Jonesey: just because people call something evil doesn't make
it evil
Ocsrazor: Gordon are you a programmer by trade?
Jonesey: how do u tell the software what is "Really" evil? utterly
depends on the values of the programmer
Gordon: Loren: this kind:
http://www.sl4.org/b...eedAIProgrammer
Gordon: BTW, if you're interested in helping us raise money, there's a
meeting going on right now in #siaiv (this server) that we have every
week about just such things
FutureQ: Gordon, that statement, made to Jonesey, sort of presupposes
that there is an absolute good and an absolute evil. This means a god.
I disagree and say what was evil is now good and may not be tomorrow.
It depends on self interest and that can change.
Jonesey: objectivists would disagree, but till they convince the rest
of us values are very subjective and time-varying
Gordon: Ocs: not really; calling me a computer scientist is more
appropriate, since I don't really enjoy programming so much as the
development of algorithms and the study of theory of computation
Gordon: although certainly I can program with the best of them; I've
been doing it for about 13 years now
Randolfe: FQ, some fashions change. However, evil that hurts or kills
or maims or causes human suffering is always evil. It doesn't change.
Murder has been evil since mankind began.
weirdnrg: Gordon, have you seen Stephen Thaler's views on his
"Creativity Machine" paralleling with human intelligence?
Gordon: FutureQ: I don't propose at all that morality comes from
something supernatural or even from something outside of what you're
familiar with on the everyday
Ocsrazor: G: btw I am not a cognitive scientist, and I think cog sci is
the wrong place for AI/GI/SI programmers to be looking right now
FutureQ: Not so, Randolfe. It used to be a good thing to go murder the
men of the local tribe and steal their women. The good outcome was
widening the gene pool.
8:30 PM
NorthLite: Randolfe: in NAZI germany hurting jews wasn't considered
evil...
Jonesey: not so good for the murdered men, FutureQ
Gordon: I'd say that morality comes from you, and the panhuman system
of morality we live with is one created by dealing with morality
conflicts over evolutionary time
FutureQ: true Jonesey but the winners write the rules and the history.
Gordon: although arguably all animals have morality, in that there are
certain things they perceive as good and bad
Randolfe: "Do unto others as you would have them do unto you" was not
inspired by some mythological God, it was common sense that grew out of
generations of human experience.
Gordon: it's just that they don't have the facilities to think about
morality
FutureQ: I agree Gordon.
FutureQ: I g
weirdnrg: Gordon, are you familar with Stephen Thaler's "Creativity
Machine"?
FutureQ: I agree Randolfe
Gordon: weirdnrg: I remember seeing something about it on SL4,
although I don't think I read the thread yet
Gordon: so no
Gordon: other than that I've heard the name before
weirdnrg: Definitely take a look at it..
cyborg01: Does the sysop take away people's freewill, and if not, then
how can it make sure morality is enforced?
weirdnrg: www.imagination-engines.com
FutureQ: Cyborg, who is the sysop? The FAI or a human?
cyborg01: Sysop = FAI
Corwin has joined the channel
FutureQ: I want to be my own sysop.
weirdnrg: Ever think that achieving Singularity may cause another big
bang multiverse spawning different modes of consciousness? ie - we will
be the gods of their consciousness, not knowing the subjective feeling
of them.. :O/
weirdnrg: we may create for them
Gordon: cyborg01: a Friendly Sysop wouldn't take away anyone's
`freewill'; just let them know that the person they want to kill has
chosen not to let themselves be killed, but you are free to kill a
simulation of them
8:35 PM
cyborg01: I think most sane people want to be their own sysops ;)
Gordon: (where `kill' could be any behavior involving more than one
party)
FutureQ: I want a friendly AI embedded within me as my ultra ego and it
identifies as being me. We together work as a whole and everyone else
has their own as well.
cyborg01: So does it mean I can eat a burger?
cyborg01: *Can't
Gordon: Sysops explained: http://www.singinst....guidesysop.html
Gordon: cyborg01: please, if you're living in a Singularity, you won't
need to eat, although you could pretend to eat a cow if you wanted to
observer: Gordon: is there anything in CFAI that you disagree with?
Ocsrazor: Gordon looking over the SoYouWantToBeASeedAIProgrammer page -
have some suggestions for you
cyborg01: And bush will be punished for the war on iraq? like how?
Gordon: or you could remain on Earth and not join the Singularity and
continue to eat cows like you always did
FutureQ: But a reasonably good simulation _is_ a person, or if not then
Tipler is off his rocker and making a sim of me at omega point is after
all _not_ me.
Gordon: observer: it's been so long since I read it, I honestly don't
know
Gordon: FutureQ: maybe it'll look something like that, we don't know
observer: Gordon: do you remember if you agree with CFAI's definition
of Friendliness?
John_Ventureville: I would think the influence of the singularity would
also affect the earth, resulting in the protection and perhaps even
uplifting of our native cow population
Gordon: FutureQ: yeah, that discussion has come up and it's led to
some interesting answers, although I don't recall the thread, although
since observer is talking and I happen to know he is intimately
familiar with the SL4 archive, maybe he can help :-)
FutureQ: Why does the singularity have to be a VR space
8:40 PM
Gordon: JV: I have no idea if cows get to be uploaded; I think to be
uploaded you have to be able to understand what it means and be able to
say yes or no to it
serenade has quit the server due to a technical problem
John_Ventureville: *I predict an ugly war between Texas cattlemen and
the uplifted cows!*
observer: FutureQ: That means that there are some kinds of simulations
that are not allowed.
John_Ventureville: I'm talking about the "real world" and not the
virtual one.
Gordon: observer: the definition? well, at least what's there of it,
yeah, I think I probably do, but again I haven't read it in a while
Jonesey: do mad cows get to be uploaded?
John_Ventureville: I hope to God not!
John_Ventureville: can prions be uploaded??
John_Ventureville: a computer virus prion?
FutureQ: MooooOOOOOooo! Moo Mooo Mooohhhooohhoo!!!
Jonesey: viruses will make uploading quite tricky
John_Ventureville: Q, lol!
serenade has joined the channel
observer: FutureQ: A VR space is more efficient than a real space.
Gordon: and the difference between VR space and real space is in your
mind
Gordon: it's all real space
FutureQ: Not if someone isn't left behind to maintain the machine.
Gordon: the space in which the computer running the VR sits is just as
real as the rest of the Universe, just as the VR is a real thing inside
the Universe
FutureQ: Um, not quite. I want to explore the universe. I can't do that
in a VR space because the real universe could not have been mapped that
far, it's redundant.
8:45 PM
NorthLite: i agree with FutureQ
observer: If the "machine" is a superintelligent AI, then it can
maintain itself.
FutureQ: you hope
Jonesey: heh yep
Jonesey: maybe it will get depressed
FutureQ: lonely
FutureQ: brb
Gordon: FutureQ: you can't move any faster than the expanding sphere
of the Singularity, so basically you're in the Singularity if you
choose to partake in it
cyborg01: The FAI would actually take away people's free will...
otherwise people can exploit their freedom to commit infinitesimal acts
of evil
Jonesey: yep infinitesimal rape sucks
Ocsrazor: Gordon, mind if we get slightly technical?
Gordon: not at all
FutureQ: how do I give a url so you can click it?
Ocsrazor: When I asked about systems above I meant what type of
computational system do you see being successful
FutureQ: http://www.fullmoon..../art.php?id=tal
Ocsrazor: My general impression is that most people are leaving out key
factors in what makes biological systems tick that will be necessary to
produce intelligence
FutureQ: this url goes to my lonely comment. It's good reading
Gordon: ah, well, I don't know about the computational substrate;
certainly current integrated circuit technology is going to hit its
limit, and then we'll have to come up with something more
8:50 PM
observer: cyborg01: you appear to be using a different definition of
"free will" than the definition I am used to.
Gordon: either that or the Singularity will just be slower than human
subjective time
John_Mcc: Ah... IC technology... my favorite topic.. 65 nanometer
silicon... on the way!
Ocsrazor: I'm assembling data for an article which is attempting to
show that the human brain is THE most complex physical object in the
universe due to its connectivity and dynamics
cyborg01: Ocsrazor: what key factors do you mean?
Ocsrazor: I think many of the people in the CS community working
towards general intelligence are ignoring the dynamics that will be
necessary to produce something on the order of human intelligence
FutureQ: complexity, dynamics, reprogrammability, autonomy?
Gordon: Ocs: yeah, you're right about that
FutureQ: oh and plasticity
Ocsrazor: human neural networks reprogram themselves in incredibly
short time frames
Ocsrazor: yep futureQ
Gordon: although I doubt there's some magical key biological factor
that we can't replicate in computation
Ocsrazor: completely agree Gordon...
mporter: ocs: are you going to look for stats about whale brains as
well? they're bigger.
Ocsrazor: just saying that most CS people are ignoring good clues from
the biology and so are most Cog Sci people
Gordon: yeah, it's depressing
Ocsrazor: mporter: have all the stats on cetaceans, their brains are
not as complex as ours in structure
cyborg01: I think biological neural networks reprogram slower than
digital circuits...
Ocsrazor: they have large areas which are highly stereotyped
8:55 PM
Ocsrazor: this is a very interesting area, but unfortunately difficult
to study because of the ethics
John_Ventureville: on "King of the Hill" they just had a cryonics
reference
Ocsrazor: absolutely not cyborg
Jesse has joined the channel
Ocsrazor: they have the potential to, but they don't right now
cyborg01: Ocsrazor: unless you count the effects of parallelism....
Ocsrazor: exactly
John_Ventureville: "Ted Williams was such a great man that even today
people are fighting over his frozen remains."
John_Ventureville: oh, well
cyborg01: Looking at components, electronics is much faster than neurons
John_Mcc has quit the server saying: Leaving
FutureQ: Where'd that come from John?
TylerEmerson has joined the channel
Ocsrazor: not quite yet cyborg, neurons have molecular switches which
are faster than anything in electronics right now
Ocsrazor: nano will catch up soon though
cyborg01: Those switches only contribute to consciousness in aggregate
John_Ventureville: the Fox tv show "King of the Hill"
TylerEmerson: Sorry to interrupt: Could someone send me the log of the
past hour?
Ocsrazor: as do the electronic components
TylerEmerson: emerson@singinst.org
TylerEmerson: thx in advance!
John_Ventureville: Hank was talking to his son about making a
difference in life by being a leader
Ocsrazor: I should say will
FutureQ: The irony may be that to mimic the biological with nano we go
so small and dynamic that in essence we have silicon bio and it turns
out not to be much more resilient than before.
Gordon: Tyler: between loglady and Bruce (wherever he is), it should
show up on imminst.org, I think
TylerEmerson: Gordon: okay
Gordon: but I'll send you a log anyway because I'm nice :-)
TylerEmerson: Aw tank-you :D
Ocsrazor: Gordon: I'm interested in developing neuroscience/CS
collaborations, thinking about starting a nonprofit to do such

#4 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 22 March 2004 - 04:37 PM

Randolfe asks some very important questions about AIs, like "why is 'evil' or 'destructive' AI easier to produce than morally neutral or even 'good' AI?" These questions definitely need better, longer answers.



