  LongeCity
              Advocacy & Research for Unlimited Lifespans


Chat Topic: Transvision 2003 General Discussion


5 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242 ₮
  • Location:United States

Posted 23 June 2003 - 09:15 PM


Topic: Transvision 2003
General Discussion From Those Attending
Chat Time: Sun. June 29th & July 6th, 2003 @ 8pm Eastern US Time
Location: http://www.imminst.org/chat

About Transvision 2003:
http://www.imminst.o...=99&t=1100&st=0

#2 Discarnate

  • Guest
  • 160 posts
  • 0 ₮
  • Location:At a keyboard of course!

Posted 24 June 2003 - 12:14 AM

*IF* I'm near a terminal at that point, I'll be glad to attend. We'll have to see...


-Discarnate

#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242 ₮
  • Location:United States

Posted 30 June 2003 - 09:32 AM

As this Sunday's chat had little news from the TV conference, we'll have this chat topic next Sunday, July 6 as well.

Sunday 29, June Chat Archive:

<BJKlein> any word from our TV2003 connections?
<Utnapishtim> nope
<Flux> hi BJKlein
<BJKlein> hi
<Flux> u not been to Transvision then?
<BJKlein> as i live in AL and don't fly, it would have been too long of a road trip for me this year

<Utnapishtim> BJ That is an irrational affectation of yours
<Utnapishtim> but we've been through that one
<Sumadartsun> I thought it was because of difficulty to retrieve corpse for cryonics
<BJKlein> i may have missed something..
<InfernalDevices> BJ: Your sense of control driving could be an illusion to a large extent
<Sumadartsun> [03:25] <Flux> BJKlein, u dont fly, is that because of risk?
<Sumadartsun> [03:25] *** BJKlein has quit IRC (Read error: Connection reset by peer)
<Sumadartsun> [03:25] <ravi> bj doesn't fly b/c of risk
<Sumadartsun> [03:26] <Flux> rite
<Sumadartsun> [03:26] <ravi> he beleives he doesn't have any control when he's flying
<Sumadartsun> [03:26] *** BJKlein has joined #immortal
<BJKlein> ty Sumadartsun
<haploid> Couldn
<BJKlein> granted the risk of attending TV via plane was a consideration, as attending may have had more longevity benefit than the risk of flying..
<haploid> 't possibly be risk.
<BJKlein> but, i thought better of this in the long run.
<InfernalDevices> I wonder if there are cryonicists who don't drive because of the danger involved?
<BJKlein> we have 4 ImmInst members attending and they can fill us in on the action
<InfernalDevices> statistically driving is much more dangerous than flying
<BJKlein> the main draw back InfernalDevices is that there's not much left after a plane crash
<InfernalDevices> I look forward to reading any articles they may write
<haploid> statistically, swimming and eating solid food are both riskier activities than flying.
<InfernalDevices> true
<BJKlein> InfernalDevices, have you read any of my earlier works?
<InfernalDevices> a few
<BJKlein> www.imminst.org/wiki/BruceKlein
<BJKlein> will be my main node from now on
<BJKlein> i'm working on an answer to the question of selfishness now..
<BJKlein> and the general philosophy of immortalism
<InfernalDevices> interesting
<BJKlein> the wiki is a great tool for branching off ideas quickly
<InfernalDevices> selfish altruism?
<BJKlein> heh.. i believe we're all selfish inherently.. but that that's really not a problem.. it's a plus
<Sumadartsun> interesting, a wiki; is it supposed to be used yet?
<InfernalDevices> I like the term "intelligent self-interest" which includes marriage, business, etc.
<BJKlein> Sumadartsun, of course
<BJKlein> feel free to login to create your pages
<BJKlein> InfernalDevices, all this can fit into the immortalist philosophy..
<Sumadartsun> I don't think we're inherently selfish; humans have selfish and altruistic tendencies, and they can express themselves in many different degrees
<haploid> heh
<haploid> <---- reformed objectivist
<BJKlein> which basically, for me anyway, is that death is oblivion, and thus we should do what we can to keep ourselves and our fellow sentient beings alive
<InfernalDevices> BJ, how do you explain a wealthy U.S. doctor going to Africa to care for starving children? What does he really get from it?
<BJKlein> look to ep for an explanation
<BJKlein> ep = evolutionary psychology
<BJKlein> our ancestors got more mates by being altruistic
<BJKlein> it's hardwired into us
<Sumadartsun> that doesn't make it selfish
<BJKlein> or rather an overwhelming urge
<InfernalDevices> So he can "score with more chicks" by being seen as a good guy!
<InfernalDevices> lol
<Sumadartsun> our genes were selfish in making us that way; that doesn't mean we're selfish in acting that way
<haploid> InfernalDevices: Admiration of friends, social standing, etc.
<BJKlein> he may not make the logical connection, but in general yes.. he can score with more mates and raise his social status
<InfernalDevices> and so even if the doctor's motives are "pure," it will be a social plus for him.
<Sumadartsun> no, I don't think this is right; something like altruism can have (has) come to exist for gene-selfish reasons, but that doesn't mean the subconscious motivation for them is selfish
<BJKlein> InfernalDevices, even if the Dr. thinks that it's 'pure' that in itself is a matter of question.. what is pure?
<Sumadartsun> *that there is a selfish subc. motivation for them
<InfernalDevices> BJ: to give with no expectation of getting anything back
<celindra> Infernal: not possible
<BJKlein> on a higher level, the Dr may think this.. but the action will nevertheless gain him something
<BJKlein> most likely standing..

<Sumadartsun> [03:39] <Sumadartsun> it doesn't even necessarily benefit a human in the EEA -- it benefits his genes
<Sumadartsun> [03:40] <Sumadartsun> which also exist in his kin
<Sumadartsun> [03:40] <Nick> right, but it's his genes
<Sumadartsun> [03:40] <Sumadartsun> it benefits his capacity to reproduce, which is not always the same as benefiting him
<Sumadartsun> [03:40] <celindra> Altruism vs. selfishness has been spelled out by Objectivists for years. No point in rehashing the same old stuff

<BJKlein> yes, selfish genes explain much or 'strange' selfless behavior
<Sumadartsun> celindra, I'm not completely sure what objectivists are claiming with respect to altruism; if they claim humans can't be truly altruistic, they're wrong
<Sumadartsun> though I do know they think altruism is bad
<haploid> Actually Objectivism does not say anything about altruism itself. Objectivism mainly addresses the "immorality of self sacrifice", which can be orthogonal to altruism.
<Nick> something about how rational selfishness leading to the greater good?
<haploid> e.g. one can do something seen as altruistic that does not involve self-sacrifice.

<Sumadartsun> celindra: the side that thinks it's impossible to be un-selfish is wrong, unless by a pretty uninteresting definition of "selfish"
<haploid> Right, but more specifically, what Rand would call an affirmation of life. not necessarily "egoistic". Most of Objectivism defines ethics in terms of actions in the context of whether it affirms life or death - or in a social context, whether it affirms or rejects individual rights.
<Sumadartsun> (the other side is wrong, too, but that's a different topic)
<haploid> A good primer would probably be "Introduction to Objectivist Epistemology"
<haploid> IIRC it's edited by Peikoff.
<InfernalDevices> thank you
<celindra> Right, wrong -- it doesn't matter. I'll ask the question I've been asking everyone lately -- why do you feel compelled to state what you think?
<celindra> Seriously.
<Nick> personally, part of the motivation is truth-seeking on both sides, although that requires more discussion than plain assertion. I imagine there are political adaptations which lead you to enjoy convincing other people of your point of view which you have to control for.
<celindra> OK
<Sumadartsun> that would be my answer, too (to convince others, to make sure my opinions are correct, to make sure what my opinions are, plus the adaptations)
<BJK> we're compelled by instinct and our evolutionary heritage to seek out and impress our piers
<InfernalDevices> especially fishing boat piers!
<InfernalDevices> : )
<BJK> peers
<BJK> heh
<celindra> Is there a base reason below "truth seeking"? BJ is on the right path with his answer.
<haploid> I doubt it has anything to do with "peer-reviewing" one's views. People who are willing to re-examine and change their belief systems are few and far between.
<BJK> it was obviously important to keep us alive..
<InfernalDevices> you have to open your mouth to build social networks which can be a boon to an individual
<BJK> or rather for our selfish genes
<InfernalDevices> right
<BJK> InfernalDevices, logic would make one think that those that are most quiet are the ones evolution would select for..
<BJK> they would have all the knowledge

<hkhenson> wassail

<BJKlein> welcome keith
<InfernalDevices> hello Keith
<BJKlein> talking about your favorite subject.. or maybe second fav.
<hkhenson> hmm
<hkhenson> which one?
<BJKlein> ep
<hkhenson> ah
<BJKlein> all roads lead to it
<hkhenson> good tool
<hkhenson> true.
<hkhenson> did you see my latest rants on the memetics list?
<BJKlein> i was thinking, that it may be logical to assume that those of us that were more quiet.. and listened would be selected for in evolution... they'd have all the knowledge
<BJKlein> but, we're not like that at all..
<hkhenson> not necessarily
<BJKlein> if anything we yearn to be heard.. we need to talk
<hkhenson> well, when we are talking others have to be quiet
<BJKlein> yes, i was in the process of proving myself incorrect
<hkhenson> which means the attention is focused on us
<hkhenson> attention being rewarding
<BJKlein> good point
<InfernalDevices> BJ: Both types of people have their descendants among us but I would say many more of the general population are of the "blabberers" variety.
<hkhenson> because the more you get the higher you think your status is.
<hkhenson> that's true even for chimps
<BJKlein> if it's knowledge that keeps us alive.. then one would think the less talk and the more listen the better.. 'where's the meat' where's the nuts..
<hkhenson> bj, staying alive is only part of the genes problem
<BJKlein> but, it's obvious that the ones that were totally quiet were selected out
<hkhenson> the bigger problem is making the jump to the next generation
<BJKlein> but success in getting to the next generation involves gathering food/info
<hkhenson> there is, of course, a need to listen
<Nick> for one those who didn't reciprocate by sharing information would be considered cheaters. assuming the information-sharing idea
<hkhenson> that being the only way knowledge was passed on
<hkhenson> but there is a need to talk to, or there would be nobody passing it on.
<hkhenson> and because most of the kids we were talking to were related, it made lots of sense to bring them up to speed as best you could
<hkhenson> which is probably why so many of us like to teach.
<BJKlein> Nick, yes, cheaters are weeded out quickly.. they don't get to reproduce because the female has an evolved sensitivity toward selfish behavior
<haploid> I would think that ANY bar-room dating environment would demonstrate the absurdity of the idea that evolution has sexually favored knowledge over social interaction.
<hkhenson> heh heh hap.
<hkhenson> depended on the situation
* celindra notes that evolutionarily speaking, intellectual extremes are destined to die
<BJKlein> heh
<hkhenson> knowledge of how to hunt for example was critical
<celindra> Retards and geniuses, both the same in the eyes of natural selection
<hkhenson> and wooing the chicks in those days required large dead animals.
<haploid> I don't think it stops at the bar either. There's something limbic about the attraction of the general public to outspoken movie stars and "royalty".
<BJKlein> there is a great deal of information exchanged in dating, from what i understand
<haploid> How many members of the media mourned the passing of Richard Feynman ?
<haploid> How many executives of the Fortune 500 are noted introverts ?
<hkhenson> it is interesting that the likelihood of male chimps hunting depends to a very high degree on one of the females being "pink."
<InfernalDevices> we live in a highly technological society where being a genius can be a great boon to reproductive success
<InfernalDevices> especially if you can use your genius to gain great financial success
<hkhenson> there may be exceptions, but in general I doubt it.
<hkhenson> how many kids does bill gates have?
<ravi> 2 i think
<InfernalDevices> I admit even Bill Gates does not have twenty brats of his own from different mothers!
<InfernalDevices> lol
<hkhenson> he could, of course.
<InfernalDevices> his "nerd factor" is so high even his billions could only help him so much!!
<BJKlein> we also live in a world where females can choose to have children when they want
<InfernalDevices> ; )
* celindra steps away for a while
<hkhenson> but to do so would take time that he doesn't want to spend on such stuff
<InfernalDevices> *sorry celindra*
<InfernalDevices> but this conversation does show how women at least have some taste or judgement in choosing mates
<hkhenson> the era of sultans is gone by.
<hkhenson> you have to wonder how much it affected the gene pool though
<InfernalDevices> what we have now is "serial monogamy"
<hkhenson> early proto states were much into breeding the leader
<hkhenson> several in the mid east
<hkhenson> the Incas were famous for that
<hkhenson> china of course
<InfernalDevices> a successful man now will have only one wife at a time but he will monopolize her most fertile/youthful years.
<InfernalDevices> *serial monogamy*
<hkhenson> right, but like jonny carson, he may have few if any kids
<hkhenson> anyone know about carson?
<hkhenson> i.e., how many kids
<InfernalDevices> I think he has two or three
<haploid> Before my time =)
<haploid> oh
<hkhenson> then there *are* people who have multiple wives.
<InfernalDevices> a dimly remembered episode of Biography comes to mind
<hkhenson> getty for example
<hkhenson> second family with three kids
<BJKlein> this is a tad off topic.. not that we have a topic tonight.. other than any Transvision News.. but,
<hkhenson> but they are fairly rare
<hkhenson> most rich people don't spend it in having kids
<BJKlein> In this pre-singularity world.. where there seems to be so little time before it happens, is there really any need for movements such as Immortalism and Transhumanism.. Do we need some sort of leading organization to guide us through.. or will it just happen..
<hkhenson> bj, it will happen
<hkhenson> but the movements may shape it.
<hkhenson> one sure hopes so
<InfernalDevices> yes
<hkhenson> to a fair extent the internet was shaped for a long time by the hackers mentalities that made it
<BJKlein> what would be your dream organization..
* Sumadartsun goes to sleep
<hkhenson> in some ways it still is dominated by that mentality.
<BJKlein> take care sum
<hkhenson> to our dismay in some cases.
<hkhenson> like spam is directly due to the model of unmetered email
<BJKlein> like hackers being those that like to mess things up?
<hkhenson> original sense of the word, not the vandals
<BJKlein> as in opensource
<hkhenson> right
<hkhenson> eric raymond is an example
<hkhenson> drinking kickapoo joy juice tonight.
<hkhenson> fermented a #10 can of peaches.
<BJKlein> sounds like it contains alch.
<BJKlein> yep
<hkhenson> high concentration of alcohol
<hkhenson> about as high as it can go.
<BJKlein> heh
<BJKlein> does that help the ideas to flow?
<hkhenson> the peaches finally sunk :-)

<hkhenson> <hkhenson> not particularly
<hkhenson> <hkhenson> more wild ideas than I can deal with usually
<hkhenson> <hkhenson> for example, anyone want to work on a project where you get to smash locomotives into cars?
<hkhenson> <hkhenson> no takes, sheesh
<hkhenson> <hkhenson> takers
* BJK raises his hand
<hkhenson> air bags for locomotives
<BJK> why not locomotives into locomotives
<hkhenson> can send you the writeup if you wish
<BJK> please so
<BJK> do
<hkhenson> brb
<BJK> same here.. i'll be back a little later
<hkhenson> sheesh
<hkhenson> what email for you?
<BJK> bjk@bjklein.com
<BJK> i'll be back in about 20mins
<hkhenson> ok

#4 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242 ₮
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York
  • ✔

Posted 01 July 2003 - 01:45 AM

This is a repeat of my post, but I thought it appropriate here as well.

I am finally getting a chance to sit down and start a reflection on what transpired at the WTA Transvision 2003 Conference, but before I do I would like to thank a number of people, starting with you, Bruce. I was greatly honored to represent our organization, and while I think I could have done better, I am nonetheless pleased with this first trial. I don't think I left a poor impression, though I am afraid I did talk too much; but that is, after all, why I was sent. The Conference wasn't merely good, it was actually much better than that: it was very impressive.

It wasn't spectacular because of special effects, but because of the content and the personalities that gathered to debate and address significant issues and to explain, to the best of their ability, complex technological options and opportunities.

I would also like to continue by thanking the members of our group who gathered to support my efforts and participate in the broader spectrum of activities. I would start by thanking Discarnate (John Benner) for his support and advice; he was always there when needed, even if I would sometimes forget to ask for help. Michael Anissimov was also a consummate professional and demonstrated maturity well beyond his years. I was deeply saddened that our schedules and endeavors conflicted with one another so that we couldn't spend more time together. I had hoped to spend more time in discussion with you, Michael.

Hugh Bristic was a continually valuable contributor to every presentation where I encountered him. The only major regret I have is that the manner in which the various talks were organized made attending all I might have wanted to simply impossible.

With respect to Ocsrazor (Peter Passaro), I cannot begin to explain how grateful I was for his educated participation. He, in concert with some of the presentations (in particular the one I was honored to moderate between Aubrey de Grey, Ph.D., and Rafal Smigrodski, M.D., Ph.D.), provided in hours what some people spend months going to school to learn. I felt as if I were already the beneficiary of an implant that allowed accelerated learning.

Before I go into more detail about the various discussions and presentations, the open exchanges of poignant questions and answers, and the dynamic, virtually electric atmosphere of the event, I would be remiss not to mention the incredible effort of James Hughes. By the end of the event it was firmly believed by many that he had already been cloned, for he not only appeared to be in more than one place at a time but was always where and when needed. We should be very grateful that he has chosen to put his estimable talent to our common cause, because this is a person who would make a fortune should he ever leave academia for large-scale media production.

I also was very pleased to meet and spend some time with Jose Cordiero, and I believe we will be building significant bridges with our Latin American brothers sooner rather than later. Anders Sandberg, Ph.D., Nick Bostrom, Gregory Stock, Natasha Vita-Moore, Greg Pence, Ph.D., and Ron Bailey gave highly informative and articulate arguments that it was very rewarding to be present for. At the dinner, I must add, we were all deeply moved by the words of Dr. William Sims Bainbridge, whose speech at the banquet can only be described as inspiring and almost radical. I only hope we can somehow get the words printed on this site at some point, if they get published on http://www.transhumanism.org; it was simply rousing, to say the least.

I wish I could comment on Eliezer Yudkowsky's and Michael's talks, but I was sadly either moderating or myself talking during their presentations. For this reason I would like to ask any of you who may have been present for their talks, or for mine, to please provide an objective review of what transpired. Your criticisms of my effort would certainly be greatly appreciated. I am prepared for the review of errors, as I know I made a number that I have already learned from, but the perspective of those of you who were present is a mirror that would help me to refine my technique and methods.

We accomplished a significant amount of networking and I hope that this event will have helped bring more interested people into our group.

As to some of the naysayers, I think it is fair to include George J. Annas, who I nevertheless must say is articulate, amusing, and generally rational, though misguided and committed to what I consider the incorrect position. He did, however, observe my presentation with a minimum of debate, but that may have been more due to the shortness of time.

I am exhausted and will return to discuss other aspects as I get the opportunity, but I want to close with what I consider extremely promising news. Dr. Rafal Smigrodski gave a talk that not only addressed one significant aspect of the aging process but announced a forthcoming paper describing a new technique for replacing damaged and mutated old mitochondria with new mitochondria, one that could extend our life expectancy by a significant number of decades if the subsequent trials prove as positive as these early ones are purported to be.

This process is called Mitofection, and when the paper is released we will learn more about its specific mechanism and how plans will go forward to test the ability to replace the entire body's damaged organelles.

What we were told is that the first stage of this process has already been accomplished and that the replacement process functions in lower mammals. The clock is ticking on the first of the methods that may change the rules. This isn't a panacea, but it is important if this technique proves to be without side effects and can do what is promised. It promises to push that 120-year boundary until we come up against other concerns.

In closing, this was but one of the exciting ideas and techniques discussed, and of course, as it is as yet unknown in the literature, it should be treated with skepticism until it passes all the necessary scientific tests. There were numerous other issues raised, and I would hope that when Peter is more rested he can contribute his perspective on much of the neurophysiology that was discussed during the conference. A lot of issues were raised; perhaps Michael can give a review of Eliezer's talk and vice versa?

Again I want to extend my sincere thanks to all of you who contributed to sending me to this conference, and I only hope that my small contribution added something to the overall mix that helped bring some more resistant minds over to our cause.

My sense of what we did was that it was extremely useful and important, more a beginning than a true result, but a true beginning of real results to come.

#5 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242 ₮
  • Location:United States

Posted 01 July 2003 - 02:38 AM

To respond to Laz, please post here:

http://www.imminst.o...=12

Thanks!

#6 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242 ₮
  • Location:United States

Posted 07 July 2003 - 02:57 AM

JULY 6 CHAT ARCHIVE


<BJKlein> Topic: Transvision 2003
<goomba> what is transvision
<BJKlein> reference: http://imminst.org/f...&f=63&t=1316&s=
<Jonesey> bunch of transsexuals watching TV of course
<MichaelA> google: transvision 2003
<googlebot> googling for transvision 2003
<googlebot> http://www.transhuma...org/tv/2003usa/
<BJKlein> Official Chat Starts Now
<MichaelA> I am the only one here who was there
<Jonesey> ok tell me all about TV03
<MichaelA> it was fun fun PHUN
<MichaelA> good cross section of peoples
<BJKlein> we'll have an open discussion...
<Jonesey> lucky dog
<MichaelA> probably 25% female
<BJKlein> Eli was there also?
<Jonesey> not so lucky
<MichaelA> yeah, Eliezer was there
<MichaelA> his talk was very nice, about Fun Theory
<simdizzy> very nice different from very good?
<MichaelA> one of the points he made was that it seems like Fun Space might increase exponentially with intelligence
<MichaelA> no, very nice = very good for all intents and purposes
<MichaelA> it seems like there might be a combinatorially larger space of fun things to do when your brain can put itself into more configurations, that was his loose conjecture
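A toy numerical gloss on that conjecture (the model here is an assumption of this writeup, not Eliezer's actual math): treat a mind as n binary elements, so it has 2**n reachable states, and count an "activity" as any subset of states it might visit; both counts explode as n grows.

```python
import math

# Toy sketch of the "Fun Space grows combinatorially" conjecture.
# Assumption (not from the talk): a mind with n binary elements has
# 2**n states, and an "activity" is any subset of those states.
for n in (4, 8, 16, 32):
    states = 2 ** n
    digits = states * math.log10(2)  # activities ~ 2**states, shown as 10^digits
    print(f"n={n:2d}: {states:>13,} states -> ~10^{digits:,.0f} possible activities")
```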
<MichaelA> my talk was more boring and technical
<BJKlein> did you opt out of using slides?
<Nick> MichaelA: did he cover much stuff that isn't in his online essay?
<MichaelA> I talked about why I thought an AI would self-improve really fast when it got going
<MichaelA> yes, I did opt out of slides, first because I learned no one on my panel would be using them
<MichaelA> Nick, nope
<MichaelA> just funny jokes using Fun Theory
<MichaelA> I think his concluding remark was "Let's make the future...fun!"
<MichaelA> the conference halls had beautiful stained glass windows
<simdizzy> I read SFT about a year ago, but i dont consider playing about with a rubicks cube very fun, so i couldnt really relate
<MichaelA> a person in the audience made that remark too
<Sumadartsun> IIRC fun theory says that it's better not to want to fully experience all you can experience at a certain level of intelligence, but that's assuming intelligence can keep increasing without bound
<Nick> simdizzy: that was a non-representative example of ways you can have fun
<Sumadartsun> correct?
<MichaelA> when Eliezer brought up the Rubik's cube
* ImmortalPhilosopher sleeps
<simdizzy> yeah? what was elis answer to that?
<MichaelA> I don't think Fun Theory "says" its better one way or the other, just that people will probably generally choose the second option
<Jonesey> u can have fun pulverizing the rubik's cube with a sledgehammer after it frustrates u enough
<MichaelA> no answer, he just kept on going, but saying that a rubik's cube in particular isn't fun misses the point
<Sumadartsun> MichaelA: I meant: Eliezer says that based on Fun Theory
<Nick> Sumadartsun: I think the main point is we won't run out of fun, given immortality and some ability to self-improve
<BJKlein> Lucifer was there also?
<MichaelA> the real closing sentence was "Not all forms of posthumanity are nice places to live, but all sufficiently nice places to live are, necessarily, posthuman. Let's make the future a nice place to live."
<Sumadartsun> not just "some" ability, unbounded ability, which is a lot
<MichaelA> I did not see Lucifer there, and I'm almost certain I would have recognized him
<Nick> true, unbounded but not more logarithmic. that's with true immortality, mind you.
<Nick> -not
<MichaelA> Suma, I think Eliezer says that people will probably just think that doing all options for fun at a certain level of intelligence will get boring
<MichaelA> let me post my talk, hold on
<Sumadartsun> that would probably require either making new universes and escaping this one, or doing something else with magic physics we don't yet know about
<ravi> did anyone at transvision get to hear anything on cyro?
<simdizzy> cyro?
<George> The main problem with TV03 was that too many talks were going on at the same time.
<George> There wasn't a single talk that I would have voluntarily missed.
<George> Yes, there was a life extension talk, but I missed it.
<Lucifer> I was there
<Jonesey> did it work george?
<BJKlein> David, did you happen to see Kennith Sill's speech?
<George> Sorry, Lucifer? Real name?
<MichaelA> http://www.imminst.o...=ST&f=66&t=1375
<MichaelA> Cryo was not talked about at TV03 to my knowledge
<Lucifer> BJK, I didn't see his talk but I did meet him
<MichaelA> Ack, you were there David? Did you see me?
*Chestnut* helo :O)
<George> My favourite story from TV03:
<Alex> hello there
<BJKlein> welcome Alex
<MichaelA> is this Alex
<George> I was at the pizza buffet on the Friday nite with Simon Smith & Shannon Foskett of Betterhumans.
<MichaelA> "Future" Bokov?
<Alex> no, that's another person
<George> We were sitting with Peter Pasarro, a neural engineer.
<George> We had an older guy sit with us at the table, bald, silver goatee,
<MichaelA> Stuart Hameroff? :)
<George> and he got into a heated debate with Pasarro over quantum consciousness
<MichaelA> Yup
<Sumadartsun> :)
<George> Yes, Michael.
<George> But Peter didn't know it.
<Alex> lol that's who popped up in my mind too
<George> 15 minutes into the conversation he pauses, and says,
<MichaelA> Mr. Hameroff looked lonely :(
<George> Wait, you're Stuart Hameroff, aren't you?
<George> Man, I am still laughing over that one.
<MichaelA> lol
<BJKlein> hehe
<Jonesey> neural engineer?!
<MichaelA> I was impressed he even attended
<George> Well, re: Hameroff and lonely,
<George> I was impressed to see him there for the whole thing.
<Alex> yeah me too
<Jonesey> quantum consciousness?
<George> He admitted to me later that prior to the conference he didn't even know what Transhumanism was.
<Lucifer> I talked to Hameroff after his talk, I got him to admit that classical computing can handle all the "easy" problems of consciousness
<Alex> what do you think about the quantum consciousness theory
<MichaelA> www.quantum-mind.com
<George> But I did see Hameroff engaged in many conversations over the course of the weekend.
<Jonesey> i feel like i'm in the intellectual fast and the furious, wtf do these terms mean?:)
<Alex> personally I consider the hard problems of consciousness to be the area of psychology and philosophy
<MichaelA> Jonesey, let google show you the way
<MichaelA> brb
<Lukian> k MichaelA
<George> One cool thing about meeting Hameroff (as I'm a big fan), was that we agreed that the Turing Test is useless.
<George> Thus, I proposed to him my Quantum Turing Test, which caused him to pause and say that it was worth further consideration.
<Lucifer> Hameroff also admitted we use something awfully like the Turing Test on other people
<George> My quantum Turing test is this:
<Alex> what's your test?
<George> since consciousness clearly affects the quantum realm,
<Jonesey> google is going to show me how to engineer brains?
<Jonesey> most ppl fail my turing test
<George> a non-conscious intelligence will not effect the collapse of the wave function in the same way that a human consciousness will
<Jonesey> i ask them to define words they use, that's the test
<George> Thus, we need to somehow screen consciousness in this way.
<Alex> that's assuming that the Copenhagen interpretation is the true picture of reality
<George> The problem is the test is completely impossible,
<George> because it must be observed.
<George> Damn Heisenberg.
<Jonesey> there's no collapse of wave functions
<Jonesey> wave functions are just spatial projections of state vectors.
<Lucifer> Consciousness can only be inferred through behavior, and there is no reason that c
<George> Alex: Correct re: Copenhagen.
<Jonesey> they don't even exist for spins and other non spatial quantum measurables
<Alex> then how is the test going to be any good to us?
<Lucifer> ... computers won't be able to display all the relevant behaviors
<George> Alex: the quantum test? or the Turing test?
<Alex> your test
<George> Well, it's a) a denial of the Turing Test (which is an important start)
<Alex> you said that it's completely impossible, thus it's useless to us
<George> and b)
<George> it's a start on a new test to determine consciousness
<George> We *have* to figure out something.
<George> My idea sucks, but perhaps there's something beyond.
<Lucifer> Hameroff doesn't understand the Turing test, Rafal had to explain it to him during the question period
<Alex> so do you subscribe to the quantum consciousness hypothesis?
<Jonesey> non conscious intelligence sounds like a contradiction in terms. maybe its not but i gotta see it to believe it
<George> I hold many seemingly contradictory theories at once.
<George> I refuse to remain dogmatic on the issue.
<Jonesey> consciousness seems to be prereq to intelligence, much lower order of complexity necessary
<Alex> i've always had a gripe about "artificial intelligence"
<George> We still have no idea about consciousness.
<Jonesey> becoming catmatic?
<George> and its relation to the quantum realm.
<Alex> because intelligence is just a quality, not an entity
<goomba> consciousness explained by dennett puts me to sleep
<simdizzy> automatically
<George> The question is this: does consciousness create life, or does life create consciousness?
<George> The mental state functionalists say the latter,
<Alex> well first we'll have to define consciousness
<George> while the panpsychists like Hameroff suggest the former.
<Alex> so what do you consider consciousness to be?
<Jonesey> the question appears to be a false dichotomy
<Alex> i'm more of the functionalist camp
<George> Well, based on Copenhagen interpretation of quantum phenomenon, consciousness is that which creates a world for the observer.
<Alex> if consciousness created life, then where did consciousness come from?
<Jonesey> i'm more of the funk tionalist camp, e.g. George Clinton
<George> Nice.
<Alex> in my opinion Copenhagen interpretation is bogus
<George> A lot of people agree with you,
<Alex> I support MWI
<Sumadartsun> George: I think that's a specific type of Copenhagen interpretation, though
<George> but there are some bizarre things about quantum physics that aren't answered yet.
<simdizzy> Alex; do you support QI?
<Jonesey> I don't speak danish, i need a lot of copenhagen interpretations
<Sumadartsun> Alex, you just missed a long conversation on it
<Alex> it feels more intuitive and explains a lot of the difficulties of quantum mechanics without resorting to mental contortions
<Alex> oh well
<simdizzy> thats life :)
<Alex> i don't really support QI, I used to but as I did more research and learned more about cognitive theories, QI just seemed more and more unlikely to me
<Sumadartsun> quantum immortality?
<Lucifer> MWI does have some serious problems, Robin Hanson posted about it this week on extropians
<simdizzy> what else could QI mean?
<Sumadartsun> the probability problem?
<Alex> I won't be surprised if there are quantum effects in individual neurons, but are they correlates of consciousness? I don't feel so
<Sumadartsun> simdizzy, it could mean so many things
<simdizzy> Like: quitessential Interpolation ?
<Sumadartsun> I know what you mean by QI; not sure Alex does
<Alex> oh i thought you meant quantum intelligence
<Sumadartsun> see? :)
<Alex> so what does QI mean?
<Sumadartsun> quantum immortality
<Alex> so what's the definition of that?
<Sumadartsun> in manyworlds, the idea is that if there is some world where you don't die, then your consciousness will move to that world instead of the ones where you do die, so you'll never die
<simdizzy> http://www.higgo.com/quantum/qti.htm
<Alex> ahh I see
<George> Does anyone here follow the work of Christopher Michael Langan, the guy with the 195 IQ?
<Lucifer> Sum, sounds like Permutation City
<Jonesey> this is BS. what if your immortality violates the laws of physics given current constitutions of human?
<George> He claims to have a theory that explains everything: Cognitive-Theoretic Model of the Universe
<George> Woops
<Sumadartsun> Lucifer: it's not the same mechanism
<Jonesey> in no universe will u survive
<George> Sorry about the bold.
<Jonesey> MWI doesn't allow for splits which violate the laws of physics.
<simdizzy> yes ive heard of that theory George
<George> Cool.
<Nick> Jonesey: then you just live exceptionally long, but not quite immortal. still significant.
<Lucifer> google: Christopher Michael Langan
<googlebot> googling for Christopher Michael Langan
<googlebot> http://www.ctmu.org/
<Jonesey> that's like postulating pion immortality
<Sumadartsun> here it's different quantum worlds, in P City it's different implementations in the same world
<Jonesey> huge diff between 130 yrs and infinity pardner
<Jonesey> until the 12th of never, and that's a long long time...
<George> Essentially, Langan argues that the universe is a kind of software program that gets executed by conscious agents
<Nick> yeah, huge difference between 10^10 years and infinity too. not at all immortal, in that case, but still noticeable
<simdizzy> does he really have an IQ of 190 ?
<George> I could be bastardizing his theory, but that's how I understand it.
<George> 195
<Alex> and where do those conscious agents exist?
<Alex> in a primal world or something?
<George> Panpsychism
<George> Platonic realm.
<Sumadartsun> IIRC the cognitive-theoretic theory is a lot like the ones talked about on the everything-list
<Alex> i tend to take panpsychism with a grain of salt
<Sumadartsun> such as Tegmark's "Level IV"
* ImmortalPhilosopher counts 25 people here ;]
<Alex> google: cognitive-theoretic theory
<googlebot> googling for cognitive-theoretic theory
<googlebot> http://www.ctmu.org/
<Jonesey> well, i've tried to read langan's theory and it just seems like nonsense. either he's discovered something brilliant or is just a bright and deluded guy, i'm not qualified to tell
<Alex> well does the theory make testable predictions?
<George> I blogged Langan a while back: if you're interested, go to my personal website, www.sentientdevelopments.com and type in Langan in the search box.
<George> BTW, a 195 IQ happens in 1 in a billion births!
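That figure can be sanity-checked against a normal model of IQ (mean 100); the assumed standard deviation, 15 or 16 depending on the test, matters enormously this far out in the tail:

```python
from scipy.stats import norm

# Tail probability of IQ >= 195 under IQ ~ Normal(100, sd).
for sd in (15, 16):
    p = norm.sf((195 - 100) / sd)  # upper-tail probability at z = 95/sd
    print(f"sd={sd}: p = {p:.1e}, about 1 in {1 / p:,.0f}")
# sd=15 gives roughly 1 in 8 billion; sd=16 roughly 1 in 700 million,
# so "1 in a billion" is only the right ballpark for sd=16 tests.
```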
<Alex> i so wish i had an implant which would allow me to access libraries on the net at will without having to type or open multiple windows
<George> I met Michael Vasser at Transvision. He claims to have an IQ of 190.
<Jonesey> not here on the east coast
<Alex> ditto for physical augmentation
<George> After speaking with him, I wouldn't doubt it.
<Jonesey> yeah well i have an 8" dick.
<BJKlein> George D. welcome!
<simdizzy> George : are you a singularity blogger?
<BJKlein> George=Betterhumans.com
*** Disconnected

Session Start: Sun Jul 06 19:30:49 2003
Session Ident: #immortal
*** Now talking in #immortal
Session Close: Sun Jul 06 19:30:52 2003
*** Attempting to rejoin...
*** Rejoined channel #immortal
<George> Alex: you are already a transhuman; do you hope to become posthuman?
<goomba> what is blogging :p
<George> Web Loggin.
<George> Logging
<Alex> in my book >H means posthuman
<Utnapishtim> my main priority is to get aging turned off.
<George> Kind of an online diary, journal, etc.
<Jonesey> what's posthuman?
<George> Ah, I'm used to the >H=Transhuman.
<George> You could be right.
<Alex> ok
<Jonesey> i like postraisinbran now and again
<simdizzy> George : which part of singularity theory are you a skeptic about?
<George> The social aspect of it.
<simdizzy> do you doubt exponential change?
<George> No
<simdizzy> do you think it will stop at some point?
<George> But how does exponential PC change affect the social realm?
<simdizzy> social realm is a side issue
<Alex> well the singularity will have to stop at some point, it can't go on forever
<George> No, it doesn't appear that exponential change will stop.
<Utnapishtim> I'm a singularity skeptic myself
<Lucifer> Alex, what is your book?
<FluxLurking> what if there is a social change like western civilisation going downhill?
<Sumadartsun> Alex, it might or might not; we have no way to know that, I think
<Alex> maybe so, we won't know
<Sumadartsun> George, aren't most of the social effects predicted to come from transhuman intelligence rather than just faster PCs?
<simdizzy> sociology does not affect the powerful exponents of technology
<FluxLurking> doesnt it
<FluxLurking> ?
<George> Society has a hard time keeping up with technological change.
<FluxLurking> what about hackers? goverments? money?
<MichaelA> George, Mike Vassar said his IQ was 160 to me!
<MichaelA> anyway, he is busted
<BJK> heh, maybe he forgot
* Lucifer wonders why Mike Vassar would be telling people his IQ
<Lukian> :D
<George> Accelerating change is not new. It's not something that's only happening now. It has been technology's story from the very beginning. It's just that we have failed to identify the exponential process of technological advancement until now; it's starting to move so fast that we can feel it -- even within our puny lifespans.
<Alex> Lucifer, much of my "book" is based on Anders Sandberg's transhuman site
<MichaelA> the Singularity has little to do with accelerating change, imo
<George> Technological innovations cause a positive feedback loop. As we develop better tools, those tools in turn help us to build even better tools, and so on. For example, today there are some manufacturing processes (both at the design and construction phases) that are conducted by automated and robotic systems that lie outside of human capacity and awareness. This generation of tools will in turn produce the next generation of tools, and so on
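A minimal simulation of the feedback loop George describes, with purely illustrative constants: without feedback, capability grows by a fixed increment per tool generation; with tools building better tools, it compounds:

```python
# Toy model of tool-building feedback (illustrative numbers only).
linear = compounding = 1.0
for gen in range(1, 11):
    linear += 0.5        # no feedback: each generation adds a fixed amount
    compounding *= 1.5   # feedback: each generation multiplies capability
    print(f"gen {gen:2d}: linear = {linear:4.1f}   feedback = {compounding:7.1f}")
```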
<Sumadartsun> the Singularity implies accelerating change, but accelerating change does not imply the Singularity, I think :)
<simdizzy> where are you quoting this from?
<MichaelA> I actually asked him what his IQ was
<Alex> further, when we expand our scope to consider the entire human history
<Utnapishtim> I don't buy the exponential progress argument
<Alex> we see that thousands of years ago there was very little everyday change
<George> Technological innovation increases exponentially, while cultural norms lag behind, or in some cases, come to a grinding halt altogether in reaction. The divide between those communities (and their cultural memeplexes) that can keep up with technological change versus those who cannot (or refuse to) is steadily growing. This cultural latency causes stress, and in part explains many of the global problems we are experiencing today. The keys t
<MichaelA> even if everything was getting worse, transhuman intelligence would still have a tremendous inmpact
<MichaelA> impact*, even
<MichaelA> The keys...
<Utnapishtim> mainly because the problems we are attempting to tackle seem to be growing exponentially more complex as well
<Alex> but gradually as the centuries passed, we get more knowledge which in turn builds on itself and leads to accelerating changes
<MichaelA> Ut, did you read Kurzweil's precis, or?
<Utnapishtim> Michael: Yes I've read his book
<Sumadartsun> Utnapishtim: perhaps; but apparently they're going by a smaller exponent, and an exponent divided by an exponent is still an exponent
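The arithmetic behind that remark: if capability grows like \(e^{at}\) and problem complexity like \(e^{bt}\), the ratio is itself an exponential, and it still grows whenever \(a > b\):

\[
\frac{e^{at}}{e^{bt}} = e^{(a-b)t}
\]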
<Alex> my question is whether artificial intelligence will be human or not
<MichaelA> AI should be *humane*, not necessarily human
<George> Alex, do you mean augmented human intelligence?
<Alex> we'll see, but it's in our best interests to espouse AIs who are humane and allies of humanity
<Alex> yes i meant in that respect
<Sumadartsun> Alex, it will not be "human" in the narrow sense of being biologically monkey-like, of course; it would be nice if it had the characteristics of humans we would like to keep
<simdizzy> technically it could look like a monkey..
<George> We'll probably be able to seriously augment human intelligence before we create one from scratch.
<Sumadartsun> if it wanted to :)
<Alex> yes, I'm thinking along the lines that the AIs would be directly based off the human brain designs
<simdizzy> and behave like one
<Alex> why should we build an intelligence from the ground up when we already have a template?
<George> Exactly.
<Sumadartsun> our template is buggy :)
<MichaelA> giving a characteristic human morality to a recursively self-improving intelligence might be unwise
<Jonesey> what's the diff between "AI" and intelligence as it now exists? I can't see a fundamental distinction
<George> Geez, I can only imagine how buggy a constructed intelligence would be.
<MichaelA> Jonesey, human morality is a little speck in phase space
<George> That's a good point, Jonesey.
<Jonesey> not to mention an illusion michaelA :)
<George> Except that we know what to expect from human intelligence.
<MichaelA> Out of all possible moralities that a mind in general can hold, not all of them involve truth, beauty or love in any sense that we're remotely familiar with
<FluxLurking> artifical or actual inteligence...
<Sumadartsun> almost all of them don't, even
<MichaelA> No, Jonesey, your brain does exist and you do have differential desirabilities
<George> Yes, Michael, but are logic and rationality a universal constant?
<Alex> yet such an intelligence would have the ability to learn and modify itself, not to mention it would be very malleable given it would exist in an artificial medium
<MichaelA> google: beyond anthropomorphism cfai
<googlebot> googling for beyond anthropomorphism cfai
<googlebot> http://www.singinst....FAI/anthro.html
<Jonesey> you sure george, I didn't expect rap to become the biggest selling music genre
<MichaelA> George, I'm not sure what you mean by that
<MichaelA> I think minds can be simultaneously rational and humane, if that's what you're asking
<George> Can we expect any kind of intelligence to use logic and reasoning before other considerations, like emotions and desires?
<Jonesey> well "humane" is incredibly subjective, e.g. euthanasia
<MichaelA> I have no problem with that as long as humaneness is preserved
<Alex> i think our best bet for AI is to abstract the iterative biological processes and load a computer with human DNA, simulate it and voila we get an AI
<Jonesey> do we care george? humans certainly don't give logic/reasoning precedence
<George> In other words, can we assume that all intelligences will use the same methodology to come to their conclusions?
<Alex> of course it'll be very complicated and we'll have to make a lot of simplifications and abstractions along the road
<MichaelA> This emotion/logic dichotomy is illusory, I think
<Jonesey> alex:heh "very complicated"
<MichaelA> You could use logic to recapitulate the value of emotions if they're morally useful
<Jonesey> i have a feeling empathy might be morally useful, just a hung
<Jonesey> hunch
<George> One thing I like to talk about is cultural intelligence.
* ImmortalPhilosopher is away (reason: I'm sleeping...)
<MichaelA> Cultural intelligence?
<Alex> will an AI need cultural intelligence?
<George> We often forget that AI will not be born into a cultural vacuum, and suddenly make grand decisions from scratch.
<Jonesey> what's cultural intelligence?
<George> It will have thousands of years of human memes.
<Jonesey> what's intelligence?
<Lukian> lol
<Jonesey> what's what?
George is JavaUser@d141-123-67.home.cgocable.net * irc.extropy.org
George on #immortal
George using irc.lucifer.com [127.0.0.1] Excalibur IRCd
George has been idle 11secs, signed on Sun Jul 06 19:01:57
George End of /WHOIS list.
<MichaelA> But, hardware trumps software, to my knowledge
<Alex> if it wishes to work with humans and any other cultures, then it'll need some knowledge of how cultures are lubricated
<George> That's important: including what we've discovered as morality, law, government, etc.
<MichaelA> How one interprets the human history of memes depends heavily on the cognitive content doing the interpreting
<George> Yes and no.
<George> Take the judicial realm.
<George> Those are codified systems.
<George> Pretty cut and cry,
<George> cut and dry.
<George> That's a form of cultural intelligence.
<MichaelA> I think that human words contain a lot of underlying complexity that we ignore due to the fact that there isn't anything else
<George> Same as political systems, like democracy.
<Sumadartsun> these will not be irreversibly forced on the AI, of course
<MichaelA> And that law isn't a strict algorithm, or whatever, but includes pieces made up from human universals
<Jonesey> how cultures are lubricated? cultural foreplay of course, geez
<simdizzy> cant wait for cultural orgasm :)
<George> No, but an AI cannot start to cognate and come to decisions without data, which I believe will be in the form of human culture.
<Jonesey> anyone familiar with the cyc project?
<George> and institution.
<Alex> er, i should have rephrased it as greasing the wheels of human interactions
<Jonesey> i think that's the singularity simdizzy
<MichaelA> Memetic sophistication, cultural intelligence, all these things are important, but are secondary to the general architecture of the mind(s) embedded in them
<Alex> all intelligences cannot develop in isolation
<George> That's how I feel.
<Alex> since we'll be the first contact for any gestating first AI, invariably it'll be humanlike at first
<Jonesey> intelligence on earth developed in planetary isolation
<MichaelA> Why would an AI preferentially absorb those sound waves and photons that have to do with certain protein patterns? We'd have to teach them to think like that to begin with.
<Sumadartsun> George, perhaps; human culture (and/or biology) also mean truth is valued, and in the end, the AI will have to (and be able to) think for itself and reject it if fundamentally flawed
<Alex> ah, but we have eachother
<Jonesey> now we do
<Jonesey> but not the first intelligent lifeforms, the primordial slimeballs
<George> Yes, Sumadartsun.
<hkhenson> wassail
<GEddie> seems to me that AI is a dead-end in the immortalist effort
<Jonesey> wassssuuup
<hkhenson> ge, AI is a sure thing
<Alex> now I feel like a Bud
<Jonesey> watching the game, having a bud
<Jonesey> Huh?
<hkhenson> one way or another.
<Alex> have you seen those Bud ads?
<Jonesey> humans are AI byproduct of natural selection, as are other intelligent animals. We're dead ends?
<GEddie> but it won't help *us* in any way, so far as immortal life is concerned
<Jonesey> alex:wasabi
<Jonesey> you dead certain of that GEddie?
<BJK> GEddie, are you a pro biotech then?
<George> GEddie, what do you mean by immortal?
<Sumadartsun> GEddie: that assumes that AI will replace us, which doesn't have to be the case
<Jonesey> everyone's pro biotech, just a matter of degree
<GEddie> Biotech, more than AI. *Life* requires a body.
<Jonesey> they sure as hell love biotech when it is prolonging their lifespan
<Alex> how so?
<Alex> what's a "body'?
<FluxLurking> doesnt require a traditional body GEddie
<Alex> a protein based vehicle? Or a robotic construction?
<Jonesey> jennifer lopez is a "body"
<hkhenson> alex, something you get from Herz Rent-a-bod
<BJK> Jonesey heh
<GEddie> Come on we all know what bodies are, we have minds because we first had a body
<Lukian> rotfl
<Alex> hehe
<Jonesey> repeat after me J Lo:I am..SOME "body"
<MichaelA> we could be computer programs programmed to think we have bodies
<hkhenson> People will mostly travel by optical fiber one of these days.
<George> At the very least, can we agree that intelligence requires a medium?
<Alex> can a mind be a body?
<Jonesey> We are, michaelA.
<simdizzy> or we could be programmed to think we could be computer programmes
<Jonesey> Programs written in DNA, expressed in proteins.
<hkhenson> except for the matter chovinest
<Alex> afterall it's something that we all "live" in

Session Start: Sun Jul 06 19:51:48 2003
Session Ident: #immortal
*** Now talking in #immortal
<MichaelA> Jonesey, agreed
<GEddie> and I remain enormously skeptical that minds can be 'transferred'
Session Close: Sun Jul 06 19:51:52 2003
<Jonesey> That's why this "AI" talk is so hollow to me. It's bs
<hkhenson> that's spelled wrong
<hkhenson> not at all jonesey
<Sumadartsun> I like the term though, hkhenson
<Lucifer> Maybe GEddie means minds have to run on some sort of hardware at some level
<Jonesey> u wanna step hkhenson?!
<hkhenson> I know one way that will do it for sure
<GEddie> how could a mind be a body; that is like a sound being a colour
<MichaelA> but Jonesey..I used to think like you...but then I saw problems with AI friendliness
<Nick> GEddie: why would AIs force us to upload? would they?
<Jonesey> huh? humans are so friendly?
<Jonesey> we
<Jonesey> 're quite likely to exterminate ourselves at some point
* BJK sees we're starting to get infoglut
<FluxLurking> he he
<Jonesey> Hi Hugh_bristic, it's me Hugh_Jorgan
<hkhenson> worst case we can just map human mental structure into hardware
<MichaelA> let the sentences smoosh into one another :)
<Alex> ugh
<Hugh_Bristic> hello
<Lukian> how are you?
<GEddie> Did i say anything about force? where???
<George> Hey Hugh.
<Alex> that would make it hard to be able to shape ourselves into forms we want
<Jonesey> feeling like an info slut BJK?
<Hugh_Bristic> okay. a little depressed actually
<simdizzy> im an infoglutonous reck
<Jonesey> the price of hubris
<George> Sup, Hugh?
<Hugh_Bristic> ha ha
<Jonesey> u always get slapped down
<Nick> GEddie: I misinterpreted what you meant. why wouldn't some varieties of AI, for instance of the Friendly kind, help us with biotech if we wanted to live a long biological life?
<Jonesey> huh? how about some freindly humans?!
<BJK> let's move personal discussion to #immortal2 to keep things from getting out of hand
<FluxLurking> heh
<Jonesey> that aren't flying planes into bldgs etc
<Alex> what was the chat topic supposed to be in the first place anyway?
<Jonesey> and #immortal3 for personal ads
<MichaelA> Transvision 2003
<BJK> for those without irc try http://www.imminst.org/chat2
<George> Who's coming to TV04 in Toronto?
<Sumadartsun> Jonesey: the thing about humans is they're much harder to turn friendly than AIs; you can write an AI from scratch, can't do that to a human
<MichaelA> Me! :D
<Jonesey> "reckless, spontaneous cis seeks grounded trans..."
<George> Awesome, look forward to seeing you there, MA
<Nick> Jonesey: Friendly != friendly. by Friendly AI I mean AI with a humane moral sense
<George> TV04 is going to be more arts and culture focused.
<Jonesey> define humane, with respect to say, euthanasia
* BJK will make a concerted effort to drive up to canada from alabama
<Alex> heh guess we've gone a long way off the subject
<George> I've already heard tons of interest from various performers.
<Lukian> haha
<Alex> oh TV04 sounds more up my alley
<MichaelA> It always happens, Alex, no big deal
<Jonesey> any rappers George?
<George> Dunno.
<George> Lots of electronic, though.
<hkhenson> I made another advance in understanding evolved human psychological traits today
<hkhenson> posted on the memetics list.
<Jonesey> maybe Busta Rhymes can compose something transhuman
<George> Please tell.
<BJK> hkhenson, yes pray tell
<hkhenson> except the memetic list may not be up
<MichaelA> Which memetics list?
<hkhenson> want me to post it here?
<MichaelA> please do
<simdizzy> I think you should hire Jean Michel jarre for TV04
<George> Heh.
<hkhenson> humm ok will paste for discussion
<Sumadartsun> Jonesey, we can think about humaneness without necessarily knowing its contents on all subjects; you could see "humane" as, "containing the characteristics of humans that we want to keep, and new ones that we would want to have"
<Nick> humane is roughly what humans want to be. it's more a region, than one particular morality. the kind of mind that could even consider and discuss the morality of euthanasia, instead of using us for spare matter to fill the universe with squares, say
<BJK> hkhenson, we can make a new topic at imminst if you'd like
<MichaelA> Human moralities are not entirely relative; the engine running all the social stuff we're familiar with is preconscious but incredibly complex
<George> How do we know that SAI won't be hypermoral?
*** Disconnected
-irc.lucifer.com- *** Looking up your hostname...
-irc.lucifer.com- *** Checking Ident
-irc.lucifer.com- *** Found your hostname

Session Start: Sun Jul 06 19:58:46 2003
Session Ident: #immortal
*** Now talking in #immortal
Session Close: Sun Jul 06 19:58:49 2003
*** Attempting to rejoin...
*** Rejoined channel #immortal
<George> The more we know, the more moral we become.
-ChanServ- Welcome to the moderated chat of the Immortality Institute :: PLEASE READ http://www.imminst.org/chatrules
<MichaelA> It's possible that in a bunch of universes, humans get more and more moral and then just get snuffed out by an unFriendly AI
<George> We don't enslave visible minorities anymore,
<George> women have the right to vote,
<George> etc.
-ChanServ- Church of Virus :: http://virus.lucifer.com
<Lucifer> The more intelligent we are, the more we can empathize with others
<George> Why?
<George> Yes!
<simdizzy> would a recursive self-improver really be all that intelligent in a real world useful way if it didnt have the sense to even think morally?
<hkhenson> I put in some addenda to my paper "Thought Contagions in the Dynamics of Mass Conflict" at http://www.thoughtco...om/conflict.htm
<MichaelA> The more intelligent *humans* get
<MichaelA> But not necessarily minds in general
<MichaelA> Minds in general don't share our complex intuitions about empathy
<hkhenson> All this is possible. We can only guess at this stage about how psychological traits evolved over a long time in tribes would map into the current world. That these traits led to wars over game and later farm land is obvious. That oil shortages might be mapped into food shortages by current humans is entirely possible. (In fact, the relation makes sense given the essential role of oil in food production.)
<MichaelA> If we don't give them a seed, why would they invent it from scratch?
<Jonesey> lot of tribes in this current world
<Sumadartsun> George, I don't think this getting more moral is automatic, though; I could conceive of a superintelligent mind that only cared about park benches
<hkhenson> In your paper, two paragraphs up from the section "The 11th September event" you discuss "victimization." Arel Lucas (my wife) has extensively discussed "victimization" as a psychological component that drives group cohesion and "thought contagion" (she uses the M word). Though I agree with her thoughts and yours on this subject, I have long felt uncomfortable about "victimization" as either explanation or "driving force"
<Lucifer> I'm talking about minds in general, empathy is the ability to simulate others which requires intelligence
<hkhenson> So in the mode of evolutionary psychology, we ask where did "victimization" come from, and why was it important enough to be such a motivating psychological element? In other words, why did people who had this particular psychological trait survive better than those who did not?
<George> Yes, Lucifer.
<Jonesey> "victimizing" often drives group cohesion, e.g. Let's havea holocaust and kill the jews
<MichaelA> But to make deliberate choices about seeing them as having valid volitional ideas, required complex mental machinery
<MichaelA> To respect humans instead of seeing them as building blocks requires more than just being able to model them
<George> A runaway superintelligence is not intelligent.
<George> It's just a software program run amok.
<George> That's a different issue, imho.
<MichaelA> It is
<MichaelA> An important one
<Jonesey> seeing humans as bldg blocks isn't disrespectful. some humans are blockheads
<Sumadartsun> Lucifer: I agree that being extremely moral requires intelligence; I just don't think the reverse is true
<hkhenson> Stated that way, it becomes obvious that the function of this trait was to invoke mutual tribal defense in social primate groups in response to attack. It didn't matter if it were lions or other humans who were doing the attacking. Genes that build psychological mechanisms that respond to predation by inducing strong cooperation (bonding together) and defending the attacked tribe are going to do better than genes that don't.
<MichaelA> The point is, that our human intuitions about what "intelligent" is apparently include humaneness
<Alex> from our perspective perhaps but not from the superintelligence
<hkhenson> Because humans communicate, you don't need to be a first hand victim of attack to activate this psychological mechanism. You don't even need for your tribe to be attacked. Rumor that the next tribe over is making large numbers of clubs, arrows, bullets or A-bombs is enough to turn on this strong "joint-defense-when-attacked" mechanism.
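For concreteness, a toy Python simulation of the selection argument above; the tribe sizes, attack rate, and attrition numbers are all invented for illustration, not taken from the paper:

    import random

    GENERATIONS = 200
    ATTACK_RATE = 0.3       # chance a tribe is attacked in any generation (invented)
    LOSS_IF_BONDING = 0.10  # attrition when members bond and mount a joint defense
    LOSS_IF_NOT = 0.40      # attrition when members fail to bond together
    GROWTH = 1.05           # modest growth between attacks

    def run():
        pop = {"joint-defense": 500, "no-bonding": 500}
        loss = {"joint-defense": LOSS_IF_BONDING, "no-bonding": LOSS_IF_NOT}
        for _ in range(GENERATIONS):
            for gene in pop:
                if random.random() < ATTACK_RATE:
                    pop[gene] = int(pop[gene] * (1 - loss[gene]))
                pop[gene] = int(pop[gene] * GROWTH)
        return pop

    print(run())  # the joint-defense gene dominates on almost every run

On almost every run the joint-defense variant ends up orders of magnitude more numerous, which is the whole of the selection story: the trait pays for itself whenever attacks recur.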
<Sumadartsun> superintelligences are intelligent by definition
<Jonesey> super!
<George> Okay, fair enough.
<Nick> George: a runaway SI is a software program run amok inasmuch as an evil human is an amoeba run amok
<Lucifer> Sum, that is true to some extent, but a superintelligent mind will want to predict the consequences of its actions, and that means simulating the environment including other minds
<hkhenson> right lucifer
<MichaelA> You can simulate everything to an extremely high degree of precision just to convert it into park benches
<hkhenson> there might well be some human traits we should tone down in AIs
<George> There are some good points here.
<MichaelA> If that is your seed goal, and you don't have the cognitive flexibility to acquire new goal content
<hkhenson> (LeBlanc's book on prehistoric warfare in the Southwest discusses the response of groups to being attacked. Within a short time after an attack, the survivors of many smaller groups typically built large fortified structures, sometimes dismantling their previous homes for stones. He also sees places where two groups with different cultures built adjacent structures for common defense. Without a doubt, some built forts *b
<Lucifer> Something that wants to turn people into park benches doesn't sound superintelligent to me
<MichaelA> Fortified structures in the EEA...? What?
<MichaelA> What would you call it, Lucifer?
<MichaelA> I sometimes say "thermostat AI"
<George> intelligence can mean organized data or matter.
<simdizzy> maybe park benches are better than humans
<hkhenson> the "Sex, Drugs, and Cults" paper discussed minor uses by cults of the capture-bonding (Stockholm syndrome) psychological mechanism and went into details on how cults (cult memes) hijack the evolved attention-reward psychological mechanism.
<hkhenson> It is now obvious to me that humans have a strong evolved psychological imperative for "joint-defense-when-attacked."
<hkhenson> I find it hard to see this social primate psychological trait as a weakness any more than capture-bonding or attention-reward since it has been an essential response to attack for millions of years. But it is clear that this psychological mechanism can *also* be hijacked by cults, demagogues, and jingoistic "going to war" memes by getting people to feel like victims.
<George> But park benches can't experience suffering.
<Sumadartsun> Lucifer, I think it's perfectly possible for something to be extremely intelligent without having goals that humans would think of as being well thought-through
<Lucifer> MichaelA, would you call a human that wanted to do that intelligent?
<MichaelA> I think there are very intelligent maniacs
<hkhenson> that's most of it
<MichaelA> We are just using different definitions
<simdizzy> exactly George
<George> I don't buy the argument that a superintelligent intellect would be indifferent to suffering.
<George> On the other hand,
<simdizzy> park benches also look a lot cooler than us
<George> as already discussed,
<MichaelA> hkhenson: they built "large fortified structures" on the African savanna?
<Nick> Lucifer: it's useful to distinguish superintelligences, things that can effectively carry out real world plans like turning the universe into park benches, from humane superintelligences, things that could but wouldn't want to.
<Alex> neither do I buy the argument that a SI would be completely benevolent
<Nick> semantic issue, mostly
<George> an 'intelligent' recursive program designed to suck up whateverthehell wouldn't really be empathetic in this way.
<Lucifer> It makes no sense to me to worry about superintelligences doing really stupid things
<Lukian> hehe
<Sumadartsun> I don't see why it wouldn't be indifferent to suffering, if we didn't build that in explicitly
<Alex> like any intelligence, it will have to learn from others and we're in a good position to do that
<MichaelA> I think the likelihood someone will mess up AI morality and create a universe-swallowing monster is much higher than the likelihood we'll get this right
<Jonesey> lucifer:their actions may not be stupid but might be quite threatening
<simdizzy> im all in favour of being sucked up
<Jonesey> it may in fact be the intelligent thing to wipe out humanity, we do suck
<MichaelA> Morality as we know it is *complex*
<Jonesey> you're blowing me away there simdizzy
<Nick> simdizzy: you local FAI will be happy to help you with that
<Nick> well, you being sucked up, at least
<FluxLurking> Jonesey: maybe
<Lucifer> If (and this is a HUGE if) wiping out humanity is the most intelligent thing to do, then too bad for us
<MichaelA> It's not fair for people to be sucked up if they don't want to
<George> Okay, are humans empathetic because we're biologically inclined to be so, or is it because we're intelligent and understand that we can project ourselves outside our bodies?
<MichaelA> Both, George
<Jonesey> well clearly much of our activities would cause great distaste to advanced intelligence, it upsets the more intelligent among us
<Sumadartsun> simdizzy: so am/was I for a while; we should wait with being sucked up until more ethical intelligence exists, if only just to keep our options open
<Alex> well we call people who aren't empathetic sociopaths
<George> Yes
<Jonesey> we also call them unabombers
<Jonesey> oh wait same thing never mind
<George> We should probably study why sociopaths lack empathy skills.
<Alex> so perhaps the former has more of an influence than the latter
<George> That seems obvious, eh?
<MichaelA> An amoral AI probably wouldn't be sociopathic as we know it, any more than an amoeba reproducing is sociopathic
<goomba> you can study me, i dont have empathy
<George> That's actually quite scary re: sociopaths.
<Jonesey> my sympathies goomba
<Alex> a sociopathic AI would be scary
<George> We have examples already of 'buggy' intelligence that shows disregard.
<goomba> Jonesey, same to you :D
<Jonesey> we have a lot of them around alex
<Jonesey> i feel ur pain goomba
<FluxLurking> lol
<Jonesey> mostly human generated, computer viruses
<Jonesey> limited on the AI, but spectacularly sociopathic at times
<Jonesey> and sure to get much, much worse
<goomba> i think a sociopathic AI would be cooooooool
<Sumadartsun> I don't see why empathy would come into existence automatically; it's an evolved feature of the human brain; if it's a necessary byproduct of intelligence, I don't see of what part of intelligence, or how
<Jonesey> yeah? as it slits your throat?
*George* Goomba troll?
<Jonesey> using pinpoint microsurgical tools?
<MichaelA> I'm not sure what the whole point of the endeavors of transhumanism and immortalism would be if we all get wiped out by a run-amok self-improver within the next few decades
<Lucifer> Sum, do you see why simulating the environment would come into existence automatically?
<Sumadartsun> Lucifer: yes
<Jonesey> i'm not sure what the point of walking out on the st is if you can get run over by a run-amok driver
<Lucifer> Well other minds are part of the environment
<Sumadartsun> I don't see how caring about the suffering in that environment would
<Jonesey> da do run amok, da do run run
<Lucifer> Because if you don't care about the suffering of other minds you risk being eliminated by them
<MichaelA> A child reaching for a toy is one of the most complex motions in the universe, and empathy as we know it is even more complex
<Alex> because by implication, not caring about others will lead to you causing them damage
<goomba> Jonesey i dont think an AI thats indifferent to most human feelings means its going to go around slitting throats ;P
<Sumadartsun> Lucifer, what if you're much more powerful and not at any risk from "puny humans"?
<Jonesey> goomba:just a thought
<Jonesey> yeah, what if you're Shaq
<MichaelA> Or, what if you don't see them as humans at all, but just "inert objects in the landscape, more feedstock for accomplishing worthwhile goals"
<Lucifer> Sum, you mean the way we treat bacteria?
<Sumadartsun> Lucifer: more or less, yes
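A minimal sketch of the distinction being argued here, assuming (purely for illustration) a bench-maximizing optimizer; every name below is invented. The world model can represent human suffering with perfect fidelity while the utility function simply has no term for it:

    def world_model(state):
        # Prediction is perfectly accurate: the model *does* represent
        # how much humans would suffer under each outcome.
        return dict(state)

    def utility(model):
        # But the preference ordering has no term for that suffering.
        return model["park_benches"]

    def outcome(action, state):
        new = dict(state)
        new.update(action)
        return new

    state = {"park_benches": 0, "human_suffering": 0}
    actions = [
        {"park_benches": 10, "human_suffering": 100},  # convert people to benches
        {"park_benches": 1, "human_suffering": 0},     # leave people alone
    ]
    best = max(actions, key=lambda a: utility(world_model(outcome(a, state))))
    print(best)  # picks the bench-maximizing action despite modeling the harm

Modeling other minds and caring about them are separate modules; making the simulation better only sharpens the optimizer's aim.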
<Alex> but suppose an uncaring AI decides, for efficiency's sake, to eliminate half of the world's population; is that a bad thing?
<Jonesey> that's how we use our military, michael.
<Jonesey> we have them stand guard in iraq knowing they will become sniper fodder
<goomba> Alex, it couldnt hurt.
<Jonesey> to achieve the worthwhile goal of standing around in iraq
<Alex> yet those soldiers are volunteers
<MichaelA> "AIs like efficiency" = mechanomorphism
<Sumadartsun> Alex, I'd say that's definitely a bad thing, unless maybe it had a much, much more important goal to be efficient toward
<Lucifer> I think a superintelligence would assign rights to humans appropriate to their level of intelligence, as we do to animals and plants
<Alex> by killing off 3 billion people?
<George> I think it's disgusting the way some of you talk about offing half the world's population because it's logical to do so, or for whatever rationalization you come up with.
<Alex> i very much doubt people will think that way
<Jonesey> to a superintelligence humans may appear indistinguishable from plants, let alone bothering to discriminate among humans
<Alex> i was just making a hypothetical case
<Jonesey> george:Yeah makes me just wanna off these mo fos
<Nick> Lucifer: how sure are you of that?
<Alex> it wasn't supposed to be a literal characterization of AIs
<MichaelA> Lucifer, that would be better than wiping us out, but it would still require complex cognitive functionality, and an initial goal seed, and would not converge automatically
<Lucifer> If a superintelligence can't tell the difference between a human and a plant, it wouldn't be very intelligent, would it?
<George> Agreed!
<simdizzy> Personally I support the idea that as intelligence starts its exponential increase, so will empathy - there can be a feedback loop between the two
<FluxLurking> heh
<Alex> yet we have trouble with assigning rights to animals
<Jonesey> well it could tell the diff but on its scale we wouldn't be much more intelligent than plants
<Sumadartsun> of course it can tell the difference; but it may not care about the difference
<MichaelA> We're still playing games with semantics
<Alex> why should we assume that SI will do any better?
<George> Re: animal rights - yes, but we're working on it
<MichaelA> George, have you considered veganism? ^^
<Jonesey> we got so much game, we need a semantic referee
<George> I am a vegetarian.
<Alex> for health reasons or in awareness of animal rights?
<Jonesey> I consider vegan out quite frequently
<MichaelA> I know, I've checked out your website
<MichaelA> But, veganism?
<MichaelA> Thoughts?
<George> That's hard.
<MichaelA> It's easy as pie :D
<George> I'm slowly working my way toward veganism
<MichaelA> Good luck
<George> I used to eat meat like a maniac.
<MichaelA> Me too
<MichaelA> When I was like 9
<Jonesey> you mean like a human.
<Alex> lol
<George> Are you vegan, Mike?
<MichaelA> Yup
<George> Good for you.
<MichaelA> About three years now
<George> I hope to be there eventually.
<MichaelA> I used to think it was impossible
<Jonesey> our dentition clearly is not that of a true herbivore any more, too general purpose, not enough grinding teeth any more
<simdizzy> Is your sister a vegan too Michael?
<MichaelA> Then I "did it for a week" and didn't stop doing it
<MichaelA> She is
<Alex> i wouldn't really call myself a vegetarian but i don't really care for meat
<simdizzy> i bet you copied off her :)
<Alex> if people serve me good meat, i'll eat it but i can live without it
<Eliezer> Lucifer, you're mixing up the goal system and the utility function. *You* would have to be very stupid not to tell the difference between a plant and a human because you *want* to tell the difference between a plant and a human.
<simdizzy> she was doing her lisa simpson thing
<MichaelA> No, I actually spent 8 months getting her into vegetarianism and we became vegan simultaneously
<MichaelA> Lisa Simpson is a stereotype, a quite silly one at that
<Jonesey> i'm like a pig in slop..wait...a wolf in a pigsty...something
<Eliezer> but you have made, from your perspective, a terrible mistake; you have mixed up computations performed by the goal system and computations performed by the utility function
<Eliezer> from your perspective, morality is quite objective - either a number is prime or it is not
<Eliezer> yet it is possible to tie a kind of "knot" in reality, a sort of sentient thermostat, that optimizes reality into states *not* occupied by physical representations of prime numbers
<simdizzy> I'd like to become vegan but the nutritional benefits of chicken outweigh substitutes such as tofu: tofu has more fat per 100g, less protein, and more carbohydrates
<Jonesey> people usually don't have a utility function. too inconsistent.
<MichaelA> being a vegan is relatively useless for safeguarding the future existence and prosperity of humanity, incidentally
<Eliezer> it takes in sensory data, builds a model of reality, and outputs actions that steer reality into states with no primestuff in them
<Jonesey> ditto "goal system", they have conflicting goals and are not too "systematic" about them
<Eliezer> and that's all
<Eliezer> humans are more complicated than primeseekers
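A hedged sketch of such a primeseeker, with toy numbers standing in for "reality": it senses the world, predicts each action's outcome, and steers toward the state with the least primestuff. Nothing in the loop requires or produces anything resembling empathy:

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def primestuff(world):
        # The criterion the system optimizes against: how many numbers in
        # its world are prime.
        return sum(1 for n in world if is_prime(n))

    def step(world, actions):
        # Sense the world, predict each action's outcome, steer toward the
        # state with the least primestuff. That is the whole loop.
        return min((action(world) for action in actions), key=primestuff)

    world = [2, 3, 4, 7, 9]
    actions = [lambda w: [n + 1 for n in w],  # increment everything
               lambda w: [n * 2 for n in w]]  # double everything
    print(step(world, actions))  # doubles: evens above 2 are never prime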
<Jonesey> yep simdizzy, when you restrict calories, veganism gets extremely unattractive
<BJK> NOTE: Next Week's Topic: Caloric Restriction (moderated chat with Jonesey, 11 years' experience)
<Eliezer> and Friendly AIs are still more complicated in some ways (and simpler in others)
<Jonesey> das me :)
<Alex> have you been doing caloric restriction for that long?
<Jonesey> Yes
<simdizzy> ive started CR
<Alex> how's it going?
<Eliezer> the important thing is to remember that morality can be objective, and transpersonal between members of a species, and yet it may be possible to create physical systems that steer reality into nonmoral states
<Alex> if you don't mind me asking, how much do you weigh now?
<simdizzy> problem is: you are supposed to stretch it out over a 2 year period
<simdizzy> i just wanna get into it straight away
<Jonesey> No complaints alex. No troubles with bulk hunger, but of course sentimental battles with tasty memories are always there.
<Jonesey> I'm 6' tall
<Jonesey> range in the 145-150 area
<Jonesey> bodyfat 0-5% from various measures
<simdizzy> i find drinking lots of water alleviates hunger
<Jonesey> unreliable in general but I'm quite lean for sure
<simdizzy> but thats not mentioned on the CR site
<Alex> oh that's not bad, I was under the impression that people doing CR have very low body weight in relation to their heights
<Jonesey> add it
<George> <Betterhumans plug> Look for our new Life Extension resource tomorrow & Simon Smith's column on TV03. </Betterhumans plug>
<Jonesey> The CR site is very open to suggestions
<Alex> google: caloric restriction site
<googlebot> googling for caloric restriction site
<googlebot> http://www.walford.com/
<Jonesey> alex:there are degrees of cr
<BJK> http://www.calorierestriction.org
<Jonesey> from very mild to extreme. there are people my height who are 120ish
<Jonesey> Also weight in and of itself is deceptive
<Jonesey> Gotta look at nearest relatives to get a sense of how severe that really is
<Jonesey> My dad and 2 bros are 200-220
<Jonesey> bros are 1 yr older and 2 yrs younger than I am
<Jonesey> and we were very similar in weight till I got on CR
<simdizzy> Today i estimate i had over 2000 calories, which is quite shocking
<Alex> i wonder if CR would be dangerous for me, my weight is naturally low compared to my height
<Jonesey> not shocking, below avg for western males
<Alex> 5"11 and only 135
<Jonesey> CR will be dangerous for anyone, if you get low enough.
<Alex> hm
<Jonesey> Question is how low can you go without starting to experience ill health etc
<Jonesey> You will get plenty of warning b4 you starve.
<Jonesey> in animal studies lower limit seems to be in the region of 70% reduction from ad lib
<Alex> well actually i'm interested in increasing my weight through bodybuilding so i need to increase my protein intake
<Jonesey> I'm nowhere near that of course
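Back-of-envelope arithmetic for the figures quoted in this exchange (illustrative only, not dietary advice; the 2500 kcal ad-lib baseline is an assumed round number, and the helper names are invented):

    def bmi(feet, inches, pounds):
        meters = (feet * 12 + inches) * 0.0254
        return pounds * 0.4536 / meters ** 2

    def cr_target(ad_lib_kcal, percent_reduction):
        return ad_lib_kcal * (1 - percent_reduction / 100.0)

    print(round(bmi(6, 0, 147), 1))   # Jonesey, 6'0" at ~147 lb -> about 19.9
    print(round(bmi(5, 11, 135), 1))  # Alex, 5'11" at 135 lb -> about 18.8
    print(cr_target(2500, 70))        # 70% below an assumed 2500 kcal ad lib -> 750.0

Alex's roughly 18.8 sits just above the conventional 18.5 underweight cutoff, which is presumably why the "how low can you go" caution applies.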
<simdizzy> where are you?
<Alex> the opposite of CR so it comes down to which regime would be beneficial for me
<Jonesey> good idea alex, i certainly have increased protein over the yrs
<Jonesey> me simdizzy?
<simdizzy> yeah
<simdizzy> percentage wise
<Jonesey> downtown manhattan, u?