  LongeCity
              Advocacy & Research for Unlimited Lifespans


Steve Omohundro, Ph.D. - Self-Aware Systems


4 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 09 June 2004 - 10:19 PM


Chat Topic: Steve Omohundro, Ph.D. - Self-Aware Systems
Founder of Self-Aware Systems, with the mission to "benefit humanity by developing intelligent technology that understands and improves itself," Steve joins ImmInst to discuss, among other topics, the advancement of self-organizing behavior in complex systems.

Chat Time: Sun. July 11 @ 8 PM Eastern Time [Time Zone Help]
Chat Room: http://www.imminst.org/chat (irc.lucifer.com port: 6667 #immortal)

Steve Omohundro

Stephen Omohundro has had a wide-ranging career as a scientist, university professor, author, software architect, and entrepreneur. He graduated Phi Beta Kappa from Stanford University with Honors and Distinction in Physics and with Distinction in Mathematics. He received a Ph.D. in Physics from the University of California at Berkeley and his thesis was published as the book Geometric Perturbation Theory in Physics.

His first company was Om Sonic Systems, which designed and built custom music synthesizers. At Thinking Machines Corporation, he co-developed Star Lisp, the programming language for the massively parallel Connection Machine. He was a computer science professor at the University of Illinois at Champaign/Urbana, where he co-founded the Center for Complex Systems Research, supervised 4 Masters theses and 2 Ph.D. theses, and was ranked as an excellent teacher. At Wolfram Research Inc., he wrote the three-dimensional graphics portion of Mathematica as one of the seven original developers.

At the International Computer Science Institute in Berkeley, he led an international team in developing the object-oriented programming language Sather (recently featured in O'Reilly's poster of the History of Programming Languages). He also developed a variety of novel neural network techniques and machine learning algorithms which led to systems for reading lips and learning grammars.

At the NEC Research Institute in Princeton, he worked on a variety of applications of artificial intelligence and co-authored a patent on the PicHunter image database retrieval system. While at these institutions, he served on 6 conference program committees and 2 journal editorial boards, gave many invited talks, and produced 48 scientific publications.

He founded Olo Software in Palo Alto to provide technology consulting to a variety of startup companies and research labs, including InterTrust Technologies, Xerox PARC, Fuji-Xerox PAL, Ask Jeeves Inc., VideoScribe, LinuxMatix, Video Memoirs, and Molecular Objects. Most recently, he founded Self-Aware Systems to develop a new kind of intelligent learning technology.

More: http://om3.home.att.net/bio.html

Homepage: http://om3.home.att.net/

#2 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 12 July 2004 - 01:55 AM

<Randolfe> Greetings. Isn't omohundro scheduled for the eight o'clock slot? Anyone read the charming quotes on his home page?
<omohundro> Sounds good. I've been reading past discussions and found them quite interesting. Especially Robin Hanson's.
<BJKlein> Hanson is quite impressive
<omohundro> Yes, he's always got a new and different take on things.
<Randolfe> BJK, is there any record or tracking done as to what parts of the sites get the most visitors?
<John_Ventureville> he made quite a splash in the media with his "terrorist futures" concept
<BJKlein> the topics have a view counter
<BJKlein> we also have a stats page: http://www.imminst.org/stats
<Randolfe> Thanks. I realized the views concept but I thought time was also important in measuring interest.
<John_Ventureville> good point
<BJKlein> hmm, not sure... the cycles would be interesting to measure
<Randolfe> I had a service from Microsoft which told me what countries each visitor came from, time spent, etc. I just couldn't figure out all the details in the numbers but countries of origin are very interesting.
* ChrisRovner has joined #immortal
<BJKlein> ya the stats page has such
* Eliezer has joined #immortal
* ChanServ sets mode: +o Eliezer
* BJKlein Official Chat starts
* Eliezer sets mode: -o Eliezer
<BJKlein> Steve Omohundro, Ph.D. - Self-Aware Systems Founder of Self-Aware Systems, with the mission to "benefit humanity by developing intelligent technology that understands and improves itself," Steve joins ImmInst to discuss, among other topics, the advancement of self-organizing behavior in complex systems.
* TimFreeman has joined #immortal
<BJKlein> see http://www.imminst.org for more
<TimFreeman> Hmm, the topic refers to a talk scheduled a week ago.
<BJKlein> welcome again, Steve...
<omohundro> Hello everyone! I'm excited about this kind of a forum for discussion.
* sh has joined #immortal
* TylerE has joined #immortal
<BJKlein> Your bio is quite impressive... what kicked off your interest in self-aware systems?
* ravi has joined #immortal
* ChanServ sets mode: +o Eliezer
<omohundro> I've been thinking about intelligence since I was a kid. Seems like I keep getting pulled back into trying to understand meaning at a deep level.
<Randolfe> I saw another ad today playing on the negative view of technology. "The robots were subservient. They were watching... and now," it implied, they were going to take over. Another sci-fi scare movie.
* Eliezer changes topic to 'Steve Omohundro - Sun July 11 @ 8PM Eastern - http://www.imminst.org/'
* Eliezer sets mode: -o Eliezer
<BJKlein> do you believe in an overarching 'god' ?
<BJKlein> or just complexity?
<TimFreeman> Marvin Minsky complains that not enough people try to understand what "meaning" is and how to represent knowledge in general.
<John_Ventureville> We are going to overwhelm Dr. Omohundro with comments and questions. lol
<BJKlein> it seems there is no meaning in life
<omohundro> Well, the term "god" is incredibly overloaded as is "believe"! I generally try to assign degrees of belief to various possibilities. One of Robin Hanson's papers upped my belief in the possibility that we're in a matrix-like simulation.
<BJKlein> with an infinite level of matrix simulations, perhaps?
<omohundro> It certainly does seem that we're in a very special time right now, not at all likely if you just select randomly.
<John_Ventureville> I agree
* goomba has joined #immortal
<BJKlein> thus you've heard of Singularity?
<TimFreeman> Minsky was worried about more prosaic kinds of meaning, like "what's an arch?". When BJKlein says "it seems there is no meaning in life", I would substitute significance for meaning.
<omohundro> I don't know about an infinite level of simulation, though once you start including models of yourself in a simulation, infinite regress lurks around every corner.
<Eliezer> Infinite regress means that you're phrasing the problem wrongly, solving it incompletely, or using a poor algorithm.
* taza0__ has quit IRC (Read error: Connection reset by peer)
<omohundro> Singularity? Absolutely, and I think we are quite close to it. Biochemistry, neuroscience, physics, cosmology, AI all seem to be on the verge of major new understandings, and technology seems quite close as well.
<BJKlein> should humans try to upgrade to keep in step with AI?
<omohundro> Hi Eliezer! Good to see you here! Sorry for not responding to your last email yet, I'm still pondering it.
<BJKlein> Eli and Steve... will either of you be at Transvision 04 Toronto?
<TimFreeman> When you said "understand meaning at a deep level", did you mean the significance of big things, or ordinary kinds of meaning like the meaning of the word "arch" or "chair"?
<John_Ventureville> Dr., what do you think of Eliezer's "friendly AI" theory? Do you think it is ultimately a life or death matter for humanity to include such concepts in our efforts to create self-improving A.I.?
<omohundro> I think a number of these new understandings introduce fundamental questions about what it is to be a human. Some will surely want to keep up with the AI's and in fact merge with them, others I think will want to retain their present nature.
<BJKlein> it seems if they refrain, they will die..
<BJKlein> eventually
<BJKlein> it seems we are destined to merge, all of us one day.. as we already are to some degree
<BJKlein> via the internet, etc
<John_Ventureville> BJ, would linking be a better word than merging?
<BJKlein> integrating.. perhaps
<John_Ventureville> I don't like the sound of "merging"
<omohundro> Tim: I meant ordinary meaning. Human concepts like "chair" have arbitrary characteristics to them. I've mostly been intrigued by even understanding precise concepts and how it is that we can do finite reasoning to achieve infinite conclusions. I think that is what drew me to physics originally. I couldn't understand how you could capture the richness of the physical world with a small number of equations.
* James_Swayze has joined #immortal
* James_Swayze is now known as FutureQ
<BJKlein> the 'laws' of the universe.. pretty crazy how stable they are
<BJKlein> why is that
* RJB has joined #immortal
<omohundro> John: I met Eliezer at the Foresight vision conference a few weeks ago and certainly feel that many of the concepts around friendly AI are very important to consider. I was aware that AI in the wrong hands could cause real problems; he convinced me that unintended consequences are very important to consider even in the best of circumstances.
<John_Ventureville> I'm very glad he stated his case and you took it to heart.
<Randolfe> Omohundro had an interesting quote from George Bernard Shaw saying we should see ourselves as being "a force of nature" and how true joy in life comes from "being used for a purpose recognized by yourself as a mighty one". Being part of the Transhumanist, Cloning and/or Immortality movements can fill such a role.
* BJKlein nods
<omohundro> On merging: I find it vaguely creepy. In Kurzweil's book he has a graduate student who he interviews through the transition and by the end she is the AI. From his perspective she's been enhanced and grown. From another perspective she's just dead. Figuring out how we retain our humanity will be one of the great challenges I think.
<John_Ventureville> I totally agree.
<BJKlein> i think it is one of degree...
<BJKlein> one is not either alive or dead.. but somewhat more alive or somewhat more dead.. to degrees
<Eliezer> We are parts of physics, which makes us as natural as anything else. And I don't know about true joy in life coming from wielding *yourself* in a purpose you chose as the mightiest available, but sometimes true joy in life isn't the most important thing you need to worry about.
<omohundro> Stable laws of the universe: I've been intrigued by the various anthropic ideas floating around now. One amusing one says that all possible mathematical systems exist and only the ones which can support us of course can be visible. It is most likely in a Bayesian sense that we are in a system that is near the simplest it could be and still support us.
<John_Ventureville> I see uploading as a perfect copy of me but not the *real me* because as Robert Ettinger would say "the self-circuit has been broken."
<FutureQ> Could I ever have an AI implanted that was designed to identify with my consciousness, an ultra ego so to speak, but that I maintain control over, and believing it is integral to me would not seek separation or harm to me/us?
<TimFreeman> Hmm, the concept of "retain our humanity" is even fuzzier than "chair" or "arch". What motivates you to be concerned about it?
<John_Ventureville> there are some parts of my humanity I don't want to retain! lol
<Eliezer> I usually speak of humaneness, where human is what we are, and humane is what, being human, we wish we were. Like, humanity renormalized under reflection.
<John_Ventureville> I see
<RJB> Why? --- a significant majority of us are still going to come out ahead of the people who lived in the shadow of Mt. Vesuvius in AD 79.
<omohundro> John: I have a friend who views clones as identical to himself to the point where he would happily shoot himself if his clone were to then receive a million dollars. I don't have that sense for myself at all.
<TimFreeman> I guess this typical primate politics crap one sees all the time is part of being human, but not part of being humane. I like that distinction.
<RJB> Is or is the clone not identical?
<Eliezer> A genetic clone obviously isn't you. No memories. Just your time-separated identical twin.
<BJKlein> a clone is a different entity in space/time
<omohundro> Uploading captures the information, but I like having a body as well! Our physical instantiations are part of who we are.
<John_Ventureville> Dr. Omohundro, Wow. I would use the ".357 bullet to the brain" example to shake people up with his view. He is definitely a true believer.
<Randolfe> I view a clone of myself as "a cell off the old block" but I would never kill "me" for "his benefit".
<Eliezer> Omo, I would say that there is information in our physical instantiations. But it is capturable information.
<Eliezer> As far as I'm concerned, a human with a simulated body in a simulated environment is in exactly the same situation as before uploading. It even makes you wonder, what is the point?
<RJB> Ok, let's make up a word for an *identical* copy down to the level where all of the atoms and molecules that are important are the same except they happen to be in a different location. There is an example of this in Star Trek TNG when the transporter malfunctioned and Riker ended up getting duplicated.
<omohundro> FutureQ: Certainly at first, I think we will treat them as valued servants who give us ready answers to our questions. At what point we start losing our identity is the question for me. When there are multiple uploaded copies of you from where do you get your identity?
<Eliezer> Generalization from fictional evidence.
<BJKlein> please replace 'uploading' with 'integration'
<Randolfe> Omo, I don't want to give up the physical pleasures of taste, smell, sensation, orgasms, etc. I just want to intensify them in an improved version of myself.
<Eliezer> Omo, I have a saying about bitwise copies, which is: "There is no copy; there are two originals."
<RJB> The body issue begs the question of downloading -- if one has the technology to go in one direction one probably has the technology to go in the other.
<FutureQ> I'm more interested in being physical a longer while than many of my >H peers, so I meant an onboard chip AI helper that was made to believe it is also me, an avata perhaps.
<TimFreeman> I think the Buddhists are right. I don't have an identity to lose.
<John_Ventureville> good point but remember that whether or not you are the original, you're not going to want to be "erased"
<Randolfe> If there is no loss of quality (as in xeroxing or videotaping) then the copy and original are equal and the same.
<FutureQ> avatar
* Adraeus has joined #immortal
<John_Ventureville> FutureQ, I p.m.'d you
<omohundro> Randolfe: I agree about enhancing experience, the question is where do we stop. I think the crack addict sitting in the corner doing nothing but blissing out has gone too far for my taste. As we get more access to the underlying knobs of our existence, I don't know what principles we can or should use to choose what to change.
<BJKlein> omohundro, where do you wish to take your company: 'Self-Aware Systems'?
* Friendly_I has joined #immortal
<TimFreeman> Which Kurzweil book has the interview with the grad student being uploaded? He has a few out in addition to "The Age of Spiritual Machines".
<FutureQ> Omo, how do we invest in your co?
<BJKlein> http://home.att.net/...aresystems.html
<omohundro> Tim: I meant the Age of Spiritual Machines.
<RJB> Omo, that is a moral perspective. It implies that it may be "wrong" for sentient beings to make no contribution.
<omohundro> Self-Aware Systems is still early on in its development. I've been primarily focussed on the underlying technology and have only recently been talking to venture capitalists, etc. about business models. At present I don't think a short term profit-driven model is the right way to guide the development of the technology.
<BJKlein> fyi, RJB = Robert J Bradbury
* sjvan has joined #immortal
<BJKlein> have you met with Peter Voss, per chance?
<omohundro> I'd love to have a better understanding of how to phase in potentially disruptive technologies in the most beneficial way possible.
<BJKlein> Peter is focused on creating General Intelligence, via a group of programmers
<Randolfe> Omo, the problem with the crack addict "blissing out" on the corner is bad only because he/she becomes a slave to that pleasure. I think if I were dying, I might enjoy some good opium and some pleasurable dreams.
<TimFreeman> Do you think you'll have any hope of controlling how these potentially disruptive technologies are phased in? Put another way, how many cooperating people do you think it would take to have a chance of having meaningful influence on the process?
<omohundro> FutureQ: I'm not taking investments yet! When I find a business model that seems to push things in the right way, that'll be the time!
<Eliezer> Randolfe: I've done some analysis of this problem, i.e., Fun Theory. http://yudkowsky.net.../funtheory.html
<omohundro> I've talked with some of the people associated with him and it sounds like quite interesting work.
<Eliezer> The crack addict seems boring because he isn't interacting with other minds and he isn't solving complex problems for his pleasures.
<FutureQ> My test for "if a copy is me" -- The instant we are first facing each other upon the creation of the second and I see behind myself through his eyes, we share experience live, then indeed we ARE the same and I could die and not feel I would be lost.
<John_Ventureville> to get his pleasure "needs" met might take some fairly complex problem solving in the form of stealing, grafting, etc.
<Randolfe> I agree that the crack addict is possibly wasting his time. I loved one of the quotes from Omo's site that said: "You love most of all those who need you as they need a crowbar or a hoe."
<omohundro> Yes, in some sense he has shut down a large part of his potential. Pleasure is great to guide us but if we just bliss out we in some sense lose our essence.
<John_Ventureville> desperate measures for desperate needs
<RJB> John, I'm assuming my "Sapphire Mansions" scenario where the nanobots are taking care of all material needs. Before then I would agree.
<John_Ventureville> ok
<RJB> Omo, you are *still* making a moral judgement that people must live up to their potential.
<omohundro> FutureQ: The problem I think is that at the moment you are cloned or uploaded, you and the clone are indeed the same but you have different futures from that point on. You might treat it like a very close sibling but I don't think you would feel closer than that.
<FutureQ> I agree totally
<Eliezer> RJB, he didn't say he *wasn't* making a moral judgment. So what's wrong with that?
<Randolfe> FutureQ, what if you could see yourself through your double's eyes but had no control over his actions?
<omohundro> RJB: I don't want to make moral judgements for other people but as I consider what I want for myself, I think that blissing out doing nothing wouldn't fulfill my destiny.
<Eliezer> Omo: Cloning and uploading are very different cases. In the case of uploading, it's a sibling, twin, or whatever. If you're actually xoxed, then it makes no sense to speak of one of you being the copy, and the other the original. There is no copy; there are two originals. There is no "you and the copy". There is just "you and you".
<Randolfe> "Blissing out and doing nothing" is just another way to describe what religionists call Hell!
<RJB> You can solve the identical clone(s) separate future problem with a real-time brain-to-brain interface (perhaps multiple) -- it is highly probable that nanotech can manage it so that you have each others personal experiences in "real time" -- now whether our brains are designed to deal with two (or more) full sensory input streams is a good question.
<FutureQ> Well, I said shared experiences so I'd have to trust that being so close to me he'd do much the same as I. I could live with separate actions. My point is that I am not my copy unless we share consciousness.
<omohundro> The danger is that when we get our hands on the knobs, the temptation is very great to use them. Just look at all the addictions we have today with our very imperfect drugs. What happens when we get direct control over our goal systems? What do we choose when we can change the entire genetic makeup of our children? What principles will guide us as we move beyond our evolutionary heritage?
<TimFreeman> Randolfe: The typical description of hell isn't blissful.
<BJKlein> omohundro, would you hypothesize that: Death = Oblivion(same as before birth)?
<Eliezer> "Blissing out and doing nothing" is just another way to describe what religionists wrongly call heaven.
* kaksisa has quit IRC (Ping timeout)
<John_Ventureville> Dr., you used the term "destiny" which surprised me. Do you simply mean "personal life goals" when you say "destiny?" Or do you feel you were born/made to do a certain life's work and there is something deep or even mystical about it?
<Eliezer> Future, my point is that there is no way to decide which of you is the copy, and which the original. When you are about to step into the xox machine, you must expect that one of you will step out of each terminal.
<FutureQ> Knobs, hehe, the late great planet earth, and the last uttered word of humanity? "What's this button do?"
<Eliezer> Just like when you flip a quantum coin, you must expect that one of you sees heads, and one of you sees tails.
<omohundro> RJB: That's cool! Kind of like having two monitors.
<Eliezer> Given many-worlds theory, that is.
<Randolfe> Life is meaningless. We are the ones who find something to give it a meaning.
<BJKlein> i think the copy question is answered for me by Bart Kosko's fuzziness.. everything is measured to some gray area.. even identity
<RJB> Eli, nothing so long as we are clear about that -- if he is saying he doesn't like it for himself or people he can convince it's undesirable then it seems fine to me. But once you start saying there is only one right way for others it's only a short hop before one is proselytizing for some agenda.
<Eliezer> Omo, if it were up to me, I wouldn't give people control over their own goal systems right away. I would not consider that to be "helping" them. If they learned enough science to wirehead themselves, that'd be one thing - I wouldn't stop them from developing the capability for themselves, that would smother their growth.
<Eliezer> But for *me* to *give* them the capability, I do not consider that to be helping someone.
<Eliezer> I might even argue that it is a proper function of a collective volition to prevent the sale of wireheading kits, providing people can still develop the technology as home hobbyists.
<FutureQ> What if the so called light emission proofs for many worlds are nothing but many dimensions in this world? I don't really buy many worlds. Where's the energy for me to create an entire new universe just from typing this X instead of possibly a Y?
<John_Ventureville> Eliezer, I'm glad you wouldn't give dynamite to any children wanting to learn about their world by blowing things up.
<omohundro> I don't really know about death. As I have a personal probability distribution over different cosmic models, I don't like to rule things out till there's evidence. I do have a sense of personal destiny, don't have any idea where that comes from. But a sense of "what is my deepest gift to the universe". Often times it is only in retrospect that I see why I did something in the past. I find it a useful perspective to adopt th
<Eliezer> With the essential notion being that the *problem* is not that people can do this eventually. The problem arises if people get their hands on the capability too easily, without learning the discipline and perhaps growing in intelligence, so that they could control it.
<Eliezer> Omo: The IRC here has a character line limit. You got cut off at: " I find it a useful perspective to adopt th"
<TimFreeman> Eliezer: How much of the wireheading kit would they have to manufacture at home for you to feel comfortable with it? For heroin addicts, the heroin is not much different from the wireheading kit.
<BJKlein> sorry.. your message was cutoff at "perspective to adopt th"
<tomo> hi omohundro. so when are we going to have self-aware systems?
<Eliezer> Tim: I think that once someone is grown enough to build their own (Friendly!) AI, they should be able to do whatever they want to themselves with the FAI. Very high bar to set, but probably sufficient. Because you can use your own FAI to check for hidden dangers.
<John_Ventureville> RJB, so based on your last comment I take it that you don't want transhumanists to be like the evil Necromongers of the latest Vin Diesel SF flick! lol
<John_Ventureville> "Convert or fall forever!!"
<omohundro> Oh sorry about the character limit: the rest of the sentence was "I find it a useful perspective to adopt the attitude that I am 'meant' to do something and to continually ask what that is."
<Randolfe> Why would any AI be "unfriendly" or "hate" you?
<BJKlein> heh, meant by whom..
<BJKlein> yourself.. recursive
<TimFreeman> Eliezer: I think synthesizing designer drugs will be a much easier shortcut to your wireheading kit, and the designs will be published in google so no FAI and no wisdom will be required. Lots of people will destroy themselves.
<BJKlein> infinity.. so, omohundro, do you want to 'live' forever?
<omohundro> Eliezer: I like the concept of having the "discipline and growing intelligence" to control some of these new powers. I haven't yet seen a perspective to guide us in the choice of that discipline, though.
<Eliezer> Randolfe: The alternative to FAI is a paperclip-maximizing optimization process. A paperclip maximizer cannot be said to hate you. It is just a decision system that takes the atoms composing you, and turns them into paperclips.
<John_Ventureville> why not staples or liquid paper?
<John_Ventureville> : )
<omohundro> tomo: I am an optimist in this area (Eliezer might not call it optimism though!), I believe that the understanding we need for it will come in the next few years. What we do with that and how systems get deployed is a more complex issue.
<RJB> Hmmm... I think this goes back to Maslow's "Hierarchy of Needs" -- all of the higher "goals" we are already in control of and we can in most cases exert control over the lower ones -- look at Gandhi for example.
<Eliezer> Omo: That's why I suggested, "build your own FAI" as a test. People would have to learn a lot of stuff for that, including cognitive science, and what they themselves were made of, and eliminate a lot of mental garbage that leads to incoherent theories.
<Randolfe> Omo had a charming story in his "quotes" section about a mother's reaction to her child spilling milk. The summation was that you become a better scientist because you know that you even learn from mistakes. What's wrong with that approach?
<BJKlein> killing everyone making the mistake
<tomo> Microsoft Windows w/ Self-Aware Technology by the end of the decade?
<Eliezer> A Bayesian learns from everything - success, mistakes, side effects, everything.
<BJKlein> Bad AI = Paperclips
<John_Ventureville> do we want a Bill Gates spawned seed A.I. considering Microsoft's track record?
<omohundro> BJKlein: "meant by whom": I don't really look at it that way, I just notice that sometimes I'm behaving in a way that doesn't feel like I'm doing what I could be and other times it feels like I'm right on and am expressing "what I am meant to be". So it's more an empirical sensation than an ideology.
* Joachim has joined #immortal
<BJKlein> omohundro, have you studied evolutionary psychology?
<RJB> J. Haven't seen the flick but I suspect the statement is accurate.
<tomo> see, microsoft already tried the paperclip thing. they must be on to something.
<BJKlein> heh
<FutureQ> It would need to reboot for every new upgrade john, then we could turn it off! LOL!
<omohundro> BJKlein: living forever: I think we continually balance living in the moment with preparing for the future (in AI we call it exploration vs. exploitation). I'd certainly like to live a long time but I think I place a higher value on living fully in the present than the long time tail.
<RJB> BJ -- living "forever" has a very low probability given current theories of the universe.
<eclecticdreamer> in certain like-minded interpretations, Robert ;)
<TimFreeman> Living 100 years would seem unlikely, for some plausible definitions of "living", given what the machines will be capable of by then.
<tomo> do we need some sort of X prize (or meth mouse) for self aware systems?
<TimFreeman> For instance, if Kurzweil's uploaded grad student is defined to be dead, then living 100 years seems pretty unlikely.
<Randolfe> A lot of "life-extension" involves sacrificing today's "fullness" for a longer time frame--such as calorie restriction, even exercise.
<omohundro> killing everyone making the mistake: Yes I think that's the great danger we are about to face. Having powerful enough technologies that we don't get too many chances to make mistakes.
<BJKlein> RJB, seems we're tweaking the theories every few years
<Eliezer> Ask me in a billion years whether I want to live forever. That's my theory. I'm a short-term thinker, I live my life one eon at a time.
<BJKlein> i would not like to give up (die) before finding out what theories are correct
<FutureQ> Sentience, a dangerous thing, maybe why the universe seems to have only tried it once.
<omohundro> BJKlein: evolutionary psychology: Yes I'm a big fan of it, I liked The Red Queen, The Mating Mind, The Selfish Gene, "Human Sperm Competition". Am now reading "The Robot's Rebellion" by Stanovich.
<BJKlein> we're complexity pushers.... more Complexity = Good
* BJKlein claps for EP
<Eliezer> Omo: I really liked "The Adapted Mind" and I'm trying to learn some basic math of evolutionary biology.
* Mind has joined #immortal
<Eliezer> More Good requires more complexity. It doesn't necessarily work in reverse.
<BJKlein> we're coming up on the end of the first Official Chat, (Philip Van Nedervelde - Foresight) is scheduled to join us in 5 min (I hope)
<BJKlein> but please stay with us as long as you'd like Steve..
<BJKlein> great chat
<eclecticdreamer> Eliezer, most human facts are unnecessary & ambiguous symbols
<omohundro> FutureQ: I'm a bit worried that we see no evidence of ET's (no Dyson sphere radiation, etc.) One interpretation is that it's hard to make it past the next technological advance.
<TimFreeman> omohundro: Who is the author of "Human Sperm Competition"?
<FutureQ> Yes, that was what I meant.
<Mind> I have never seen any ET
<RJB> Yes, and how many times have physicists "tweaked" and gotten it wrong. You have to do some heavy duty tweaking to get past currently accepted laws of physics, and that limits you to ~10^14 years if you want an existence that you can envision now and 10^100 years if you *really* get creative in engineering the stuff that "life" has to function on top of (this is based on The Five Ages of the Universe)
<eclecticdreamer> Eliezer, have you checked out Stephen Thaler's creativity machine?
<omohundro> Thanks all! Very stimulating! Unfortunately I've got to meet somebody so I can't hang out much longer. But I love the forum and that you do this! Thanks a lot!
<Eliezer> Omo: It'd have to be something that killed us before AI, though, because of the range of possible hostile AIs in our past light cone, at least one of them should want to convert our solar system to paperclips.
<Eliezer> I buy Hanson's theory of the "hard step".
<Eliezer> That intelligent life is just very rare.
<RJB> There is no evidence that there is a lack of sentience all around us in the Universe.
<Eliezer> Because damned if I can think of a single other plausible explanation for the Fermi Problem.
<eclecticdreamer> Eliezer, please read up :)
<BJKlein> Thanks very much, Steve. Please feel free to join us anytime in the 'infinite' future!
<Eliezer> Hostile AI doesn't work. Friendly AI doesn't work. I suppose maybe we could be in the light cone of a different species' concept of "Friendly AI" - the Zoo Hypothesis.
<Mind> I think they are out, but there are none that are equal to our development and within our observation range
<omohundro> "Human Sperm Competition": by Baker and Bellis, who also did "Sperm Wars" and "Baby Wars" which are also quite good. HSC is the most mindblowing book on evolutionary stuff I've read. Unfortunately it's like $200 or something for some unknown reason.
<Mind> ugh
<Mind> that was bad grammar
<RJB> Omo, *all* of the efforts to search for Dyson *Shells* to date are bogus. They are based on the assumption that all Dyson shells must be populated by "human"-type machines that require temperatures @ 300K.
<FutureQ> Thanks for coming to chat with us Omo
<Eliezer> RJB, any decent civilization should have turned *everything* into computronium. And switched off the stars. They're wasting entropy.
<eclecticdreamer> ..or be ignored :O|
<BJKlein> Eli, will you be in Toronto?
<eclecticdreamer> Bruce, what do you think about what I said in my post?
<eclecticdreamer> About your mother's death..
<RJB> Dyson shells can easily be constructed out of computronium that can radiate in the T range from ~4K to perhaps 1500+K. Interestingly Minsky pointed this out to Dyson at the Russian-American SETI conference in 1963.
<Eliezer> BJ: I'll be in Toronto for a friend's wedding, but not, alas, TV04.
<BJKlein> eclecticdreamer, not sure.. my first instinct is to say unlikely.
<eclecticdreamer> :)
<eclecticdreamer> in human terms
<RJB> Actually Eli, that may not be true -- if you can harvest and effectively use all of the energy then it isn't wasted. Stars may actually be the most efficient way to get "metals" out of which you can construct more computronium.
* BJKlein End Official Chat
<RJB> BJ do we need to rejoin the chat room?
<BJKlein> no.. you're ok
<eclecticdreamer> Bruce, what I can tell you, is that there is undoubtedly existence after death.
<Mind> Wonderful to see so many in the chat
<ravi> BJK are u going to be flying to Toronto?
<Mind> hi all
<Mind> BJ doesn't fly
<Eliezer> RJB: Entropy is the only truly conserved resource in the universe. Construct computronium out of hydrogen. The only legitimate excuse to turn hydrogen into iron is if you need the energy, because once it turns into iron you can't turn it back.
<BJKlein> i suspect Philip may be late or not show.. so open chat until then
<Eliezer> The Sun is doing an immense amount of wasted computation.
<BJKlein> I will rent a car
<BJKlein> something with better gas mileage than my Blazer
<ravi> ah ok
<Eliezer> There is only one excuse for generating that amount of heat, and it is a really big computer.
<Eliezer> And if we are in their future light cone, they should be *here*. They've got to switch off our sun. Even if they've got to throw a Dyson shell around it, which seems silly, they've still got to switch it off.
<Eliezer> The Milky Way galaxy is wasting electricity!
<FutureQ> BJ, why not fly? Every car you pass is a chance for a head-on collision, whereas the chances of a plane crash are nearly nil.
<BJKlein> Eli, you think you could swing by for a few? .. would like to capture your image on film
<Eliezer> BJ: it's not the same time
<BJKlein> http://www.imminst.org/film.php
<Mind> ELI...so you are saying that the sun is doing a bunch of useless random computation right now, but an advanced civilization will turn it off and use it for more directed computation
<RJB> E. I've given some thought to constructing computronium out of H. I have yet to figure out a way to do it. H does not have material properties, even at very low temperatures, that make computronium reliable. TiC and a host of other materials yes, but H and He suck other than as coolants.
<BJKlein> with flying crash = nothing for Alcor
<tomo> will there be an imminst meetup at TV04?
<BJKlein> no, just a film crew
<TimFreeman> Eliezer: How did the term "paperclip" come to mean what you mean by it? And what exactly do you mean?
<Mind> BJK, My wife and I are looking into life insurance and cryonics right now...do you have any suggestions...best place to go...reasons why?
<BJKlein> quickquote.. (i think) good price
<Eliezer> BJ, that doesn't sound to me like a correct risk calculation. Your chance of not being suspended as a result of a car crash, or not coming back, etc., is intuitively much higher than your chance of being involved in a plane crash at all.
<RJB> BJ - but the risk of an automobile crash is much greater than that in the plane -- hazard function is much higher. Nothing (much) for alcor if car burns up in crash or you drive off some long lonely stretch of highway and nobody notices until the next day.
<BJKlein> sure.. but oblivion is much more likely with plane crash
* omohundro has quit IRC (Read error: Connection reset by peer)
<BJKlein> plus I'm driving the car ;)
<Eliezer> Tim: I just use it as a generic term for an optimization process that produces uninteresting things.
<eclecticdreamer> Eliezer..
<FutureQ> That is my point to BJ, Eli.
<FutureQ> damn sticks
<eclecticdreamer> have you seen Stephen Thaler's Creativity Machine?
<Eliezer> BJ: Er, you're doing the risk calculation way wrong. You can't assume a plane crash, and then say, oblivion is more likely. You have to do the chained probabilities. p(crash)*p(oblivion|crash) vs p(collision)*p(oblivion|collision)
<BJKlein> i'm doing one car crash for one plane crash..
<Eliezer> yeah but that's the *wrong calculation*
<BJKlein> how many bodies are recovered intact from each?
<BJKlein> much less from plane, i suspect
<Eliezer> it's like making the life-or-death decision with an Ouija Board
<Eliezer> car crash and plane crash are not equally probable
<BJKlein> sure..
<FutureQ> that's not the point, there's way too many chances of a bad car crash vs hardly any of a plane crash
<Eliezer> I think you're about a hundred times more likely to be involved in a car crash than in a plane crash
<BJKlein> but the recoverable factor
* sjvan has quit IRC (Quit: Leaving)
<eclecticdreamer> Eliezer, depends on hidden factors, too
<eclecticdreamer> but generally, that would probably be true
<ravi> the point I think BJK is trying to make is that in case there is a plane crash, he can't be frozen at all....but if there was a car accident chances are he can be frozen
<BJKlein> plus I like to see the country via road
<eclecticdreamer> but if say you leave on a date with 11 or 666 on a plane with the mathematical encoding of the pilot/s equivalent, you might want to rethink it ;)
<RJB> But the important thing is the overall probability of survival.
<BJKlein> not survival.. recoverable
<BJKlein> brain intact
<Mind> Most plane crash statistics are for the "full flight", while most crashes occur during take-off and landing. If you restrict the data to the beginning and end of the flight time...the probability is higher...but still way less than crashing in a car
* outlawpoet has joined #immortal
<Eliezer> BJK, when you fly to Toronto, you want to minimize your probability of nonrecoverable death, and maximize your probability of survival. Right? You aren't distinguishing between nonrecoverable death in a car accident, and nonrecoverable death in a plane crash. You're just dead and not coming back, either way.
<RJB> That is what I mean -- and you don't *have* to have an intact brain. You just have to have one where there is sufficient structural information left to restore you to a mostly normal functional state (there is a lot of redundancy in a brain -- and you could probably lose some of that and not miss it at all -- or have technology that could compensate for it)
<BJKlein> well, i want to max my recoverable option at all times
<eclecticdreamer> Eliezer, do you actually LISTEN to others ask you a question? :)
<Eliezer> Now, unless your probability of dying irrecoverably, *given* a car accident, is *less than one percent*, you should fly, not drive. So saith the math.
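Eliezer's chained-probability argument above can be sketched numerically. This is a minimal illustration only: the per-trip accident rates and conditional-oblivion figures below are made-up placeholders chosen to match his "about a hundred times more likely" remark, not real statistics from the chat.

```python
# Sketch of Eliezer's comparison: p(oblivion) = p(accident) * p(oblivion | accident).
# All probabilities below are ASSUMED placeholder values, not real data.

p_accident_driving = 1e-4   # assumed per-trip chance of a serious car crash
p_accident_flying = 1e-6    # assumed ~100x lower, per Eliezer's estimate

p_oblivion_given_car_crash = 0.30    # assumed: brain often still recoverable
p_oblivion_given_plane_crash = 0.95  # assumed: rarely recoverable

p_oblivion_driving = p_accident_driving * p_oblivion_given_car_crash
p_oblivion_flying = p_accident_flying * p_oblivion_given_plane_crash

print(f"driving: {p_oblivion_driving:.2e}")  # -> 3.00e-05
print(f"flying:  {p_oblivion_flying:.2e}")   # -> 9.50e-07

# Even granting a much worse outcome *given* a plane crash, the far lower
# crash probability dominates -- the point of the chained calculation.
# With a 100x crash-rate gap, driving only wins if p(oblivion | car crash)
# falls below about 1%, which is Eliezer's "less than one percent" threshold.
```

Under these assumptions flying comes out roughly thirty times safer in expected-oblivion terms, which is why comparing outcomes "given a crash" alone, without weighting by crash probability, gives the wrong answer.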
<eclecticdreamer> or do you LISTEN & choose to ignore it?
<eclecticdreamer> I'm wondering which
<eclecticdreamer> :O?
<outlawpoet> oh man, don't tell me BJKlein still has the willies about flying.
<BJKlein> a plane is not very good (in my opinion) at giving me a recoverable (brain intact) option
<tomo> why fly to toronto? via rail takes you right downtown :)
<Eliezer> BJ, it makes no sense to maximize your probability of recoverability given an accident. That is *total nonsense*. You want to maximize your probability of survival. Period!
<RJB> Looks that way... E. is trying to explain probabilities but the only argument I've seen thus far that seems good is that BJ likes to see the countryside.
<eclecticdreamer> ahem.. :p
<BJKlein> but while a car crash also carries some probability of being unrecoverable... i think overall it is less than with a plane
<John_Ventureville> I think BJ may be an "in the closet" control freak! : ) And that is why he prefers driving over flying.
<ravi> i should have never brought up the plane issue
<BJKlein> heh.. thanks RJB.. and I'M driving the car!
<FutureQ> Bravo RJB
<BJKlein> i'm out of the closet
<RJB> E.D. If you were asking the question to E. he may not have understood that -- chats are very multi-threaded and it is tough to catch everything.
<Mind> it isn't too long by car anyway....maybe 15 hours
<Eliezer> I mean, if you're just phobic of planes, be phobic!
<Eliezer> Don't make up silly reasons for the phobia!
<John_Ventureville> BJ, if you had near limitless resources, would you do the celebrity thing and fly around in your own goldbricked Gulfstream jet?
<Eliezer> That just reinforces your chance of making other bad decisions later.
<BJKlein> Eliezer, not if Death = Oblivion
<John_Ventureville> would you feel safe then?
<BJKlein> no
<John_Ventureville> ok
<BJKlein> i'd sail
<John_Ventureville> HEY
<outlawpoet> BJKlein, that's rather the point. driving is more death than flying.
<John_Ventureville> cryonicists have been lost at sea
<Eliezer> BJ, if DEO, then make the correct decision regardless of your phobia. The correct decision is to fly.
<BJKlein> with lots of ice on board
<John_Ventureville> even those skilled at sailing
<FutureQ> I'm a pilot and as soon as I'm back on my feet I'm flying again for the sheer joy of it.
<Mind> I am going to buy one of those new "speed chutes", as a pacifier when I fly....probably won't ever use it...but it will lower the stress levels
<eclecticdreamer> Eliezer, do you see things in black & white, or sometimes grey?
<BJKlein> no, eli Death with no Information intact = no Cryonics
<BJKlein> that is real oblivion
<eclecticdreamer> Perhaps Bruce has more knowledge & control over a car..
<eclecticdreamer> And the events that take place within & around him
<John_Ventureville> a TRULY dedicated cryonicist would not drive
<John_Ventureville> he/she would either ride the bus or drive their own!
<FutureQ> a tank John!
* NatashaVita has joined #immortal
<outlawpoet> BJKlein, your math makes no sense. There is more actual death in driving than flying; the statistics about what kind of death it is don't magically overwrite the previous statistics.
* BJKlein waves to NatashaVita
<BJKlein> no Philip yet..
<John_Ventureville> howdy, Natasha!
<BJKlein> may not show.. but we shall see
<Jonatan21> Natasha can you add NanoAging to your suggested site on extropy.org or .com
<Eliezer> BJ, you want to minimize p(oblivion). Right?
<John_Ventureville> I am going to buy for myself and BJ (when I win the lotto) a couple of Ferrets
* NatashaVita has quit IRC (Quit: JWIRC applet)
* NatashaVita has joined #immortal
<BJKlein> outlawpoet, remember the recoverable factor of brain information from flying verses car crash
* BJKlein nods to Eliezer
<John_Ventureville> it's a British armored car which was designed with maneuverability in mind
<NatashaVita> Hello everyone
* Jonatan21 has quit IRC (Quit: http://www.nanoaging.com/)
<Eliezer> if p(oblivion|driving) > p(oblivion|flying) you should fly, right?
<John_Ventureville> howdy
<BJKlein> hi NatashaVita!
<BJKlein> Philip has not showed yet.. may not tonight
<outlawpoet> man, I wish I could draw squares in IRC
<BJKlein> heh
<BJKlein> you can with a white board
<outlawpoet> if something ever needed a lovely frequency matrix, this does.
<John_Ventureville> Natasha, what is your take on being an immortalist and handling the risks involved with flying vs. driving a commuter car?
<NatashaVita> I know Philip has been in France and Belgium - is he back?
<eclecticdreamer> Eliezer.. since you don't respond here, I'm going to query you :)
<John_Ventureville> Eliezer, who is getting married in Canada?
<Eliezer> John: childhood friend
<NatashaVita> Driving cars is like handling a loaded gun
<BJKlein> hmm, he said he would try to make the chat tonight.. not sure if he's back yet
<FutureQ> He's just busy, ED, like a bulldog with a bone.
<John_Ventureville> Eliezer, ok, I thought it might be Simon Smith
* BJKlein nods again to Eli
<eclecticdreamer> [08:22:20] <eclecticdreamer> have you checked out Stephen Thaler's AI?
<eclecticdreamer> [08:22:33] <Eliezer> it's uninteresting
<eclecticdreamer> why is it uninteresting?
<eclecticdreamer> and what do you know about it Eliezer?
<BJKlein> on the p(oblivion) car vs fly
<RJB> BJ: but other than in a fire (or rotting in a grave) it is a very complex question as to whether the brain is sufficiently disassembled to prevent a recovery.
<Eliezer> BJK: p(oblivion|driving) = p(oblivion|accident)*p(accident|driving)
<BJKlein> but there are many unknowable variables
<eclecticdreamer> exactly Bruce
<BJKlein> from what info we have
<Eliezer> p(oblivion|flying) = p(oblivion|accident)*p(accident|flying)
<eclecticdreamer> they don't account for those
<Eliezer> yes?
<eclecticdreamer> humans take for granted that often
<eclecticdreamer> Eliezer, how about giving simple examples to make it easier to understand?
<BJKlein> change accident to recoverable brain info > 90%
<outlawpoet> eclecticdreamer, Thaler isn't building an AI, his "Creativity Machines" return recombinant design factors from preselected domains.
<eclecticdreamer> Instead of some elitism functions that you will throw out as soon as you don't identify with human interpretation ;)
<NatashaVita> What is the focus of NanoAging?

#3 Bruce Klein


Posted 12 July 2004 - 09:05 PM

Good Morning, Bruce,

First, my sincerest condolences on your Mom. Losing a loved one to death is never easy. And for you, a thought leader whose passion is so clear about fighting death with creativity and intelligence, this would be especially poignant. I am truly sorry.

You may have wondered why I have not joined Imminst as a full member.
Here is the reason.

Bruce, eighty percent of the human beings in the WORLD who signed up for cryonic suspension last year did so with life insurance from Hoffman Cryonics Insurance. This is not a coincidence, but a function of outstanding pricing and service levels, along with huge levels of psychological and administrative extra effort to provide the most cost effective and appropriate cryonics coverage possible.

I am an independent broker, giving me access to literally thousands of life insurance carriers. However, I only use highly rated A+ carriers who have put IN WRITING on their letterhead signed by a corporate vice president that the carrier has NO problems with cryonics organizations as owners or beneficiaries.
I provide quotes for term, limited pay universal life, and even single pay policies, depending on what is most appropriate. Most people buying term life to fund their suspension do not realize what happens in later years, when most people, especially rational life extenders, will tend to need the coverage. So, while I often recommend and make available very affordable term policies for folks in their 20s, 30s, and 40s, the term I sell is UPGRADABLE with NO evidence of insurability, even crediting the earlier premiums toward the permanent policy.
And it is typically cheaper, or as inexpensive, as term from Internet search engines like Quotesmith or "Eterm" etc.

In fact, I am writing today a policy on a young lady who did an internet spreadsheet, spent a lot of time, and was going to be paying 17 bucks a month for $250,000 of term. Because of my "commercial grade" spreadsheeting, the same policy with the same carrier is 14 bucks and change a month.

And, there is much to say for expertise and legal, financial, and medical information that I bring to the table especially for my cryonics friends and clients. People who have worked with me will generally tell you, I don't go the second mile, but the tenth, to help navigate clients through the shoals of underwriting, paperwork, logistics, full copies and documentation to both carriers and cryonics organizations, overnighting applications both ways at my cost, etc.

I also provide a $50 donation to ALCOR for every policy I sell; in the case of the smaller policies this is a high percentage of my commission. Fortunately, most of my actual income comes from the investment sales side of my practice, allowing me to spend perhaps 80% of my time working to get the best policies for my cryonics friends. While not financially independent yet, I don't need the money from my cryonics sales...I am motivated by ideology and passion for this incredible idea.

The bottom line is that I was disappointed, and a bit hurt, that you did not even give me the chance to bid for your cryonics business.

And, when asked last night by "Mind" on the chat for a recommendation specifically for cryonics life insurance, you referred him to strangers on some Internet spreadsheet who are not going to understand cryonics at all. And who will in most cases cause great hassle and extra work and hoops to jump through, with not as good a result.

Bruce, you are a thought leader in the Immortality and Cryonics movements. I like and respect what you have done, and are continuing to do. I want to join with you in a collaboration against aging and death.

But I am passionate, and highly skilled, in helping people obtain their cryonics life insurance. I pay for licenses in some 27 states to support my vision, spend thousands of dollars a year in overnight costs, and have put in thousands of hours over the last ten years to be the best resource possible for my clients.

And I want to be clear that I am willing to support your outstanding vision if, and to be honest I should say only if, you support mine.

We both are investing huge amounts of our precious time and resources to establish the value of preserving individual lives. We need to be on the same team.

Please be honest with your thoughts.
Warmly and Sincerely Yours,

Rudi

Rudi Hoffman CFP CLU
Certified Financial Planner
Chartered Life Underwriter
"Planning Tomorrow, Today"

#4 Bruce Klein


Posted 12 July 2004 - 09:06 PM

Rudi, (cc: Mind)

This looks to be a lack of information problem.

I was thinking you charged more for your services. When Mind asked for reasonable prices, I defaulted toward what I had used in the past and what I considered low cost. I had you classified as premium, with more overhead, commission, etc.

From now on I will remember to tell anyone asking a similar question that you have excellent service and lower prices, in order to help the cryonics movement.

If I had known this before signing up with Alcor, I would surely have asked you to help with my insurance.

Take care,
Bruce

#5 Bruce Klein


Posted 12 July 2004 - 09:06 PM

Just wanted to see what I had typed:

<Mind> BJK, My wife and I are looking into life insurance and cryonics right now...do you have any suggestions...best place to go...reasons why?
<BJKlein> quickquote.. (i think) good price

Rudi, would you mind if I posted your email to the chat topic here:
http://www.imminst.o...=ST&f=63&t=3803

and include my reply to you?

BJK

--

Not at all, Bruce.

And thank you.

Rudi

Rudi Hoffman CFP CLU
Certified Financial Planner
Chartered Life Underwriter
"Planning Tomorrow, Today"



