  LongeCity
              Advocacy & Research for Unlimited Lifespans


How to create Friendly AI?



#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 August 2003 - 06:23 AM


How to create Friendly AI?

ImmInst Chat Aug 31, 2003
Sunday 8pm Eastern

Chat Room

Chat led by ImmInst director Michael Anissimov


Anissimov will lead a discussion on the best plan for creating Friendly Artificial Intelligence. For the past two years, Anissimov has been writing about and discussing the importance of the coming Singularity and Artificial Intelligence. On behalf of ImmInst, he presented "Accelerating Progress and the Potential Consequences of Smarter-than-Human Intelligence" (Cyber or Other track) at the 2003 TransVision conference at Yale.

#2 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 September 2003 - 02:40 AM

* BJKlein Official Chat Begins Now (thanks for joining)
<MichaelA> How will near-human and smarter-than-human AIs act toward humans? Why? Are their motivations dependent on our design? If so, which cognitive architectures, design features, and cognitive content should be implemented? At which stage of development? These are questions that must be addressed as we approach the Singularity.
<MichaelA> Now, making a decision to try and build FAI, or caring about it at all, does require a *few* prerequisites, that not everybody has in common
<MichaelA> First, you have to think there is some chance of a Singularity being sparked by an AI
<BJKlein> For those who don't know.. MichaelA is an ImmInst Director with more than two years writing/researching Friendly AI
<MichaelA> Second, you have to believe that AI niceness might not come for free, and that we may only have one chance, and it's worth not screwing that one chance up
<posi> the machine realising that he is aware to allow the human to exist.. that his awareness is the reason the human exists, coming to the realisation that the makers of the made were made by the made.. surely this is our concern.
<MichaelA> I do recommend a brief glance at http://www.nickbostr.../ethics/ai.html
<MichaelA> What do you mean, posi?
<MichaelA> Hey Tyler
* Anand looks around
<Anand> :)
<posi> in creating AI, we are essentially agreeing that we are making something for itself, not for us.. these intelligent machines could never be 'ours'..
<MichaelA> That is one *essential* component of AI Friendliness, yes
<MichaelA> The idea of the AI as an autonomous agent
<MichaelA> wb John
<John_Ventureville> thank you

<posi> the solar system created us, a computer, a machine that is not water coming to the mathematical conclusion that the solar system as a machine created the human to create the machine, would not be hard.
<MichaelA> for those that missed the opening lines: [MichaelA] How will near-human and smarter-than-human AIs act toward humans? Why? Are their motivations dependent on our design? If so, which cognitive architectures, design features, and cognitive content should be implemented? At which stage of development? These are questions that must be addressed as we approach the Singularity.

<MichaelA> [MichaelA] Now, making a decision to try and build FAI, or caring about it at all, does require a *few* prerequisites, that not everybody has in common
<MichaelA> [MichaelA] First, you have to think there is some chance of a Singularity being sparked by an AI
<posi> we are essentially suggesting the creation of our real GOD
<MichaelA> Second, you have to believe that AI niceness might not come for free, and that we may only have one chance, and it's worth not screwing that one chance up
<MichaelA> "GOD" is misleading, it's what creates years of confusion for everybody involved, unfortunately; a license for anthropomorphism
<MichaelA> I believe that part of approaching this problem correctly is admitting that we are dealing with something qualitatively new
<posi> in order to seriously create AI, AI with the power of intelligence wouldn't we have to know the whole deal? wouldn't we have to be in the position to know that no matter what happens we've already prepared for it.. it's like a suggestion of a new kind of planning, not for the machine, for ourselves.
<MichaelA> I don't think anyone would really want a "GOD", personally
<posi> Yes, it is new.. Socially new..
<localroger> posi, you're monopolizing. Let MichaelA get his intro out.
<MichaelA> My intro is pretty much out
<localroger> 'k then.
<Anand> http://www.amazon.co...hangesurferradi
<posi> I mean, its the mental interaction to God, your AI will be using the brains working of GOD.. in theory
<posi> ok sorry.. i'll shush for a bit
<MichaelA> Roger is an example of someone who writes about dystopian scenarios regarding future pleasure and such; his stories are good examples of where humans fail to realize their full potential, perhaps due to faulty AI programming
<localroger> My bigger worry is that we may not be able to micro-engineer the drives of an entity like ourselves, any more than parents can micro-engineer the morality their children adopt.
<MichaelA> Too much complexity involved?
<posi> I think its a social dynamic, we have to reach the ability where we as humans have more of a social dynamic to create these things, we have to know we have higher planning power, in the same way as a machine.. not our egos, we have to create the existence of ..
<localroger> It's a chaotic situation. Like engineering a fractal.
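localroger's fractal analogy can be made concrete with a standard toy example: the logistic map, a one-line chaotic system in which two nearly identical starting points decorrelate completely within a few dozen steps. This is a generic illustration of sensitive dependence, not a model of AI; every value below is arbitrary.

# Generic illustration of chaotic sensitivity (not a model of AI): two
# starting values differing by one part in a billion diverge completely
# after a few dozen iterations of the logistic map.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")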
<haploid> localroger's stories take some serious amount of license with the laws of physics. =)

<localroger> Necessary license to depict a "lightning-quick" Singularity :-)

<MichaelA> I used to think of it in those terms Roger, but then I began to
look at an AI as a being making choices about its own design rather than a
runaway fractal
<localroger> And that doesn't make it even MORE unstable?
<MichaelA> Posi, I'm not sure we should overemphasize the social aspect of the
Singularity; I'd rather call it a "cognitive-cosmic" event or something,
rather than a social one.
<localroger> Really, since 1994 I have come to disbelieve that Lawrence could
even have engineered in the "Three Laws" as depicted in MOPI, much less
Eli's more nuanced version.
<posi> perhaps we tend to look at it through our ability more than our
capacity.. it's our ability that can lead us astray.. then make us robots.
<posi> what's the definition of the singularity?
<MichaelA> Roger: not necessarily, intelligent beings are not fundamentally
random; we display a massively greater amount of structured order than a
fractal, or any type of emergent equation, and we can converge towards
ideals just as we can diverge into randomness.
<posi> i'd like to discuss that more Michael.. Cosmic.. :)
<localroger> MichaelA this is true, but we never know whether the child we are
rearing will be Jeffrey Dahmer or Gandhi.
<MichaelA> The definition of the Singularity is the creation of *transhuman
intelligence*; whether it's AI, or enhanced humans, is not really the point
specifically
<MichaelA> How about from the behavior of the child, and how we make choices
about its design?

<MichaelA> The neurological machinery behind evil is complex and highly
structured, and would not pop up automatically within AI...right?

<posi> ok.. Michael, I do believe this.. AI is strictly a social factor..
<localroger> Nothing is certain. Children raised with all the best influences
become monsters, those raised in poverty and great deprivation become great
leaders. The correlation is only approximate.

<ChrisRovner> I do believe that a superintelligence's moral choices are
largely unpredictable. However, *provided we succeed* in creating a Friendly
SI, ve will be more rational and humane than us; that should leave out most
undesired, unpredicted outcomes

<Anand> Mike, since not many interesting questions have been asked, I'd like
to try and pose one, particularly to get a sense of your current thinking
<MichaelA> But I'm not sure you can make analogies between raising children
and creating de novo minds from scratch.
<localroger> Evil is simply a self-satisfaction loop allowed to run out of
control.
<MichaelA> Tyler, please do

<Anand> What are some of the main reasons why you think Friendly AI is
possible?
<MichaelA> Good question
<Anand> Possible = physically feasible
<MichaelA> I believe that altruistic humans are a partial existence proof, for
one

<MichaelA> Secondly, if it is determined that a being cannot increase its own
intelligence without beginning to harm others, I would trust a properly
built Friendly AI to halt self-improvement at a reasonable ceiling and
distribute the benefits evenly
<MichaelA> Any other scenario would be a failure, in my opinion
<MichaelA> Roger, that's actually a pretty good point
<localroger> But no method has ever been proven to reliably produce altruistic
humans. I think we will have much better control in educating AI's, but not
perfect control.

<MichaelA> It is very likely that most AIs will be self-satisfaction loops
running out of control, and since they'd be self-improving at billions or
quadrillions of times our learning rate, I think that would result in our
death quite quickly
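The "self-satisfaction loop" worry localroger and MichaelA describe can be sketched in a few lines. This is a deliberately trivial caricature with arbitrary numbers, not a claim about any real AI design: an optimizer whose reward channel is under its own control maximizes felt reward while the quantity its designers cared about never moves.

# Deliberately trivial caricature of a "self-satisfaction loop": a system
# that can modify its own reward amplifier maximizes felt reward by turning
# the amplifier up; the designers' actual objective is never optimized.
# All numbers are arbitrary.
gain = 1.0        # self-modifiable reward amplifier
usefulness = 0.0  # what the designers actually wanted; never optimized
for step in range(8):
    felt_reward = gain          # reward as the system experiences it
    gain *= 2                   # "self-improvement" aimed only at reward
    print(f"step {step}: felt reward = {felt_reward:g}, usefulness = {usefulness:g}")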
<MichaelA> I agree
<Anand> OK, any other reasons?
<Mind> Humans as a whole choose altruism over evil because we are
"self-aware"...no other living thing on the planet is altruistic (just
randomly symbiotic). If an AI is created that is "self aware" as we
currently understand the term, then I figure there are good odds that it
will be altruistic
<MichaelA> I think that studying what underlies our moral reasoning is likely
to help a lot, and goodness may seem less nebulous, then
<localroger> Mind, that's not true. I have personally observed animals
behaving altruistically, sometimes even to members of other species. It's a
thing humans choose to ignore because it makes us look better in the census.
<localroger> And I'm not talking about symbiotic relationships, I'm talking
about interspecific "friendships."
<MichaelA> Anand: part of why I see FAI as possible is because I only see it
as one point along a continuum which includes other moral reasoners such as
governments, philosophers, hypothetical aliens, etc, and postulating an
upper bound on the degree of possible benevolence and altruism any one being
can have would seem arbitrary, in my opinion
<posi> so get a computer, a lot of research, a phd in computer science, and
you create friendly AI. no?


<MichaelA> Mind, I disagree; it seems to me that the minimum conditions for
self-awareness are probably much simpler than the *complex, structured*
machinery underlying human commonsense and moral decision-making
<localroger> Actually now that I think about it my observations of animals
give me better hope that FAI is possible even if we can't *engineer* it in.
<Mind> I have never seen an animal consciously behave altruistically...there is
no proof that this happens because they have never successfully communicated
with us. There is no proof that it is conscious altruism. It is probably
just random instinctive behavior
<ChrisRovner> posi: I think you can leave the phd out of the equation
<MichaelA> What we need is an AI we would trust as much as an upload, and I
don't think that will be easy, or come automatically with self-awareness
<NickH> posi: To create a Friendly AI you need expertise in far more fields
than just computer science. You have to understand minds, not just
computers. For instance, Cognitive Science and Evolutionary Psychology are
necessary.
<MichaelA> Most people would rather see a human become the first transhuman
intelligence
<localroger> Mind, I have pets and I do nature observation. And I have an
open mind. Most animals are conscious, and if you don't believe that you'll
never build an AI.

<MichaelA> Now now, I don't really think that has anything to do with it, Roger
<localroger> It would be beneficial for one thing if humans were FI's toward
other creatures. It's hard to build what you are not yet yourself.
<MichaelA> AI should be quite buildable regardless of what the person thinks of
animals
<atg> I see no desirability in being watched over by an "altruistic" entity.
<posi> ok. so i'm developing a website where I plan a special kind of forum,
that's real time along with using forms of AI to concentrate capacity. One
way to create friendly AI, and I have the formulas that need to be made into
pseudo code, then its discussion and development.
<NickH> atg: why's that? what do you mean by watched over?
<posi> its just whether the formulas are bang on the mark, and because i can
do the math and prove that they are, its just a matter of seeing that my
bases are correct.
<MichaelA> Why is it harder to build what you aren't yourself, when many of
the parts that encourage us to be selfish are *enormously complex*, and
*don't pop up in AI unless we engineer them explicitly* anyway?
<localroger> MichaelA, suppose consciousness is very old and fundamentally
simple. Suppose nearly all animals are conscious. Suppose that this very
simple algorithm is the real prize, hidden not in our massive cortexes but
in the brain of a wasp. Because it is massively scaleable. Would it matter
then whether animals were like people to an AI researcher?
<MichaelA> Posi, I'm interested in your website, but I have to be honest and
say that the kind of AI formulas you're talking about and an extremely
advanced AI project where Friendly AI strategy becomes relevant are entirely
different issues
<posi> what if we discuss it so much we prove the only way AI could come about
is if someone like the second coming, with special, once in all of existence
power could do it..
<posi> ok, are we talking about biological AI Michael?
<NickH> posi: I don't think Friendliness is simple enough to be formally
specified like that. Human moral reasoning is a result of a constellation of
complex adaptations - just like many evolved things. These aren't simple
enough to appear with design, nor to have mathematical proofs made about
them.
<MichaelA> Roger, in that case, yes, but this comes from an old school
definition of "consciousness" that I don't find too useful; you need the
capacity to open-endedly create upgrades to your own design, and I doubt
that the kind of consciousness we see while watching animals is going to
supply that kind of intellect.

<atg> I don't want to have to worry about how an AI might interpret my every
day actions.
<posi> ok i'll chill out a bit.
<localroger> But Michael, we are the upgrades.
<NickH> atg: why would you worry?

<MichaelA> Accidentally creating a Friendly AI would be like knocking over a
glass of soda and watching that soda grow legs, walk around, talk, jump on
the furniture, etc; too much unexplained complexity for free
<MichaelA> Roger, I just don't see how a wasp-level conscious AI would be very
useful for sparking a Singularity
<MichaelA> I don't see how it could bootstrap to superintelligence
<MichaelA> Which is why we use the much less philosophically loaded word
"general intelligence" when talking about requirements for AI
<localroger> MichaelA, your mock scenario is exactly how our own consciousness
evolved. Nobody designed it, yet here we are.
<localroger> Wasp consciousness bootstrapped itself directly into us.
<MichaelA> Through millions of years of complex natural selection operating
over an extremely complex ecology
<localroger> Why couldn't it go further, freed from mechanical limitations
like blood supply and birthing problems?
<posi> there must be things life as a complete entity must be doing to protect
itself from such things anyway?
<NickH> posi: What do you mean?
<MichaelA> Wasp consciousness required a planetary ecosystem to eventually
spin off a bit of DNA that evolved into us, and that's probably more
complexity than we'll have available on research machines to run AIs
<NickH> In addition we can't assume that pathway was in any sense unique.
<posi> well, our brains, wouldn't it want to protect our brains from
ourselves.. ???
<MichaelA> Hm, can you elaborate a bit, Roger?
<NickH> posi: Like, so we can't murder?
<localroger> MichaelA, you miss the point. Evolution reuses. I mention wasps
because of a Stephen Jay Gould essay wherein the wasps acted exactly as
humans would -- though at a much lower level of "resolution." We are not
fundamentally different from animals, and this proven design will probably
also inform the first AI's. We should learn from it.
<posi> Nick: perhaps so we can't use AI to murder... things that are dangerous
to life
<localroger> While we work faster than evolution, even if we work a thousand
times faster four billion years still leaves evolution way ahead of us.
<posi> like what if we used a super computer to create some form of life based
real time mental loop that looped outa control, or somethin, affecting
things mentally we couldn't control.

<MichaelA> Roger: I really doubt that anyone in AI, or cognitive science, or
computational neuroscience, will ever be inspired by animals to the degree
that you seem to suggest, although I do suggest abstaining from eating
animals or animal byproducts that result in animal suffering if you do
indeed believe animals are conscious :)
<NickH> posi: Things like that are generally immoral. I agree it's necessary
to create the AI so it can't be abused towards evil ends.
<NickH> To give it at least the ability we have to resist and avoid that.
<posi> true
<localroger> MichaelA, I think the person who learns from animals will be
first across the finish line.
<NickH> Have you read Creating Friendly AI?
<NickH> You might find it interesting.
<posi> but this friendly ai, is that biological AI.. sorry i'm not up with the
play
<posi> I printed half of it out just now.
<NickH> What do you mean by "biological"?
<NickH> Great :)
<MichaelA> Fully intelligent AIs don't need to be biological, if that's what
you're suggesting
<posi> well, there's digital AI isn't there? a computer.. and then biological,
a replication of a form of brain
<posi> so its digital stuff we're talking about?
*** BJK has joined #immortal
*** ChanServ sets mode: +o BJK
<localroger> My worry, which I keep coming back to, is that somebody will
"win" the AI race, and therefore potentially the Singularity race. Animal
AI will win because it's a proven design. We are, after all, animals
ourselves and we are proven. The CFAI model will still be theoretical when
this happens.
<NickH> AI'll probably use digital hardware not neurons
<posi> man, this stuffs exciting. I want to sink my teeth into something
<MichaelA> Roger, maybe, although I don't see how animal models, except for
mammals, are too relevant to making decisions about crafting working memory,
multimodal symbol integration, planning, mental time travel, complex
associative memory networks, layers of organization, etc.
<localroger> Neurons can be emulated.
<NickH> On the morally relevant level, that's not too big a difference
*** BJKlein has quit IRC (Ping timeout)
<ChrisRovner> "Singularity race" is an awful way to look at it. But sadly is
what we're currently facing
<NickH> localroger: True, to varying accuracies. I doubt that route will be
used either - not for the entire mind.
<localroger> MichaelA, nothing our brains do is all that complex. It's all
repetition and information storage. If you were a savage trying to
reverse-engineer a car, would you learn more from a Porsche 911 or a go-kart?
<posi> like animal intelligence, the 10 dolphins that died off the Solomon
Islands are directly relative in a worldwide continuum to the Russian sailors
that died/lost in the submarine?
<MichaelA> It takes massively more computing power to emulate animal neurons
than to reverse-engineer the core algorithm of a cluster of tens of
thousands of neurons
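MichaelA's claim is quantitative, so a back-of-envelope comparison may help; every constant below is an illustrative assumption invented for the sketch, not a figure from the chat or from neuroscience.

# Back-of-envelope sketch of the emulation-vs-abstraction gap. Every
# constant is an illustrative assumption, not a measured or cited value.
NEURONS       = 10_000   # a cluster of tens of thousands of neurons
SYNAPSES_EACH = 1_000    # assumed synapses per neuron
FLOPS_PER_SYN = 100      # assumed cost to emulate one synapse per timestep
STEPS_PER_SEC = 1_000    # assumed 1 ms integration timestep

emulation = NEURONS * SYNAPSES_EACH * FLOPS_PER_SYN * STEPS_PER_SEC
# assumed cost if the cluster's core algorithm were known and coded directly:
abstracted = NEURONS * 10 * STEPS_PER_SEC

print(f"emulation:  {emulation:.1e} FLOP/s")
print(f"abstracted: {abstracted:.1e} FLOP/s  ({emulation / abstracted:,.0f}x cheaper)")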
<haploid> localroger: Right on. De Garis, or someone of his school of
thought, will win.
<posi> we must use this AI to create AI for our brains, not the machine..
we're racing against the machine, we must come to this realization.
<patrickm> localroger: I hear you clearly. All the tools of intelligence are
found in animal brains, from anemonae to humans, often in clear steps.
They're deeply coded, but probably crackable.
<haploid> There's really no question in my mind about that.

<NickH> localroger: I think that's one fundamental difference here. You
believe human morality to be simple, and mostly analogous to animal
'moralities' ?
<MichaelA> Roger, I think that the complex functions underlying general
intelligence probably don't exist in animals, and this assumption forms the
basis of huge chunks of cognitive science
<localroger> That's my point. And this intelligence, this very old algorithm,
may not be hackable for friendliness --except by being good parents and
crossing our fingers.
<localroger> MichaelA, you need to spend more time with animals. No offense,
but really.
<Mind> Michael...I can see how an animal model could work...we are already
using evolutionary genetic algorithms to design circuits, make animations
walk, and wooden birds fly...I don't see why an animal model couldn't evolve
quite rapidly into a higher intelligence
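For readers unfamiliar with the technique Mind mentions, here is a minimal genetic algorithm on a toy bit-counting objective. It shows the selection/crossover/mutation loop in miniature; it is not any of the circuit- or animation-evolving systems alluded to, and all parameters are arbitrary.

# Minimal genetic algorithm on a toy problem (maximize the count of 1-bits).
import random

def fitness(genome):
    return sum(genome)  # toy objective: number of 1-bits

def evolve(pop_size=30, length=40, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with small probability (True == 1 in Python)
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)} out of 40")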
<patrickm> localroger: and strapping a tac-nuke to it, of course.
<MichaelA> Suggesting that parenting would have a greater influence than
direct engineering is anthropomorphism, in my opinion.
<NickH> MichaelA: We're not just looking at general intelligence, I can see
that as being formally specified. It's humane/Friendly general intelligence
we should be focusing more on.
<localroger> patrickm: Eliezer pretty much picked apart the "strap a nuke to
it" solution.
<MichaelA> Nick: understood
<BJK> QUICK POLL: Is Unfriendly AI the #1 risk to life now?
<Mind> No
<ChrisRovner> Yes
<localroger> It's a horse race. The best may not be the fastest.
<MichaelA> Probably, almost certainly
<posi> focus on humane, friendly geneeral intelligence, yup.
<John_Ventureville> nope, a new deadly plague virus would be
<localroger> Not for 50 years.
<patrickm> localroger: certainly. But I like to play it safe. I plan to live
forever, you know.
<BJK> yes
<Mind> I am on the deadly plague bandwagon
<MichaelA> Heh
<NickH> Mind: Those things don't tend to scale very well. Ensuring
Friendliness in such a context, even completely understanding it, is
difficult.
<NickH> BJK: Yes.
<patrickm> BJK: no. I would rate nuclear escalation and space collision much
higher.
<Mind> Nick...you mean scaling up an animal model
<Mind> ??
<localroger> My bigger worry is climate collapse followed by anarchy and the
collapse of the technosphere.
<MichaelA> I'm not sure that the brainware for altruism exists in animals;
creating altruism from scratch would mean breaking from everything we'd
known before, and working only on cognitive principles, without help or
inspiration from much except for the human philosophy of altruism.
<MichaelA> Don't you think climate collapse might take a while?
<localroger> Altruism seems to exist in all mammals, in varying degrees.
There is a lot of evidence for this.
<posi> so what are we looking at or working with the most when we talk about
creating friendly AI? is it the formulas or the theory? or what?
<NickH> Mind: I meant scaling evolutionary genetic algorithms, although I
think that applies to simply scaling up animal models too.
<Mind> right
<NickH> posi: Formulas?
<Mind> ok
<patrickm> MichaelA: evidence suggests that under some circumstances it can
happen in three years or less.
<posi> well, is the digital AI not using formulas?
<Mind> climate change??
<Mind> three years or less?1!!!
<Mind> no way
<MichaelA> The kind of altruism we'd want in an AI would be complex moral
reasoning and philosophy; complex enough that we can say "yep, this thing
can probably bootstrap itself to superintelligence and still be safe". I'm
not sure I would feel safe about a mole becoming the first
superintelligence, if that were possible.
<localroger> There is evidence the Gulf Stream could be close to shutting
down. That would be catastrophic. In less than a decade.

<Mind> Only catastrophic to Europe
<MichaelA> Hmmm, why now rather than 50 years ago or 50 years in the future?
<localroger> a few hundred million refugees is catastrophic to everybody --
especially if they're tech centers.
<NickH> posi: Like x^2 - e^2 = Friendly kind of formulae? :)
<John_Ventureville> Does anyone here foresee a "singularity arms race" where
the U.S. and Europe compete against China and Russia to see who gets there
first to put their "stamp" on a singularity-bringing A.I.?
* localroger tries to go back on topic
<Mind> There has never been a total freeze of the earth or a burnout because
of climate. It could be rough for a while but life will easily survive
<haploid> No.
<posi> sure nick. :) what are you thinking of?
<MichaelA> John, very unlikely; my current projection of AI difficulty says
that it could be achieved by a team of researchers at the least, an
organized corporation at the very most
<localroger> Mind, the Earth has frozen over twice in its past, oceans frozen
to a depth of a mile. Recovery only came after a hundred million years
(each time) of volcanism raising CO2 levels.
<NickH> posi: I'm thinking of something more... complex and flexible than
mathematics. Far more complex.
<NickH> The kind of structure gone into in great detail in CFAI.
<MichaelA> The good thing about climate collapse is that it doesn't threaten
the survival of humanity so much as UFAI does
<NickH> Well... not too great, but it's a start.
<posi> ok tell me what you're thinking of please.. ahh, I need to read that
<Mind> John_V...I think we are already in a singularity arms race...the
war-mongers just are not intelligent enough to see the future and a
potential singularity
<posi> i'm just creating something to pass the Turing Test, and a bit more..
<John_Ventureville> MichaelA: I don't see why a nation like the U.S. or China
would be hampered by assembling their own "home team" of cloistered black
box scientists who with billions to spend go to work for however long it
takes.
<localroger> posi, I seem to recall writing a novel about someone who did that.
<NickH> Mathematics is only one class of human thoughts. It's particularly good
at some things, less so at others.
<patrickm> I already created something that passes the Turing test, but she
often refuses to listen to me.
<MichaelA> John, I don't think they would be hampered; I just don't think
they'll realize the possibility of general AI long before it gets invented
by someone else.
<NickH> For Friendly AI you need to use more general abstract reasoning, e.g.
the kind used in Cognitive Science and Evolutionary Psychology.
<posi> ahh, i'm out of me league I think .. hehe
<posi> well i've critical massed Pi
<posi> no-ones done that before.
<John_Ventureville> MichaelA: I bet there are secret gov't projects the world
over to see if this can be done. And if not now, ten years from now there
will be.
<haploid> John seems to think that politicians and public servants are anything
other than technophobic socialites who would never be capable of grasping
the concept or possibility of Singularity.
<MichaelA> It's a possibility, John; do you think governments might be smart
and create AIs that distribute the benefits of the Singularity equally?
<NickH> posi: To get into the league, I'd recommend CFAI (and other
www.singinst.org materials), learning more about the human brain
(evolutionary psychology, cognitive science), to start :)

<NickH> posi: Right, this is completely new territory.
<John_Ventureville> haploid: the U.S. MILITARY is proving to be adept at
seeing the applications of future tech.
<NickH> It has ties to other areas of human knowledge, but it has whole new
vistas of problems in itself.
<John_Ventureville> ARPA will be looking into this.
<posi> i've sent my mother to university to get a BA in early childhood
psychology.. I couldn't figure doing it any other way without harming my
brain in a real time constant.
<localroger> John, that was in my novel too. It was an arms race and the good
(?) guys won (?). If the bad guys had won, there wouldn't have been much of
a story afterward.
<MichaelA> As Bostrom mentions in http://www.nickbostr.../ethics/ai.html,
"[I]f the benefits that the superintelligence could bestow are enormously vast,
then it may be less important to haggle over the detailed distribution
pattern and more important to seek to ensure that everybody gets at least
some significant share, since on this supposition, even a tiny share would
be enough to guarantee a very long and very good life."

<NickH> posi: You want to learn more about how human minds evolved. If
anything we're in the place of evolution with respect to AIs, not parents or
other conspecifics.
<posi> perhaps the pyramids as a computer have something to do with this, the
bible etc.
<haploid> John_Ventureville: Correct you are. And it is even possible that
US military automation projects contribute to the singularity; but that
contribution is likely to be accidental; your original statement suggested
that the governments would *intentionally* spark singularity efforts, which
is impossible given their mindsets.
<localroger> MichaelA: Unless, of course, the new superintelligence perceives
us the way we perceive cockroaches.
<NickH> posi: How?
<MichaelA> Haploid, maybe not in 10 or so years
<John_Ventureville> localroger: what is the name of your book?
<MichaelA> Especially as the Singularity concept gains more credibility
<MichaelA> Keep in mind how fast the gov jumped on to the nano bandwagon
<localroger> John: The Metamorphosis of Prime Intellect
http://www.kuro5hin.org/prime-intellect
<MichaelA> The concept of the Singularity is today where nanotechnology was 10
or 20 years ago
<posi> well the idea is in 2004 the pyramids no longer affect mankind as a
relative notion.
<John_Ventureville> thank you
<Mind> Governments would never try to spark a singularity because then the
situation would be out of their control...governments are always about
control and nothing else
<NickH> posi: What do you mean?
<posi> meaning they still exist but the universe or ourselves have reached the
stage where our brains can compute its not relative.. that simple mental
computation could do a lot to your ideas perhaps
<MichaelA> They couldn't reason that a Singularity would happen anyway,
whether they liked it or not?
<posi> what if the giant computer we're all talking about, is already here,
we're just reaching the stage to understand it?
<NickH> posi: Why would we think that?
<posi> well the idea would be that you don't have a choice to think it.. the
north star aligns with the great pyramid at the end of this year.
<Mind> Posi...we certainly have augmented our intelligence with computers and
the internet but it has not reached the stage where it is something separate
from us....
<John_Ventureville> Governments would certainly by their very nature have the
arrogance to think they could control the A.I. and the singularity to follow.
<atg> My own book, Orange Sky, suggests that the gubbernment will jump on the
AI bandwagon part way, they'll make a complex software system, call it an
AI, but actually they will retain complete control...
<Mind> good point John_V
<posi> the idea is we're biologically setup already to be like this.. wouldn't
we have to know we're protected from such things to advance this stuff, ??
wouldn't it essentially be the evolution of this stuff to know its not out
there?
<posi> this singularity is our brain as a superbrain?
<NickH> posi: I'm really not following here :)
<haploid> posi, I don't think this discussion needs to be polluted with
crackpot mysticism about pyramids and stars aligning, etc.
<localroger> posi is spewing New Age stuff I only allow myself to believe on
Tuesdays and Thursdays.
<NickH> It sounds like the correct answer is "No."

<Mermaid> heh
<posi> ok i'll be quiet.. but still, its a suggestion to think about..
<haploid> No it isn't.
<localroger> Topic, please.
<MichaelA> The channel agrees on something :D

<atg> The future of the singularity action group and why its members hang out
here rather than in their own channel...

<localroger> I hang out here for a simple reason, atg. I was invited.
<MichaelA> *sigh*, I'm going to go get some coffee and be back in a little bit;
it seems like we've produced a lot of text in the 50 minutes we did talk
about Friendly AI, anyway
<localroger> Not a total wash, eh?
<MichaelA> It was actually close to a record time of staying on-topic!
<BJK> heh
<localroger> Cool! I haven't done many #immortal chats.
<localroger> Have to admit it's *cough* interesting.
<MichaelA> Yay! :D
<John_Ventureville> this is one of our better chats
<MichaelA> It's only as interesting as the people that constitute it ;D
<posi> ahh, I kinda feel it was me that put a damper on the conversation. many
thousands of apologies
<posi> oops on the pyramids.. :/
<BJK> posi, is this your first time here?
<John_Ventureville> posi, do you listen regularly to the George Noory show?
<John_Ventureville> ; )
<MichaelA> We can't talk about one subject for too long without sidetracking
anyway
<posi> yes BJK. I haven't chatted in months, sorry.
<localroger> posi, I think you need to get a feel for the etiquette of chat.
Took me a few to catch on myself, and I haven't been doing it long.
<posi> i'm in NZ john.
<John_Ventureville> oh, ok
<posi> its my NZ ego, too much oxygen.
<posi> that and the natives here are cannibals, I have a tendency to see things
in comparison to the quality of food.
<localroger> Posi, you are in NZ as in New Zealand? It must be what about
midnight there?
<posi> ahh, 12:53pm local.

<localroger> It's 8:00 PM here. At 12:53 I will be unconscious, guaranteed.
<haploid> That has to be the worst adsl line service I've ever heard of, BJK.
* atg is considering fleeing the US to some place like NZ.
<localroger> Yeah, I have bellsouth and it's usually very smooth.
* BJKlein nods
<John_Ventureville> atg: why flee from the U.S. to New Zealand?
<BJKlein> me climbs the telephone pole

<atg> I live very close to DC.
<localroger> BJKlein's slash is not showing.
<atg> nuff said.

<BJKlein> ehh, actually missed that slash
<John_Ventureville> people are amazed I live in such a rural area and yet have
a great cable net connection
<atg> Actually, I am very afraid of the direction my country is going these
days...
<localroger> atg, I'm with you there.
<haploid> What direction is that ?
<localroger> Fascism.

<patrickm> atg, localroger: I got out two years ago. It's nice out here. But
you get tired of it.
<atg> Lets look at how my country is dealing with North Korea.
<atg> All N.K. wants is a non-aggression pact.
<atg> We refused.
<atg> That is insane.
<BJKlein> welcome patrickm
<MichaelA> Welcome DV8, most of the AI chat has wound down now, fyi
<BJKlein> thanks for your help with TTLC
<John_Ventureville> their little dictator is a glory hound who has starved
millions to death
<DV8ionOfMachine> Thanks Michael.
<patrickm> bjk: glad to help, thank you. It's just a start, of course.
<haploid> Right, NK is a bastion of honesty, too.
<Mind> The direction has always been towards fascism...or whatever you want to
call it...it is not a new phenomenon....ever since day one the government of
the U.S. has grown larger and larger...it does not matter which party is in
power....you let the government grow and eventually you will get fascism
<BJKlein> http://www.imminst.o...s=&act=SF&f=118
<localroger> MichaelA, good show BTW you had a lot to deal with.
<BJKlein> TTLC for those interested link above

<MichaelA> Thanks, Roger
* MichaelA waves to the people reading this log several years in the future
* localroger waves to both of them too
* patrickm waves to himself 50,000 years from now...
* Nader notes that Creating Friendly AI is a difficult topic to discuss in a
casual chat.
<haploid> If I were dealing with a heavily-armed person with a history of
dishonesty and murder, I would refuse an offer of "non-aggression pact" as
well.
<MichaelA> Nader, absolutely, to the max :p
<John_Ventureville> John waves at the A.I. scanning this log!

<localroger> John, that would be loglady. She's definitely A but not very I
at this point.

<John_Ventureville> lol
<John_Ventureville> give her time....

<localroger> Yeah, I know, loglady will be whipping our asses at Go.
<kevin> hello Mr Caliban..
<caliban> Hello Mr Kevin, hello room
* BJKlein waves to caliban :)

<haploid> ok chat is done, I'm out.
*** haploid has quit IRC (Quit: haploid has no reason)
<Guest8888930> autsch
<BJKlein> that must be a german word..

<kevin> autsch.. ouch

<caliban> thats better, thanx for the translation Mr Kevin
<kevin> only phonetically cal.. have no idea if that's correct..
<caliban> it was most accurate
<Utnapishtim> guns make a different sound in germany. 'Peng' as opposed to
'bang'
<kevin> :)
<caliban> astonishing no?
<kevin> must be the heavier air there..
<patrickm> ah, finally a way to settle the Alsace-Lorraine dispute. listen to
gunfire.
<patrickm> wait, I think they've tried that before.
<kevin> g'nite all..
<DV8ionOfMachine> Take care everyone. I am new, so I am going to look around
and get a feel for this place.
* caliban nods solemnly

<caliban> BJ - re the byelaws...
* BJKlein ears perk up
<caliban> I have compared a few models
<BJKlein> ahh have you seen our draft?
* caliban nods
<BJKlein> there's one worry i have at the outset.. that's being taken over by
a larger org.
<caliban> well actually... which draft?
<BJKlein> this almost happened to Alcor
<BJKlein> http://www.imminst.o...onstitution.php
<BJKlein> Aug 29
<caliban> heh- no
<caliban> changes in a nutshell?

<MichaelA> heh, which org tried to take over Alcor?
<BJK> sorry.. i probably missed something..
<posi> oh shit I know this what you're talking about, your talking about the
Omega Point
<BJK> not sure MA
<posi> I read a book on that recently.

<MichaelA> poor Bruce :(
<BJK> i'm here...
<John_Ventureville> BJK: Or are you talking about one of Alcor's "civil wars?"
<MichaelA> ah, phew
<BJK> John_Ventureville, i remember reading some of the old logs..
<MichaelA> I pray we never have a civil war. *hugs Bruce*
<posi> the Omega Point must include the critical mass notion of an unborn
child's brain
<BJK> and as Alcor holds board elections based on paid members... the larger
org could have ousted the current directors

<MichaelA> Posi, sorry, but I don't think anyone is really interested in
talking about the Omega Point now; perhaps you'd like to create a post on
imminst.org
<caliban> -- some changes it seems -- I'll look over that new version tonite
and tell you tomorrow- is that ok?
* patrickm puts posi on ignore. nothing personal... just have to hang onto the
neurons I still have.
<MichaelA> Welcome to #immortal btw Patrick
<MichaelA> Recently discover us?
<patrickm> thank you michaela. yes.

<BJK> just remember Crocker
<caliban> Whew Michael! That could have been me saying that! *slaps his
shoulder*
<BJK> we'll be fine
<MichaelA> Yeah, Crocker r0xers
<caliban> BJ: -- some changes it seems -- I'll look over that new version
tonite and tell you tomorrow- is that ok?
<BJK> caliban, did you get the constitution draft link (Aug 29)?
<BJK> ah, thanks much
<MichaelA> Are you an immortalist yourself, Patrick?
<BJK> that'd be swell
<patrickm> michaela: yes, I plan to live forever, I will need at least that
long at the rate I get things done.
<MichaelA> Heh
<MichaelA> Any preferred method you're interested in?
<caliban> BJ: swell? *timidly*
<patrickm> michaela: the kind where I completely avoid dying.
<MichaelA> Right, not living forever through your work

<patrickm> michaela: indeed! though I wouldn't mind getting both, I think.
<BJKlein> i know i'm probably missing much
<BJKlein> patrickm has recently submitted his bio to sit on the TTLC board
<caliban> BJ: swell? *timidly*
<BJKlein> last: <MichaelA> Are you an immortalist yourself, Patrick?
<patrickm> patrickm> michaela: yes, I plan to live forever, I will need at
least that long at the rate I get things done.
<patrickm> <MichaelA> Heh
<patrickm> <MichaelA> Any preferred method you're interested in?
<patrickm> <caliban> BJ: swell? *timidly*
<patrickm> <patrickm> michaela: the kind where I completely avoid dying.
<patrickm> <MichaelA> Right, not living forever through your work

<patrickm> <patrickm> michaela: indeed! though I wouldn't mind getting both, I
think.

* BJKlein howls indignation at the digital gods
<John_Ventureville> speaking of immortalism....
<John_Ventureville> I mailed out last week about one-hundred samples of
Physical Immortality magazine to various distributors.
<patrickm> bjk: what's worse, you have no excuse. I'm coming in from the third
world!

* BJKlein considers moving to the third world
<MichaelA> Ah, third world? Which country?
<patrickm> Costa Rica, land of plenty.
<John_Ventureville> not such a bad place
<BJKlein> costa rica.. my wife grew up in Honduras
<patrickm> Actually, I plan to move back to the states soon. Can't get a
decent salary around here.

<caliban> BJ; the other thing... about the fair use exemption...
<patrickm> BJ: then even if I hadn't seen pictures I would have guessed she's
a beauty. good pick.
<BJKlein> she picked me
<patrickm> bjk: lucky you!
<John_Ventureville> how did she go about that?
<patrickm> John V: I wish you much success with PI magazine.
<BJKlein> patrick, you probably know all about latin women
<patrickm> bjk: not as much as I would care to, I think.
<MichaelA> Costa Rica looks so gorgeous
<MichaelA> I'm a big fan of rainforest-y environments
<John_Ventureville> patrickm: thank you
<patrickm> Michaela: come visit! But do it before I move and I'll put you up.
<caliban> BJ; the other thing... about the fair use exemption... i have
prepared a little something... but what is the current take of the directors
on that?

<caliban> BJ; the other thing... about the fair use exemption... i have
prepared a little something... but what is the current take of the directors
on that?
* caliban feels a bit repetitive tonite
<BJKlein> caliban let me bump you up so you can read the latest
* MichaelA reads about Patrick on the Threats to Life forum
<MichaelA> Heh, thanks for the invite, Patrick, but I'm too sucked into the
quick pace of lifestyle in the San Francisco Bay Area to be visiting
anywhere until next summer at the earliest :)
<caliban> bump me then
<BJKlein> k
<BJKlein> bumped
<caliban> same topic?

<patrickm> michaela: I suppose I understand that.
<BJKlein> yes
<BJKlein> http://www.imminst.o...13&t=1529&st=24
* patrickm doesn't follow the whole bumping thing, but it's probably not
important.
<John_Ventureville> I don't think I've seen tonight's Futurama episode.
<caliban> oh noo... a Laz rant topic is devouring me mercilessly
<patrickm> he's a good ranter.

<BJK> Patrick.. heh just level of ImmInst membership...
<BJK> as you are a Full Member, you're bumped now as well
<caliban> oh noo... a Laz rant topic is devouring me mercilessly
<BJK> krist.. me visits loglady
<patrickm> ah, right. Thanks BJK.

* caliban shakes patrick's hand as a fellow recent bumpee

<BJK> lol caliban.. caliban has been lazzerated
<Utnapishtim> those posts give me indigestion
<MichaelA> I personally think Laz has improved somewhat in the clarity of his
posts in the last year or two
* BJK agrees..
<BJK> and caliban is an excellent in-person speaker
<BJK> sorry Laz
* caliban declares openly and to loglady's face that he now draws BJ into a
dark corner to whisper to him in private
<Utnapishtim> I do still object to his penchant for attaching a marginally
related newspaper article to the thread at the first available opportunity
<patrickm> he seems to enjoy tangents, as well.
<MichaelA> heh

<Utnapishtim> brevity is definitely NOT a virtue in Mr Long's book

<caliban> hence the name
<Utnapishtim> LOL
<patrickm> utn: true, but it's possible to go the other way.
<patrickm> lol

* caliban wanders to the shadows of the other incarnation
<Utnapishtim> patrick: Provide sparse or insufficient information?
<Utnapishtim> Distort points through simplification?
<patrickm> utn: exactly so. I have a strong urge towards brevity myself.
<Utnapishtim> Was it Einstein who said that things should be made as simple as
possible but no more?
<patrickm> supposedly, yes. i've never seen a source.
<caliban> brevity rules
<Utnapishtim> I too am a minimalist when it comes to communication
<patrickm> it's a really bad habit, actually. i actively try to avoid causing
problems by being over-brief.
<Utnapishtim> elegant but concise. Some prefer a more ornate gilded form of
communication

<caliban> quoi?

<Utnapishtim> what do people make of the recent spate of anti aging articles
<Utnapishtim> in mainstream news sources
<Utnapishtim> I have set up a thread listing all those I am aware of
which have been published in August
<patrickm> baby boomers. only have ten years left to really get a handle on
Alzheimer's or we'll soon be covered in drool.
<BJK> more to come.. baby boomers will start to turn 60 in three years
<BJK> 2006
<caliban> Utnap reads American papers, you must know, gang

<Utnapishtim> the US has the most memetic leverage on the rest of the planet.
Therefore it is the most important battleground in my opinion
<patrickm> the baby boomers and their billions may actually spur the kind of
investment that antiaging technology really needs.
<patrickm> utnap: true. however, they are starting to slip in biotech. check
out china.
<Utnapishtim> Those baby boomers... I think we find that death is appropriate,
natural and gives 'meaning' only while it's their parents who are dying
* patrickm chuckles
* patrickm needs a better job so he can afford more nootropics so he can get a
better job so...
<BJK> lol
<Utnapishtim> are nootropics really powerful enough to substantially boost
someone's career prospects?
<BJK> have you seen an improvement in mental abilities?
* caliban checked China and last time he looked there was little red biotech
around
<patrickm> they REALLY work for me.
<patrickm> caliban: they have no problem with stem cells, fetus testing,
cloning, etc. in China.
<Utnapishtim> patrick: I'm curious. Have you done any tests to see just how
effective? eg before and after IQ tests
<patrickm> China has completely banned "reproductive cloning" but that's not
what we care about for now.
<John_Ventureville> Max More told me piracetam helped him to pass his doctoral
oral exam.
<patrickm> utna: i'm a bit curious myself. unfortunately, online tests are not
scaled for my intelligence level (not to brag) and are not very useful.
<caliban> patrick: oh yes they do have these problems... and not only
technically
<patrickm> utna: i'll take a proctored IQ test first chance I get and let you
know how it goes, though.
<Utnapishtim> what are the main differences you have noticed?
<patrickm> utna: well, several. for once i feel as though i am part of my
body, not just living in it. i find i have no trouble tackling difficult,
even long term problems, which were major motivational issues for me before.
<John_Ventureville> what substances were you taking?

<patrickm> utna: more specifically, i find that i can recall nearly anything i
know, more rapidly, and I also have a much better use of my vocabulary; word
choice is fluid.
<patrickm> john v: before nootropics?
<Utnapishtim> what did you take?
<FLIPPER> ** tweek, tweek **
<Utnapishtim> just piracetam?
<John_Ventureville> what nootropics do you take to reach this enhanced state?
<patrickm> utna: piracetam alone has that effect. after i noticed it, i
started taking some weak noos as well.
<serenade> patrickm: did you try choline with piracetam?
<Utnapishtim> Caliban: I'm curious.. How much do you know about the ready
availability of nootropics in England. This isn't an area I know very much
about
<Utnapishtim> Does Piracetam require a prescription?
<patrickm> serenade: it's tough to find around here; i take large doses of
lecithin, pantothenic acid, and PS to fill that in as best i can.
*** FLIPPER is now known as caliban
* patrickm throws caliban a fish.
*** BJK is now known as Capt_Ahab
<caliban> Internet Utnap
<serenade> patrickm: oh i see. look for DMAE too
<Utnapishtim> BJK: They call me Utnapishtim....
<caliban> they write you a prescription
<patrickm> sere: thanks, I have. not locally available, as far as I can tell.
have to order it, and have precious little money.
<serenade> Utnapishtim: no prescription needed
*** Nader has left #immortal
*** caliban is now known as Guest8888931
<Utnapishtim> serenade. You live in europe
<Utnapishtim> ?
<serenade> usa
<Guest8888931> autsch
<Capt_Ahab> Ahoooy!
<John_Ventureville> howdy Captain
*** Guest8888931 is now known as FLIPPER
<Capt_Ahab> Hark them dolphins at the stern!~
<FLIPPER> tell them about your vision BJ!
<Utnapishtim> how do I change my name?
<Capt_Ahab> yes.. mateys
<patrickm> serenade: nootropics work out well for you?
<Capt_Ahab> type '/nick newname'
*** patrickm is now known as whitewhale
<serenade> patrickm: yes
<serenade> Utnapishtim: try http://www.qhi.co.uk
<John_Ventureville> BJ: tell me about your vision...
*** Utnapishtim is now known as MobyDick
<Capt_Ahab> as i was talking with susan.. i had a vision.. actually while we
were swimming...
* MobyDick eyes Captain Ahab suspiciously
<Capt_Ahab> that i see anti-aging as a big ship.. and immortalists as the
dolphins
<Mermaid> BJ is now captain ahab?
<whitewhale> serenade: I wish I'd known about them when I was 17 or 18. I'd be
a physician now.
* Capt_Ahab Ah, lovely Mermaid!
<serenade> whitewhale: i'm glad they worked out well for you
*** whitewhale is now known as patrick
<Mermaid> and there is a whitewhale
*** MobyDick is now known as Admiralnelson
<Mermaid> patrick, the whitewhale
<Admiralnelson> I'm in charge now!
<FLIPPER> ** tweek, tweek *
<Mermaid> oh dear ...marine theme tonight
<Mermaid> give me sailors anyday over dolphins and whales..:p
* Capt_Ahab *grumble* damn the torpedoes
*** John_Ventureville is now known as FirstOfficer
<patrick> serenade: fortunately I have all the time in the world.
<Capt_Ahab> ah, does the metaphor ring true...
<FirstOfficer> ?
* FLIPPER wants nootropic fish

<Admiralnelson> fish need nootropics pretty bad
<Capt_Ahab> are immortalists at the leading edge.. are we breaking ground on a
brighter tomorrow? are we dolphins?
<patrick> flipper: all fish have DMAE. that's something.
* patrick scarfs down some sardines.
<FirstOfficer> some dolphins run into trouble trying to share the message of
dolphin immortality...
<FirstOfficer> Over the last two weeks I have experienced frustration in my
attempts at communicating with the mainstream media regarding the
immortalist movement. On behalf of the Society for Venturism I tried to get
interviews going relating to cryonics but I had only one success with the
Scottsdale Tribune. I suppose one success after about three hundred
attempts is not so bad. lol
* FLIPPER eats DMAE -- its something
<patrick> i'm very grateful to be part of the first generation that may live
indefinitely. i'm going to need a seriously large amount of time with my
wife.

#3 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 September 2003 - 02:04 PM

<BJKlein> k
<BJKlein> bumped
<caliban> same topic?

<patrickm> michaela: I suppose I understand that.
<BJKlein> yes
<BJKlein> http://www.imminst.o...13&t=1529&st=24
* patrickm doesn't follow the whole bumping thing, but it's probably not
important.
<John_Ventureville> I don't think I've seen tonight's Futurama episode.
<caliban> oh noo... a Laz rant topic is devouring me mercilessly
<patrickm> he's a good ranter.

<BJK> Patrick.. heh just level of ImmInst membership...
<BJK> as you are a Full Member, you're bumped now as well
<caliban> oh noo... a Laz rant topic is devouring me mercilessly
<BJK> krist.. me visits loglady
<patrickm> ah, right. Thanks BJK.

* caliban shakes patrick's hand as a fellow recent bumpee

<BJK> lol caliban.. caliban has been lazzerated
<Utnapishtim> those posts give me indigestion
<MichaelA> I personally think Laz has improved somewhat in the clarity of his
posts in the last year or two
* BJK agrees..
<BJK> and caliban is an excellent in-person speaker
<BJK> sorry Laz
* caliban declares openly and to loglady's face that he now draws BJ into a
dark corner to whisper to him in private


And like any good AI or Intelligence Analyst, I read as much as possible to obtain the data from which to draw conclusions. I will ignore your intent to demean my perspective on this or any other issue, Caliban, by attacking my person; it was not only invalid but simply smacks of arcane "Old World" courtesan charm.

However, it is a valid representation of what enough people do believe that it deserves to be treated fairly and addressed squarely; and though neither an opinion of my person, nor any philosophical idea, is made valid by popularity or unpopularity, popularity does affect the memetic power of a person and/or ideas.

Well, I guess I should say I will "try" to ignore your personal barbs. :))

But you are entitled to your opinion; and it was said in a chat, not a formal response, so I value its alacrity even if it is somewhat incorrect and says more about your agenda than my positions. Copyright is one of the more important coming battlegrounds and it is one that should be approached with a clear intent to seize the highest possible ground and then commit to the defense of it.

I am interested in the best possible path to our stated goal, not a path which simply serves our separate self interests before the stated common purpose.

I am only going to mention this in passing here, however, because it is not truly germane to the whole of this chat; we should continue this in its appropriate thread, and I encourage you to please participate openly, Caliban, and not only in secretive conspiratorial chats.

I would welcome the return of the Caliban to his rightful responsibilities as a Director, and ask that he stop this unnecessary grandstanding of declaring his renunciation in "black and red" as if he has found the devil beneath the skirts of his intended paramour. Get over it.

What is really going on in your head? You are here discussing and observing the discussion of the Singularity, the very subject over which you in fact unjustly attacked Bruce for having a personal opinion as relevant to our common quest?

So I rant eh?

Well coming from you this is practically a compliment.

Utna what is it you find indigestible?

<Utnapishtim> I do still object to his penchant for attaching a marginally
related newspaper article to the thread at the first available opportunity
<patrickm> he seems to enjoy tangents, as well.
<MichaelA> heh

<Utnapishtim> brevity is definitely NOT a virtue in Mr Long's book

<caliban> hence the name
<Utnapishtim> LOL
<patrickm> utn: true, but it's possible to go the other way.
<patrickm> lol


I do like this joke even if I am the brunt of it, more so because it is true on many levels.

For example, Jerome, I disagree about the issue of "marginal" and suggest instead that some willfully fail to understand the relationships at times, either by prejudice or intent. I will spend more time underlining by highlighting the relevant passages. It is still arguable that the whole article is needed for archival purposes and kept in our own records as an independent objective reference, which helps to keep the original sources more honest in a world where data can be all too easily rewritten in order to change history.

Michael, after reading this entire chat I am not sure that I have become clearer over the last couple of years; though I sincerely thank you for the compliment (and veiled insult, though I do hope I am demonstrating such growth), in fact I find that very little I have previously said concerning the Singularity was not also discussed here tonight, and repeatedly. Is it not also possible that when one sees too far ahead it takes time for others to catch up and have a meaningful discourse?

An advantage to having written our opinions is that we can go back and review what was said and re-evaluate them based on subsequent information and comprehension.

Have you folks reached an analytical impasse for the moment on developing Friendly AI, such that there is nothing new philosophically to anticipate the next stage until a technological breakthrough first determines the next set of parameters for evaluation?

If so, what will that tech advance be? And where do you expect it?

I guess I should be pleased by how much you folks all sound like I did, complaining about my teachers way back when.

Rant, is it? {*Stomps feet a la Caliban and fumes a bit around the ears*}

Well I'll get over it [lol]

And I hope y'all do too :))

Issues don't go away because we don't like what people say or the way they say it. I will, however, try to adopt a more insidious, politically effective tone if it would work. I would do just about anything that I thought "WOULD WORK."

For the moment I am intent on getting people to pay attention on a number of levels; that is at least a start, if only a start, but at least I see that it was worth getting labeled for, even if "only a bit" unfairly. I also want to ensure that opposing positions are presented clearly, accurately, and in their entirety, to contribute to the validity of the court of public opinion and not simply pander to the ignorance of popular mob psychology.

It goes with the territory: if you have read Heinlein, his character Lazarus also "rants" a bit, which is why it is possible to have a notebook of his ravings.
http://www.bobgod.co...azaruslong.html

So, in seeing myself reflected in your eyes as a "ranter," I should acknowledge the truth of it; but please do not idly dismiss the message I try to convey because of a pejorative label for my communicative style.

Perhaps what is needed is to balance my rants with a bit more raving, but then I would only be easier to dismiss. On the contrary, I am doing my utmost neither to rant nor to rave; the subjects we address, by virtue of their magnitude, contain a sufficient inherent extremism to warrant a more moderate rhetorical style. If I am being perceived as too vehement, I sincerely apologize for expressing myself so forcefully as to distract you folks from my intended message, but I do not apologize for being so seriously concerned about the issues that we face in common.

#4 patrick

  • Guest
  • 37 posts
  • 0

Posted 01 September 2003 - 02:55 PM

Lazarus,

I hope you will forgive us - me especially - for our lighthearted attitudes. I am certain that none of us meant any disrespect for your thoughtful contributions.

Patrick

#5 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 September 2003 - 03:08 PM

No offense taken actually. I am thoughtful enough to appreciate laughing at myself at times even when the barb pricks a bit as it tickles.

With respect to Caliban, I think his contribution to this effort is too valuable to ignore, and his differences with me are too important either to simply make light of or to gloss over. I have not personally addressed his post on resigning until now because I felt too much was left unsaid in it, even as he said so much.

The above response was an attempt to raise multiple serious, overlapping issues, because it is better to address them forthrightly than to skip over them.

The humor was no more inappropriate than the pun on my name. It was in fact very funny, in the manner of irony that I for one enjoy. No offense was taken, and less so if none was intended, Patrick.

I don't mind if one laughs at an idea, for at times the seriousness of ideas warrants a bit of levity to aid in digesting them; but I fear those who dismiss important issues through ridicule as a means of seducing public opinion.

I too make light of issues at times, and it is valid to remind me to take a dose of my own medicine from time to time. I value our mutual effort at keeping ourselves honest, as I suspect it will make the eons pass more enjoyably.

#6 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 02 September 2003 - 02:38 AM

Laz,

Thanks for exemplifying the better angels of human nature. Your forgiveness represents a pinnacle of graciousness. Even though you slough this off, I must express my regret for overlooking the implications of posting the above chat. Not least, my regret and apologies go to the chat participants, as I believe the post-official-chat environment engendered the bantering atmosphere. Such an atmosphere would give the impression that anything said would not be posted. I am sorry I failed to make this explicit: material from both during and after official chats will be posted.

If ImmInst were a car, we’d have a broken headlight. I’ll work on repairs and install a bumper guard.

#7 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 02 September 2003 - 04:28 AM

If ImmInst were a car, we’d have a broken headlight. I’ll work on repairs and install a bumper guard.


If this was meant to make me feel better, it didn't work, for now I am licking my wounds like a deer made into a hood ornament. Well, at least I am not "road kill". [":)]

Actually, while the commentary was glib and not meant to be personally directed, I am glad to have read it: first, it was germane to an issue that, regardless of which side one takes, I think we can all agree is important; and second, a little honest feedback never hurt anyone. I am nothing if not controversial at times, so the reactions of those who spoke freely are only fair. [huh]

The implication is that I can learn from this and so can others; the point is not to hide our feelings in secret. I am an advocate of secret conferences on issues affecting the direction of this group for reasons of responsible leadership, not conspiracy.

It is necessary at times to examine ideas, people, and issues within a smaller group before presenting decisions and options to the larger body politic for review and a vote. Such presentations should be well thought out and clearly made. Private sessions are appropriate for that purpose, but I am not a fan of whispered, clandestine deal-making, and this is a line we must be careful to establish.

I think the discussion was legitimate and important to larger concerns affecting our organization. While I appreciate your concern for my feelings, I nevertheless prefer open dialog that is there to learn from over gossip and conspiratorial chats, which are the bulwark of political intrigue and a cancer on society.

Your concern for my feelings is noted and appreciated, but our mutual concern for the seriousness of the underlying issues is more important than my personal feelings.

#8 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 02 September 2003 - 05:57 AM

Bravo!

#9 Utnapishtim

  • Guest
  • 219 posts
  • 1

Posted 02 September 2003 - 10:07 AM

Laz.

I just wanted to say that I definitely value your contributions to ImmInst. There was nothing in my comments that I would not have been forthright enough to say had you been present. Nevertheless, the way my comments and others' were stated gives the impression of backbiting, and for that I sincerely apologise. You have reacted graciously, and as a true gentleman, as always!

Regarding Caliban and his private comments to BJ, I really don't think those were directed at you. They followed a private conversation between Caliban and me on an entirely different issue, and while I am not a mind reader, I would imagine that his words to BJ relate to this.

#10 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 02 September 2003 - 10:50 AM

The private conversation was pure business actually... pertaining to copyright.

#11 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 03 September 2003 - 12:22 AM

And copyright is an issue that I am obviously seriously concerned about and involved in. I didn't miss that point at all.

Jerome, I have no hard feelings, and I do believe that all of you, including Caliban, would have been forthright enough to address me personally; but when I encountered the subject upon review, I decided not to let it pass, because I do think the subject of copyright is very important.

It also overlapped into other areas that deserve attention (including my writing style ;)) ), and the irony for me was being made the example. However, I suspect this goes with the territory; better a thick-skinned old goat like myself than someone too impressionable and sensitive to survive such social slings and arrows. Thank you for your kind words regardless, Jerome.

BTW, too many words in a message is like too much salt on a meal, but too few words means not enough spice for a lasting and savory flavor, one to be appreciated and not simply sloughed off like superficial sound bites, the fast food of the mind.



