
Politics of Uploading, Simulations & Singularities



#1 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 11 February 2004 - 06:25 PM


Concerning uploading, and assuming that the overall complexity of the brain can be duplicated on non-biological and presumably more compact and faster substrates:

"Will we save ourselves, or will we even be allowed to?"

This is, I think, the most important question we can ask about uploading. First of all, will we be allowed to upload? And if so, will we control the entirety of our upload, or will it be under the control of a human agency, an AI, or both? And if it is under the control of another agency, will they process a perfect copy, or will they modify “us” for their purposes rather than ours? Will our copy actually be a bastard offspring, totally reconfigured and programmed to do their bidding?

Finally, if the answer to all of these questions is no, and we are instead given complete control over our own upload, then the simple fact is that our upload would do our bidding, because it would be us. This may differ for some people, but I strongly suspect anyone willing to upload themselves would also have the strong goal of wanting their uploaded selves to figure out a way to upload their human original too, so they can experience the upload paradise as well and not have to live out the rest of their lives trapped within biological limits. In either case, it would seem the compassionate thing to do. So assuming this scenario is the most likely, it would be wise to have enough compassion for yourself BEFORE getting uploaded.

This ties in nicely with the Utopia or Oblivion concept from Buckminster Fuller’s book of the same title, an idea that presupposes that any entities that even survive a singularity are all compassionate and loving; otherwise they never would have made it to the singularity in the first place. Of course at this point people really start to worry that, if that’s true, then humanity with all its hatred and violence is doomed. This could happen, if indeed we are living in the base reality of real biology rather than in a simulation, which is astronomically more likely.

Interesting speculations, which of course I have returned to often since I proposed the Sans-Ceiling Hypothesis on the extropian list about 6 years ago. Nick Bostrom has written a paper arguing that we are most probably living in a simulation. And it's my guess that, if that's true, the chances are the entities running it are compassionate, and wouldn't simulate a conscious being with deep desires for immortality or an afterlife unless they planned on delivering. :-)
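For anyone who hasn't read the paper, the core of Bostrom's argument is a single fraction; here is a minimal sketch of it in code. The values plugged in at the end are purely illustrative numbers of my own, not anything from the paper:

```python
def fraction_simulated(f_p, f_i, n_bar):
    """Bostrom's fraction of observers with human-type experiences who are simulated.

    f_p   -- fraction of civilizations that ever reach a posthuman stage
    f_i   -- fraction of posthuman civilizations interested in running ancestor-simulations
    n_bar -- average number of ancestor-simulations run by those interested civilizations
    """
    return (f_p * f_i * n_bar) / (f_p * f_i * n_bar + 1)

# Illustrative values only: even small f_p and f_i swamp the single unsimulated
# history once n_bar gets large.
print(fraction_simulated(0.01, 0.01, 1e6))  # ~0.99
```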

But the question still remains about the continuity of consciousness if we screw up. Do they re-boot the whole simulation, or do they allow us to continue as we are? My guess is they will allow us to continue by not allowing us to blow ourselves up. If we blow ourselves up, the whole thing is wasted, and they/we have to start over again. By allowing us to continue with only the minimal amount of intervention (the minimum necessary variable tweaking), they eventually get new beings equal to themselves, but who evolved under very different circumstances.

Why would they do this, besides just being compassionate? Probably because they’re lonely, and they need someone to talk to. They look at us as a novelty, and can’t wait for our own singularity birth to occur. We are their mind children. And they, in a funny way, are ours. In a very real sense they are ourselves in the future giving birth to us in their future.

As far as I know, Eliezer Yudkowsky disagrees with this conclusion (i.e. Nick Bostrom's), saying that if we were in a simulation then he should be able to ask for a banana and have the simulation materialize one for him. Since it doesn't, either we are not in a simulation or our simulators lack benevolence; and since beings of such magnitude would in all probability be benevolent, we are not in a simulation. I think this conclusion is premature because of our own limited idea of what benevolence is. Personally I have grown and matured substantially from my negative experiences, and more precisely from healing them. So I cannot assume that my current suffering indicates a lack of benevolence on the part of the simulators.

Read this and other articles by Paul Hughes (planetp) at http://planetp.cc/

Edited by planetp, 02 March 2004 - 07:58 AM.


#2 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 11 February 2004 - 07:36 PM

Paul, when and how did you get into extropianism/transhumanism?

And did Nick change his name from Szabo to Bostrom? If so, when and why? Thanks.

#3 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 12 February 2004 - 02:39 AM

Hi Paul! Comments:

If uploads end up being radically reconfigured versions of the original persons, then it is arguable that the upload and the original human being are actually different people, for all practical purposes.

I'm not sure why the idea of an upload uploading our human selves is presented; wouldn't we expect a normal Moravec-type transfer where the original is disassembled as the pattern makes its way over onto the superior (performing) substrate? Otherwise you're just *copying* the being, creating two different people with the same memories, and each with their own rights. Also, if our uploaded selves chose to upload their human "originals", then wouldn't that mean that the original would still exist, since you imply the original still exists after the first upload? I am slightly confused.

Next: why should we presuppose that such a filter would leave only compassionate people? Under your scenario, wouldn't malevolent uploads still be possible? Also, couldn't benevolent uploads accidentally self-modify into indifferent or malevolent states in the absence of sufficient knowledge or safeguards? Also, what about the possibility of creating a more compassionate society by upping the incentives to be good and simply disallowing harmful actions using an elegant and safe detect-and-response system? Is the only alternative to upload benevolent people only? I'm poking around in the dark a little bit here, but hey.

And it's my guess, that if that's true the chances are the entities running it are compassionate, and wouldn't simulate a conscious being with deep desires for immortality or an afterlife unless it planned on delivering. :-)


To me this sounds like a traditional optimistic bias (which is present in all humans to some degree, of course). Why on Earth would the high likelihood that we are running in a simulation imply that the simulators are even paying much attention to us, or that they have tweaked the initial conditions in such a way that the eventual emergence of sentient beings desiring longer lives would ultimately lead to those wishes being fulfilled? (That would take quite a lot of computing power (simulating which Big Bangs lead to successful immortalists), but they would have needed to do it unless they possess the capability to modify variables *aside from* the initial conditions; and this does not currently seem to be the case. Psychology emerges from biology, which emerges from geology, which emerges from stellar dynamics, which emerges from gravity; there is no evidence that simulator controllers intervened at any point in this process to change the fundamental rules. See also John Barrow's paper: http://www.simulatio.../barrowsim.pdf)

If we blow ourselves up, the whole thing is wasted, and they/we have to start over again.


One intelligent race finds it easy to simulate worlds. It simulates billions just for fun, but simulated so many that it can't pay attention to or manage them all. It doesn't care when a few destroy themselves; that's tough luck. Another intelligent race finds it difficult to simulate worlds. It can only simulate a few dozen, although it does pay them close attention.

If the chance that we are in a simulation is really quite high, then which type of the above intelligent races is it likely that we are being simulated by?

As for the last point, I agree with Eliezer on this one. The "growth" you experience from horrible and trying experiences is merely a change in brain chemistry and organization; these changes could be enacted merely by reshuffling the neurons directly. To me, it seems more probable that this world is an ignored simulation rather than an intently watched one (benevolent watchers or otherwise), because:

1) There is no evidence that physical laws have been messed with at any point in our past.
2) It seems anthropomorphic to imagine simulators behaving in the same way that humans would probably behave if we had the capability to simulate worlds right now.
3) Our fundamental physical laws seem suspiciously simple; it seems that there is a decent chance that we live in a universe similar to the simplest-of-all-possible-worlds-that-can-contain-observers, simulation or otherwise, rather than a special, more complex simulation being watched over by simulators. Simple structures tend to be more prevalent, regardless of what level you are looking at, and I think that this probably applies to universes as well. (See also Max Tegmark's paper on the subject.)

Well, there are my ideas! Some of them could be blatantly wrong, of course; my experience with anthropics only counts as dabbling. Very interesting essay; it's great to see people tossing these ideas around, even though our experience in these areas as a species is very low.

Oh wait, one more thing. Why use the word "politics" to describe the dealings of nonhumans? "Politics" carries the strong implication of *human* dealings: bribes, power plays, social hierarchies, representatives, the Establishment vs. the Uprisers, social unrest, gritted teeth, majority votes, and the entire twisted network of safeguards designed to cancel out the inherent selfish aspects of Homo sapiens. Since we would be able to transcend most (or all) of this nonsense after a successful Singularity, why is the word "politics" appropriate as a title? I just think that it encourages people to anthropomorphize posthumans, and I suggest we do anything we can to avoid that.


#4 PaulH

  • Topic Starter
  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 14 February 2004 - 06:19 AM

Hi Michael,

You wrote:
[quote]If uploads end up being radically reconfigured versions of the original persons, then it is arguable that the upload and the original human being are actually different people, for all practical purposes.[/quote]

Yes, that’s correct, never meant to imply otherwise.

[quote]I'm not sure why the idea of an upload uploading our human selves is presented; wouldn't we expect a normal Moravec-type transfer where the original is disassembled as the pattern makes its way over onto the superior (performing) substrate?[/quote]

That is the type of initial upload I’m talking about, and that is precisely the point I was making. Your human self in this scenario is left behind, unless the new uploaded self finds another way to get you into post-human status without copying, but by upgrading using a more advanced methodology (e.g. nanobot brain re-engineering).

[quote]Otherwise you're just *copying* the being, creating two different people with the same memories, and each with their own rights. Also, if our uploaded selves chose to upload their human "originals", then wouldn't that mean that the original would still exist, since you imply the original still exists after the first upload? I am slightly confused.[/quote]

You seem to be operating under the assumption that uploads can only occur by copying, rather than by some more advanced methodology, such as upgrading existing hardware via nano re-engineering the brain, as Kurzweil describes.

[quote]Why should we presuppose that such a filter would leave only compassionate people? Under your scenario, wouldn't malevolent uploads still be possible?[/quote]

Technically speaking, this is true. However, what I’m referring to are the large macro-economic and political forces that will be in play in the years leading up to upload capability. My point, which is backed up by a lot of historical thought, is that technologies capable of creating an upload are also the same technologies enabling complete “genie out of the bottle” annihilation. Ask yourself this – what kind of society would have to exist where only benign nanotech is in play? Will it be a totalitarian solution, or some kind of universal transparent society as David Brin is advocating? In either case, I have yet to hear a convincing argument that such technologies could exist for long without some kind of global benignity being pervasive, whether forced, coaxed, or freely chosen. The only alternative is species annihilation or SI totalitarianism (and since I think SI arrives only after this technology, that scenario is moot, despite Yudkowsky’s claim to the contrary).

I disagree that SIs can be created using simple software tricks as Minsky and Yudkowsky suggest. I will state here unequivocally that SIs will not exist until we are able to match the complexity of the human brain via mapping and equivalent molecular complexity (streamlined or not), as Kurzweil suggests. That means that grey-goo like nanotech will be around before we can engineer the first SI. I know Eli is working very hard and fast to prove this wrong, and I wish him luck, but I am not optimistic that he will succeed. If I understand Eli correctly, he agrees with Marvin Minsky's sentiment that once we figure it out, we will be able to run an SI on an Intel 286!! I think Minsky is a genius, but this statement is absurd. The only way I can see this being possible is if this same 286 has a storage capacity in excess of 10^40 bits (or something extreme like that), and it runs a universe/artificial-life simulation for thousands of years. And assuming this happened, this SI would be running thousands of times slower than we are right now.
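To make the arithmetic behind that last claim explicit, here is a rough back-of-envelope sketch. Every figure in it is an order-of-magnitude assumption on my part (roughly 10^6 instructions per second for a 286, and a Kurzweil/Moravec-style estimate of around 10^16 operations per second for the brain); if anything, the ratios come out even more extreme than “thousands of times slower”:

```python
# Back-of-envelope only; every figure is an order-of-magnitude assumption.
ops_per_sec_286 = 1e6          # assumed instruction throughput of an Intel 286
ops_per_sec_brain = 1e16       # Kurzweil/Moravec-style estimate for the human brain
ram_bits_286 = 16 * 2**20 * 8  # 16 MB maximum addressable memory, in bits

slowdown = ops_per_sec_brain / ops_per_sec_286
storage_gap = 1e40 / ram_bits_286

print(f"slowdown versus real time: ~{slowdown:.0e}x")               # ~1e10
print(f"storage shortfall versus 10^40 bits: ~{storage_gap:.0e}x")  # ~7e31
```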

[quote]Also, couldn't benevolent uploads accidentally self-modify into indifferent or malevolent states in the absence of sufficient knowledge or safeguards?[/quote]

That is a definite risk.

[quote]Also, what about the possibility of creating a more compassionate society by upping the incentives to be good and simply disallowing harmful actions using an elegant and safe detect-and-response system?[/quote]

I think it's necessary... utopia or oblivion. There is no third way.

[quote]Is the only alternative to upload benevolent people only?[/quote]

That’s a good question, and my speculation is that, assuming we even get to upload capability, we won't have a "malevolent people" problem by the time we reach this stage of technology. Otherwise we will have destroyed ourselves before we got there. How is this possible? My guess is we are going to see radical improvement in mental health because of a much deeper and more thorough understanding of brain chemistry over the next couple of decades. I expect to see more improvement in mental health over the next 20 years than in all of human history combined. I highly recommend everyone read David Pearce’s The Hedonistic Imperative for a good introduction and future roadmap on how this is possible, practical, and, as I am arguing, necessary.

[quote]Why on Earth would the high likelihood that we are running in a simulation imply that the simulators are even paying much attention to us, or that they have tweaked the initial conditions in such a way that the eventual emergence of sentient beings desiring longer lives would ultimately lead to those wishes being fulfilled?[/quote]

Do you agree or disagree with Nick Szabo's simulation argument? Your answer here would in turn determine the best way to answer your question.

[quote](That would take quite a lot of computing power (simulating which Big Bangs lead to successful immortalists)[/quote]

I completely disagree with this, because they would already have a working model – themselves. So all that wasted computer power would never be expended in the first place.

[quote]but they would have needed to do it unless they possess the capability to modify variables *aside from* the initial conditions; and this does not currently seem to be the case.[/quote]

How would we ever know?

[quote]One intelligent race finds it easy to simulate worlds. It simulates billions just for fun, but simulated so many that it can't pay attention to or manage them all. It doesn't care when a few destroy themselves; that's tough luck. Another intelligent race finds it difficult to simulate worlds. It can only simulate a few dozen, although it does pay them close attention.[/quote]

This doesn’t make sense to me. An intelligence capable of simulating billions would also be able to monitor ALL of them with total ease. Our current model, in which all this computation occurs inside a Pentium without its knowledge, is a very poor analogy for how fully integrated intelligence will work in the future. Do you require that I expound on this?

[quote]If the chance that we are in a simulation is really quite high, then which type of the above intelligent races is it likely that we are being simulated by?[/quote]

By benevolent ones, no question.


[quote]Our fundamental physical laws seem suspiciously simple; it seems that there is a decent chance that we live in a universe similar to the simplest-of-all-possible-worlds-that-can-contain-observers, simulation or otherwise, rather than a special, more complex simulation being watched over by simulators. Simple structures tend to be more prevalent, regardless of what level you are looking at, and I think that this probably applies to universes as well. (See also Max Tegmark's paper on the subject.)[/quote]

I think our universe's simplicity is not at odds with a highly streamlined artificial life/emergent complexity simulation. So you do disagree with Nick's and Moravec's simulation argument, then?

[quote]Oh wait, one more thing. Why use the word "politics" to describe the dealings of nonhumans?[/quote]

Because I’m not talking about post-humans, but human politics leading up to uploads and the singularity. After that, all bets are off of course. :-)

Edited by planetp, 14 February 2004 - 07:12 AM.


#5 PaulH

  • Topic Starter
  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 14 February 2004 - 06:50 AM

Bruce wrote:

Paul, when and how did you get into extropianism/transhumanism?


I don't think there was any one point when I "became" a transhumanist; rather, it was a gradual process. I can honestly say I already had transhumanist leanings as early as the age of four, when I watched the Apollo Moon landing live on TV. By the time I was 10, I was already identifying with all the immortals in SF. When I was 13 (1978), I read an absolutely brilliant piece on immortality by Robert Anton Wilson in Future Magazine. So there was no question that after reading that article, I called myself an immortalist, and I have every day since.

And did Nick change his name from Szabo to Bostrom? If so, when and why? Thanks.


Good question! I have no idea.

Edited by planetp, 30 May 2004 - 07:36 PM.


#6 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 14 February 2004 - 08:13 AM

Paul,

Thanks for your lengthy response!


[quote][quote]If uploads end up being radically reconfigured versions of the original persons .. for all practical purposes.[/quote]

Yes, that’s correct, never meant to imply otherwise.[/quote]

Whoops, I just noticed the (appropriate) quotation marks in this statement; "will they process a perfect copy, or will they modify “us” for their purposes rather than ours". The identity of the original human being would terminate at that point, as we have agreed.

[quote]That is the type of initial upload I’m talking about, and that is precisely the point I was making. Your human self in this scenario is left behind, unless the new uploaded self finds another way to get you into post-human status without copying, but upgrading using a more advanced methodology (i.e. nanobot brain re-engineering).[/quote]

Why is there a human self that is left behind at all? Wouldn't uploadees practically universally desire that their biological neurons be deleted as they are reinstantiated as cybernetic neuron-equivalents? In a Moravec transfer, there is no "human self left behind"; the subject moves from meatspace into cyberspace in one fluid movement. And if there are cautious folks trying to avoid destructive uploading, then I would figure that they would take the incremental cognitive enhancement route rather than uploading, right?

[quote][quote]Otherwise you're just *copying* the being, creating two different people with the same memories, and each with their own rights. Also, if our uploaded selves chose to upload their human "originals", then wouldn't that mean that the original would still exist, since you imply the original still exists after the first upload? I am slightly confused.[/quote]

You seem to be operating under the assumption that uploads can only occur by copying, rather than by some more advanced methodology, such as upgrading existing hardware via nano re-engineering the brain, as Kurzweil describes.[/quote]

My original quibble here was "why would there be a human original left at all?" Why not just a fluid movement? Yes, incremental enhancement is definitely a possibility.

[quote]Technically speaking, this is true. However, what I’m referring to are the large macro-economic and political forces that will be in play in the years leading up to upload capability. My point, which is backed up by a lot of historical thought, is that technologies capable of creating an upload are also the same technologies enabling complete “genie out of the bottle” annihilation.[/quote]

Certainly; but the technologies allowing the creation of uploads would *also* allow for "benevolence enhancement", breaking humanity's upper bound on kindness, and the intelligence enhancement to effectively implement genuinely benevolent goals. Whether society continues to become more benevolent and safe in the world of uploads, or rapidly falls into a destructive attractor, may very well depend upon the first being to kickstart the avalanche.

[quote]Ask yourself this – what kind of society would have to exist where only benign nanotech is in play?[/quote]

A society composed of kinder-than-human intelligence as well as human intelligence, in which the society has the technological capability to detect and respond to potential disasters before they happen. (This would probably require near-ubiquitous intelligences operating on nanosecond timescales.) Over time, I would expect the society to settle into a happy equilibrium as far as potential disasters are concerned, just as the vast majority of our internal homeostatic mechanisms operate normally and silently, preserving the basic foundation and form of the human organism. Solely-human societies are just not stable in the long run. (On this I figure we agree.)

[quote]Will it be a totalitarian solution, or some kind of universal transparent society as David Brin is advocating? In either case, I have yet to hear a convincing argument that such technologies could exist for long without some kind of global benignity being pervasive, whether forced, coaxed, or freely chosen.[/quote]

I feel that we need to turn our attention to the structure of the minds underlying the civilizations, rather than to political systems representing special cases of human political organization, whether theoretical as in Brin's scenario or historical as in the case of totalitarianism. (This may just be a difference in our semantics.) For society to survive with advanced technology, it must be composed of greater portions of minds with *hardware-level dispositions* toward acting rationally (in the Bayesian sense), cooperatively, compassionately, and so on. For arbitrary levels of technological advancement, Homo sapiens is bound to break down eventually; on this I think we agree. Some combination of the options you suggest is likely to take place.

[quote]I disagree that SIs can be created using simple software tricks as Minsky and Yudkowsky suggest. I will state here unequivocally that SIs will not exist until we are able to match the complexity of the human brain via mapping and equivalent molecular complexity (streamlined or not), as Kurzweil suggests.[/quote]

You seem to be saying that nothing less than a direct hit on the precise neurological pattern corresponding to a certain type of Earth-dwelling, protein-based, evolved, predator-descended hominid will be sufficient to create a living example of general intelligence. In some ways, this seems to me like an alien civilization discovering a functioning PC and saying "nothing less than an atomically precise match will be sufficient to replicate this machine". Evolution didn't know what it was doing; it probably messed up a lot in (accidentally) creating a particular special case of general intelligence. As Nick Bostrom says,

"The number of clock cycles that neuroscientists can expend simulating the processes of a single neuron knows of no limits, but that is because their aim is to model the detailed chemical and electrodynamic processes in the nerve cell rather than to just do the minimal amount of computation necessary to replicate those features of its response function which are relevant for the total performance of the neural net. It is not known how much of the detail that is contingent and inessential and how much needs to be preserved in order for the simulation to replicate the performance of the whole. It seems like a good bet though, at least to the author, that the nodes could be strongly simplified and replaced with simple standardized elements."

Do you disagree with most of the points made in http://www.nickbostr...telligence.html?

[quote]That means that grey-goo like nanotech will be around before we can engineer the first SI. [/quote]

Hopefully not, but this may be the case. Although you mention "SI" here, any substantially smarter-than-human or kinder-than-human intelligence could substantially decrease the chances of nano-disaster, as would intelligent policymaking by human beings. (Case in point: Center for Responsible Nanotechnology.)

[quote]If I understand Eli correctly, he agrees with Marvin Minsky's sentiment that once we figure it out, we will be able to run an SI on an Intel 286!! I think Minsky is a genius, but this statement is absurd.[/quote]

I have never seen or heard Eliezer say this. Do you have a reference? What has he said that gave you the impression that he holds this view? I expect AGI to be technologically more feasible than you do, but not *that* feasible.

[quote][quote]Also, what about the possibility of creating a more compassionate society by upping the incentives to be good and simply disallowing harmful actions using an elegant and safe detect-and-response system?[/quote]

I think it's necessary... utopia or oblivion. There is no third way.[/quote]

Agreed.

[quote]That’s a good question, and my speculation is that, assuming we even get to upload capability, we won't have a "malevolent people" problem by the time we reach this stage of technology. Otherwise we will have destroyed ourselves before we got there. How is this possible? My guess is we are going to see radical improvement in mental health because of a much deeper and more thorough understanding of brain chemistry over the next couple of decades. I expect to see more improvement in mental health over the next 20 years than in all of human history combined. I highly recommend everyone read David Pearce’s The Hedonistic Imperative for a good introduction and future roadmap on how this is possible, practical, and, as I am arguing, necessary.[/quote]

Will 20 years be enough to eliminate malevolent or self-centered intentions in everyone on Earth? What about mistakes made through ignorance, like someone who enhances her own intelligence, accidentally wires her motivations so that she desires nothing but cupcakes, and proceeds to enhance her own intelligence further, acquire nanotechnology, and turn everyone on Earth into cupcakes? And that's just one example; many other things could go wrong in the absence of outright malevolence.

[quote]Do you agree or disagree with Nick Szabo's simulation argument? Your answer here would in turn determine the best way to answer your question.[/quote]

Agree, of course. :)

[quote]I completely disagree with this, because they would already have a working model – themselves. So all that wasted computer power would never be expended in the first place.[/quote]

The silent assumption I made in my original statement was that simulators only influence their simulated universes by fine tuning physical constants and the shape of the tiny dimples on the original Big Bang Particle. Projecting which sets of physical constants and tiny dimples are likely to give rise to successful immortalists is a task about as large as simulating the universe itself. But I acknowledge that this argument is moot if simulators have intervened at some point after the Big Bang to push the odds towards the creation of successful immortalists.

[quote][quote]but they would have needed to do it unless they possess the capability to modify variables *aside from* the initial conditions; and this does not currently seem to be the case.[/quote]

How would we ever know?[/quote]

Because our current universe appears to follow from the Big Bang. This could simply be a complex illusion, but that would require postulating a little bit of extra information; the simulators are trying to hide from us, but not *that* diligently. (Otherwise we wouldn't even be able to form hypothetical scenarios about them.) Assuming that simulators (if they exist, as it seems they do) have only manipulated physical constants and initial conditions requires postulating no extra information.

[quote][quote]One intelligent race finds it easy to simulate worlds. It simulates billions just for fun, but simulated so many that it can't pay attention to or manage them all. It doesn't care when a few destroy themselves; that's tough luck. Another intelligent race finds it difficult to simulate worlds. It can only simulate a few dozen, although it does pay them close attention.[/quote]

This doesn’t make sense to me. An intelligence capable of simulating billions would also be able to monitor ALL of them with total ease. Our current model, in which all this computation occurs inside a Pentium without its knowledge, is a very poor analogy for how fully integrated intelligence will work in the future. Do you require that I expound on this?[/quote]

No; it seems now my original argument was quite weak to begin with. It seems less convincing in retrospect. To be honest, I'm just a beginner in anthropics. I would probably defer to Nick Bostrom if I had to make some massive decision that depended upon sensitive anthropic information.

Wording comment: instead of saying "how intelligence will work in the future", shouldn't you say "how intelligence is likely to work in the class of all worlds technologically capable of simulating isolated subworlds with sentient inhabitants"?

[quote]By benevolent ones, no question.[/quote]

But hey, where's my banana at?

[quote]I think our universe's simplicity is not at odds with a highly streamlined artificial life/emergent complexity simulation. So you do disagree with Nick's and Moravec's simulation argument, then?[/quote]

It isn't. My current model includes both the Simulation Argument and the Simplest-of-All-Possible-Worlds Argument. Do we agree? Do you think there are others like us? Shall we start a club...? [g:)]

[quote]Because I’m not talking about post-humans, but human politics leading up to uploads and the singularity. After that, all bets are off of course. :-) [/quote]

Gotcha. Just to let you know, this conversation has been a major pleasure!

#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 14 February 2004 - 08:20 AM

When I was 12 (1977), I read an absolutely brilliant piece on immortality by Robert Anton Wilson in Future Magazine. So there was no question that after reading that article, I called myself an immortalist, and have every day since.


Lucky! Must have been sort of lonely being a transhumanist for a while without many people to agree with you... but I have spoken to a few people who went through this before. Becoming an immortalist as a child is great stuff, isn't it?

Good question! I have no idea.


Nick Szabo and Nick Bostrom are different people; googling either of them should lead to hours of interesting reading.

#8 Eliezer

  • Guest
  • 4 posts
  • 0

Posted 15 February 2004 - 04:55 AM

Actually, I said something along the lines of: "When I first read Marvin Minsky's statement that intelligence could probably run on a 386, I thought he was crazy. I owe him an apology." (286 is pushing it a bit, but I'd stand by the apology even so.)

I quite agree that this sounds insane. I would add only one thing; the reason I owe Minsky an apology is that I had the hubris to think I knew what sounded insane before I could do the math. This is one of those cases where, if we knew how to calculate an answer, we'd see how utterly pathetic and futile it was to have an opinion without doing the math. At this instant in time, "intelligence on a 386" sounds quite reasonable to me - but it is only an opinion. I still can't do the math, but, what do you know, what "sounds insane" to me has completely changed. What a surprise, who would have thought it. Perhaps someday I'll be able to do the math, and "intelligence on a 386" will sound insane again. Meanwhile, the moral of the story is that things are only allowed to sound insane if you can actually calculate that they're impossible - intuition doesn't cut it. That's why I owe Minsky an apology; I shouldn't even have tried to guess, or at most, should have guessed that he was "probably wrong", not insane.

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 February 2004 - 05:14 AM

If we're talking about the *minimum theoretical bounds of intelligence*, then a 286 doesn't seem out of the question; the mind can conceivably operate on arbitrarily long timescales, using (almost) arbitrarily inefficient algorithms for thinking. Even if it takes a million years to solve a very simple problem, this would probably meet the definition of "intelligence" as we currently know it. The complexity of *humanly engineerable* intelligence is expected to be far above the hard lower limit for mind complexity. We can only expect to discover the nature of the hard lower limit in retrospect, once millions or quadrillions of new minds have been created and we know all about their fundamental operating principles.

Eliezer, nice to see you around on the ImmInst boards! *waves frantically until falling over*

#10 PaulH

  • Topic Starter
  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 15 February 2004 - 10:21 PM

Why is there a human self that is left behind at all? Wouldn't uploadees practically universally desire that their biological neurons be deleted as they are reinstantiated as cybernetic neuron-equivalents? In a Moravec transfer, there is no "human self left behind"; the subject moves from meatspace into cyberspace in one fluid movement. And if there are cautious folks trying to avoid destructive uploading, then I would figure that they would take the incremental cognitive enhancement route rather than uploading, right?


You make some good points. I think the point I was aiming at was that there might be advantages to having two of you around – one remaining biological and the other uploaded. Advantages would include being able to talk to yourself in both matrices. There is a good chance that both perspectives will be different enough that a positive feedback loop is created, each learning from the other. Perhaps using neurologically embedded nanobots as a computer-brain interface will allow the human original and the uploaded copy to run in synchrony. This could have a synergetic/symbiotic advantage.

There is a good chance that our uploaded selves might go into shock at not having a body. I adamantly disagree with Hans Moravec, who says the senses have no future. I think this is one of his major oversights, and it is more an indication of Moravec’s personality than of rigorous thinking on the subject. Without senses we have no connection to the physical universe. As I am arguing in my book-in-progress, the senses have more of a future ahead of them than ever. These vastly enhanced senses will be completely synaesthetically customizable and will form one very vital milieu of what it means to be conscious. I don’t know about you, but I don’t want to upload unless I will have more than I do now, and that includes sensory experience. Since this will take longer to simulate or interface than mere upload capability, it might behoove us to retain a human connection to the physical universe until we figure it out.

Certainly; but the technologies allowing the creation of uploads would *also* allow for "benevolence enhancement", breaking humanity's upper bound on kindness, and the intelligence enhancement to effectively implement genuinely benevolent goals. Whether society continues to become more benevolent and safe in the world of uploads, or rapidly falls into a destructive attractor, may very well depend upon the first being to kickstart the avalanche.


I think we are in complete agreement here. I certainly share your goals, and perhaps we are simply seeing this from slightly different perspectives. Eli is pinning his hopes on this avalanche starting with a benevolent seed AI. I am not as optimistic that he will succeed in time, so my emphasis has been on upgrading the human animal, and also on demonstrating how a combination of a transparent society, radically improved living standards, mental care, and hedonic engineering will mitigate the forthcoming dangers. I admit this is just as radical, if not more radical, than Eli’s position. My biggest concern with Eli’s strategy is that the technologies capable of destroying the species – namely, malevolent nanotech – will arrive before those necessary for an SI. I’m not at odds with Eli… I think his efforts are commendable. I think the best course we can all take is to work on all fronts simultaneously. I also think Eli is the smartest person I have ever come across, so obviously I have a vested interest in seeing his efforts succeed in maximizing our odds of reaching Apotheosis.

Over time, I would expect the society to settle into a happy equilibrium as far as potential disasters are concerned, just as the vast majority of our internal homeostatic mechanisms operate normally and silently, preserving the basic foundation and form of the human organism. Solely-human societies are just not stable in the long run. (On this I figure we agree.)


Yes, we are in complete agreement here. The first third of my book goes into detail about the kinds of technological and economic forces that can bring us this harmonious equilibrium.

For arbitrary levels of technological advancement, Homo sapiens is bound to break down eventually; on this I think we agree.


Agreed.

You seem to be saying that nothing less than a direct hit on the precise neurological pattern corresponding to a certain type of Earth-dwelling, protein-based, evolved, predator-descended hominid will be sufficient to create a living example of general intelligence.


LOL. :) What I’m saying is that this highly arbitrary form of intelligence (which you just described) also happens to be the best example we have, and is therefore the most likely path by which we will achieve SI.

Do you disagree with most of the points made in http://www.nickbostr...telligence.html?


I disagree only with how easy it will be. I think the first clue that people in this camp vastly underestimate the difficulty is Bostrom's suggestion that human-level AI could happen as early as 2004! So ask yourself, looking at all the technological advancement we've seen in the 6 years since Bostrom published this essay, what would have had to happen to achieve human-level AI this year? The very notion borders on silliness. I find it somewhat embarrassing that I have been repeating this message for so long, against the position of several PhDs, including Bostrom and Moravec, but in my opinion:

The idea that human intelligence can be reduced to just a neural network is false.

Moravec is a bright guy when it comes to computational matters, but his neuroscience seems sloppy to me; otherwise why does he overlook the role of neurotransmitters and physiological factors (sensory experience) in cognition? I think each neuron is WAY more complicated than a simple on/off switch, and every PhD neuroscientist will agree with me on this. Until we can emulate the entire array of neurotransmitter activity, we will not have a human-equivalent upload. This much is obvious, right? Think about what kind of “human” you would have left if you eliminated all the effects of serotonin, epinephrine, norepinephrine, acetylcholine, GABA, glutamate, ATP, and so on. All of these beautiful chemicals are part and parcel of the complexity of human cognition and consciousness, and if anything, our uploaded selves will have even more complexity and richness, greater control and finesse over this “functional soup” than we do now.
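To give a toy illustration of the gap between an on/off switch and even a heavily simplified neuron, here is a minimal leaky integrate-and-fire sketch with a single “neuromodulator gain” knob bolted on. This is nowhere near real neurochemistry, and every parameter value below is made up for illustration, but even this caricature carries continuous state and modulation that a binary switch has no room for:

```python
import numpy as np

def simulate_lif(input_current, gain=1.0, dt=1e-3, tau=0.02,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron with a crude 'neuromodulator gain'
    scaling its input sensitivity. Returns the list of spike times (seconds)."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Voltage decays toward rest and is driven by the (modulated) input current.
        v += (-(v - v_rest) + gain * r_m * i_in) * (dt / tau)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# The same one-second, 2 nA input fires at different rates under different
# "neuromodulator" gains -- a cartoon stand-in for serotonin/dopamine effects.
current = np.full(1000, 2.0e-9)
print(len(simulate_lif(current, gain=1.0)), len(simulate_lif(current, gain=1.5)))
```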

Will 20 years be enough to eliminate malevolent or self-centered intentions in everyone on Earth? What about mistakes made through ignorance, like someone who enhances her own intelligence, accidentally wires her motivations so that she desires nothing but cupcakes, and proceeds to enhance her own intelligence further, acquire nanotechnology, and turn everyone on Earth into cupcakes? And that's just one example; many other things could go wrong in the absence of outright malevolence.


I'm very interested in hearing about your solutions in this area. No doubt these are tough questions I can only guess the answers to. My guess is these types of dangerous motivations will be much better understood and avoided. If nothing else, there might be an incubation or upload quarantine phase (like purgatory?? lol) where uploads roam free in a simulated universe. Only when their motivations are not destructive are they granted greater access to external physical tools. These are good questions… do you know anyone who has tackled them?

Michael – this conversation has been a major pleasure as well. :)

Edited by planetp, 17 February 2004 - 08:11 AM.


#11 7000

  • Guest
  • 172 posts
  • 0

Posted 16 February 2004 - 12:29 AM

Uploading is real and will succeed within the next generation. Humans shall achieve immortality through this concept, because it gives an answer to every question.



