  LongeCity
              Advocacy & Research for Unlimited Lifespans





Simulation argument - unethical?


186 replies to this topic

#121 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 11 October 2007 - 10:39 PM

By living people you mean AIs, right? That's a very important moral issue. Maybe future laws won't allow any simulation, once created, to be shut down, thereby preventing the extermination of all the AIs inside it.

Or maybe the laws will allow (the experience of) death in a simulation as long as the simulated entity is a subset of the simulator's mind. It could be that advanced AIs inject a small part of their own personality into the simulation (without any memory of their true nature) in order to understand the world and possibly their past from the perspective of the limited beings who inhabit the simulated world. We could all be just a subset of a single AI and will get integrated back into the main personality at some point.

Edited by shepard, 23 November 2007 - 05:34 AM.


#122 Athan

  • Guest
  • 156 posts
  • 0

Posted 11 October 2007 - 11:20 PM

My only argument against simulation is that it's cruel and it's MURDER.


How is that an argument against a simulated reality? Assuming you've played video games, you delete characters, turn off the console or computer, or restart from many points in the game without feeling any guilt. But how do you know that those entities are not somehow conscious? Perhaps the runners of the simulation don't know that we believe we truly exist - we're just bits and pieces of programming, after all. It wouldn't be cruel in their eyes because we're not something they can physically sense, and so there is no emotional response to our 'existence'. It's only cruel to us because we do know we're conscious - they don't think it's murder at all.

And even if they did know, do you really think they'd care that they are murdering us?

Edited by shepard, 23 November 2007 - 05:34 AM.



#123 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 October 2007 - 04:10 AM

How is that an argument against a simulated reality? Assuming you've played video games, you delete characters, turn off the console or computer, or restart from many points in the game without feeling any guilt. But how do you know that those entities are not somehow conscious? Perhaps the runners of the simulation don't know that we believe we truly exist - we're just bits and pieces of programming, after all. It wouldn't be cruel in their eyes because we're not something they can physically sense, and so there is no emotional response to our 'existence'. It's only cruel to us because we do know we're conscious - they don't think it's murder at all.



You can't possibly make an analogy between video game characters and AIs inside a simulation. The two are completely different, and there's no doubt that post-singularity beings will be well aware of that.

They know we're conscious: if they built the simulation, they can track our very words right now and read them. They would be able to know every single thought that runs through any individual's brain inside the simulation.



And even if they did know, do you really think they'd care that they are murdering us?


That's another issue. Again, I think there will be laws prohibiting them from destroying AIs created by their simulations. We're conscious beings, and by any standard it wouldn't be right to just kill us after we were created. A good movie about this issue is "The Thirteenth Floor".

Edited by shepard, 23 November 2007 - 05:35 AM.


#124 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 October 2007 - 04:14 AM

Or maybe the laws will allow (the experience of) death in a simulation as long as the simulated entity is a subset of the simulator's mind. It could be that advanced AIs inject a small part of their own personality into the simulation (without any memory of their true nature) in order to understand the world and possibly their past from the perspective of the limited beings who inhabit the simulated world. We could all be just a subset of a single AI and will get integrated back into the main personality at some point.



This is a very interesting idea, similar to the one I proposed earlier. There is a remote possibility that I, or you, basho, are already post-singularity beings, just having some fun in a virtual reality and wanting to see how old-fashioned, 100% biological beings saw the world.

Edited by shepard, 23 November 2007 - 05:35 AM.


#125 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 12 October 2007 - 05:00 AM

I kind of doubt we are living in a simulation. If we assume that the creators of a simulation underwent evolution and then reached and survived a singularity in the "real" universe, then they would hopefully have had a strong system of ethics in place (or at least would have created one immediately after the singularity occurred). If such a system of ethics was not in place, a civilization might not survive in a post-singularity world. We already have a system of ethics in place to some extent: cruelty to animals, for the most part, is a crime in many countries. So for a post-singularity intelligence to run a simulation that would in all probability lead to sentient life, and to all of the terrible things that have happened in our own world, would be very unethical. If an ethics lapse of that magnitude were allowed to occur in a post-singularity world, that doesn't spell good news for us. It would mean that a post-singularity world isn't all it's cracked up to be: suffering would continue to exist on a fairly large scale, assuming that many simulations were run without regard to any ethical consequences. Then again, as a society we already allow a lot of ethical lapses, so maybe a post-singularity world won't be any different from what we have now. That is a somewhat disturbing conclusion, which I hope will turn out to be wrong.

#126 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 12 October 2007 - 06:04 AM

I kind of doubt we are living in a simulation. If we assume that the creators of a simulation underwent evolution and then reached and survived a singularity in the "real" universe, then they would hopefully have had a strong system of ethics in place (or at least would have created one immediately after the singularity occurred). If such a system of ethics was not in place, a civilization might not survive in a post-singularity world. We already have a system of ethics in place to some extent: cruelty to animals, for the most part, is a crime in many countries. So for a post-singularity intelligence to run a simulation that would in all probability lead to sentient life, and to all of the terrible things that have happened in our own world, would be very unethical. If an ethics lapse of that magnitude were allowed to occur in a post-singularity world, that doesn't spell good news for us. It would mean that a post-singularity world isn't all it's cracked up to be: suffering would continue to exist on a fairly large scale, assuming that many simulations were run without regard to any ethical consequences. Then again, as a society we already allow a lot of ethical lapses, so maybe a post-singularity world won't be any different from what we have now. That is a somewhat disturbing conclusion, which I hope will turn out to be wrong.

So the only simulations they would be allowed to run were ones where those being simulated were happy all the time with no hardship? That wouldn't be a very accurate (or interesting) simulation.

Edited by shepard, 23 November 2007 - 05:35 AM.


#127 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 12 October 2007 - 10:15 AM

But why do humans and animals look like they have been shaped by evolution, if evolution never took place in this simulation?

Why do humans and animals look like they have been shaped by evolution in a computer game if evolution never took place in the game?

What computer game are you referring to?

I doubt that "we" could do it, and I'm virtually certain that beings originally residing on another planet or galaxy cannot do it.

All the evidence indicates otherwise. Arguments that the human mind is non-computational and that there is no possibility that it could run on a universal computational medium simply do not hold up very well.

What evidence are you referring to? How many qualia have pieces of software created so far?

http://en.wikipedia.org/wiki/Qualia

Edited by shepard, 23 November 2007 - 05:36 AM.


#128 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 12 October 2007 - 12:05 PM

But why do humans and animals look like they have been shaped by evolution, if evolution never took place in this simulation?

Why do humans and animals look like they have been shaped by evolution in a computer game if evolution never took place in the game?

What computer game are you referring to?

I am, of course, referring to Q*bert.



Edited by shepard, 23 November 2007 - 05:37 AM.


#129 Grail

  • Guest, F@H
  • 252 posts
  • 12
  • Location:Australia

Posted 12 October 2007 - 12:52 PM

I don't think we can even imagine the ways in which such advanced civilisations may think.
Basho, you have a point (lol @ Q*bert). The simulation may be advanced enough to replicate evolution, or it may simply have been created from an already existing model. Either way, if we are in a simulation, we have no way of forming any real concept of reality. It may be something we have not even dreamed of.

#130 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 12 October 2007 - 02:31 PM

So the only simulations they would be allowed to run were ones where those being simulated were happy all the time with no hardship? That wouldn't be a very accurate (or interesting) simulation.

Well, I would hope we could run an accurate simulation without consciousness (maybe something roughly approximating it, or running the simulation backwards). I'm not sure whether that could be done, though. I don't know about the "interesting" part. If a Hitler arises in one of our computer simulations and kills millions of people, is that "interesting"?

#131 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 12 October 2007 - 06:46 PM

Well, I would hope we could run an accurate simulation without consciousness (maybe something roughly approximating it, or running the simulation backwards).

Maybe we are just an approximation of true consciousness.

Edited by shepard, 23 November 2007 - 05:37 AM.


#132 Athan

  • Guest
  • 156 posts
  • 0

Posted 12 October 2007 - 07:25 PM

You can't possibly make an analogy between video game characters and AIs inside a simulation. The two are completely different, and there's no doubt that post-singularity beings will be well aware of that.


How are they so different? You're arguing subjectively, ascribing your point of view and morals to these possible 'beings'. It's really quite simple to make an analogy between video games and artificial simulated intelligence - they're not completely different at all. Both are programmed existences, where the intelligences don't know they're in a simulation. The only major difference is that we can speculate on it - but, that must be within the boundaries of our code. We would simply be programmed entities with no value - except perhaps an entirely scientific one - in the 'real' world. How do you know they would be aware of our consciousness? Unwitting creations litter our past and our present, and we may simply be an unintended - and therefore perhaps unnoticed - side effect.

They know we're conscious: if they built the simulation, they can track our very words right now and read them. They would be able to know every single thought that runs through any individual's brain inside the simulation.


You can't extrapolate the knowledge of consciousness - we don't even know how our macroscopic, physical brains achieve consciousness. Defining consciousness is even a murky matter. Yes, it's quite possible that our future selves could 'unlock' the secrets. And if we're conscious, perhaps an unintended variable put into the simulated universe created our consciousness and they simply don't know that we are conscious.

And how could you possibly read every thought that goes through an individual's mind? Reading a mind would mean feeling all of the senses they experience - surface thoughts, deep thoughts, current feelings and emotions, all layered on top of one another in a jumble that is most likely entirely illegible.



And even if they did know, do you really think they'd care that they are murdering us?

That's another issue. Again, I think there will be laws prohibiting them from destroying AIs created by their simulations. We're conscious beings, and by any standard it wouldn't be right to just kill us after we were created. A good movie about this issue is "The Thirteenth Floor".


If we can so easily kill our physical brethren, we can even more easily kill our nonphysical, simulated, fake brethren. You get your standard from your own morals, not from the morals of human beings in general. We are often very cruel, with little to no regard for others or sometimes even ourselves; a simulation would be easy to destroy without any moral complications. We're just pieces of code.

Edited by shepard, 23 November 2007 - 05:38 AM.


#133 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 18 October 2007 - 12:15 AM

Maybe we are just an approximation of true consciousness.


I guess anything is possible, but we still feel pain even if we are an approximation. I would hope a future civilization wouldn't create an approximation of consciousness that could feel pain.

#134 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 18 October 2007 - 12:19 AM

I guess anything is possible, but we still feel pain even if we are an approximation. I would hope a future civilization wouldn't create an approximation of consciousness that could feel pain.

It wouldn't be very accurate in its interactions with others, would it? If what you are striving for is realism, then you have to actually make it somewhat real, or the interactions are not genuine. If we created a simulation to study, and did it without the simulated beings able to feel pain, the results would be pretty worthless. (Or perhaps they did that in an earlier simulation and it didn't work out well, so they are trying this now - who knows.)

Edited by shepard, 23 November 2007 - 05:38 AM.


#135 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 18 October 2007 - 12:48 AM

It wouldn't be very accurate in its interactions with others, would it? If what you are striving for is realism, then you have to actually make it somewhat real, or the interactions are not genuine. If we created a simulation to study, and did it without the simulated beings able to feel pain, the results would be pretty worthless.


Maybe, knowing psychology, we could model how a simulated person (or animal) would react in a specific situation. So instead of simulating their actual consciousness, we might just be able to simulate how they would respond in a specific situation, modeled from the psychological testing of actual humans. We already know quite a bit about the brain and psychology, so I think it might be possible. I can predict how people (and animals) might act in certain situations, as most humans can do naturally anyway. The pleasure/pain axis is the main component that drives an organism's behavior, so it seems like we could just apply that knowledge to a computer simulation.

I don't think that the AI in video games is conscious. However, the AI in video games can act as if it IS conscious. This is done by making it react in a specific way to the actions of the player. This is all speculation, but I think it would be better than actually simulating "real" pain and suffering. If a person gets a burn, they will react in a specific way (pain avoidance). I think this can be applied to almost any human emotion, and it makes the modeling of actual consciousness superfluous. If the model doesn't fit with the actual "reality", then we can always continue to adjust it until it approximates consciousness fairly well without actually being conscious.

It might be more difficult if you're modeling when consciousness first arose on the Earth, but there may be ways around that. Animal psychology would of course be more difficult, but an animal's consciousness would probably be simpler to model than that of a human. Just a few of my opinions, of course. I definitely wouldn't want to model consciousness, which could lead to suffering, before all other options were explored.

Edited by hrc579, 18 October 2007 - 12:58 AM.


#136 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 18 October 2007 - 02:04 AM

Maybe, knowing psychology, we could model how a simulated person (or animal) would react in a specific situation. So instead of simulating their actual consciousness, we might just be able to simulate how they would respond in a specific situation, modeled from the psychological testing of actual humans. We already know quite a bit about the brain and psychology, so I think it might be possible. I can predict how people (and animals) might act in certain situations, as most humans can do naturally anyway. The pleasure/pain axis is the main component that drives an organism's behavior, so it seems like we could just apply that knowledge to a computer simulation. I don't think that the AI in video games is conscious. However, the AI in video games can act as if it IS conscious. This is done by making it react in a specific way to the actions of the player. This is all speculation, but I think it would be better than actually simulating "real" pain and suffering. If a person gets a burn, they will react in a specific way (pain avoidance). I think this can be applied to almost any human emotion, and it makes the modeling of actual consciousness superfluous. If the model doesn't fit with the actual "reality", then we can always continue to adjust it until it approximates consciousness fairly well without actually being conscious. It might be more difficult if you're modeling when consciousness first arose on the Earth, but there may be ways around that. Animal psychology would of course be more difficult, but an animal's consciousness would probably be simpler to model than that of a human. Just a few of my opinions, of course. I definitely wouldn't want to model consciousness, which could lead to suffering, before all other options were explored.

If you are going to model the "possibility" of something happening, you might as well just model the thing itself happening to see if it lines up with your predictions. If I were building a simulation, I would certainly build the actual simulation to see what would happen, instead of simulating it in individual pieces and trying to hypothesize about what the interactions would be. That seems 1) too roundabout a way to do it, 2) less accurate, and 3) a couple of orders of magnitude more complicated than just simulating the darn thing.

Edited by shepard, 23 November 2007 - 05:39 AM.


#137 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 18 October 2007 - 08:40 AM

If we live in a simulation, our current perception of the possibilities of technology and simulation is worthless. The simulation argument then becomes about as believable as claiming that we're creatures in Brahma's dream (Brahma dreams a great deal, so if we find ourselves to be conscious, we're much more likely to be creatures in Brahma's dream than physical beings). How good an argument does that sound?

#138 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 18 October 2007 - 10:05 AM

If I were building a simulation, I would certainly build the actual simulation to see what would happen, instead of simulating it in individual pieces and trying to hypothesize about what the interactions would be. That seems 1) too roundabout a way to do it, 2) less accurate, and 3) a couple of orders of magnitude more complicated than just simulating the darn thing.

We don't yet know what shortcuts we could use to get a very accurate simulation. Aren't most researchers trying to create an artificial intelligence that is non-conscious? The alternative is creating a simulation where things suffer just like in this world, which seems like it would be cruelty to animals (or cruelty to whatever happened to evolve in the simulation). I don't know - would an ethics committee allow that type of simulation to be run, no matter how valuable it was? I guess it depends on the motivations of our future civilization. I don't know what the ethical consensus among experts is on running a simulation that might lead to conscious beings. I'm not an expert on ethics, so I'm not sure what direction would likely be taken with this situation in the future.

#139 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 18 October 2007 - 04:56 PM

We don't yet know what shortcuts we could use to get a very accurate simulation. Aren't most researchers trying to create an artificial intelligence that is non-conscious? The alternative is creating a simulation where things suffer just like in this world, which seems like it would be cruelty to animals (or cruelty to whatever happened to evolve in the simulation). I don't know - would an ethics committee allow that type of simulation to be run, no matter how valuable it was? I guess it depends on the motivations of our future civilization. I don't know what the ethical consensus among experts is on running a simulation that might lead to conscious beings. I'm not an expert on ethics, so I'm not sure what direction would likely be taken with this situation in the future.


True... and that would be one of the stakes in the heart of the whole thing, if we are unable or unwilling to build such simulations. If we build them, it is exceedingly likely that we live in one, but if we don't, then the chance that we do is probably much smaller. Only time will tell if it is possible/morally allowed/whatever.

Edited by shepard, 23 November 2007 - 05:39 AM.


#140 Futurist1000

  • Guest
  • 438 posts
  • 1
  • Location:U.S.A.

Posted 18 October 2007 - 10:27 PM

True... and that would be one of the stakes in the heart of the whole thing, if we are unable or unwilling to build such simulations. If we build them, it is exceedingly likely that we live in one, but if we don't, then the chance that we do is probably much smaller. Only time will tell if it is possible/morally allowed/whatever.

I think that if we exploit animals for our own benefit (for food, drug testing, etc.), then it seems possible that in the future humans may also run computer simulations (with simulated consciousness). It will probably only be done, though, if running that type of simulation happens to serve our own interests. So maybe my hope that we won't run these simulations is wishful thinking.

#141 dimasok

  • Guest
  • 193 posts
  • 6

Posted 27 October 2007 - 07:59 PM

I have a question. Since it appears that those running our simulation (or any simulation, however advanced) are "in it for the show" and expect great entertainment from it, what are the merits of running our particular simulation?

I mean, the only true entertainment in the sense of violence and gore (which someone here proposed earlier might be among the factors behind the incessant suffering of humans and animals since the beginning of the world) that one could get from our simulation would be movies, anime, video games, books, comics, or one's imagination - but certainly not our daily lives. All our histories combined, with their immeasurable death toll, could not possibly equal, in terms of impact, what we see in "escapist" venues, which would be a much better medium in which to emulate life. Isn't the world depicted in bloody Japanese anime (which are awesome, by the way, IMHO) like Claymore leaps and bounds better to simulate than our simulation? It makes no sense for this world to be here if we are indeed being simulated - it's far too primitive, considering the infinite potential in the hands of the simulators.

Honestly, imagining myself as someone with this much power (and since we're confident we can manufacture "friendly AI" in the future that would guide our evolution from there on, I don't think it's a stretch to assume that my motives are not much different from those of the simulation folks), I would take the boldest imaginative endeavours achieved on this plane of existence and simply make them manifest, as existences as real as the one we're experiencing now. That's truly the only purpose this world could serve, if any... but is it enough? Was it worth creating all this chaos for the dubious joy of depicting ostentatious violence (in anime) that could simply be enacted "for real" instead?

Sorry for the messy post, but I don't know how to say it better. The gist of my message is that, considering the infinite possibilities open to the simulators, the end result is depressingly bleak and unworthy of their attention. I would expect much better results from beings that are god-like to us, just as they might be human-like to other beings even higher than them.

Edited by dimasok, 27 October 2007 - 08:10 PM.


#142 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 27 October 2007 - 09:06 PM

I have a question. Since it appears that those running our simulation (or any simulation, however advanced) are "in it for the show" and expect great entertainment from it, what are the merits of running our particular simulation?

I mean, the only true entertainment in the sense of violence and gore (which someone here proposed earlier might be among the factors behind the incessant suffering of humans and animals since the beginning of the world) that one could get from our simulation would be movies, anime, video games, books, comics, or one's imagination - but certainly not our daily lives. All our histories combined, with their immeasurable death toll, could not possibly equal, in terms of impact, what we see in "escapist" venues, which would be a much better medium in which to emulate life. Isn't the world depicted in bloody Japanese anime (which are awesome, by the way, IMHO) like Claymore leaps and bounds better to simulate than our simulation? It makes no sense for this world to be here if we are indeed being simulated - it's far too primitive, considering the infinite potential in the hands of the simulators.

Honestly, imagining myself as someone with this much power (and since we're confident we can manufacture "friendly AI" in the future that would guide our evolution from there on, I don't think it's a stretch to assume that my motives are not much different from those of the simulation folks), I would take the boldest imaginative endeavours achieved on this plane of existence and simply make them manifest, as existences as real as the one we're experiencing now. That's truly the only purpose this world could serve, if any... but is it enough? Was it worth creating all this chaos for the dubious joy of depicting ostentatious violence (in anime) that could simply be enacted "for real" instead?

Sorry for the messy post, but I don't know how to say it better. The gist of my message is that, considering the infinite possibilities open to the simulators, the end result is depressingly bleak and unworthy of their attention. I would expect much better results from beings that are god-like to us, just as they might be human-like to other beings even higher than them.

Why does it have to be entertainment as opposed to scientific inquiry? Why does it have to be anything that we can think of? If it is entertainment, why couldn't they install themselves as leader of a country to have a "real world" game of WoW or other online games that are popular? Why do we think we can get "inside the head" of beings so much more advanced than us? (I can think of tons of reasons to run a simulation that I would like to see, and I am just a "normal" human, so I can imagine there would be many more if I were a trillion trillion trillion times smarter, or whatever.)

Edited by shepard, 23 November 2007 - 05:31 AM.


#143 dimasok

  • Guest
  • 193 posts
  • 6

Posted 27 October 2007 - 09:33 PM

Why does it have to be entertainment as opposed to scientific inquiry? Why does it have to be anything that we can think of? If it is entertainment, why couldn't they install themselves as leader of a country to have a "real world" game of WoW or other online games that are popular? Why do we think we can get "inside the head" of beings so much more advanced than us? (I can think of tons of reasons to run a simulation that I would like to see, and I am just a "normal" human, so I can imagine there would be many more if I were a trillion trillion trillion times smarter, or whatever.)

Same reason we're all so sure it should be "friendly AI", and why opposite points of view are dismantled as quickly as they're assembled. If we naively assume that we can get into the minds of future AI, then I too will naively assume that the motives of higher beings are the same as ours - and you know, most of what I read here and on other blogs goes as far as embracing that as the truth itself and not merely speculation.

Besides, a world without the things I mentioned truly is desolate and boring. Why would I want to live in a simulation of "scientific inquiry" as opposed to "entertainment"? (Although I already talked about the WoW example you mentioned.) Death might actually be needed for what I have in mind, but it would be merely a transition to another world (like in video games, where you observe the proceedings after getting killed before deciding to rejoin in the same place or elsewhere) and not what we suppose it is.

Why does it have to be anything that we can think of, you ask? Well, it doesn't, of course. But since you and I assume that these putative beings are trillions upon trillions of times more advanced than we are, and we are already stretching the limits of epistemic knowledge by assuming they exist at all (with all the extra baggage that comes with this presumption, like their nearly infinite ability to do anything), why would it surprise you that our limits are merely a launch pad they take for granted (just as we take our lives for granted), and that they are already living lives which we can't possibly conceive of even in principle? Entertainment is on everyone's minds anyway, and scientific inquiry usually gets in the way (think of the supernatural "fun" stuff that constantly gets debunked because science doesn't align with it). Why should these higher beings back off from it and instead opt for other reasons for their simulations, many of which we already find unappealing (where are my superpowers that could grant me everything I would ever need)? Perhaps these higher beings are less imaginative than we are? That would be sad. Power without imagination is a bane.

In fact, according to modal realism, every kind of world (be it anime, comics, books, movies, our imaginations, etc.) already exists out there. Heh, who knows - perhaps our imagination is creating all of these wonderful realities on screen by getting direct feeds from those other dimensions :p

Edited by dimasok, 27 October 2007 - 09:46 PM.


#144 Live Forever

  • Topic Starter
  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 27 October 2007 - 10:24 PM

Same reason why we're all so sure it should be "friendly AI" and why opposing points of view are dismantled as quickly as they're assembled. If we can naively assume that we can get into the minds of future AI, then I will just as naively assume that the motives of higher beings are the same as ours. Most of what I read here and on other blogs goes as far as embracing that as the truth itself, not merely as speculation.

Aah, see, the initial Bostrom argument only looks at the likelihood of whether it will happen or not, and doesn't go into motives. Trying to get into motives and thought processes is nearly impossible, so personally I don't try to go down that road (and I especially don't try to argue for any one motive over another). I can see the appeal, however, but take any such speculation with a grain of salt.
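[As an editorial aside, not part of the original post: the "likelihood" half of Bostrom's argument is a simple piece of arithmetic. Assuming his notation, where f_p is the fraction of civilizations that reach a posthuman stage and N is the average number of ancestor simulations such a civilization runs, the fraction of human-like observers who are simulated works out to f_sim = f_p·N / (f_p·N + 1). A toy sketch:]

```python
def simulated_fraction(f_p: float, n_sims: float) -> float:
    """Fraction of human-like observers who are simulated,
    given the fraction f_p of civilizations that go posthuman
    and the average number n_sims of ancestor simulations each runs.
    (Bostrom's f_sim = f_p * N / (f_p * N + 1).)"""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Even a tiny f_p is swamped by a large N:
print(simulated_fraction(0.001, 1_000_000))  # ≈ 0.999
```

The point of the formula is that no claim about motives is needed: as long as f_p·N is large, almost all observers are simulated, which is why the argument stops at likelihood.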

Besides, a world without the things I mentioned truly is desolate and boring.

To you. I find this world, as it is, very interesting.

Why would I want to live in a simulation of "scientific inquiry" as opposed to "entertainment"

1) Who says that the simulators live here and aren't just studying us? 2) The key word here is "I"; just because you don't want something doesn't mean that others won't (I, for one, would, so that is at least one person), and others more advanced than us could have completely different desires.

(although the WoW example you mentioned I already talked about before).

Good, then you see the point of the interest in being a world leader in "our" world, as it would be like that.


I won't get into the fantasy parts of the rest of what you said. You are free to fantasize about anything you want. (I like to do the same, in fact.)

Edited by shepard, 23 November 2007 - 05:32 AM.


#145 cyborgdreamer

  • Guest
  • 735 posts
  • 204
  • Location:In the wrong universe

Posted 28 October 2007 - 02:29 AM

Same reason why we're all so sure it should be "friendly AI" and why opposite points of view are dismantled as quickly as they're assembled.


The difference is that we would create the AI. Therefore, if we're careful, we'd have control over its motives. On the other hand, the simulators presumably exist independently of us. So, their desires would probably have no relationship to ours.

Edited by shepard, 23 November 2007 - 05:32 AM.


#146 dimasok

  • Guest
  • 193 posts
  • 6

Posted 28 October 2007 - 04:49 AM

The difference is that we would create the AI. Therefore, if we're careful, we'd have control over its motives. On the other hand, the simulators presumably exist independently of us. So, their desires would probably have no relationship to ours.

So you assume that whatever it is the simulators "simulate" in our simulation (sorry for the chain of tautologies) has no relationship to them? I find that hard to believe. Independence means that one organism is not necessarily related in any way, shape, or form to another, but no matter how independent we humans try to make ourselves (due to our inherent subjectivity, among other factors), we are still quite interdependent and have plenty of things in common.

#147 cyborgdreamer

  • Guest
  • 735 posts
  • 204
  • Location:In the wrong universe

Posted 28 October 2007 - 03:09 PM

So you assume that whatever it is the simulators "simulate" in our simulation (sorry for the chain of tautologies) does not have any relationship to them? I find it hard to believe.


We may or may not have some relationship to our simulators. I'm just saying that it doesn't make sense to assume any one specific relationship. In particular, there's no reason to think that their desires are analogous to ours. Assuming they even exist, we know nothing about these beings except that, for whatever reason, they ended up creating a universe.

Edited by shepard, 23 November 2007 - 05:33 AM.


#148 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 28 October 2007 - 05:46 PM

http://www.simulism....iar.27s_Paradox

#149 dimasok

  • Guest
  • 193 posts
  • 6

Posted 29 October 2007 - 01:26 AM

I'm just saying that it doesn't make sense to assume any one specific relationship. In particular, there's no reason to think that their desires are analogous to ours. Assuming they even exist, we know nothing about these beings except that, for whatever reason, they ended up creating a universe.

But don't you think that our desires and our simulation are at least partially related to these higher beings' desires?

#150 cyborgdreamer

  • Guest
  • 735 posts
  • 204
  • Location:In the wrong universe

Posted 29 October 2007 - 03:17 AM

But don't you think that our desires and our simulation are at least partially related to these higher beings' desires?


It depends on what you mean by 'related'. Our desires are a product of theirs (unless they created us by accident), so in that sense the two are related. However, there's no reason to assume that their desires are similar enough that we could understand and empathize with them. It's possible, of course, but very unlikely. In the space of all possible minds that would want to create a universe, only a small fraction would have human-like emotions.

Edited by shepard, 23 November 2007 - 05:33 AM.




