  LongeCity
              Advocacy & Research for Unlimited Lifespans





Utility theory vs life extension


8 replies to this topic

#1 Nootropic Cat

  • Guest
  • 148 posts
  • 36
  • Location:meow

Posted 04 February 2010 - 02:00 AM


And will we create a society of paranoiacs?

A concept originating in economics, the utility of a thing is a subjective measure of its value. Although it is subjective and therefore not transferable from one person to another, it can still be treated in a logically rigorous manner.

For example, someone offers you $1000 to miss a day of work without telling your boss. What chance of being fired would you accept to take this deal? If you say a 1% chance would be acceptable, it follows that for $100,000 you would quit the job straight away. In reality we’re missing some variables from this equation; for example, the excitement you get from taking the risk and from the novelty of the event. These would be positive in Utility Value (UV) terms, and would reduce the cash amount needed to make the deal worthwhile. Nevertheless, in theory we could include all factors and come up with the exact value $1000 had to you relative to keeping your job, and thus the exact value to you of your job. In fact, if this situation were to occur, the logical way to approach it would be to decide upon the sum you would accept to leave your job, then reverse engineer the rest.
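The implied arithmetic can be sketched in a few lines of Python. The linear-utility assumption is the post's own, and the numbers are purely illustrative:

```python
# Under linear utility: if a 1% firing risk is the most you would accept
# for $1000, the job is implicitly worth about $1000 / 0.01 = $100,000,
# and (by the same linearity) $100,000 up front buys the whole job.
payment = 1_000
max_acceptable_risk = 0.01  # the 1% firing risk you would just tolerate

implied_job_value = payment / max_acceptable_risk
print(round(implied_job_value))  # 100000
```

This is exactly the step JLL questions further down the thread: it only follows if utility is linear in money and in risk.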

Now suppose you are offered $1M for a 1% chance at losing your life. Many people would go for this. If you are one of these people presumably you would be prepared to die right now in exchange for $100M…?

Clearly something is wrong with this picture. Because the utility of our own lives is infinite, right? Apparently not. People sacrifice their own lives to save those of others, not just in thought experiments but in real life. They even do it for nebulous entities like nation-states and moral values. And if we placed limitless value on our own survival all of us, or at least the most logical among us, would be far more cautious than we are about anything (read: everything) that carried any risk at all.

Let’s see if a better definition of life UV will clear things up. I propose:

UV(life) = UV(moment) × N(moments)

Physics is still undecided on the nature of time, but I’m going to assume that the human mind perceives it as being granular, not continuous. I think it is central to our sense of consciousness, selfhood and sanity to perceive this moment as having some intrinsic ‘thingness’ that is partitioned off from a frigidified past and a fluid future. Without this, perception would be nothing but blur, process-interaction-movement ad infinitum. If we could time these moments (probably again both subjective and variable, but bear with me) then we could count how many went into an individual’s lifespan. Here’s my point: a finite but experientially continuous life would contain infinite ‘nows’ and therefore infinite UV, regardless of the value of the individual moments, and this evidently is not the case.

The UV of a moment, we could call ‘the incremental value of life’ – it doesn’t just refer to the mood you are in right now. If you feel bad but expect to feel well again, your momental UV is most likely still positive. When a person reaches the point of suicide, they feel that they would prefer nothing rather than this, and don’t expect things to ever improve.

So I am proposing that an individual’s lifespan (not their life per se, which I don’t think ‘exists’ outside of time) has a specific value to them which is finite and, by the nature of utility theory, transferable. But hold on, I haven’t dealt with the currency of exchange for accepting guaranteed death right now. Well, the reason this breaks down in my opinion is that all of the value-able things we can conceive of need time to make their effects known. In other words they too are granular, and so their exchange rate is restricted. Perhaps you would agree to live for just ten more years in exchange for $10M. There is nothing we know of that would emulate, in just one moment, the pleasure one could derive from splashing that money around for a decade. That’s not to say that such a thing could not exist though. One can conceive of a VR sim that gave you the subjective experience of that length of time in what was, to your physical body, merely an eyeblink.

Now let’s relate this all to radical life extension. Life expectancies have been going up steadily but slowly – but no one is going to have noticed much difference in their relationship with risk. Yes, we’re all better educated about harmful things to avoid, but no one’s saying, ‘Man, I expect to live to 74, it was 64 not long ago, I think I’ll stop driving now.’

What happens when we approach and then reach escape velocity?

There will always be the possibility of death, so for the sake of this discussion I’m going to presume that once biological immortality has been reached, the average life expectancy before accidental death will have reached 700 years, or a roughly tenfold increase.

Well let’s see what this looks like from my perspective. These numbers are mostly plucked out of thin air but it’s a start:

Chance that escape velocity will be reached during my expected lifetime, given unhindered progress – 50%
Chance that natural or manmade catastrophes or political machinations will hold back the necessary scientific progress beyond my lifetime – 25%
Chance that I get successfully cryonically preserved and revived at a propitious point in the future – 25%

(50% × 75%) I make it directly + (50% × 25%) I make it via cryonics = 37.5% + 12.5% = 50%

50% chance of a tenfold increase in lifespan. According to utility logic that means a fivefold increase in risk-aversion. I’m not even sure what to make of that and that’s partly why I’m making this thread. I'm not sure that our intuition is good at dealing with these concepts. Up until now I’ve been ok with riding in a car or a plane, but was I 5x above my required threshold? Hard to say since I used to have my intuition tuned in to a different station.
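Writing the estimate out explicitly (these are the thread's made-up numbers, not a forecast), a minimal Python sketch:

```python
# The two routes to radical life extension sketched above.
p_escape = 0.50    # escape velocity reached in my lifetime, given progress
p_hindered = 0.25  # catastrophe/politics delays progress past my lifetime
p_cryonics = 0.25  # cryopreservation and revival succeed

p_direct = p_escape * (1 - p_hindered)  # make it directly: 0.375
p_cryo = (1 - p_escape) * p_cryonics    # miss it, but cryonics works: 0.125
p_make_it = p_direct + p_cryo           # 0.5

# A 50% chance of a tenfold lifespan is, in expectation, a fivefold gain.
expected_multiplier = p_make_it * 10
print(p_make_it, expected_multiplier)  # 0.5 5.0
```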

And what about when everybody’s living 700 years? Will everyone be scared to leave the house? Or will we see the ‘admirable human spirit’ getting on with things as usual? And if so, will that in fact be an example of ‘typical human stupidity’?

#2 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 04 February 2010 - 05:08 AM

I see you're pretty new here, so I just want to clarify that for many of us here immortality doesn't just imply an ageless biological body. I really like the beginning portion of Kenneth Hayworth's article (PDF). It describes a plausible means by which a biological human being could be digitized. Ultimately, to become "immortal" it is necessary that you expand indefinitely in space over time. This is much easier to achieve if you can duplicate your consciousness, or at least your identity, to multiple locations.

When considering the question of risks I understand that I may not be absolutely stone-cold logical about avoiding all of them but in general I do make efforts to reduce them. If non-biological substrates become available to us in the next 500 years as I think they will, then the level of risk a particular copy or node of one's self can be subjected to becomes immense. I personally plan to visit the sun and only leave the consciousness of the visiting body/entity once it is no longer able to transmit the experience back to my still-surviving self living on elsewhere.

In the mean time, yeah I'll have to settle for a bit lower level of risk to have a decent chance of getting to that point.

#3 Nootropic Cat

  • Topic Starter
  • Guest
  • 148 posts
  • 36
  • Location:meow

Posted 04 February 2010 - 05:43 AM

I didn't mention non-biological immortality, but when dealing with probabilities it makes sense to approximate within the most currently conventional guidelines. It's easy to forget that there would be trade-offs there as well. Look at this New Scientist article for examples of the perils of digital information storage. I do however appreciate the argument that at some point things will be extrapolating to an extent that current conventional thinking couldn't keep up with. I sure hope that happens within the next few decades, but none of us can know that. In the meantime I'm left with the predicament that extrapolatory change is likely to leave me more protected by technology (hostile AI excepted) but less protected by humanity (who are more likely to be turned on by what's in it for them than by making sure that 'no one gets left behind').


#4 JLL

  • Guest
  • 2,192 posts
  • 161

Posted 04 February 2010 - 08:35 AM

I'm sorry, maybe I'm a bit slow today, but

For example, someone offers you $1000 to miss a day of work without telling your boss. What chance of being fired would you accept to take this deal? If you say a 1% chance would be acceptable, it follows that for $100,000 you would quit the job straight away.


How does the latter follow from the former?

#5 Teixeira

  • Guest
  • 143 posts
  • -1

Posted 04 February 2010 - 10:35 AM


When you imagine immortality produced by technology, of course there comes a point in time (700 years or something like that) where the probability of some type of serious accident killing you becomes very high. So if we increase the amount of time, the risk of accidental death also increases. And so, as time tends to infinity, the probability of a fatal accident tends to one, and so much for the immortality: "soon", everybody would be dead!
If you search for immortality, you must find a way to have a field around you that modifies the probabilities of any type of accident towards zero. Otherwise you will never get immortality, but only some extra years (or centuries) and nothing else. I wonder where bio-tech is going to find such a field!?
(I have avoided the mathematics of these things because it could be boring.)
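The mathematics alluded to here is short enough to sketch. Assuming a constant annual accident hazard (the 1-in-700-per-year rate is just the thread's illustrative figure), survival probability decays exponentially and tends to zero:

```python
# With any fixed annual accident risk, survival decays exponentially,
# so the probability of eventually dying in an accident tends to one.
ANNUAL_HAZARD = 1 / 700  # illustrative: matches the thread's 700-year figure

def p_survive(years, hazard=ANNUAL_HAZARD):
    """Probability of surviving `years` under a constant annual accident risk."""
    return (1 - hazard) ** years

print(round(p_survive(700), 3))   # ≈ 0.368: barely 1 in 3 reach year 700
print(round(p_survive(7000), 6))  # ≈ 4.5e-05: near-certain death by year 7000
```

The only escape from this trend is the one Teixeira names: driving the per-period hazard itself towards zero over time.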

Edited by Teixeira, 04 February 2010 - 10:37 AM.


#6 Nootropic Cat

  • Topic Starter
  • Guest
  • 148 posts
  • 36
  • Location:meow

Posted 04 February 2010 - 07:35 PM

I'm sorry, maybe I'm a bit slow today, but

For example, someone offers you $1000 to miss a day of work without telling your boss. What chance of being fired would you accept to take this deal? If you say a 1% chance would be acceptable, it follows that for $100,000 you would quit the job straight away.


How does the latter follow from the former?


Look at it from a gambling perspective. For a bet to be a good one, you need the right price at the right odds. Most commonly when people bet they are taking odds, meaning that they can win more than they can lose. Invariably this type of bet will lose the majority of the time, but it can still be a good one if the odds being laid are better than the frequency of the win, e.g. if you bet $100 at 3/1 (a $400 total return, i.e. $300 profit) and win the bet 1 time in 3, you profit $33.33 per bet on average. In my original example you are in fact the one laying the odds - the vast majority of the time you win the bet and keep the $1000, but there's a small chance of losing, in which case the penalty will be severe. Now if this is a favourable bet for you, which we'll assume it is (slightly) since 1% is the most risk you are prepared to take, then taking a wager of $2000 with a 2% chance of losing is also a favourable bet. By the same logic you would happily flip a coin for keeping the job or winning $50K, and give the job up altogether for $100K.
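The 3/1 example checks out in Python, reading the $400 as the total return including the $100 stake (which is what makes the $33.33 figure come out):

```python
# Expected profit per bet: win $300 net one time in three,
# lose the $100 stake the other two times.
stake, net_win, p_win = 100, 300, 1 / 3

ev = p_win * net_win - (1 - p_win) * stake
print(round(ev, 2))  # 33.33
```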

In the gambling world there is also the issue of variance - the level of 'swinginess' someone is prepared to take on is a function of their risk-averseness and their bankroll. Ironically in this example the $1000 bet is the most volatile because the distribution of payouts is so highly skewed, yet I suspect most people would opt for the small amount of money/small chance of losing job thinking they were being conservative.

#7 Nootropic Cat

  • Topic Starter
  • Guest
  • 148 posts
  • 36
  • Location:meow

Posted 04 February 2010 - 07:41 PM

When you imagine immortality produced by technology, of course there comes a point in time (700 years or something like that) where the probability of some type of serious accident killing you becomes very high. So if we increase the amount of time, the risk of accidental death also increases. And so, as time tends to infinity, the probability of a fatal accident tends to one, and so much for the immortality: "soon", everybody would be dead!
If you search for immortality, you must find a way to have a field around you that modifies the probabilities of any type of accident towards zero. Otherwise you will never get immortality, but only some extra years (or centuries) and nothing else. I wonder where bio-tech is going to find such a field!?
(I have avoided the mathematics of these things because it could be boring.)


I agree with this; it certainly is a fly in the ointment for hopes of 'true immortality', and even in non-biological scenarios it's hard to imagine that there wouldn't be some way of being deleted or switched off. It's amusing to see you talk about probability fields as being possibly manipulable - isn't that exactly how Douglas Adams described the fifth dimension? Well I always thought that guy understood the universe better than anyone else, so I hope you're right.

#8 Teixeira

  • Guest
  • 143 posts
  • -1

Posted 05 February 2010 - 12:43 AM

When you imagine immortality produced by technology, of course there comes a point in time (700 years or something like that) where the probability of some type of serious accident killing you becomes very high. So if we increase the amount of time, the risk of accidental death also increases. And so, as time tends to infinity, the probability of a fatal accident tends to one, and so much for the immortality: "soon", everybody would be dead!
If you search for immortality, you must find a way to have a field around you that modifies the probabilities of any type of accident towards zero. Otherwise you will never get immortality, but only some extra years (or centuries) and nothing else. I wonder where bio-tech is going to find such a field!?
(I have avoided the mathematics of these things because it could be boring.)


I agree with this; it certainly is a fly in the ointment for hopes of 'true immortality', and even in non-biological scenarios it's hard to imagine that there wouldn't be some way of being deleted or switched off. It's amusing to see you talk about probability fields as being possibly manipulable - isn't that exactly how Douglas Adams described the fifth dimension? Well I always thought that guy understood the universe better than anyone else, so I hope you're right.

It's very curious, because I don't know the work of Douglas Adams. About the probability fields: more exactly, they are probability waves (see quantum mechanics).
These probability waves that can change the probabilities of events are part of the "defense system" of an immortal body, just as the immune system is part of the defense system of a human body, as you know. Of course we can expect a more sophisticated defense system in an immortal body than in a human one, because the former is vastly superior.
Regarding the properties of those waves, you don't need to manipulate anything; they do all the work by themselves because they have a kind of "resident intelligence". They work the same way as the immune system. You don't tell anything to, say, T4 cells, because they know exactly what to do to protect you! We are just talking of another plane, a higher plane of things.

#9 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 05 February 2010 - 05:01 AM

I didn't mention non-biological immortality, but when dealing with probabilities it makes sense to approximate within the most currently conventional guidelines. It's easy to forget that there would be trade-offs there as well. Look at this New Scientist article for examples of the perils of digital information storage.


Although the link I provided did describe a scenario where the entire biological original was destroyed I think that would be a foolish step for a true immortalist to take until many alternate substrates are available. Biology may not be optimized beyond what it needs to be but it is hardy. Nearly infinite generations of biological survival attest to that. It's just like data center backup policies now. There are live mirrored hard drives in arrays where any drive can fail without causing a problem. Then the servers have backups in different geographic locations. Each server has a main power connection plus a battery/generator backup. Each server is connected to the network by multiple sources. Then you have tape backup archives that are stored offsite in case of catastrophic accident or a user error erasing all the live data.
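The backup analogy can be quantified: total loss requires every independent copy to fail in the same period, so each extra copy shrinks the risk geometrically. The 1% per-copy failure rate below is purely illustrative:

```python
# Total loss needs every independent copy to fail at once,
# so each additional copy multiplies the risk down.
P_COPY_FAILS = 0.01  # assumed chance one copy is lost in a given period

def p_total_loss(n_copies, p_fail=P_COPY_FAILS):
    """Chance that all n independent copies fail together."""
    return p_fail ** n_copies

for n in (1, 2, 4):
    print(n, p_total_loss(n))  # risk falls from 1e-2 to ~1e-4 to ~1e-8
```

The caveat, as with real data centers, is independence: correlated failure modes (the same solar flare, the same software bug in every node) don't multiply away.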

Given a few thousand years to work out the details immortalists of the future should possess quite amazing survivability.

If you search for immortality, you must find a way where you have a field around you that modifies the probabilities of any type of accident, towards zero. Otherwise you will never get immortality, but only some extra years (or centuries) and nothing else. I wonder where bio-tech is going to find such a field!?


I previously proposed something of this nature. I called it a Spacetime Fortress for lack of a better name. I don't know if it's truly possible to create such a thing, but if so it probably couldn't even be seen by the rest of the universe, as it would bend away any light that came near: the inverse of a black hole.




