Knowing what to Worry About



#1 outlawpoet

  • Guest
  • 140 posts
  • 0

Posted 31 August 2003 - 09:19 PM


Drawing to some degree on patrick's excellent topic, and others as well, I thought it would be helpful to get a bit of background on which threats are most likely, most dangerous, and most worrying.

Now, first of all, existential risks are generally the most dangerous and worrying kind, because they are almost totally nonrecoverable. But probability gives us rules for discounting severity against likelihood, so it's important to cast existential risks against the background of /normal/ deaths. Approximately 150 to 160 thousand people will die today, and again tomorrow. The number will rise slightly with population increase, although not as fast as it used to. This constant death is not as exciting as existential risk, but it's important to remember that over the next 50 years roughly as many people will die as would die if every human on earth were killed right now, so long as rates keep their projected trends. Those trends include the WHO's optimistic predictions for hospitals and medical technology in third-world countries, and they include the increasing average lifespan. They do not include the possibility of new plagues like AIDS or a more virulent SARS. You are overwhelmingly likely to die of cardiorespiratory failure (this because, if you are reading this, you are probably in your twenties or older and in a western nation). But these risks are known, and they're easy to look up: http://www.census.go...www/idbnew.html
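To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The growth rate is an assumed number, purely for illustration:

# Rough check of the figures above. The starting rate is the midpoint
# of the 150-160 thousand/day range; the growth rate is assumed.
DEATHS_PER_DAY_2003 = 155_000
ANNUAL_GROWTH = 0.012          # assumed slow drift upward with population

total = 0.0
rate = DEATHS_PER_DAY_2003 * 365
for _ in range(50):
    total += rate
    rate *= 1 + ANNUAL_GROWTH

print(f"deaths over 50 years: ~{total / 1e9:.1f} billion")
# ~3.8 billion under these assumptions: the same order of magnitude as
# the 2003 world population (~6.3 billion), so the "every human on
# earth" comparison holds to within a factor of about two.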

So taking into account the constant risks, what other kinds of risks can we organize into a comprehensive framework?
Some existential risks are simple to compute. The earth's size, the debris in the solar system, and the datable impacts on this and other planets suggest that we should be hit by a dangerously large object from space every few million years. It has been some time since the last impact; the last impact of any size was the Tunguska event, early in the last century. Are we at risk? Probably not within the next 50-100 years. However, the actual results of a large impact have two attributes that deserve attention despite the small chance of collision. First, there is no way to stop such an impact. Regardless of what you may see in movies, there is no existing credible way to stop space impacts from happening: there is no detection mechanism in place to find incoming objects, and there are no space-based forces able to prevent a collision even if one were detected in time. Second, such an impact could totally destroy our civilization. It's instructive to note that while an impact is credited with the destruction of the dinosaurs, they did not perish by flame or overpressure. They were destroyed by the mangling of a complex and interdependent system of food chains, social relations, and ecosystems. They died AFTER the impact, as their complex environment was unable to compensate for the new variables. We are profoundly dependent on our technological and agricultural systems for survival, having passed the point of local self-sufficiency in many areas some time ago. So we must add threats to our support systems to the list of existential risks.
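To put a rough number on "probably not", here is a minimal sketch that treats civilization-threatening impacts as a Poisson process. The mean interval is an assumed round figure for illustration, not a measured value:

import math

MEAN_INTERVAL_YEARS = 1_000_000   # assumed: one such impact per ~1 Myr
HORIZON_YEARS = 100

# Probability of at least one impact within the horizon:
p = 1 - math.exp(-HORIZON_YEARS / MEAN_INTERVAL_YEARS)
print(f"P(impact within {HORIZON_YEARS} years) ~ {p:.1e}")   # ~1e-4

# The expected-loss framing from the opening paragraph: a tiny
# probability times a near-total loss still rivals ordinary risks.
WORLD_POP = 6.3e9
print(f"expected deaths per century: ~{p * WORLD_POP:,.0f}")
# ~630,000: about four days' worth of "normal" deaths, every century,
# hidden inside a risk that feels like zero.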

What about harder-to-compute probabilities, particularly futurological issues like nanotechnology, artificial intelligence, and more dangerous technologies? It seems germane to address them by their known variables and possible risks. All future technologies are potential existential risks, for two reasons. First, because new technologies are arbitrarily powerful precisely because they are new, and the implications of such technologies are not yet known. (A simple example: suppose you invent a new radio that operates on gravitic signals, or high-energy EM, and it's extraordinarily successful. Some time later it's discovered that it causes sterility in males, at ranges far enough to penetrate beyond line of sight. The entire human race is sterile, bam boom, existential risk, no knowing.) Second, there is the possibility of future developments being used in deliberate attempts to destroy our race. Species suicide is not unheard of, and it becomes more likely as every member of our species becomes more capable. So we must try to organize these difficult-to-evaluate risks by what we do know.

Military Future Tech. The military spends a lot of time building things to kill people. This alone would merit a listing here, but in particular its technologies, while crude by futurist standards, are far more likely to be used and developed. In my opinion, the military is the primary source of several future risk scenarios, particularly nanotechnological disaster, super-viruses, nuclear exchange, and a new Dark Age. The militaries of the world also have a lot of money and an interest in the future, so unlike other future risks, I believe the probability of military species suicide will increase for some time, and then begin to decrease as militaries gather more information about the risks involved in certain technologies.

Lone Crazies. The lone-crazy profile is a simple and appealing one for many people, who have been acquainted with several variations on the theme: the lone gun nut, the Unabomber-style crazy intellectual, the crazy religious fanatic. But small groups and solo projects are generally not credible threats to a significant fraction of people. While individual capability is growing, organizational capability is growing much faster, so while small groups may cause problems, the greater threat is from larger organizations. There are several organizations bent on destruction whose membership numbers in the hundreds, but most of these groups are relatively dispersed or ineffectual. Terrorist organizations, for example, generally operate on the cell principle, gathering in small isolated groups to accomplish simple goals with greater likelihood of success. Other small groups with dangerous ideas can be discounted on the basis of ineffectual means. An example would be the anthrax 'attacks' on media centers following September 11, 2001: a profoundly ineffectual delivery system that resulted in almost no casualties. (We will not investigate here the fact that it was weaponized anthrax from an American production facility.) Likewise the earlier anthrax attacks of the Aleph sect in Japan: this extremely wealthy and motivated sect, despite several attempts and multiple deployment systems, succeeded only in having some 3,000 people hospitalized, with almost no deaths.

There is, however, an exception to the rule of ineffective small groups: leveraged technologies. Certain technologies provide an extreme imbalance of power to what could be a relatively small group. Unfortunately, new technologies are difficult to predict, but I can lay out a few possibilities. The first is an expert-system or tool-AI project. Such a project could give a disproportionate amount of capability to a small group. The nightmare version of this scenario is a general AI project; the consequences of a rogue AI project are impossible to predict and cannot be reliably bounded. It could literally result in something worse than an existential risk, such as the torturing to death of more people than have ever existed, or other such pleasant scenarios. Second is financial terrorism: the extremely dangerous possibility of a group or individual exploiting instabilities in the financial world to intentionally destabilize it. Such a scenario could result in a catastrophic meltdown of global or even intranational trade, literally starving, depriving, overheating, and killing everyone dependent on trade (everyone in a major city, cash-crop farmers, anyone in long-term medical care, career employees, people with heart conditions in hot areas, diabetics; literally anyone). There are several other possibilities that spring to mind, but you get the idea, I think. Leveraged technologies are difficult to appraise and impossible to legislate against. In fact, legislation may actually increase the likelihood of such scenarios occurring, because of the outlawing phenomenon (independent activities may be spotlighted and made more attractive by illegality).

There are more risks than can be enumerated, but we must focus on those that are likely, threatening, and preventable. It helps little to worry about Gamma Ray Bursts, Nemesis, Strange Matter/Physics Disasters, or alien invasions, because there is little we can do to prepare for or prevent such disasters.

And so, I'll list what I think are the most likely, dangerous, and worrying scenarios. Please feel free to critique, add your own, or edit.

1. Leveraged Technologies, used by a small or large group. (Super-viruses, nanotechnology, unFriendly AI, unknowns. Even fifth- or sixth-generation nuclear weapons would suffice; we don't have Tbombs yet, but we're working on it.) We need to decrease the likelihood of leveraged weapons ever being deployed by any group or nation by decreasing the tensions and inequalities that cause such weapons to be used. Going after the causes of military, terrorist, or governmental action is the only viable way to reduce the likelihood of their occurring.

2. "Natural" Deaths. Everybody dies. That must stop, if we're to credibly consider ourselves to be removing threats to life. This includes accidents and homicide/suicide.

3. A Dark Age. A dark age could be triggered by several stimuli: social or political warring, such as caused the last dark age; another non-existential disaster that destroys the social commons; or a repressive world government. In any case, steps must be taken to make us more resilient, our support systems more reliable, and our society more decentralized, to reduce this risk. A dark age would cause millions of unnecessary deaths and derail attempts at progress for an arbitrary period of time.

4. "Natural Disasters" (meteors/asteroids, magnetic disturbances, solar abnomalities, superfast climate changes,) natural deaths are no better than artificial ones. And defenses and countermeasures should be built or investigated for the most likely and best understood of natural disasters.

5. The Great Filter. The Fermi Paradox is the biggest question mark in the universe. There are no aliens: no superstrong radio signals, no stellar engineering, no macroscale objects, no gravitic radios. Why is the galaxy so empty? Why the crushing silence? Is there some great filter that crushes civilizations before they can reach the interplanetary stage? And will the answer come looking for us? The question of why is vital. We must understand, because we are on the verge of becoming visible to other civilizations, and whatever keeps us from seeing them may prevent us from expanding as well.
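(To see why the silence is surprising, here is a toy Drake-style sketch. Every number in it is an assumption chosen for illustration, not an estimate I would defend; the point is only that a great filter corresponds to one of these factors being drastically smaller than a naive guess.)

# Toy Drake-style estimate. All factors are assumed illustrative values.
factors = {
    "star formation rate (stars/yr)":   10,
    "fraction of stars with planets":   0.5,
    "habitable planets per system":     1,
    "fraction developing life":         0.1,
    "fraction developing intelligence": 0.1,
    "fraction becoming detectable":     0.1,
}
lifetime_years = 10_000   # assumed detectable lifetime of a civilization

n = lifetime_years
for value in factors.values():
    n *= value
print(f"expected detectable civilizations: ~{n:.0f}")
# ~50 with these numbers, yet we observe zero. A "great filter" means
# one factor (or the lifetime) is drastically smaller than assumed;
# the vital question is whether that filter lies behind us or ahead.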


Please feel free to be as complimentary or critical as you can. I declared Crocker's Rules a long time ago because I'm more interested in information than esteem. http://sl4.org/crocker.html

#2 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 31 August 2003 - 10:12 PM

I just addressed this independently in Patrick's excellent post. I suggest that while the "lone variant" is the myth put forward to rationalize the empowerment of various socio-economic interests, it is not a myth when examined as a larger statistic involving the actual numbers of people being killed worldwide.

Essentially "lone gunmen" are not the threat but the intentions of those that manipulate the mythology of such threats is a threat through both clandestine profit and fear mongering.

I allude to an "Unholy Alliance of Mutual Interest" in this post http://www.imminst.o...iew=getlastpost and suggest that any serious student of history will confront the disturbing reality that "plausible deniability" is a very old practice for those who build empires out of the ignorant mass of mankind.

#3 patrick

  • Guest
  • 37 posts
  • 0

Posted 01 September 2003 - 01:15 AM

This is a dense pack of good ideas. I have only a couple of comments right now.

First, I think it does do some good to worry about physics disasters, alien invasions, gamma ray bursts, and the like. Even if you and I are individually unequipped to minimize the risk, minimize the damage, or assist in recovery afterwards, that does not mean that strategies - perhaps quite effective ones - could not be developed.

Second, I am glad that you include non-total-destruction effects such as slow individual deaths and dark ages. I particularly enjoyed your comparison of 'natural' and cataclysmic deaths - I would like to think that way more intuitively, myself.

Third, I like the way you categorize and handle Leveraged Technologies. It sounds to me as though that area would be the most useful focus of a study on Threats To Life - and solid research in this area might allow people to focus on truly harmful practices and Threats, instead of the latest "fad" threats.

Patrick


#4 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 September 2003 - 12:49 PM

BTW, the concept of "Leveraged Technology" is potentially positive as well as negative; the motivations and resources of the groups developing and applying such technologies determine much of their outcome. I do see a sort of "Space Race" beginning with respect to computational power and its specific variant, the "AI Race" (the original Space Race was itself historically a spin-off of the Arms Race).

There is a balancing sort of psychosocial ecology between some groups seeking AI for power over social infrastructures and individual behavior, and others that want to pursue "Friendly AI." These two competing "leveraged technologies" both mitigate and accelerate one another, but collectively they ensure that the outcome will be other than entirely predictable; both, meanwhile, are contributing to the creation of a human-driven "Singularity".

A similar effect can be seen in the peaceful versus military uses of NBC (nuclear, biological, chemical) technologies. Bio-weapons research, for example, is getting us ever closer to fully understanding the immune system, and is creating diagnostic and detection methods that may save far more lives than they take, "if" the weapons produced by applications of this new awareness are not used first.

Nuclear technology is the historically obvious, exactly parallel case, and nanotech, the ultimate application of chemical-weapons technology, is likewise subject to the "dual use paradigm." It is through the competition of those who seek power for one side or another AND the competition of those who seek "benign use" of this developing power that "Leveraged Technology" is pragmatically seen both as a threat and as an opportunity for the alleviation of "threat."

What is more likely is an ever-increasing level of complexity that continues to balance one side against the other, creating new levels of threat concurrent with every new level of hope AND vice versa: a more viable methodology of hope contained within each new legitimate and understood threat.

#5 outlawpoet

  • Topic Starter
  • Guest
  • 140 posts
  • 0

Posted 01 September 2003 - 10:29 PM

Lazarus, an excellent point. While the lone gunman may not be a true threat per se, the use of this archetype is so widespread as to be extraordinarily important as a social phenomenon.

Patrick, it's true, we should never stop thinking about a scenario just because we are helpless against it NOW. But I was trying to formulate some kind of consideration/action plan, and it seemed best to concentrate on what we can change. What kinds of strategies or attention would you apply to unstoppable cataclysms like these?

Laz, it's very true: leveraged technologies are only dangerous if the person using them is dangerous. A wonderful example is the difference between my dystopian AI terrorist and SIAI http://www.singinst.org . Another example is beneficial financial manipulation, like Greenspan's job (although his focus of attention is far too narrow, being US-only).

#6 patrick

  • Guest
  • 37 posts
  • 0

Posted 02 September 2003 - 12:48 AM

Outlawpoet, like you I want to live forever, and I think we both want to minimize the risk of extinction. Of course we will find it impossible to formulate an effective strategy to stop an "unstoppable cataclysm". But calling existential threats "unstoppable" rules out the possibility of doing anything about them at all, and that seems fatalist to me.

I'll give some feeble examples. These are not meant to be real points of argument; I do not desire to defend these ideas. They are merely to demonstrate that it is *conceivably possible* to do something.

It helps little to worry about Gamma Ray Bursts, Nemesis, Strange Matter/Physics Disasters, or alien invasions, because there is little we can do to prepare for or prevent such disasters.


Gamma Ray Bursts:
Strategy, minimize chance: probably none, except stellar husbandry.
Strategy, minimize damage: 1. Spread viable human colonies in a 1000 ly radius. 2. Develop a shield and ensure that adequate resources (including humans, naturally) are shielded at all times.

Nemesis (by which I presume you mean rogue stars or black holes passing through or near the Solar system)
Strategy, minimize chance: probably none in the short term, but technology may give us techniques for displacing massive objects at some point, and these possibilities should be investigated.
Strategy, minimize damage: spread viable human colonies in a 10 ly radius.
Strategy, recovery: deploy a neighborhood watch system, actively scanning for massive objects, to provide maximum warning.

Alien Invasion
Strategy, minimize chance: deploy passive and active scanning systems surrounding the Solar system to detect the existence and location of alien lifeforms (intelligent or not). An early warning system, if you will.
Strategy, minimize damage: develop weapons and defense technologies likely to be effective at the stellar level. Hide a portion of humanity, possibly deep underground.
Strategy, recovery: spread viable human colonies over a large, unpatterned area of galactic space, including "stealth" interstellar space stations.

Etc. It is possible to begin to strategize about even the "big problems". I would be disappointed if humanity turned to "Peace On Earth", cleaned up its act, and was suddenly annihilated because nobody bothered to keep an eye on the stars.

Patrick

#7 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 03 September 2003 - 12:34 AM

Actually, we have a prejudicial common perspective in lumping all cosmic threats together, when much of the category can be broken down into short-term and long-term, local and distant. This link to another thread exemplifies this: we have significant evidence of repeated impacts on Earth and on nearby bodies like the Moon, Mars, and recently Jupiter, and rough evidence from the fossil record of a schedule of events implying we may be due for a strike, one that could be devastatingly catastrophic regionally and serious enough globally to cause drastically sudden climatic change.

I should also point out that the article I posted before the one on today's announcement of another near-miss asteroid (a NEO) implies that all bets are off with respect to the reliability of previous calculations of orbital trajectories, as those were modeled before the induced effect of the gas and dust cloud we are entering was factored into orbital behavior.

There is a serious need to re-evaluate the trajectories in light of this fact. Velocities for these objects could be affected significantly enough to make them more susceptible to capture by various planetary gravity wells, and their paths could be skewed sufficiently to make them either more of a threat or less. By any measure, our predictions are not as reliable as they should be.
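As a rough illustration of that sensitivity, here is a straight-line sketch of how far a small unmodeled velocity change displaces a predicted position over the years. The perturbation sizes are assumed values, and real orbital dynamics near a gravity well can amplify the displacement much further:

SECONDS_PER_YEAR = 3.156e7

def position_shift_km(delta_v_mm_s: float, years: float) -> float:
    """Displacement (km) from a constant velocity error of delta_v mm/s."""
    return delta_v_mm_s * 1e-6 * years * SECONDS_PER_YEAR   # mm/s -> km/s

for dv in (0.1, 1.0, 10.0):   # assumed perturbations, in mm/s
    print(f"{dv:5.1f} mm/s over 10 yr -> ~{position_shift_km(dv, 10):,.0f} km")
# Even 1 mm/s sustained over a decade shifts the prediction by ~316 km,
# enough to matter for close-approach and capture calculations.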

#8 outlawpoet

  • Topic Starter
  • Guest
  • 140 posts
  • 0

Posted 03 September 2003 - 10:08 AM

Lazarus, you're absolutely right. In a lot of ways, the categories I have used unnecessarily simplify risks, possibly at the expense of hiding solutions. NEO impacts and gamma ray bursts, for example, need to be considered separately, because they demand different solutions; and solar flares, while superficially similar to gamma ray bursts, threaten differently and at different likelihoods. The organization here is for simplicity of analysis, and shouldn't be taken any other way. Every threat is distinct, and the categorization of threats changes little but our recognition of them. They still all have to be solved.

Like anything else, the organization of threats to life is an abstraction that attempts to surface usable regularities within the threats to our lives. These regularities can hopefully be exploited to expose strategies that remove the threats in a timely fashion.

#9 Hypermere

  • Guest
  • 65 posts
  • 0
  • Location:Gainesville,Florida,USA

Posted 03 October 2003 - 09:44 PM

Let me do a civic duty by stating the obvious: one threat to life that cannot be overlooked in and of itself is religion. If we didn't have ideas such as Jesus (along with Santa Claus and the Tooth Fairy, if you will), the masses would not have been dulled into believing that they are guaranteed eternal life; the end result being that maybe some actual concern would exist today among the general public that aging is a disease that must be cured. Of course this goes without saying, but I'd like to see it on billboards everywhere, to open the eyes of mankind, which have been glued shut by Sunday morning church sermons.

#10 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 06 October 2003 - 10:29 PM

That's interesting regarding the existential risks; I am very scared of those things happening! Just curious: what seems to be the best hope we have of being able to extend our lives on an individual basis? Not that I'm not up on the issues, but I was just interested in your opinions. The Great Filter is something new to me, but it makes sense of why we can't communicate with other sentient beings, right? I've thought to myself many times, gee, "is this the best evolution can make?!" I would love to find a much smarter being capable of humbling us...




#11 outlawpoet

  • Topic Starter
  • Guest
  • 140 posts
  • 0

Posted 24 October 2003 - 12:57 AM

I think the best ideas available to us are those that minimize existential risks. I'm not certain it is possible to live forever, but we can worry about that when we have a bit more experience.

The simplest thing we can do to help ourselves avoid these risks of death is to streamline our lifestyles, removing unnecessary risks: gratuitous violence, accidents, diet issues, carcinogens, poor self-care, and so on. It's sad how many people die of relatively voluntary causes.

Second is to circumvent some of the more immovable risks to us: flawed biology, the tendency to war, lack of options, poverty, isolation. Biotech research, many valuable charities, the Singularitarian movement, and other transhumanist and humanist efforts are key to solving these.

And third is to get some variety. Right now all people live in the same place, have the same biology, eat the same kinds of food, and live the same kinds of ways. If people would spread out a little, branch, and become a little more creative, we'd be far less vulnerable to disasters and problems. The issue with monoculture is that weaknesses and strengths are universal, so one really bad virus or technological failure is all that stands between us and extinction. Moving away from home and becoming different people than your parents is part of growing up, and is necessary to maintain sociological and personal health; we, as a species, have yet to do this. (Not to mention that we all live at the bottom of a huge gravity well, making us a big bunch of fish trapped in a barrel we can barely get out of at this point.)

These are things we've never done before, but they seem within our capability, and they're key to surviving and thriving in an uncertain universe.



