Drawing in some degree from Patrick's excellent topic, and others as well, I thought it would be helpful to give a bit of background on which threats are most likely, most dangerous, and most worrying.
Now, first of all, existential risks are generally the most dangerous and worrying kind, because they're almost totally nonrecoverable. But probability gives us rules to discount severity against likelihood, so it's important to cast existential risks against the background of /normal/ deaths. Approximately 150 to 160 thousand people will die today, and tomorrow. That number will rise slowly with population growth, although not as fast as it used to. This constant death is not as exciting as existential risk, but it's important to remember that over the next 50 years, roughly as many people will die as would die if every human on earth were killed right now, so long as rates follow their projected trends. Those trends include the WHO's optimistic predictions about hospitals and medical technology in third-world countries, and the increasing average lifespan. They do not include the possibility of new plagues like AIDS or a more virulent SARS. You personally are overwhelmingly likely to die of cardiorespiratory failure. (That's because, if you're reading this, you're probably in your twenties or above, and in a western nation.) But these risks are known, and easy to look up: http://www.census.go...www/idbnew.html
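To make the scale concrete, here's a back-of-the-envelope sketch of those figures. The 150-160 thousand per day range comes from the paragraph above; the constant-rate assumption is mine, and since the projected trend has rates rising over time, the 50-year total below is only a lower bound:

```python
# Rough scale of "normal" deaths, using the post's 150-160k/day figure.
# Constant rate is an assumption of mine; projections have rates rising,
# so the 50-year total here is a lower bound on the post's claim.
deaths_per_day = 155_000                     # midpoint of the 150-160k range
deaths_per_year = deaths_per_day * 365       # ~57 million per year
deaths_50_years = deaths_per_year * 50       # ~2.8 billion at a constant rate

print(f"per year: ~{deaths_per_year / 1e6:.0f} million")
print(f"over 50 years (constant rate): ~{deaths_50_years / 1e9:.1f} billion")
```

Even the constant-rate lower bound is measured in billions, which is the point of comparing it against one-shot existential risks.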
So taking into account the constant risks, what other kinds of risks can we organize into a comprehensive framework?
Some existential risks are simple to compute. The earth's size, the debris in the solar system, and the datable impacts on this and other planets suggest that we should be hit by a dangerously large object from space every few million years. It has been some time since the last major impact; the last impact of any size was the Tunguska event, early in the last century. Are we at risk? Probably not within the next 50-100 years. However, the actual results of a large impact have two attributes that deserve attention despite the small chance of collision. First, there is no way to stop such an impact. Regardless of what you may see in movies, there is no existing credible way to prevent space impacts. There is no detection mechanism in place to find incoming objects, and there are no space-based forces able to prevent a collision even if one were detected in time. Second, such an impact could totally destroy our civilization. It's instructive to note that while an impact is credited with the destruction of the dinosaurs, they did not perish by flame or overpressure. They were destroyed by the mangling of a complex and interdependent system of food chains, social relations, and ecosystems. They died AFTER the impact, as their complex environment was unable to compensate for the new variables. We are profoundly dependent on our technological and agricultural systems for survival, having passed the point of local self-sufficiency in many areas some time ago. So we must add threats to our support systems to the list of existential risks.
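The "probably not within the next 50-100 years" judgment can be made slightly more precise. Treating large impacts as a memoryless (Poisson) process, the chance of one in the next century follows directly from the mean interval. The two-million-year interval below is an illustrative stand-in for "every few million years" from the paragraph above, not a measured figure:

```python
import math

def impact_probability(t_years: float, mean_interval_years: float) -> float:
    """Probability of at least one impact in the next t_years, modeling
    impacts as a Poisson process with the given mean interval between events."""
    return 1.0 - math.exp(-t_years / mean_interval_years)

# Illustrative assumption: one dangerous impact every ~2 million years.
p = impact_probability(100, 2_000_000)
print(f"chance of a large impact in the next century: ~{p:.4%}")
```

For intervals this long, the probability is essentially t/T (about 0.005% per century here), which is why the per-century risk is negligible even though the eventual outcome is civilization-ending.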
What about harder-to-compute probabilities? Particularly futurological issues, like nanotechnology, artificial intelligence, and more dangerous technologies. It seems germane to address them by their known variables and possible risks. All future technologies are potential existential risks, for two reasons: first, because new technologies can be arbitrarily powerful, precisely because they are new; and second, because the implications of such technologies are not yet known. (A simple example: suppose you invent a new radio that operates on gravitic signals, or high-energy EM, and it's extraordinarily successful. Some time later it's discovered that it causes sterility in males, at ranges far enough to penetrate beyond line of sight. The entire human race is sterile, bam boom, existential risk, no way of knowing in advance.) There is also the possibility of future technologies being developed in deliberate attempts to destroy our race. Species suicide is not unheard of, and is becoming increasingly likely as every member of our species becomes more capable. So we must try to organize these difficult-to-evaluate risks by what we do know.
Military Future Tech. The military spends a lot of time building things to kill people. This alone would merit listing here, but in particular its technologies, while crude by futurist standards, are far more likely to be used, and developed. In my opinion, the military is the primary source of several future risk scenarios, particularly nanotechnological disaster, super-virii, nuclear exchange, and a new Dark Age. The militaries of the world also have a lot of money and an interest in the future, so unlike other future risks, I believe the probability of military species suicide will increase for some time, and then begin to decrease as militaries gather more information about the risks involved in certain technologies.
Lone Crazies. The lone-crazy profile is a simple and appealing one for many people, who are acquainted with several variations on the theme: the lone crazy gun nut, the Unabomber-style crazy intellectual, the crazy religious fanatic. But small groups and solo projects in general are not very credible threats to a significant fraction of people. While individual capability is growing, corporate capability is growing much faster. So while small groups may cause problems, the greater threat is from larger organizations. There are several destruction-minded organizations whose membership numbers in the hundreds, but most of these groups are relatively dispersed or ineffectual. Terrorist organizations, for example, generally operate on the cell principle, gathering in small isolated groups to accomplish simple goals with a greater likelihood of success. Other small groups with dangerous ideas can be discounted on the basis of ineffectual means. An example would be the anthrax 'attacks' on media centers following September 11, 2001. This was a profoundly ineffectual delivery system that resulted in almost no casualties. (We will not investigate the fact that it was weaponized anthrax from an American production facility.) Likewise the earlier anthrax attacks of the Aleph sect in Japan: this extremely wealthy and motivated sect, despite several attempts and multiple delivery systems, succeeded only in having some 3000 people hospitalized, with almost no deaths.
There is, however, an exception to the rule of ineffective small groups: leveraged technologies. Certain technologies provide an extreme imbalance of power to what could be a relatively small group. Unfortunately new technologies are difficult to predict, but I can lay out a few possibilities. The first is an expert-system or tool-AI project. Such a project could give a disproportionate amount of capability to a small group. The nightmare version of this scenario is a general AI project. The consequences of a rogue AI project are impossible to predict, and can't be reliably bounded; it could literally result in something worse than an existential risk, such as the torturing to death of more people than have ever existed, or other such pleasant scenarios. The second is financial terrorism: the extremely dangerous possibility of a group or individual exploiting instabilities in the financial world to intentionally destabilize it. Such a scenario could result in a catastrophic meltdown of global or even intranational trade, literally starving, depriving, overheating, and killing everyone dependent on trade (everyone in a major city, cash-crop farmers, anyone in long-term medical care, career employees, people with heart conditions in hot areas, diabetics, literally anyone). There are several other possibilities that spring to mind, but you get the idea, I think. Leveraged technologies are difficult to appraise and impossible to legislate against. In fact, legislation may actually increase the likelihood of such scenarios occurring, because of the outlawing phenomenon (independent activities may be spotlighted and made more attractive by illegality).
There are more risks than can be enumerated, but we must focus on those that are likely, threatening, and preventable. It helps little to worry about Gamma Ray Bursts, Nemesis, Strange Matter/Physics Disasters, or alien invasions, because there is little we can do to prepare for or prevent such disasters.
And so, I'll list what I think are the most likely, dangerous, and worrying scenarios. Please feel free to critique, add your own, or edit.
1. Leveraged Technologies, by a small or large group. (Super-virii, nanotechnology, unFriendly AI, unknowns. Even fifth- or sixth-generation nuclear weapons would suffice; we don't have Tbombs yet, but we're working on it.) We need to decrease the likelihood of leveraged weapons ever being deployed by any group or nation by decreasing the tensions and inequalities that cause such weapons to be used. Going after the causes of military, terrorist, or governmental action is the only viable way to reduce the likelihood of their occurring.
2. "Natural" Deaths. Everybody dies. That must stop, if we're to credibly consider ourselves to be removing threats to life. This includes accidents and homicide/suicide.
3. A Dark Age. A dark age could be triggered by several stimuli: it could be social or political warring, such as caused the last dark age; it could be another nonexistential disaster that destroys the social commons; it could be a repressive world government. In any case, steps must be taken to ensure that we are more resilient, our support systems more reliable, and our society more decentralized, to reduce this risk. A dark age would cause millions of unnecessary deaths and derail attempts at progress for an arbitrary period of time.
4. "Natural" Disasters. (Meteors/asteroids, magnetic disturbances, solar abnormalities, superfast climate changes.) Natural deaths are no better than artificial ones, and defenses and countermeasures should be built or investigated for the most likely and best understood of natural disasters.
5. The Great Filter. The Fermi Paradox is the biggest question mark in the universe. There are no aliens: no superstrong radio signals, no stellar engineering, no macroscale objects, no gravitic radios. Why is the galaxy so empty? Why the crushing silence? Is there some great filter that crushes civilizations before they can reach the interplanetary stage? And will the answer come looking for us? The question of why is vital. We must understand, because we are on the verge of becoming visible to other civilizations, and whatever keeps us from seeing them may prevent us from expanding as well.
Please feel free to be as complimentary or critical as you can. I declared Crocker's Rules a long time ago because I'm more interested in information than esteem. http://sl4.org/crocker.html