  LongeCity
              Advocacy & Research for Unlimited Lifespans


Your pie chart


13 replies to this topic

#1 caliban

  • Admin, Advisor, Director
  • 9,150 posts
  • 581
  • Location:UK

Posted 19 September 2003 - 04:11 PM


Just curious

Posted Image


Does this graphic represent someone's educated assessment of the likelihood (or potential impact?) of catastrophic risks, or is it just an example?

#2 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 19 September 2003 - 04:26 PM

It better be an example because it certainly doesn't represent my "educated assessment".

Terrestrial risks would definitely be a far larger threat to individual life, unless we are only worrying about mass death as an issue. Threats to life begin with risky behaviors: definitely terrestrial, not cosmic, and not requiring a rocket scientist to understand. They also represent the single largest statistical cause of general "premature death" today (other than aging); after that, add politics and infectious or genetic diseases. So I would not concur at all with the graph as plotted.

What it may represent is a view of what "may" come to threaten us if we first overcome the more terrestrial (common) threats, though those tend to linger on as potential threats by virtue of the paradox of empowerment. That cosmic threats exist, and that our methods of preparation and probability assessment are inadequate, are both beyond question given the evidence. That said, preparing for a known threat is reasonable, while preparing for an unknown threat borders on paranoid delusion.

Even "Unfriendly AI" would IMO be an example of a terrestrial threat, unless we are discussing an invasion of Extraterrestrial Borg. [alien]

#3 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 19 September 2003 - 06:20 PM

Just as a point of reference, I have said before and will emphasize here: "that which you do not know can kill you."

However, having said this, please realize that this is very different from living in fear of the unknown, or from making self-satisfying "risk assessments" that, by the very nature of the threats being unknown, are in fact unknowable probabilities.

Unknown threats fall into three general categories, only two of which are rational to discuss at any great length in search of criteria valid enough to make a risk/reward assessment.

1. Threats from very rare but extreme occurrences, for which sufficient evidence warrants not just further analysis but data gathering and intentional contingency planning (i.e. asteroid impacts, as opposed to vastly more remote cosmic possibilities like invading ETs).

2. Threats from sources, including emerging technology, that by their very nature possess a potential for harm (i.e. threats from runaway AI, nanotech, bio-weaponry, or even "natural disasters" such as sudden Global Warming).

It is rational to discuss these two because, while it is invalid to assign any serious probabilities, there exists both a clear relationship to our behaviors as a species and at least a "possibility" that there are measures we may take to address the threats before and/or after the fact.

The third category is not particularly rational to discuss very much because a) as in the case of GAI, there is little we can do about it except from a design perspective, and b) by its very unknown nature we are blind to what the unintended consequences will likely be. I place all superstitious threats in this last category, along with some very real yet "unknown" ones.

Unfriendly AI risks falling into this category, but it really should be seen as a subset of the second. Nevertheless, there is simply insufficient experience or data to make any claim about the probability of the specific risks, and discussion in anything but very general terms is not likely to be very fruitful.

BTW, I for one am at a loss to define threats from other humans as a known or an unknown quantity. I think we are a known quantity and should be able to define our own behavior rationally, but clearly many believe themselves to be ultimately a mystery to themselves, and from that point of view we represent an "unknowable risk" to ourselves.

IMO this last is a cop-out to create plausible deniability and avoid liability and responsibility. I believe we are capable of knowing ourselves, and I suggest it is high time to face ourselves honestly and realize that "to thine own self be true" is something that can be accomplished, and has been by many people.

The risk assessment you suggest, Caliban, as hypothetically representing this council's views is not particularly valid for the reasons I have given, but it would also be unhelpful unless accompanied by a contrasting reward assessment.

It is still humans that are the greatest threat to other humans, and yet humans also represent the greatest possible source of benefit to one another, if we can get beyond win/lose strategies.


#4 chubtoad

  • Life Member
  • 976 posts
  • 5
  • Location:Illinois

Posted 19 September 2003 - 08:37 PM

I pretty much agree with Lazarus's assessment. I'm really not sure whether unfriendly AI is a real threat, though; I would like to get a lot more information from people in the field.

#5 celindra

  • Guest
  • 43 posts
  • 0
  • Location:Saint Joseph, TN

Posted 19 September 2003 - 08:48 PM

I question the logic of putting a non-existent technology as the second biggest threat. By this logic, I could put "human-devouring mutant aardvarks from Venus" or "misuse of time travel" or any other yet-to-be-created technology.

And, yes, I know it's long term.

Right now, AI poses absolutely no threat to anyone ... because it doesn't exist. However, there are numerous insane people with nuclear weapons (we call them politicians) who could easily snap and kill billions.

#6 patrick

  • Guest
  • 37 posts
  • 0

Posted 19 September 2003 - 09:04 PM

This is exactly why we need a calculus of risks; an arbitrary yardstick. So we can get down to brass tacks.

#7 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 19 September 2003 - 11:22 PM

This is exactly why we need a calculus of risks; an arbitrary yardstick. So we can get down to brass tacks.


Perhaps you could find a better word than "calculus." It is not rational to assign specific probabilities, but it is perfectly rational to assess relative levels of risk by "prioritizing," provided we always keep open minds about incoming information and ongoing study. At the moment, for example, I would put war well ahead of asteroids as an immediate threat, but that would change dramatically the moment we had a date within my life expectancy for a probable collision.
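The prioritizing Lazarus describes, ranking threats relative to one another and re-sorting as new information arrives rather than assigning hard probabilities, can be sketched in a few lines. The risk names and scores below are purely illustrative placeholders, not anyone's actual assessment:

```python
# Hypothetical ordinal risk priorities -- illustrative only, not a real assessment.
# Higher score = higher current priority; scores are relative ranks, NOT probabilities.
risks = {
    "war": 9,
    "infectious disease": 7,
    "asteroid impact": 2,
}

def reprioritize(risks, name, new_score):
    """Return an updated copy of the rankings when new information arrives."""
    updated = dict(risks)
    updated[name] = new_score
    return updated

def ranked(risks):
    """Return risk names ordered from highest to lowest current priority."""
    return sorted(risks, key=risks.get, reverse=True)

print(ranked(risks))  # war first under the illustrative scores above
# A confirmed collision date would move "asteroid impact" to the top:
print(ranked(reprioritize(risks, "asteroid impact", 10)))
```

The point of the sketch is that the ordering, not any individual number, carries the meaning; the scores exist only to be compared and revised.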

Also, everyone must accept a priori that we are comparing what are at best semi-related variables, some of which must be assumed to be unrelated, and which in terms of Murphy's Law could coincide to make the combined impact of such events larger than any individual threat.

An example of that last point actually happened. Back in 1980, one of the events that triggered a Defcon alert putting us on launch status was a meteorite that exploded in the atmosphere and was initially read as an atmospheric nuclear blast. We all figured things out before we added our nuclear enhancement to the relatively benign blast debris, but if the meteorite had struck the planet during that period (worst case, without warning, onto a major metropolis), then we might have launched first and sorted it all out too late.
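The Murphy's Law point about unrelated threats coinciding has a standard probabilistic form: for independent events, the chance that at least one occurs is one minus the product of the chances that none does, which always exceeds any single threat's probability. The annual probabilities below are invented for illustration, not estimates of any real risk:

```python
import math

def prob_at_least_one(probs):
    """P(at least one of several independent events) = 1 - prod(1 - p_i)."""
    p_none = math.prod(1.0 - p for p in probs)
    return 1.0 - p_none

# Purely illustrative annual probabilities for three unrelated threats.
threats = [0.01, 0.005, 0.002]
combined = prob_at_least_one(threats)
print(f"{combined:.4f}")  # slightly below the naive sum 0.017
```

For small probabilities the combined figure is close to the simple sum, but the coincidence scenario Lazarus describes (two threats landing together and compounding) is exactly what this independence assumption fails to capture; correlated threats can be worse.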

Also, it is valid to address what we can do something about; what we can't specifically do something about then becomes a default issue. Again, this changes if someone can offer a rational approach that merits discussion. The problem with "regulation" is that it is predicated on denial. The primary form of regulation is licensing and taxation, the second is the imposition of a "security-related status," and the last is to proscribe the endeavor entirely. The last is the least likely to have long-term benefit (look at the example of Japan banning firearms), and the most likely to leave a society without recourse, hence the least likely to work at staving off threats.

The first category is actually not the first impulse of government, as we see in the case of alcohol and stem cells, even though governments fall back upon it, as in Nevada and Holland, as a rational way of regulating public safety with respect to prostitution. The idea is that licensing forces some types of health care and inspection, as well as reducing the general level of violence associated with the trade in flesh. Sin tax is just an obvious example of being able to regulate enforcement and make the consumer pay. Government tends always to be heavy-handed and reactionary first, imposing repressive legislation that is then overcome through a lengthy evolutionary process of social reform and ongoing incremental legislation.

Caliban, as you started this thread: what do you think the graph should look like, as opposed to what you suspect other folks think?

This council is being established to begin this assessment process, and I for one don't think we have reached a consensus yet at all, let alone discussed the minimum necessary variables we can apply.

I lean more toward Celindra's attitude personally, but I am also conversely convinced that the beginning of the threat already exists, because there are behaviors like cyberterrorism and a globally competitive quest to build true AI. It is not yet a practical reality, but as various groups are racing to build it, we can establish intent, and we certainly should at the very least be aware of their methodological approaches.

#8 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 19 September 2003 - 11:34 PM

/me claps.. the idea worked! ;)

I'll be happy to rework the graph to better reflect the prevailing views... I'll be back a bit later.

#9 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 September 2003 - 09:57 AM

I mostly agree with all Laz has to say here. I worry that a pie chart is overly inflexible and rigid: 1) we're all bound to have disagreements about the *precise* priorities we should be focusing on, and 2) these priorities are likely to change, either in tiny amounts or by a lot, as we gain new information. It's okay not to have exact ideas about which kinds of risk are the most dangerous! Any quantitative effort at examining the likelihoods of these risks would be interesting, but the subject is vastly too complex for few-hours-a-day volunteers like us to be coming up with specific numeric quantities!

Let's start by stepping back and looking at the basic, basic, basics of what we agree upon. ;) One, we probably care more about global and personal threats than the typical citizen does. We should emphasize that worrying about these things isn't what we do all day, and that we're really all well-rounded people just trying to get on in life, except that our awareness of technology and other big-picture issues has led us to put our attention and thought into certain areas which we consider to be important. The reason why *any* of this big-picture stuff is important is all the small-picture experiences that fit into it.

What else? Okay, it seems that the *kinds* of risks we worry about are greater in number than what the average person worries about. The more we learn about technology, the more specific types of risk we see, and while we certainly can't agree on the specific numbers, there does seem to be a constellation of risks worth thinking about, summarized very nicely by Dr. Bostrom in his Existential Risks paper. We should point out here that just because we know about more risks doesn't necessarily mean we spend *more* of our time than the average person thinking about them, although it might. Part of why we're thinking about this stuff at all is that none of us really see ourselves as "average persons"; we're trying to assume the hypothetical role of leaders of a nation, people who have great leverage over the course of the future, and so on. Whether or not we actually have this influence is irrelevant at the moment! We should point out that this is just our interest; some people are interested in Motocross, paragliding or whatever, and while we may be interested in these things too, *another* one of our interests is getting together and speculating about the possible risks humanity will be confronting over the next few decades and beyond.

Okay, so what else is different between us and typical citizens concerned about the fate of the Earth? I would say that we tend to focus less on the specific details of political situations, and are more keenly concerned with *the technologies being used in these political events* than with their specific nature or the people involved, although all of these factors may be important in the last analysis. The fact remains that we all simply *don't have the time* to know all the details of every possible political or social event, past and future, so we're more concerned with the *technology* behind these actions than with all the details. Many of us would agree that our technological capacity is increasing while our moral capacities hold comparatively constant. Accelerating change might bring accelerating compassion, but I tend to think that the change aspect comes first, and 2nd-order effects like improving aggregate compassion come second. This is why analysts are always saying "technology is racing ahead of ethics".

Okay, here's another thing we might agree upon. Since these technologies are not limited to any one nation or group in scope, we tend to favor a *global*, all-humanity-inclusive stance in reasoning about these issues. We don't invoke "Ethnic Grampa's common sense" when thinking about these things because 1) specific nations and cultures are beginning to matter less, while global issues matter more and concern everyone, and 2) the specific rules and regularities present in the annals of history often apply only to the narrow situations their formulators encountered, nothing like what we're encountering today! I think we agree that many of the history books can be thrown out the window on these existential issues, because none of our forebears could have possibly imagined the situation we are in.

That's all I have to say for now. ;)

#10 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 20 September 2003 - 11:37 AM

I alluded to one reason we share that brings us into common focus on trying to do something about such seemingly insurmountable "existential risks": it is a consequence of the positive aspect of living longer. If we are to live longer, then we must prepare to meet such challenges head on.

Living longer will not come from greater complacency about the world around us; it will not be caused by seeing life as less meaningful, nor will it in itself cause us to see life as less meaningful. Quite the contrary: I suspect it will promote a new awareness of what transpires around us in this world and a greater desire to influence events toward more positive outcomes that we live to share.

For us to succeed we must lead by example: demonstrate concern and good will, and make honest efforts to remedy what has confounded human progress throughout the ages. But we are not the first generation, nor hopefully the last, to come upon such recognition, and I suggest that the history books are still valuable in this respect, by providing a better understanding of the foundations upon which we build.

However we are the bearers of a new message of hope and it is one worth living for; one that says to everyone listening that there is a real chance to build a better world in our lifetime and live to preserve and protect it, but more importantly to enjoy it.

Living longer simply forces a measure of life by very different standards than are generally understood and such a difference contributes to radically altered perspectives on what constitutes risk but it also irrevocably alters the measure of reward.

The baubles that passed for profit in the past may no longer appear as anything more than childish trinkets in a world where worth is measured by the quality of lives shared in an Eden of our own creation, or where the real risk is eternity in a hell of our making.

Death is life's whip but it makes slaves of us all and what we are about is the liberation of the human spirit from the bondage of mortality. This cannot be accomplished by a single-minded fixation on risk as all risks must be measured in costs, capabilities, and the potential for reward. This is how we turn life's lemons into lemonade.

For example, I have argued numerous times that we need to focus on capturing asteroids, because the rewards are very tangible and immediate. It would also put in place a technology that could someday meet an unforeseen challenge and prevent the next KT event. It is not pie-in-the-sky chart speculation.

I am not as concerned about Cosmic Heat Death as I am about solar storms and real terrestrial climate, because those are what I must address at the moment, and even that borders on an irrationally optimistic goal. Getting fixated on building shelters against cosmic ray bursts is a little like building dikes in the desert against flooding caused by global warming; I suggest there exists a better first line of defense.

I am not complacent about resource depletion, climate, polarizing politics, or stratified economics. I think a lot can be done that we aren't doing in terms of building collective protection, but the greatest threat most people everywhere face is street crime, not even epidemics. The greatest threat they cope with daily is the loss of income from unstable and exploitative market practices, and famine still kills more people on Earth today than even war. So let's not get too sanguine about technology, because it only works where it is affordable, and even then some costs are measured beyond money.

But in building a collective protection who are we? How do we learn to trust one another and build on our individual strengths and overcome our weaknesses?

One pragmatic approach, when feasible, is to sort out tasks so as to tackle them one at a time, in a manner that lends symmetry to the effort, such that success in one builds aptitude and resources for dealing with a greater threat. This is why I have recommended shifting the focus of the space program from going to the planets immediately to capturing asteroids: I see this as such an avenue of opportunity, not merely because I fear a strike.

I said "when feasible" because life rarely gives us the luxury of tackling one problem at a time, and for this reason juggling should be required study for any politician, not to mention it would improve their general showmanship. ;))

#11 Bookcase

  • Guest
  • 11 posts
  • 0

Posted 20 September 2003 - 12:07 PM

A die has six faces. Roll it, and the probability of landing on each face is 1/6. There are six outcomes, and each is fixed... there will always be six faces. A risk assessment of threats to life?...... I'm sure you will agree this has infinite variables, made infinitely worse because the probability of each variable coming up is continuously changing.

A statistical estimate of various risks is obviously possible for a given time, but as some here have pointed out, there could be an infinite number of risks we have not yet encountered. An assessment can only be reliably based on data obtained (i.e. the number of deaths of each type, given a total number of deaths)..... but with such little data.... we might as well spend our time pondering how to reach immortality ;)
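Bookcase's death-count approach can be made concrete: given counts of deaths by cause, the observed share of each cause is just its count over the total. The counts below are invented for illustration, and the sketch also shows exactly the limitation being described, since any never-yet-observed risk gets a share of zero:

```python
# Invented death counts by cause -- illustrative only, not real statistics.
deaths = {
    "aging-related disease": 900,
    "accidents": 60,
    "infectious disease": 30,
    "war": 10,
}

total = sum(deaths.values())
shares = {cause: n / total for cause, n in deaths.items()}

for cause, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {share:.1%}")

# A risk with zero observed deaths (say, asteroid impact) would get a share
# of exactly 0 here -- which is precisely the estimation problem above:
# observed frequencies say nothing about risks not yet encountered.
```

This is why the thread keeps circling back to ordinal prioritization rather than point estimates: the historical record can rank the common killers, but it is silent on the rare or novel ones.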

#12 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 20 September 2003 - 03:04 PM

There is a major qualitative difference among the various threats in our ability to gather data on them, and this needs to be better understood along with the "threats" themselves. For example, I keep insisting that there is extremely valid but poorly understood data with respect to impacts that might threaten us, and so I insist it warrants attention, while some subjects are much more speculative, and thus reactions to them are less specifically defined and far less certain to do anything except assuage irrational fears.

GAI (General Artificial Intelligence) is an area that revolves around humanity until it is accomplished; then a new relationship will automatically develop in response to the synergy between us and the new species/phylum of our creation. BUT this hasn't happened yet, and all that can be understood is intent plus history. This is qualitatively different from the relationship that has evolved memetically between humanity and its technology.

GAI is more analogous in principle to a new phylum of life, from a general theoretical understanding; that is why it resists being seen through simplistic utilitarian ideology. It isn't a "mimicry of life": it is alive once sentient. What it isn't is neatly understood as "organic life" as we have long defined it, but that is perhaps more a problem with our definitions.

Asteroid impacts, by contrast, are far more "certain," and a vast amount of data has already been collected but is poorly correlated. New evidence comes in daily, but interpreting it requires attention to details that few will find profitable, except those who pay attention to the information's potential to create opportunity, not simply catastrophe. In other words, it is valid to address this because the risks are more ascertainable and the immediate responses more attainable.

One man's poison is another man's pleasure: that which destroys can also provide. How we make this happen is often the result of opportunistic creativity and an awareness of how large-scale systems relate to their elements, along with more than a little bit of luck.

One such lucky break happened to me a few minutes ago when I stumbled upon this site.

Posted Image
http://www.unb.ca/pa...ImpactDatabase/

It makes a difference in these discussions when we can begin to discuss facts and compare various interpretations of them for validity. To accomplish this we need observed facts first, corroborated facts second, and a measure of each fact's validity and its contrapuntal relation to conflicting facts third; all before even attempting to fix upon single-minded theoretical interpretations.

Toward this end, I suggest that no serious discussion of any of these threats can take place without the general and specific categorizations that reduce the threats to meaningful proportions, along with a concurrent attempt to data-mine the web for the kinds of cross-linked databases that allow serious, ongoing, independent investigation and confirmation of hypotheses, not to mention, at the very least, informed discussion.

So along with every implied risk should come topic threads that encourage serious extended study of the very best data we can gather from around the world, even at the risk that some will overlap.

For example, while as a US citizen I tend to focus on Hubble and NASA sources of astronomical data, they are by no means the only sources. Those of you who wish to contribute your various groups' data should be encouraged to do so, even and perhaps especially when these contradict more "established sources," because when such real contradiction is confirmed, we are often at the starting point of the most important journey of exploration possible in our time.

#13 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 24 September 2003 - 06:43 PM

Add these related comments:

Existential Risks - Must Reads
http://new.imminst.o...t=0


#14 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 28 February 2004 - 10:31 AM

I've retired the Pie Chart and replaced it with:

Posted Image



