Your use of the words “existential risk/threat” is interesting. Is this terminology that others use in the manner you describe?
I don't think MANY people use it, just people in philosophical Singularitarian discussions and essays.
I find the philosophical concept of existentialism to be an abhorrent malady of the continued alienation our failing social experiments facilitate. I like to have my recognition of evil in the universe to be clear and unfettered so that others can help me fine-tune my perspective and so that I may better address how to combat such evil.
Yeah, existential risks and their causes are usually considered "bad" (can't think of a good one!).
Might “existential risk/threat” be better phrased as “existence risk/threat?”
You could, but I'm thinking it'd just cause more confusion than anything. I know why you think it should be called that, though (and it does make sense to do so, but society, as I'm sure you're aware, doesn't make much sense a lot of the time, either).
Do you honestly believe that I’m just trying to get my way? If so, I would greatly appreciate your sharing how you come to this.
I couldn't COMPLETELY infer such, but when you speak of rationalizing a society and complain about our full-speed-torpedoes-be-damned ideology, it does become slightly inferable and credible (whether it is true or not).
My hope is that scientists will come to have some forethought of their own perhaps with help of a social system that works to recognize and avoid dangers to existence. Sure, some will pursue any science for the sake of science though it may have the potential to put an end to science by destroying our selves, truly a maniacal obsession.
I have but one question: Which do you think is more dangerous... nanotechnology, uploading technology, or FAI?
Interesting that we both use imminent danger as justification for two stances. Perhaps this is because our stances are not directly opposing. For those who may want no singularity to ever happen, I would say your argument is valid, but both of those polarities seem to come from a non-contextual understanding. From the point of view that we are all in this together, and that our activities cannot help but have repercussions that affect others, I find that the development of the singularity must happen with great care and effort to ensure that this potential is benevolent. I don't think you disagree with this. I'm just asking you to make the leap to the understanding that our society right now allows the creation of technology for ends that are both nonsensical and destructive; that how and why we make any technology should be the prime consideration.
FAI, in particular, is meant to be as benevolent as possible for humanity. To say that FAI is just a very ambitious general intelligence project is simply underestimating SIAI's (Singularity Institute for Artificial Intelligence) goals.
Saying that sociology should be of higher priority than the singularity does not mean that the singularity shouldn’t be of high priority also. I just hope we can avoid the “damn the torpedoes, full speed ahead” perspective. If we can dodge and avoid dangers that we face, I say, let it be. No blinders for this horse.
Sorry, damn the torpedoes! As long as there are greater risks than a gone-crazy "Friendly" AI, such as nuclear war, biological war, nanotechnological disasters and wars, uploaded individuals running amok through the Internet, or something unforeseen, I plan on helping in any way I can to create a world-revolutionizing FAI (and I, too, will be rational about the implications of such before advocating it).
To "let it be" is to let the dangers stare you in the face. Call me crazy, call me irrational, but I feel that taking the time to dwell on a Terminator AI scenario, or something similar, just reflects people's general bias against Strong AI (AI with the capacity to be more intelligent than humans). Aggressive behavior is inherent within human beings as a complex evolved trait. An AI would have no instinctual reason to retaliate, or initiate, violence against humans. That's not to say one wouldn't, but humans today are probably more prone to violence than any proposed AI would be.
If anywhere in the scientific community, your hopes are fulfilled in the proponents of FAI. We're here to think of what such a possibly vast intelligence would do. If you would, just take the time to read an introductory article on FAI:
http://www.singinst....dly/whatis.html. It's not TOO long, but you'll see that the implications of FAI are thought through extensively.
You might also like Staring Into the Singularity, by Eliezer Yudkowsky:
http://sysopmind.com/singularity.html

Yes, might I have good luck, and may it rub off on you too, thank you. I'm not a superstitious person, methinks, so I can share my birthday wishes without compromising their possible manifestation. Since about the age of thirteen, my wish before blowing out the candles has been "I wish the world had all happy people." Though I really haven't had a birthday cake for myself for more than thirty years, I still have this wish. In the final analysis, I find altruism to be self-serving. lol
Funny, Eliezer Yudkowsky, author of "Creating Friendly AI," would probably agree with you.