  LongeCity
              Advocacy & Research for Unlimited Lifespans





Intelligence Explosion


No replies to this topic

#1 RighteousReason


Posted 21 November 2009 - 10:39 PM


Intelligence Explosion

The "Intelligence Explosion" is one school of thought on the "Technological Singularity". This is a collection of writings, mostly by Eliezer Yudkowsky, that forms the first comprehensive, organized explanation of the Intelligence Explosion theory. The introduction defines the term "Technological Singularity", then explains the various schools of thought on the Singularity, distinguishing and introducing the Intelligence Explosion school among them. The main body is an exposition of the Intelligence Explosion and Friendly AI theories. The conclusion summarizes Eliezer Yudkowsky's positions and turns this vision into an important call to action for the reader. Further readings are included at the end.

Introduction

1. Technological Singularity - http://en.wikipedia....cal_singularity
2. Three Major Singularity Schools - http://yudkowsky.net...ularity/schools
3. What is the Singularity? - http://singinst.org/...sthesingularity

Intelligence Explosion

4. The Power of Intelligence - http://yudkowsky.net/singularity/power
5. Seed AI - http://singinst.org/...OGI/seedAI.html
6. Cascades, Cycles, Insight... - http://lesswrong.com...cycles_insight/
7. ...Recursion, Magic - http://lesswrong.com...ecursion_magic/
8. Recursive Self-Improvement - http://lesswrong.com...elfimprovement/
9. Hard Takeoff - http://lesswrong.com...f/hard_takeoff/
10. Permitted Possibilities, & Locality - http://lesswrong.com...ities_locality/

Friendly AI

11. Cognitive Biases Potentially Affecting Judgment of Global Risks - http://www.singinst....tive-biases.pdf
12. The AI-Box Experiment - http://yudkowsky.net/singularity/aibox
13. Artificial Intelligence as a Positive and Negative Factor in Global Risk - http://www.singinst....igence-risk.pdf
14. Why We Need Friendly AI - http://www.preventin...ed-friendly-ai/
15. Coherent Extrapolated Volition - http://www.singinst....upload/CEV.html

Conclusions

16. What I Think, If Not Why - http://lesswrong.com...ink_if_not_why/
17. Why Work Toward the Singularity? - http://singinst.org/...dthesingularity
18. The Singularity Institute for Artificial Intelligence (SIAI) - http://singinst.org/
19. Donate - http://singinst.org/donate/

Links

20. Consolidation of Links on Friendly AI - http://www.accelerat...on-friendly-ai/
21. Writings about Friendly AI - http://www.singinst....ut-friendly-ai/
22. Reading - http://www.singinst.org/reading
23. Eliezer Yudkowsky - http://yudkowsky.net/

Edited by RighteousReason, 21 November 2009 - 10:47 PM.
