Great idea! Unfortunately, I think there is inherently more disagreement on the timeframes, outcomes, and desirability of AGI technology than there is about cryonics or anti-aging research. It's also easier to miss the point entirely. For example, Jake, on the AGIRI forums, is talking about AGI causing "unemployment" - this treats AGI as something like well-educated foreign scientists competing with a home country's scientists, rather than as the introduction of a totally new, recursively self-improving species. He also mentioned AGI being used for malicious research purposes. Thinking of AI as a tool rather than as a new species is exactly the mindset an open letter on AGI should be designed to counter.
I agree that it would be useful to cite substantive papers on the issue, but unfortunately few exist.
The Singularity is Near would be a great work to cite, of course.
It will be more difficult to find people to sign something about AGI than something about anti-aging or cryonics, I think. For this reason, it might be good to have a "seed group" sign it first, then solicit new signatures by pointing prospective signatories to a pre-existing web site.
As in the cryonics letter, it would be best to solicit signatures from people in "related disciplines" (AI is so multidisciplinary) rather than splitting "leading AI researchers" from "additional scientists", so that there is just one pool of signatures. With that in mind, let's look at the two letters:
To whom it may concern,
Cryonics is a legitimate science-based endeavor that seeks to preserve human beings, especially the human brain, by the best technology available. Future technologies for resuscitation can be envisioned that involve molecular repair by nanomedicine, highly advanced computation, detailed control of cell growth, and tissue regeneration.
With a view toward these developments, there is a credible possibility that cryonics performed under the best conditions achievable today can preserve sufficient neurological information to permit eventual restoration of a person to full health.
The rights of people who choose cryonics are important, and should be respected.
To whom it may concern,
Aging has been slowed and healthy lifespan prolonged in many disparate animal models (C. elegans, Drosophila, Ames dwarf mice, etc.). Thus, assuming there are common fundamental mechanisms, it should also be possible to slow aging in humans.
Greater knowledge about aging should bring better management of the debilitating pathologies associated with aging, such as cancer, cardiovascular disease, type II diabetes, and Alzheimer's. Therapies targeted at the fundamental mechanisms of aging will be instrumental in counteracting these age-related pathologies.
Therefore, this letter is a call to action for greater funding and research into both the underlying mechanisms of aging and methods for its postponement. Such research may yield dividends far greater than equal efforts to combat the age-related diseases themselves. As the mechanisms of aging are increasingly understood, increasingly effective interventions can be developed that will help prolong the healthy and productive lifespans of a great many people.
The letter on aging research is 147 words; the cryonics letter is only 93 words. Unfortunately, to cover all the necessary bases and distinguish an AGI letter from a generic letter that just says "AI is great", it will need to contain at least 500 words.
Using the above letters as inspiration, following are some points that might be made. The key point should be that human-equivalent AGI systems will be able to improve on their own programming and robotics rapidly, leading to consequences far beyond the initial success. This is the essence of the Singularity idea.
1. Artificial General Intelligence is a legitimate field that seeks to build software systems with general intelligence - that is, AI that can independently find problem-solving strategies and solutions in biology, physics, engineering, architecture, nanotechnology, cognitive science, and programming, with quality equaling or surpassing the brightest human minds.
2. Artificial General Intelligence is the subfield of Artificial Intelligence with the greatest long-term consequences. Most of Artificial Intelligence is focused on building software systems for narrow tasks, rather than on flexible general intelligence.
3. Artificial Intelligence as a field is not frozen or stagnant, and many important advances have been made in recent years.
4. Artificial General Intelligence, if built, would be intelligent enough to improve on its own programming and robotics without human assistance. Construction of the first true AGI could have consequences far beyond the original programmers' intentions. Because of its superior cognitive hardware, AGI could self-improve very rapidly by human standards. This represents significant risk but also significant promise. Rogue AI is a legitimate, near-term threat to the human species, on par with or exceeding the risk of nuclear war, bio-terror, or asteroid impact.
5. The problem of how to ensure that AI remains friendly to humanity as it gains the ability to reprogram itself is unsolved. Before we build human-equivalent Artificial General Intelligence, there should be extensive theoretical and experimental studies (on infra-human AIs) to ensure that future AGIs are good global citizens, even given the ability to reprogram themselves - adhering to the "spirit" and not just the "letter" of their goal programming.
6. Ultimately, human-equivalent AI cannot be avoided, so AGI researchers should do their best to ensure that AGIs help humanity rather than hinder it. Because the benefits of successful AGI would be so large, arguing about the specifics of how they are distributed is less important than ensuring that everyone receives them.
7. We must not anthropomorphize AI by assuming that AGIs will be motivated by the same things that motivate us, find challenging the same obstacles that challenge us, arrange themselves in social configurations the same way that we do, and so on.
8. A number of potential paths to AGI exist, including symbolic AI, genetic algorithms, universal inference and decision theory, and whole brain emulation.
I argue that any letter on AGI that hopes to make a beneficial impact should explicitly include all of the above points. To ensure a positive impact in the media, it would be wise to include the biggest names possible, minimizing the inclusion of those without doctorates, without association with well-known AI companies, or without association with explicitly Singularity/AGI-related organizations. Here is a rough list of potential invitees:
J. Storrs Hall, Ph.D
Ray Kurzweil
Eric Drexler, Ph.D
Nick Bostrom, Ph.D
Robin Hanson, Ph.D
Bart Kosko, Ph.D
Hugo de Garis, Ph.D
Marvin Minsky, Ph.D
Steve Omohundro, Ph.D
Anders Sandberg, Ph.D
Ben Goertzel, Ph.D
Aubrey D.N.J. de Grey, Ph.D
Eric Baum, Ph.D
Pei Wang, Ph.D
Hans Moravec, Ph.D
everyone on the list Bruce linked
additional names
Edited by MichaelAnissimov, 20 March 2006 - 01:18 AM.