  LongeCity
              Advocacy & Research for Unlimited Lifespans





Scientists' Letter on AGI



#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 06 March 2006 - 12:45 AM


I'm starting to see potential benefit in putting together a cohesive paragraph or two about the importance of AGI+Singularity, to be signed by the leading AGI researchers, as has been done with:

http://www.imminst.o...ryonics_letter/
http://www.imminst.org/cureaging/

I've also created a topic at AGIRI:
http://www.agiri.org...p?showtopic=144

I'll go back and forth between the two in hopes of building support for this project...

#2 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 18 March 2006 - 11:19 PM

Bruce - out of curiosity and not criticism, I have some questions about this type of letter. How do these signed documents help? Have they been shown to be effective? What are the expected results?

When I tell someone about cryonics or life extension or the Singularity, should I point to these documents to back the ideas up? "So," I finish telling a friend, "that is the idea behind AGI+Singularity. It may seem a little crazy, but if you go to thislink.com you will see a lot of scientists, thinkers, and other respected people are behind the idea." Is this the idea and part of the intent? Thanks.


#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 19 March 2006 - 08:07 PM

Hi Richard,

It's hard to quantify, but one goal in creating these letters is to provide a qualified starting point from which debate may grow. The anticipated outcome of more debate should be that more people have an opportunity to take a position on these topics. As more debate happens, I'm confident that more people will come to see cryonics, life extension and AGI as beneficial and rational pursuits.

BTW, thanks for joining ImmInst as a Full Member!

#4 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 19 March 2006 - 09:41 PM

Great idea! Unfortunately, I think there is inherently more disagreement on the timeframes, outcomes, desirability, etc., of AGI technology than there is about cryonics or anti-aging research. It's also easier to miss the point entirely. For example, Jake on the AGIRI forums is talking about AGI causing "unemployment" - thinking of AGI as, say, well-educated foreign scientists competing with a home country's scientists - rather than as the introduction of a totally new, recursively self-improving species. He also mentioned AGI being used for malicious research purposes. Thinking of AI as a tool rather than a new species is exactly what an open letter on AGI should be designed to contradict.

I agree that it would be useful to cite substantive papers on the issue, but unfortunately few exist. The Singularity is Near would be a great work to cite, of course.

It will be more difficult to find people to sign anything about AGI than something about anti-aging or cryonics, I think. For this reason, it might be good to have a "seed group" to sign it, then solicit new signatures by pointing them to a pre-existing web site.

As in the cryonics letter, it would be best to solicit signatures from people in "related disciplines" (AI is so multidisciplinary) rather than listing "leading AI researchers" followed by "additional scientists"; there would be just one pool of signatures. Let's look at the two letters:

To whom it may concern,

Cryonics is a legitimate science-based endeavor that seeks to preserve human beings, especially the human brain, by the best technology available. Future technologies for resuscitation can be envisioned that involve molecular repair by nanomedicine, highly advanced computation, detailed control of cell growth, and tissue regeneration.

With a view toward these developments, there is a credible possibility that cryonics performed under the best conditions achievable today can preserve sufficient neurological information to permit eventual restoration of a person to full health.

The rights of people who choose cryonics are important, and should be respected.


To whom it may concern,

Aging has been slowed and healthy lifespan prolonged in many disparate animal models (C. elegans, Drosophila, Ames dwarf mice, etc.). Thus, assuming there are common fundamental mechanisms, it should also be possible to slow aging in humans.

Greater knowledge about aging should bring better management of the debilitating pathologies associated with aging, such as cancer, cardiovascular disease, type II diabetes, and Alzheimer's. Therapies targeted at the fundamental mechanisms of aging will be instrumental in counteracting these age-related pathologies.

Therefore, this letter is a call to action for greater funding and research into both the underlying mechanisms of aging and methods for its postponement. Such research may yield dividends far greater than equal efforts to combat the age-related diseases themselves. As the mechanisms of aging are increasingly understood, increasingly effective interventions can be developed that will help prolong the healthy and productive lifespans of a great many people.


The letter on aging research is 147 words. The cryonics letter is only 93 words. Unfortunately, to cover all the necessary bases and distinguish an AGI letter from a generic letter that just says "AI is great", it should contain at least 500 words.

Using the above letters as inspiration, the following are some points that might be made. The key point should be that human-equivalent AGI systems will be able to improve on their own programming and robotics rapidly, leading to consequences far beyond the initial success. This is the essence of the Singularity idea.

1. Artificial General Intelligence is a legitimate field that seeks to build software systems with general intelligence, that is, AI that can independently find problem-solving strategies and solutions for problems in biology, physics, engineering, architecture, nanotechnology, cognitive science, and programming, with quality equalling or surpassing the brightest human minds.

2. Artificial General Intelligence is the subfield of Artificial Intelligence with the greatest long-term consequences. Most of Artificial Intelligence is focused on building software systems for narrow tasks rather than flexible general intelligence.

3. Artificial Intelligence as a field is not frozen or stagnant, and many important advances have been made in recent years.

4. Artificial General Intelligence, if built, would be intelligent enough to improve on its own programming and robotics without human assistance. Construction of the first true AI could have consequences far beyond the original programmers' intentions. Because of its superior cognitive hardware, AGI could self-improve very rapidly by human standards. This represents significant risk but also significant promise. Rogue AI is a legitimate, near-term threat to the human species, on par with or exceeding the risk of nuclear war, bio-terror, or asteroid impact.

5. The problem of how to ensure that AI remains friendly to humanity as it gains the ability to reprogram itself is unsolved. Before we build human-equivalent Artificial General Intelligence, there should be extensive theoretical and experimental (on infra-human AIs) studies to ensure that future AGIs are good global citizens, even given the ability to reprogram themselves, adhering to the "spirit" and not just the "letter" of their goal programming.

6. Ultimately, human-equivalent AI cannot be avoided. So AGI researchers should do their best to ensure that AGIs benefit humanity rather than hinder it. Because the benefits of successful AGI would be so large, arguing about the specifics of the distribution of benefits is not as important as ensuring that everyone receives them.

7. We must not anthropomorphize AI, and assume that AGIs will be motivated by the same things that motivate us, find challenging the same obstacles that challenge us, arrange themselves in social configurations the same way that we do, etc.

8. A number of potential paths to AGI exist, including symbolic AI, genetic algorithms, universal inference and decision theory, and whole brain emulation.

I argue that any letter on AGI that hopes to make a beneficial impact should explicitly include all of the above points. To ensure a positive impact in the media, it would be wise to include the biggest names possible, minimizing the inclusion of those without doctorates, without association with well-known AI companies, or without affiliation with explicitly Singularity/AGI-related organizations. Here is a rough list of potential invitees:

J. Storrs Hall, Ph.D.
Ray Kurzweil
Eric Drexler, Ph.D.
Nick Bostrom, Ph.D.
Robin Hanson, Ph.D.
Bart Kosko, Ph.D.
Hugo de Garis, Ph.D.
Marvin Minsky, Ph.D.
Steve Omohundro, Ph.D.
Anders Sandberg, Ph.D.
Ben Goertzel, Ph.D.
Aubrey D.N.J. de Grey, Ph.D.
Eric Baum, Ph.D.
Pei Wang, Ph.D.
Hans Moravec, Ph.D.

everyone on the list Bruce linked
additional names

Edited by MichaelAnissimov, 20 March 2006 - 01:18 AM.


#5 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 19 March 2006 - 10:37 PM

Great work, MA. You've beautifully prepared the blueprint. I agree that this AGI letter will need to be longer than the others in order to clarify and encompass more complex ideas. It seems to me the next logical step is taking the dive and creating the first draft...

#6 tyleremerson

  • Guest
  • 10 posts
  • 0

Posted 20 March 2006 - 12:45 AM

MA: Indeed, a good start. Ray didn't do a Ph.D. but should be approached.

BK: You must quantify goals, otherwise an AGI letter will lack precise intent; without precise intent you won't have precise meaning; and without precise meaning its value will be lessened.

I like your idea. I am interested in working with you and others on this. The Stanford Singularity Summit, for example, would be a remarkable venue at which to announce an open letter on AGI and the Singularity. The Institute is brainstorming the possibility of a formal "singularity studies research program" that would build a bridge toward academia. The letter might outline an AGI and singularity studies research proposal to build consensus among signers and quantify academic encouragement.

#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 March 2006 - 03:36 AM

Recommended domain name: singularityletter.org

513 words

Probably needs more concrete discussion of the benefits of AI, and why it's worth hurrying for. Definitely needs general editing. The claims are strong and detailed; revisions would go in the direction of milder and vaguer claims.


To whom it may concern,

Throughout the last half-century, the field of Artificial Intelligence (AI) has been progressing steadily. Today, AI systems help humans accomplish a variety of tasks, from filtering our e-mail to preventing credit card fraud.

If progress continues, the problem-solving abilities of AI systems will approach and then surpass the brightest human minds. These systems will acquire the ability to improve their own source code without human assistance, giving rise to smarter versions that no human team could program directly. These versions will be intelligent enough to have influence over the real world, including improving their underlying hardware.

Humanity has no experience dealing with a species smarter than itself. For this reason, the creation of Artificial Intelligence should be approached with caution. To distinguish between narrow-purpose AI and AI designed specifically for general intelligence and self-improvement, the term “Artificial General Intelligence” (AGI) was coined.

The creation of a smarter-than-human species has been called a ‘singularity’ by futurists, by analogy to singularities in cosmology. In cosmology, the singularity at the center of a black hole refers to the point at which the laws of physics as we know them cease to apply. This doesn’t imply that laws vanish, but simply that they change in ways we can’t foresee. The analogy is not perfect, but a cosmological singularity captures some of the uncertainty we will experience when confronting a smarter-than-human species for the first time.

The creation of AGI would be unlike prior technological milestones. AGI would be capable of independently initiating actions and making choices, inventing new technologies, and solving difficult problems. Created by human programmers rather than by evolution and natural selection, AGIs will not necessarily be motivated by the same things that motivate us, find challenging the same obstacles that challenge us, or arrange themselves in social configurations the same way that we do.

The choices an AGI makes when improving upon its own programming will stem from its initial top-level motivations. To minimize the probability of rogue AI, researchers working towards generally intelligent systems need to instill them with positive goal structures – altruism, benevolence, philanthropy. Because the benefits of successful AGI would be so large, arguing about the specifics of the distribution scheme is not as important as ensuring that they are received by everyone.

It is our position that AGI cannot be avoided entirely. As AI researchers and futurists, it is our responsibility to do as much as we can to ensure a positive outcome. We have begun by emphasizing the importance of Artificial Intelligence and the fundamental difference between self-improving AI systems and the AI systems of today.

We do not expect to confront these questions in the distant future. Smarter-than-human AI is something we anticipate arriving within years or decades, not centuries. The creators of the first Artificial General Intelligence may very well be alive today. The policies and precedents we set in the present will influence what happens in the future. The potential benefits of AI exceed those of any other technology. We will do our best to ensure they are accessible to all.

Signed,

#8 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 20 March 2006 - 03:40 AM

Thanks, Tyler.

I'll be happy to work with you, MA and others on creating an unaffiliated open letter on AGI.

BK: You must quantify goals, otherwise an AGI letter will lack precise intent.


Right. That is why I proposed the following at AGIRI.org on Feb 26, 2006:

I'm hoping to gain feedback from AGIRI members on how best to craft an open letter on AGI. I envision the letter's focus to be on the following (or more):

1) AGI's likelihood
2) AGI's potential for good
3) AGI's potential for bad

http://www.agiri.org...4&st=0#entry240


To briefly summarize my thinking, the underlying goal may be:

publish AGI open letter + increase awareness =
more bright minds working toward AGI for good reasons
which increases likelihood of a good Singularity

#9 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 20 March 2006 - 04:05 AM

Thanks, MA.

I've reserved singularityletter.org

#10 tyleremerson

  • Guest
  • 10 posts
  • 0

Posted 20 March 2006 - 04:47 AM

Jeff Medina had some good remarks about this:

[19:29] wwfaid: K. I tend not to think generic letter signing events are worth bothering with -- seems like if the claims of a letter are really supported by enough bright, educated people to make it noteworthy, the claims would already be widespread enough among the people whose opinions matter to make the letter redundant -- and compare the Discovery Institute's letter against evolution / in favor of "intelligent design" signed by a few hundred PhDs (and the scientific establishment's destruction of that letter by outdoing its numbers with an opposing letter signed only by scholars named Steve), but if Klein is going to do it anyway, it can't hurt to add momentum to Singularity Studies by working it in somehow.

[19:31] wwfaid: The only instances I can think of where such a letter meant anything were where *many* world-renowned scientists signed a letter to show public policy folks & politicians that something was important to do or avoid, a key example being the letter signed by Einstein and many others against nuclear weapons.

[19:32] wwfaid: They've never been successful swaying scientists/researchers, as far as I've seen.

[19:36] wwfaid: Switching the letter to support of "studying the singularity further" would require potential signers to agree that the singularity is both significant enough *and* likely enough to be worth shifting study resources away from other popular, widely-seen-as-important research topics. I highly doubt the majority of researchers in singularity-relevant fields would support that sort of letter either.

[19:40] wwfaid: The best that seems plausible to me is agreement that it is worthy of further study... but not any more so than the Riemann hypothesis, reinforcement learning subfields, neurological therapies, blah blah blah (i.e., most scholars will grant that almost anything with even a small apparent chance of relevance to the real world is worthy of having *someone* study it further, because (1) some areas that didn't appear fruitful initially turned out to be very worthwhile and (2) if we spend money on researching the mating habits of midges in Wisconsin, we can certainly spend some money on kooky singularity possibilities).

#11 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 25 March 2006 - 05:01 AM

Within the letter, linking key terms to Wikipedia may be helpful... such as:

Singularity > http://en.wikipedia....cal_singularity
AGI > http://en.wikipedia....al_intelligence

#12 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 07 April 2006 - 03:23 AM

MichaelAnissimov: Probably needs more concrete discussion of the benefits of AI, and why it's worth hurrying for. Definitely needs general editing. The claims are strong and detailed; revisions would go in the direction of milder and vaguer claims.


Starting from the top, I hope to sharpen MA's ideas to more effectively convey the points... but I really want to focus on the first paragraph to capture the essence...

--
To whom it may concern,

Within the 21st century, the creation of Artificial General Intelligence (AGI) will emerge as the greatest scientific achievement of all time. However, as with nuclear power, AGI's development will carry equally great promise and peril. Therefore, it is incumbent upon us to more fully understand the consequences and benefits, so that we may adequately prepare.

#13 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 07 April 2006 - 03:40 AM

MA: Indeed good start. Ray didn't do a Ph.D. but should be approached.

It's my understanding that Ray has been awarded honorary doctorates. Does anyone have more info?

#14 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 07 April 2006 - 04:07 AM

It's my understanding that Ray has been awarded honorary doctorates. Does anyone have more info?


http://www.kurzweiltech.com/raycv.html

From the site:

Honorary Degrees
2005  (scheduled) Honorary Doctorate of Science, Worcester Polytechnic Institute
2002  Honorary Doctorate of Humane Letters, Landmark College
2000  Honorary Doctorate in Science and Humanities, Michigan State University
1993  Honorary Doctorate of Science, Dominican College
1991  Honorary Doctorate of Science, Queens College, City University of New York
1990  Honorary Doctorate of Science, New Jersey Institute of Technology
1989  Honorary Doctorate of Humane Letters, Misericordia College
1989  Honorary Doctorate of Engineering, Merrimack College
1988  Honorary Doctorate of Science, Rensselaer Polytechnic Institute
1988  Honorary Doctorate of Science, Northeastern University
1987  Honorary Doctorate of Music, Berklee College of Music
1982  Honorary Doctorate of Humane Letters, Hofstra University


http://www.ecollege....n?id=rk#keynote

From that site:

Mr. Kurzweil has received twelve honorary doctorates and honors from three U.S. presidents


So I suppose the "scheduled" from the first list actually happened, because that adds up to 12.
:)

#15 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 07 April 2006 - 12:25 PM

Hmm, 12 doctorates, and no Ph.D. (Doctor[-ate] of Philosophy). So I guess the "Ray didn't do a Ph.D." also extends to "Ray doesn't have a Ph.D.".

#16 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 24 April 2006 - 07:24 PM

Wishing to strengthen the letter, I would like to post a reply received recently from a friend about the concept of our Scientists' Letter on AGI:

==

Someone was right when they suggested in a post that to be meaningful, it should include public policy prescriptions for things to be done or not done by government, and such regulatory answers as should or should not be imposed.

While we don't want AGI regulated (it can't be, anyway), there could be a useful constraint imposed by making AGI creators liable for the actions or damages caused by an errant Artilect, in the same way that parents are liable for damages caused by their minor children.

Such legal liability would encourage the proper design precautions, and is merely the state guaranteeing that individuals/companies/institutions are responsible for their actions in injuring one another - the ultimately libertarian, Jeffersonian ideal of what the State is really good for anyway.

If BINA-48 were emancipated, who would be responsible for paying her electric/network bill (i.e. food/housing) each month? "Don't breed 'em if you can't feed 'em."

Forcing creator-parents to take responsibility for their offspring, whether biological or silicon, is sensible; otherwise, ultimately, the government will end up taking responsibility for everyone, to protect them from the mistakes of the irresponsible. That has never been a good thing in the past, and is not likely to be in the future, either.

An Artilect must also be directly subject to civil and criminal sanctions for its own actions, just as a juvenile can still be imprisoned even if his parents must pay off his damages.

#17 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 24 April 2006 - 07:42 PM

I've always believed that if it walks like a duck and quacks like a duck (down to very deep levels), it should be treated like a duck. That's partly why I get upset at people who don't take identity duplication seriously, with the attendant implication that the rights of duplicates shouldn't be taken seriously.

While it's fashionable to worry about what AGI will do to humans, the potential for abuse of non-biological beings by humans is at least as great. I don't know whether you want your letter to touch on that or not.

For people new to the idea, aging intervention is a complex issue. Cryonics is more complex still. AGI, by comparison, is right off the charts in terms of the technological and moral complexity of the issues it raises.

---BrianW

#18 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 04 July 2007 - 04:18 AM

Created an editable draft here:
http://www.agiri.org...AGI_Open_Letter


#19 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 September 2007 - 12:36 AM

A number of colleagues from SIAI are now moving forward with revisions to Michael's original version. So, if anyone wants to correct our grammar or offer other suggestions, it would be greatly appreciated!

Latest version:
http://www.agiri.org...AGI_Open_Letter



