  LongeCity
              Advocacy & Research for Unlimited Lifespans





Viability of AGI for Life Extension & Singularity


249 replies to this topic

#61 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 30 June 2006 - 06:49 AM

We are able to segment the project into discrete tasks because we've invested considerable time thus far and have a good understanding of how much more is needed.


Bruce, who is "we"? I know that Ben has been working on AGI for upwards of a decade now, and brainstorming about it for longer than that... but who else on the team claims to have a specific idea of how much more is needed and has consensus with Ben on it? None, as far as I'm aware. You jumped on board only a couple years ago, and you are not an AGI specialist, so how can you claim to know exactly how much more work is necessary to achieve AGI via Novamente any more than I can speculate exactly how much work Eliezer & co. need to put in to cross the finish line?

Even if we were AGI specialists, it would still be just a guess.

Ben's post on the topic is appropriately cautious - he doesn't know, and puts the date between 6 and 25 years, which sounds absolutely reasonable to me.

The only reason I even have an upper bound on my estimate for AGI, which lies roughly around 2030, is that I actually buy Kurzweil's argument regarding brain simulation and increasing resolution and computing power. Plus, I foresee nanocomputing before then, which Kurzweil doesn't even account for.

But when you ask me, "when before 2030 do you think it will happen?", I have no clue.

#62 brizzadizza

  • Guest
  • 51 posts
  • 1
  • Location:San Clemente, CA

Posted 30 June 2006 - 08:55 PM

Has the concept behind BioLiterate and other pattern recognition software ever been applied to investment opportunities? If you could develop a program that parsed huge amounts of information and was able to make informed decisions about stock markets, you would have a huge revenue-generating machine. Obviously not just in software sales but also in using your own product.

What are the costs involved with keeping 12 programmers working full time? Would 1.2 million a year be an unreasonably small estimate? If we take a midrange estimate of the time to AGI and figure 12 years, we're looking at a project cost of just under 15 million dollars, yes? Are there layman-type organizations that offer research grants? I don't know how scientific funding works; I know there are corporate grants and there are government/military grants, but are there private grants? Would it be possible to set up a private grant system? It seems if we could unite the transhumanist community and some of the computer science community across the net, we could achieve the 1.2 million yearly in micro-donations alone. We could use the internet like it was meant to be used, as the greatest tool for panhandling ever invented!
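
A quick sanity check on that number (a Python sketch; the donation tiers below are purely illustrative, not a real fundraising plan):

    # How many monthly donors would $1.2M/year actually take?
    yearly_target = 1_200_000
    for monthly_gift in (5, 10, 25):
        donors = yearly_target / (monthly_gift * 12)
        print(f"${monthly_gift}/month -> {donors:,.0f} donors needed")

At $10/month that works out to roughly 10,000 sustained donors, which gives a sense of the organizing effort involved.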

Still, a private grant set up as a charitable organization would seem to at least partway mitigate the expenses of the AGI project and could spur more public awareness of the functionality of AGI.

Brandon


#63 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 01 July 2006 - 03:30 AM

I'm wondering if human intelligence augmentation has a chance to catch up, not by adding new capabilities, but by knocking out restrictions, to give us better access to the lightning-fast processors we all have but have not evolved to use for general (analytical) purposes. See e.g. Alan Snyder's and related work.

#64 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 03:42 AM

Anissimov: Bruce, who is "we"? I know that Ben has been working on AGI for upwards of a decade now, and brainstorming about it for longer than that... but who else on the team claims to have a specific idea of how much more is needed and has consensus with Ben on it? None, as far as I'm aware. You jumped on board only a couple years ago, and you are not an AGI specialist, so how can you claim to know exactly how much more work is necessary to achieve AGI via Novamente any more than I can speculate exactly how much work Eliezer & co. need to put in to cross the finish line?

Even if we were AGI specialists, it would still be just a guess.

Ben's post on the topic is appropriately cautious - he doesn't know, and puts the date between 6 and 25 years, which sounds absolutely reasonable to me.

Michael, to be sure, Ben's reply was, "Still, the general order of the time estimate is important. It's not 6 months and it's not 25 years either."

The "we" I'm referring to is the Novamente team. We currently consist of around a dozen hard-core individuals... some paid, but most working volunteer, part-time. Many on the team are paying their bills by working on Novamente related AI consulting work, tangentially related to AGI.

Concerning the six-year number again: we've invested a considerable amount of thought into what we think it will take to reach human-level AGI. In doing so, we've come up with 70 discrete tasks, each taking between 1 and 18 months. We then put this information into project-planning software, and the calculation came to 6 years. Is this exact? No, of course not. But it's not pie in the sky either. What we're doing is really challenging, but there is nothing impossible about it.
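
For a rough feel for how an estimate like this falls out of a task list, here is a toy Python sketch. The random durations, team size, and greedy scheduling are illustrative stand-ins only; real project-planning software also models dependencies between tasks, which stretch the calendar time:

    import random

    random.seed(42)

    # Hypothetical inputs: 70 tasks, each estimated at 1-18 months,
    # worked in parallel by a fixed-size team.
    NUM_TASKS, TEAM_SIZE = 70, 12
    durations = [random.randint(1, 18) for _ in range(NUM_TASKS)]

    # Greedy parallel schedule: hand each task to whichever team
    # member frees up first. Ignoring dependencies makes this a
    # lower bound on the calendar time.
    workers = [0] * TEAM_SIZE
    for d in sorted(durations, reverse=True):
        workers[workers.index(min(workers))] += d

    months = max(workers)
    print(f"Total effort: {sum(durations)} person-months")
    print(f"Naive schedule: {months} months (~{months / 12:.1f} years)")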

#65 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 04:10 AM

The only reason I even have an upper bound on my estimate for AGI, which lies roughly around 2030, is that I actually buy Kurzweil's argument regarding brain simulation and increasing resolution and computing power. Plus, I foresee nanocomputing before then, which Kurzweil doesn't even account for.

But when you ask me, "when before 2030 do you think it will happen?", I have no clue.

Michael, one of the challenges in talking with some people about AGI is that they are convinced that the brain-mapping approach is going to work. Well, I agree! It can work, because there is proof of concept ;-) But, will it be first? This could be a self-fulfilling prophecy... if most continue to think this way.

However, what happens if we map the brain and then find out that it's a complete mess, and thus impossible to scale to greater-than-human intelligence? Then we'll have to go back to the drawing board and find a more non-biologically-inspired approach, no? With this in mind, Novamente is taking a shortcut. This graphic illustrates the point:

Posted Image

Approaches and Projected Time Frames in Reaching Artificial General Intelligence (AGI)

As knowledge of the human brain increases and the cost of computing power decreases, more scientists come to see how powerful Artificial General Intelligence (AGI) could be created by emulating the human brain in software.

Currently, however, a substantial knowledge gap exists between our understanding of the lower-level neuronal mechanisms of the brain, and our understanding of its higher-level dynamics and cognitive functions. Creating AGI based on brain mapping must wait until quantitative improvements in brain scanning and modeling lead to revolutionary new insights into brain dynamics, filling in the knowledge gap. There is little doubt that this will happen, but it is hard to project how long it will take. Kurzweil estimates 2045[1] based on systematic extrapolation of the observed rate of improvement of brain scanning technology.

Computer science based approaches to AGI, on the other hand, provide an exciting possible shortcut. There is no need to wait for brain scanning to improve and for neuroscience to undergo a revolution; leading AI theorists such as Marvin Minsky[2] agree that, with the right AGI design, contemporary computing hardware is quite likely adequate to support the implementation of AGI at the human level and beyond.

Skeptics will point out that the computer science based approach to AGI has been pursued for some time without dramatic successes. But computers have never been as powerful as they are now, and, more importantly, the field has lacked adequate AGI designs that take into account the comprehensive knowledge gained by cognitive science, computer science, and neuroscience.

If pursued properly, based on a powerful AGI design, the computer science approach may lead to AGI at the human level and beyond within the next decade. Such computer-science-based AGIs would then, among other transformative effects, drastically increase the rate of progress in science and engineering, including brain mapping and neuroscience.

1. The Singularity Is Near, p. 136.
2. http://www.novamente...file/AAAI06.pdf



#66 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 04:38 AM

1/ How are you planning to verify that AGI self-learning algorithms are stable, i.e., that they converge toward some goal that might not even be entirely clear at the beginning? I assume that some set of meta-level rules on top of the highest layer of logical or functional abstraction is required to apply some form of ethical restriction.

We're calling this aspect MOSES (Meta-Optimizing Semantic Evolutionary Search). MOSES is the "global procedure learning and pattern recognition components of the Novamente AI system." MOSES is being spearheaded by Novamente team member Moshe Looks. Background on MOSES can be found here.
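
MOSES itself is far more sophisticated, but for readers unfamiliar with evolutionary search, here is a toy Python sketch of the general loop. The bit-string representation, fitness function, and mutation scheme are purely illustrative, not MOSES's actual program representation or scoring:

    import random

    random.seed(0)

    # Toy stand-in for "how well does this candidate solve the task".
    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

    def fitness(candidate):
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Flip each bit with probability `rate`.
        return [bit ^ (random.random() < rate) for bit in candidate]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        # Keep the best five, refill the population with their mutants.
        survivors = population[:5]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(15)]

    print(f"best candidate at generation {generation}: {population[0]}")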

2/ The practical implication of 1 is: How are you going to validate a set of algorithms, or even an integrated system, against a predefined set of life scenarios or challenges for the AGI? Are you able to test the effectiveness of the rules of ethics you implemented, even if you don't know how your "child" will develop?

Novamente's progress will be measured by following a developmental hierarchy loosely based on Piaget's stages of cognitive development:

Posted Image

At various intervals, we'll stop, take a deep breath, and then determine how advanced NovaBaby has become... assessing tendencies for what some may consider ethical behavior.

For a technical overview, see: http://www.novamente...CI06_Stages.pdf
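
To give a rough flavor of what staged evaluation might look like in code, here is a minimal Python sketch. The stage names loosely follow Piaget, but the tests are hypothetical placeholders, not Novamente's actual battery:

    # Each stage gates the next: the agent's level is the highest
    # stage whose tests it passes in full.
    STAGES = [
        ("infantile", ["track object", "reach for object"]),
        ("concrete",  ["object permanence", "simple classification"]),
        ("formal",    ["hypothetical reasoning", "transfer to new task"]),
    ]

    def developmental_stage(passes_test):
        reached = None
        for stage, tests in STAGES:
            if all(passes_test(t) for t in tests):
                reached = stage
            else:
                break
        return reached

    # Example: a stub agent that handles only the first two stages.
    stub = {"track object", "reach for object",
            "object permanence", "simple classification"}
    print(developmental_stage(lambda t: t in stub))  # -> concrete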

#67 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 05:10 AM

Has the concept behind BioLiterate and other pattern recognition software ever been applied to investment opportunities? If you could develop a program that parsed huge amounts of information and was able to make informed decisions about stock markets, you would have a huge revenue-generating machine. Obviously not just in software sales but also in using your own product.

Yes, Brandon. A few Novamente team members have recently partnered with individuals from a California-based hedge fund. The idea behind this partnership is to make stock predictions based on text-based pattern recognition of news reports. Still early stages... so we can't declare total financial victory just yet.
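
To sketch the general idea in Python (keyword counting standing in for real pattern recognition; the word lists, headlines, and thresholds are hypothetical, and an actual system would use far richer NLP):

    POSITIVE = {"beats", "growth", "upgrade", "record", "strong"}
    NEGATIVE = {"misses", "lawsuit", "downgrade", "recall", "weak"}

    def score(headline):
        # Net count of positive vs. negative keywords in one headline.
        words = set(headline.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    def signal(headlines):
        total = sum(score(h) for h in headlines)
        return "buy" if total > 0 else "sell" if total < 0 else "hold"

    news = ["Acme beats estimates on strong record revenue",
            "Analysts issue downgrade after weak guidance"]
    print(signal(news))  # net score 3 - 2 = +1 -> "buy"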

#68 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 05:21 AM

What are the costs involved with keeping 12 programmers working full time? Would 1.2 million a year be an unreasonably small estimate?

Half the Novamente team is based in Brazil, where the cost of living is much lower than in the US. We're currently focused on raising considerably less than $1.2M, which will comfortably sustain our full programming team for more than 12 months. After that, we'll proceed with a second round of funding to bring the company to sustainability via sales of our natural-language question-answering product (called Sagacity™), and eventually human-level AI and beyond. For this round, we currently have a lead investor and are looking to fill the round.

Posted Image

If we take a midrange estimate of the time to AGI and figure 12 years, we're looking at a project cost of just under 15 million dollars, yes?

Less than that: roughly one-third of that figure or less, and in about half the time.

#69 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 05:33 AM

Are there layman-type organizations that offer research grants? I don't know how scientific funding works; I know there are corporate grants and there are government/military grants, but are there private grants?

Novamente's natural language processing technology was initially developed under contract to the U.S. Army Intelligence and Security Command (INSCOM)... and recently we've submitted grant proposals to governmental organizations such as DARPA. Even though we've received good marks, the proposals have been rejected thus far.

Would it be possible to set up a private grant system? It seems if we could unite the transhumanist community and some of the computer science community across the net, we could achieve the 1.2 million yearly in micro-donations alone. We could use the internet like it was meant to be used, as the greatest tool for panhandling ever invented!

Novamente currently supports the Artificial General Intelligence Research Institute (AGIRI); we were the exclusive sponsor of AGIRI's 2006 AGI Workshop.

Posted Image
http://www.agiri.org/workshop

By the way, IBM has agreed to sponsor our next workshop.

Posted Image
AGIRI's mission is to "foster the creation of powerful and ethically positive" AGI.
http://www.agiri.org/

#70 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 05:47 AM

I'm wondering if human intelligence augmentation has a chance to catch up, not by adding new capabilities, but by knocking out restrictions, to give us better access to the lightning-fast processors we all have but have not evolved to use for general (analytical) purposes.

John, this may improve human cognition considerably compared to our current level. However, I tend to think that all of the more biologically based intelligences will pale in comparison to AGI, due to an AGI's capacity to rapidly ramp up via recursive self-improvement.

#71 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 06:29 PM

As you may tell, I like graphics...

Here's an overview of the Novamente AGISim learning architecture:

Posted Image

NovaBaby lives, so to speak, inside AGISim, a 3D simulation world powered by CrystalSpace, the game engine used in the Crystal Cassie embodiment of the SNePS AGI system.

Reference paper: "Crystal Cassie: Use of a 3-D Gaming Environment for a Cognitive Agent" (http://www.cse.buffa...s/sansha03a.pdf)
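
For readers unfamiliar with embodied-agent setups, here is a toy perceive-act loop in Python of the kind such a simulation world supports. The world model and action set are invented stand-ins, not the actual AGISim or CrystalSpace API:

    # A toy world and the perceive-act loop an embodied agent runs.
    class ToyWorld:
        def __init__(self):
            self.agent, self.goal = 0, 5

        def sense(self):
            return {"position": self.agent, "goal": self.goal}

        def act(self, action):
            self.agent += {"left": -1, "right": 1}[action]

    def policy(percept):
        # Trivial reactive policy: step toward the goal.
        return "right" if percept["position"] < percept["goal"] else "left"

    world = ToyWorld()
    for step in range(20):
        percept = world.sense()
        if percept["position"] == percept["goal"]:
            print(f"reached goal in {step} steps")
            break
        world.act(policy(percept))

A learning agent replaces the hand-written policy with one that improves from the rewards and percepts the simulation feeds back.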

#72 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 08:02 PM

After talking with Izabela Goertzel, Ben Goertzel's wife and a Novamente team member, about how AGI may outpace human-level intelligence... as seen in this graph:

Posted Image

Izabela helped me see that human-derived intelligences are likely to eventually find parallel with AGIs over time, such that the old graph (purple dotted line = human-derived) should look more like this:

Posted Image

#73 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 01 July 2006 - 10:10 PM

After many years of "AI Winter," the mood seems to be thawing lately, with more events and reports on at least "human-level AI." This AAAI essay by Nils J. Nilsson, "Human-Level Artificial Intelligence? Be Serious!", exemplifies the shift:

I claim that achieving real human-level artificial intelligence would necessarily imply that most of the tasks that humans perform for pay could be automated. Rather than work toward this goal of automation by building special-purpose systems, I argue for the development of general-purpose, educable systems that can learn and be taught to perform any of the thousands of jobs that humans can perform. Joining others who have made similar proposals, I advocate beginning with a system that has minimal, although extensive, built-in capabilities. These would have to include the ability to improve through learning along with many other abilities.
Posted Image
http://ai.stanford.e...g26-04-HLAI.pdf

#74 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 July 2006 - 10:33 PM

Did you notice the date of the cartoon? It is an original Charles Addams, of Addams Family fame, and it is from 1946.

You gotta love the not-so-subtle fact that even the race of the workers (all black robots working on all-white upgrades) was implied, but even more importantly it refers to the origin of the word *robot*, which, if I remember correctly, is Czech for *worker*.

I think this wonderful old cartoon shows how old this debate is and how the core issues haven't really changed. It is all about questions of a manufacturing versus service economy and who will control the end product in the competition for wealth and power, not about whether or not true AI is ultimately possible.

#75 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 02 July 2006 - 04:23 PM

Posted Image

#76 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 02 July 2006 - 04:36 PM

Novamente's first question/answering product:

Posted Image

#77 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 02 July 2006 - 11:11 PM

Hello, Bruce,

I attended the Singularity Summit at Stanford and came away lost. The event was not the public unveiling of the Singularity I expected, and it provided little information on how to participate as an individual, enthusiast, and citizen. To me the message was "The Singularity is near... please buy our books and read patiently until it occurs." Missing was any sense of real action. That is why I am so impressed with Novamente's progress ("Advisor for Evolvable Hardware" - wow! NovaBaby - omg, omg, omg!). Furthermore, I appreciate your openness and responsiveness in this topic.

What, if anything, would you tell my parents, blue collar workers, the homeless, terrorists, sports buffs, high school students, pre-schoolers, prisoners, the Vatican, Paris Hilton, indigenous tribes, janitors, gang members, etc. about Novamente, AGI, and the Singularity? Do we all need to care about the outcome of this progress, or just some of us?

If you could, what would you tell dogs, cats, primates, other animals, plants, bacteria, ecosystems, etc. about Novamente, AGI, and the Singularity? Are they affected at all or should we care if they are or not?

#78 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 02 July 2006 - 11:49 PM

Missing was any sense of real action.


I suspect some of this might come down to a feeling that seemingly over-the-top secrecy is necessary. I know I've seen several people who at least claimed to be working on related projects stating that they'd never actually talk about specifics, for a number of reasons. Foremost being fear of their ideas being stolen and used by competing projects or by large companies whose legal teams could crush anything in their path. Secondly, fear of violence by hardcore anti-progressives. The second reason I find somewhat ludicrous and self-aggrandising, even given a pretty big assumed leap above the current state of the art. It'd take a pretty big scare to get someone to leap on a plane, grab a gun, and get to stalkin'. A chatbot with a souped-up self-evolving neural network isn't going to have enough oomph to get that kind of effect. And anything beyond that I suspect would be too alien at its level of interaction to create the kind of fear that would be necessary.

The first reason seems a lot more plausible. The current level of patent slapdowns in the computing world is simply insane. Large companies are seemingly pretty well equipped to issue a "boot ta' the head" to anyone they feel like, either taking their ideas and suing the originator for infringement, or just plain going on an insane sue-down. I suspect that there's people out there doing the equivalent of the milkman throwing off his uniform on arriving home and heading out to the garage to tinker on his new engine design in secrecy. Or, perhaps, I'm just expanding my own hopes into a world view which increases the chances of something I want to occur actually coming into effect. But, oh well, it's fun to dream.

#79 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 03 July 2006 - 05:41 AM

I think there may be one additional company I know of to be added to the list of AGI System-Building Projects:

Ai Research

Ai Research is a leading artificial intelligence research project. At Ai, we're creating a new form of life.



#80 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 03 July 2006 - 06:08 AM

I suspect some of this might come down to a feeling that seemingly over-the-top secrecy is necessary. I know I've seen several people who at least claimed to be working on related projects stating that they'd never actually talk about specifics, for a number of reasons.


I understand that for commercial enterprises, openness may be unfeasible. If I understand correctly, Novamente wants to make money along the way toward AGI and the Singularity simply to support those ends, after which all bets are off. Therefore, they will have a certain amount of proprietary information to be protected, at least in the beginning. This sounds reasonable to me.

At HiRISE we are taking a different approach to proprietary data than has been practiced by past planetary science missions. Most missions to date hold onto their data for at least several months of internal study to give team scientists a chance to beat other scientists to new knowledge and publish relevant papers. The Principal Investigator of HiRISE, Alfred McEwen, is eschewing this tradition and we will instead release data to the world as soon as technically possible (a matter of a few days to a few weeks of radiometric correction and geometric processing). There will be minor exceptions, such as requested images of potential landing sites by upcoming Mars missions like Phoenix. However, they will only get a first quick look before we release the images to the public sooner than has been practiced in the past. Dr. McEwen reminds the rest of the team that there will be no need for hoarding our high resolution images because there will be plenty for everyone - public, scientific community, and team members - to discover new things and publish papers.

I do not mean to directly compare HiRISE and Novamente, but I do hope that some amount of openness can exist within the AGI community if that openness means faster and safer AGI. Novamente appears to be especially open, with Bruce's answers here, their workshops and other outreach activities, and Dr. Goertzel's publications on the topic.

#81 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 11:54 AM

Thanks, Richard.

What, if anything, would you tell my parents, blue collar workers, the homeless, terrorists, sports buffs, high school students, pre-schoolers, prisoners, the Vatican, Paris Hilton, indigenous tribes, janitors, gang members, etc. about Novamente, AGI, and the Singularity? Do we all need to care about the outcome of this progress, or just some of us?

Everyone should care about AGI's development because it will change the world more radically than anything else over the next few decades. As you know, we have been thinking about a more formalized statement here.

If you could, what would you tell dogs, cats, primates, other animals, plants, bacteria, ecosystems, etc. about Novamente, AGI, and the Singularity? Are they affected at all or should we care if they are or not?

I'd ask, "are you interested in becoming more intelligent?" I think all life forms will be affected and that it may become an obligation of entities of greater intelligence to help lesser-intelligences transcend.

#82 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 06:43 PM

All life...

Attached Files



#83 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 06:49 PM

I understand that for commercial enterprises, openness may be unfeasible. If I understand correctly, Novamente wants to make money along the way toward AGI and the Singularity simply to support those ends, after which all bets are off. Therefore, they will have a certain amount of proprietary information to be protected, at least in the beginning. This sounds reasonable to me.

This is correct, Richard. By 2009, Novamente aims to create a question/answering product called Sagacity™, which should be able to do what Ask Jeeves has always promised.

However, as you suggest, we do aim to be as open as possible about our work and our intention to safely guide AGI's development toward the Singularity. This work will also be augmented by our non-profit arm, the Artificial General Intelligence Research Institute (AGIRI.org), whose mission is to foster the creation of powerful and ethically positive AGI by helping current AGI projects.

For example, AGIRI's AGI-SIM project is focused on the creation of a sensory-rich simulated 3D world for AGI research so that "NovaBabies" from other projects can interact and "learn" from each other in this virtual environment.

#84 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 03 July 2006 - 06:58 PM

I gotta say, Bruce, this is all very impressive. With all the competing interests out there, I really hope it is your group that wins. You seem most on track to friendliness. I only hope friendliness in such a system is possible.

#85 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 07:09 PM

Thanks, Justin.

I've shifted much of my energy toward AGI because I think it can save the most lives. Apart from Novamente, if I were to rate awareness of friendliness among other AGI projects, the Singularity Institute is the most explicit, but Peter Voss's A2i2 project has a more practical approach plus a good grasp of the friendliness question.

To be sure, I don't think it's possible to come up with a short and sweet algorithm for friendliness. Rather, we will need to allow NovaBaby to learn and grow in intelligence... and along the way carefully test for friendliness attributes... and then enhance them.

#86 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 07:23 PM

Here's a relevant CNET article with John McCarthy, who coined the term "AI" in 1956 (50 years ago):

July 3, 2006 "Getting machines to think like us"
http://news.com.com/...html?tag=st.num

Attached Files

  • Attached File  john.gif   32.74KB   0 downloads


#87 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 03 July 2006 - 07:48 PM

All life...


Annoying, being forced out the door in a minute. Fullness of writing be darned, I'm still going to give a shot at throwing my thoughts in here.

The only thing I don't like about that image is that it shares the popular understanding of evolution as a process from one set low point to another set high point. It's a somewhat top-down approach that begins with the idea of ourselves as the pinnacle toward which everything is striving, when in fact there is no set value in evolution: a bumbling half-wit might be the most fit, and thus the evolutionary pinnacle, for a particular environment, depending on what selective criteria are in motion.

With this graph, there's a similar assumption being made. Intelligence seems to be a combination of many different factors, some quite different or even at odds with each other, rather than a single unit. Some aspects a dolphin might actually be better equipped to use, while we might find ourselves so lacking in it to not even be aware of the absence. As far as tool use, we're definitely at the top. We 'may' be for language use. I'm not so quick to rule out dolphins on that one until we've thrown some more research into it. It's really hard to measure other criteria, such as rationality. We 'seem' at the surface to be very rational. But for all our ability to think logically, it seems like our species implements that at a social level pretty infrequently. Instead we're still very much moved by instinct and unconscious drives. I think the chances that we're still the very top when it comes to reason overcoming instinct are pretty good. But I'm still somewhat uncomfortable making a decision like that without more data.

I'd also make a few edits to narrow the listings down. Humans, arguably, also belong in the great ape listing. And most of the dolphin research has focused on bottlenose dolphins, with others showing a lot of variation in brain structure. It's a bit speculative to list whales in there as well, since there's really very little research on their intelligence.

Yes, I know, I'm being 'far' too nitpicky about something meant as a quick signpost.

PS: More raven/crow props please! :)

#88 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 03 July 2006 - 09:00 PM

I'd also make a few edits to narrow the listings down. Humans, arguably, also belong in the great ape listing. And most of the dolphin research has focused on bottlenose dolphins, with others showing a lot of variation in brain structure. It's a bit speculative to list whales in there as well, since there's really very little research on their intelligence.


I think Bruce is suggesting transcendence for any organism, regardless of its previous intelligence level. Their positions in his graphic may be arbitrary depending on future research, but the result will be the same... life extension and the Singularity for all organisms.

I am actually very surprised by your answer, Bruce. In his essay "Transhumanism" (in "The World's Most Dangerous Ideas"), Francis Fukuyama wrote:

"If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind?"


It sounds like you are claiming the right to transcend lesser intelligences, in effect leaving no organism within our noosphere behind. This is a scope beyond what many technology progressives have been thinking. However, it does play into the view of the Singularity as a kind of Cambrian Explosion of intelligences, intelligences that will find new ecological niches to occupy not just on the Earth but throughout our galaxy and beyond.

I think, Bruce, you have done a good job in explaining why it is mandatory that we develop AGI for the good of humanity. By what rights and justifications should humanity bring the results of this development to other organisms, and how would we know if this was good or not?

#89 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 09:33 PM

"If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind?"

Rather than rights to intelligence, I think it's helpful to think about this in a slightly different way... how things may change in a more systematic way... and then try to guide it in a good direction.

So, one concept worth highlighting here is that entities with higher levels of intelligence may inherently become more ethical by virtue of being more rational and having the ability to comprehend larger amounts of knowledge in order to predict the outcome of their actions.

With this in mind, it also seems possible that higher-level intelligences will inherently become more generous (as perceived by lesser intelligences). In terms of resource allocation (which may have similarities to intelligence allocation), we already see this in humans who have reached a certain level of the hierarchy of needs... e.g., Bill and Melinda Gates.


#90 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 July 2006 - 09:44 PM

Also, more along these lines: Ben and I reviewed Kurzweil's The Singularity Is Near at a recent NIH book club meeting, where I said that my take-home message was that superintelligences will be impossible for their less intelligent creators to control. Thus, one of the things we can do now to ensure a positive Singularity is to promote a more positive value system.

Ray writes on page 424:

Our primary strategy in this area should be to optimize the likelihood that future non-biological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward. If this sounds vague, it is. But there is no purely technical strategy that is workable in this area, because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence.

Attached Files

  • Attached File  nih.gif   137.44KB   1 downloads




