LongeCity
Advocacy & Research for Unlimited Lifespans
Viability of AGI for Life Extension & Singularity


249 replies to this topic

#151 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 July 2006 - 02:05 AM

Biological systems have a lot of inbuilt water cooling; how will you keep the AGI host machine cool?

Efficiently designed software can help with this, but machine cooling is mostly a hardware challenge. Right now, hardware is not the major bottleneck; rather, it's software design.

#152 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 July 2006 - 02:10 AM

Is the negative public perception of AI, hammered in by the Matrix and Terminator trilogies, likely to affect government policies on AGI?

Hard to say. Public perception can change quickly. However, it seems that without an impressive proof of concept for AGI, public perception will remain generally dismissive.

In comparison to AGI, robotics seems to have more success with funding. Perhaps because most people like to see something physically created, whereas our AGI system is currently being trained virtually, inside a computer. So, along with improving fundamentals, we will create more realistic simulations... eventually moving into robotics.


#153 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 10 July 2006 - 02:11 AM

Will the AGI really be conscious, and not just appear to be so from the outside looking in? And does it really matter?

For the purpose of accelerating the growth of human knowledge and technology, I don't think it matters whether an AGI is "really conscious" or not. It doesn't even need to think like a human, per se. It just needs to be really, really smart, in a general enough manner to be able to learn diverse domains of knowledge, process them, extract new knowledge, etc. And be scalable and capable of reprogramming itself. Et cetera.

Bottom line: really, really smart. Consciousness would be a bonus, but at this time, I haven't seen a compelling argument that consciousness would be necessary or would necessarily arise simply from being so smart.

#154 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 10 July 2006 - 02:49 AM

I've always considered the need to learn maths a sign of being conscious, because you need to visualise and do the adding up etc. yourself. A machine does the math at a much lower level. A baby isn't born knowing the XOR function, for instance.

We generally build machines to help ourselves calculate things that would take a lot of effort to run inside the conscious mind.

Maybe teach babynova math, then give it a calculator. Then take the calculator away, like they do in school, and see how it reacts.
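As an aside, XOR is a fitting example here: it is not linearly separable, so no single threshold unit can compute it; even this trivial function needs a hidden layer just to be represented, never mind learned. A minimal sketch in plain Python (the weights are hand-wired, purely for illustration):

```python
# XOR is not linearly separable: no single threshold unit can compute it,
# but a hand-wired two-layer net (OR and NAND feeding an AND) can.
def step(x):
    """Threshold activation: fire iff the weighted input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or = step(a + b - 0.5)            # hidden unit 1: OR(a, b)
    h_nand = step(1.5 - a - b)          # hidden unit 2: NAND(a, b)
    return step(h_or + h_nand - 1.5)    # output unit: AND(h_or, h_nand) = XOR

for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)  # matches the machine-level bitwise XOR
print("hand-wired net reproduces XOR on all four inputs")
```

A learner like babynova would have to discover weights like these from examples rather than having them wired in, which is exactly the gap between doing the math and having the math done at a lower level.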

Also, I consider consciousness to be like the end user. Having an IT industry without end users is like having a television industry without viewers, and the same goes for biological systems.

The end user may be stupid, illogical, gullible and wrong, but they will also often be smart, rational, greedy, horny, considerate and right, and all of these states are what drive the system forward. Sometimes sophisticated people fail where simple and courageous people succeed, and biological systems "know" this.

I also think intelligence works best when it is diversified. 6.5 billion minds is a LOT of processing power. Some are good at math, some are better at writing literature. It's all about diversity.

For the purpose of extending human technological growth, AI and AGI, like all other tools, should be an extension of our own consciousness. Opposable thumbs can't hammer in a nail, but they allow us to use a hammer.

Edited by caston, 10 July 2006 - 03:24 AM.


#155

  • Lurker
  • 1

Posted 10 July 2006 - 07:02 AM

It just needs to be really, really smart, in a general enough manner to be able to learn diverse domains of knowledge, process them, extract new knowledge, etc. And be scalable and capable of reprogramming itself. Et cetera.


Then it's just a next-gen computer - and that is something I have no problem envisaging at all.

#156 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 July 2006 - 07:37 AM

Jay (along with Harold, Laz, Elrond, Don, susmariosep, treonsverdery, Kevin Perrott, th3hegem0n, osiris, Marc_Geddes, signifier and 7000) made a rather large contribution to the consciousness+software debate last year (July 05) here:
http://www.imminst.o...t=ST&f=3&t=7344

Interestingly, in that discussion ImmInst member eirenicon suggested Ben's essay: http://www.goertzel....QualiaNotes.htm

#157

  • Lurker
  • 1

Posted 10 July 2006 - 07:42 AM

Jay made a large contribution to the consciousness+software debate last year (July 05) here:
http://www.imminst.o...t=ST&f=3&t=7344


Jay's consciousness arguments become so esoteric I have to grow a new brain lobe just to make sense of what he is writing about... :)

#158 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 10 July 2006 - 07:58 AM

Everyone... rest easy!

...the answer to the problem of consciousness is here:

http://www.goertzel....HardProblem.htm

Ben explains this as an "attempt to defuse the 'hard problem of consciousness' by observing that both experience and mind-structures can be viewed in terms of a deeper underlying realm of patterns. In this realm of patterns, indeed, it's all about 'forms of organization'".

#159 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 11 July 2006 - 12:56 AM

...I have to grow a new brain lobe...

Heh, you should add that one here:
http://www.imminst.o...&f=11&t=7228&s=

#160

  • Lurker
  • 1

Posted 12 July 2006 - 07:17 AM

Level 4: Trans-sentients

Entities capable of fully understanding the *true nature* of mind - the relation between Mind and Reality (has awareness of 'The Theory Of Everything' or TOE). Capable of true general intelligence and recursive self-improvement. Examples: FAI - Friendly Artificial Intelligence. Rights: Who knows?


This is an interesting concept: how would a hyper-sentient perceive the world? On a less ambitious scope, I have often wondered what it would be like to see the world through the eyes of an Einstein or a Da Vinci. Clearly they were able to see patterns that elude conventional minds. Personally, on my better days, I find I think best when I have access to anything I choose to recall, i.e. all I have learned. Obviously that is not fully possible, but the more data I have access to at any one time, the more interesting the associations that begin to emerge in novel ways. Therefore, one type of hyperintelligence in humans would emerge from an increased ability to recall very quickly, accurately, and with a broader range of evocative associations for discrete memory sequences.
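One way to make that notion concrete is to model memory as a weighted graph of associations and recall as spreading activation; a "broader range of evocation" then corresponds to letting activation travel further from the cue. A toy sketch (the graph, weights and thresholds here are all invented for illustration):

```python
from collections import deque

# Toy associative memory: concepts linked by weighted associations.
links = {
    "apple":   [("fruit", 0.9), ("newton", 0.4)],
    "newton":  [("gravity", 0.9), ("calculus", 0.7)],
    "gravity": [("einstein", 0.6)],
    "fruit":   [("orchard", 0.8)],
}

def recall(cue, threshold):
    """Spread activation out from a cue; lower thresholds evoke broader associations."""
    activation = {cue: 1.0}
    frontier = deque([cue])
    while frontier:
        node = frontier.popleft()
        for neighbor, weight in links.get(node, []):
            strength = activation[node] * weight
            if strength > threshold and strength > activation.get(neighbor, 0.0):
                activation[neighbor] = strength
                frontier.append(neighbor)
    return activation

print(recall("apple", threshold=0.5))  # narrow recall: only close associates
print(recall("apple", threshold=0.2))  # broader recall: "einstein" now emerges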

#161 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 12 July 2006 - 08:46 AM

DARPA has awarded US$5.5 million to BBN Technologies to begin Phase 1 of creating "The Integrated Learner".

Another AGI project to add to the list?

#162 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 12 July 2006 - 10:25 AM

I took some basic computer courses in college to learn the really basic MS OS programming stuff etc. These days, I am so out of touch, it's almost sad.


If you feel an urge to dive back in, I have to plug the bioinformatics forum here. maestro949 took a huge load onto his own shoulders within it, and has managed to do a really fantastic job of bringing forward so much quality information. To a lesser degree, I found myself thrown back into the coding mix not too long ago. Amusingly, in my case it was simply that I had been so put off by dull, monotonous, scholastic perversions of coding that I didn't have the heart to keep an eye on how amazingly diverse and developed the field was becoming. But his posts have proven many times over to be a door into concepts I could spend many happy ages delving into if I had the opportunity.

#163 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 13 July 2006 - 12:39 PM

Marc:

Is that anything related to my post? i.e. that a baby isn't born knowing the "XOR" function.

I've been wondering if autism is in fact what happens when the mind is able to directly operate the "biological computer".

#164 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 July 2006 - 12:22 AM

Here's a more technical 55 min overview of Novamente's AGI design:
http://www.novamente...deo/ben_agi.php

#165 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 July 2006 - 04:30 AM

A few of Marc's ideas may be wrong, but they're not useless... not by a long shot.

Best of luck, Marc!

#166

  • Lurker
  • 1

Posted 15 July 2006 - 08:09 AM

  I'm starting a comp-sci degree on Monday so no more time for messageboards.

See ya round.


All the best with your degree - high distinctions all the way!

Maybe because the crap you say is useless and wrong.


Chill out.

#167 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 July 2006 - 06:20 PM

Haha, don't take my comments too strongly, Marc, but you know what I mean. It's time to put all the theory and intuition about general intelligence into a concrete implementation.

True!

#168 anupshinde

  • Guest
  • 1 posts
  • 0

Posted 17 July 2006 - 10:57 AM

Hi,

I read your essay on AGI and went through its architecture. I may have missed many things in there, but with all due respect for your knowledge, I would like to put forward some points (though I couldn't remember everything I thought of). I would be glad if you would kindly let me know your views on these.

Thanks and regards,
Anup Shinde.

=====

Point 1

I found a statement in your essay:
The creation of Artificial General Intelligence – AGI – is the grand adventure. Whatever other achievements you can imagine, they’ll be massively easier to achieve with the help of a benevolent AGI. This is not a religious thing; I’m not talking about some mystical savior. I’m talking about machines that we can actually build – and I believe we have the technology to do so now.

I do believe that there exists a strong base for developing that technology, but the current intelligence architecture is so massively parallel in nature that our machines are not capable of matching it, even with distributed processing. We need small (or may I say very small) units of intelligent agents working in parallel and collaborating for a big purpose.

I thought of this while I was working on and studying neural networks from the biological perspective (instead of the usually followed mathematical perspective).

My thinking might be a bit amateurish (which I am, compared to you), but I think technologies like nanotechnology will help us create such machines, which could also form the basis of electronic life (I strongly believe that life can exist on non-biological substrates). Then again, nanotechnology cannot yet be worked with directly by many people (except in big companies' research departments).

Current commercial computing, though it supports parallel processing, ends up serializing many parts of the work. I would compare it to a simple example: biological systems are like 100 people WALKING through 100 doors, while our electronic systems make 100 people RUN through 1, 2 or at most 10 doors. Which one do you think is faster when too many things occur collectively?

Our electronic processors are much faster than us in raw execution, but counting the teamwork achieved in the human brain, the brain appears to be much faster overall.

Our brain has slower mind agents/cognitive objects, therefore they have to be larger in number. Compared to that, an artificial mind with faster mind agents can afford to have a small number of them. Still, it will require quite a number of processors, instead of the one or two that exist in most of today's machines.

But that means the artificial mind would have a bigger, faster, yet more complex architecture. And when things get complex, they are difficult to evolve. Similarly, smaller scattered things are difficult to manage... there is always a tradeoff.
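The serialization point has a classical quantitative form in Amdahl's law: if a fraction s of a workload is inherently serial, no number of processing units can speed it up by more than 1/s. A quick sketch (the percentages are illustrative, not measurements of any real system):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Overall speedup when only parallel_fraction of the work spreads over n_units."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# The "doors" analogy in numbers: if 10% of the work is forced through
# a serial door, adding more units stops helping very quickly.
for n in (1, 10, 100, 1_000_000):
    print(f"{n:>9} units -> speedup {amdahl_speedup(0.9, n):.2f}x")
# The output approaches 10x and never exceeds it: massive parallelism only
# pays off when almost nothing is serialized, which is the brain's trick.
```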



Point 2

How does Novamente understand real-time?

As the definition of real-time says "An operation within a larger dynamic system is called a real-time operation if the combined reaction- and operation-time of a task operating on current events or input, is no longer than the maximum delay allowed, in view of circumstances outside the operation. The task must also occur before the system to be controlled becomes unstable. "

For a simple example: Novamente is driving a car and suddenly sees a huge pothole that the car can't avoid. Of course, in such a simple scenario it would be trained enough to handle the situation effectively.

But AGI is more than just car driving. It could be handling multiple events at a time. For example, say the passenger is instructing the system to follow a specific route while the event described above is also occurring.

In such a situation, how would Novamente understand and assign priorities to its tasks? (Or at least handle both effectively, as humans can.)

As far as raw speed is concerned, an AGI might be able to handle both tasks effectively at the same time.

So for an AGI driving a car... how can it understand, by itself, "this fast is fast enough"? (Of course, I said that it understands by itself, not that we teach it.)
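For reference, conventional real-time systems answer this kind of question with priority scheduling: deadline-critical events preempt background goals, and deferred tasks are resumed rather than dropped. A minimal sketch of that standard pattern (the TaskQueue class here is invented for illustration; it is not Novamente's actual mechanism):

```python
import heapq

class TaskQueue:
    """Tasks ordered by priority; lower number = more urgent."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps equal priorities FIFO

    def submit(self, priority, name):
        heapq.heappush(self._heap, (priority, self._counter, name))
        self._counter += 1

    def empty(self):
        return not self._heap

    def next_task(self):
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.submit(priority=5, name="follow the passenger's requested route")
q.submit(priority=1, name="swerve around the pothole (hard deadline)")
q.submit(priority=3, name="re-plan route")

while not q.empty():
    print(q.next_task())
# The pothole is handled first; the route instruction is deferred,
# not dropped, so both events get serviced - as the question asks.
```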




Point 3

It's also written that: "Given the way the Novamente architecture is set up, as long as a Novamente system can't modify its own algorithms for learning and its own techniques for representing knowledge, it's possible for us, as the system's designers and teachers, to understand reasonably well what's going on inside its mind. Unlike the human brain, Novamente is designed for transparency – it's designed to make it easy to look into its mind at any given stage and see what's going on. But once the system starts modifying its fundamental operations, this transparency will decrease a lot, and maybe disappear entirely. So before we take the step of letting the system make this kind of modification, we'll need to understand the likely consequences very well."

Okay, what I am saying now may not come under Novamente’s scope.

If Novamente is learning in a strict environment, we are putting limits on its learning capability.

E.g. when a child is told to stay away from fire, most don't understand, and they will not understand until they get their hands burnt. The child may not understand the words "very hot"... or should I say, the child will not feel the heat.

As humans we always have the power to make a choice. Nobody puts constraints on us except ourselves. We make some rules ourselves, and we even follow rules when we have some basic explanation (explicit or implicit). But for humans, as the phrase goes, "rules are made to be broken". The natural rule that humans cannot fly does not stop us from flying faster than birds.

Rules made by humans or nature are NOT perfect, and most of the time we find ways to bend them. But if we force our rules on a system, AGI may end up similar to conventional mathematical AI techniques.

Learning of "self" does not come to humans naturally and expecting it from machines is a big dream. We have evolved so much that we forget that our brains first only recognize that what is happening to ourselves "materially". Our culture and society enforces things like difference between good and bad. These things are told to us using the very basic material senses. As we grow, we start understanding complex issues (the process that we call as "getting mature").

Even our creativity depends on the ability to make a choice. This is the case for animals too. And without creativity and imagination, the human race wouldn't have grown to this state; we would still be struggling with everyday life and would never have thought of anything like "self".

The purpose of saying all this is just that, with too many limits, Novamente might not be able to become the AGI system that we want it to be.

#169 Guest_prospero_*

  • Lurker
  • 0

Posted 23 July 2006 - 02:09 PM

I think we should devolve and avoid a Singularity.

#170 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 23 July 2006 - 02:51 PM

I think we should devolve and avoid a Singularity.

Some people I see posting on the internet are way ahead of you on that.

#171 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 24 July 2006 - 05:21 PM

Hi Anup,

Sorry for the delay... I plan to answer your questions soon. Susan, Ben and I recently returned from Terasem's workshop in Vermont where we had a nice meeting/dinner with Ray Kurzweil and a number of other pioneers.

[Photo: Ben Goertzel, Ray Kurzweil]

[Photo: Ray Kurzweil, Susan Fonseca-Klein, Bruce Klein (taken by Ben Goertzel, who got us to smile quite nicely)]

More Pictures

#172 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 24 July 2006 - 11:42 PM

I think we should devolve and avoid a Singularity.


It would require the application of advanced technology to avoid a Singularity to begin with. You would need to forbid AI research, brain enhancement research, and even genetic engineering of humans. People will always build technology, but it would require rulers with even greater technology to prevent everyone from advancing indefinitely. So those against a Singularity should work to build technology to take over the world and prevent it from ever happening. (Not that you have a chance of succeeding.)

#173 doug123

  • Guest
  • 2,424 posts
  • -1
  • Location:Nowhere

Posted 25 July 2006 - 04:30 AM

Wow! You got to meet Ray Kurzweil! For anyone who's never heard of him or his work: http://en.wikipedia....i/Ray_kurtzweil

His book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (1999) was excellent. It was assigned reading for my intro to computer programming course; a great read!

#174 EmbraceUnity

  • Guest
  • 1,018 posts
  • 99
  • Location:USA

Posted 13 August 2006 - 06:40 PM

Since programming errors in an AGI could have disastrous consequences, what measures are in place for bug testing? Would you consider licensing Coverity software in order to find errors? Many large corporations and open-source projects have licensed the software to improve the quality of their programs; I think it would be wise for Novamente to do the same.
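For context, tools like Coverity perform static analysis: they flag defect patterns along execution paths without ever running the code. A toy illustration of the kind of path-sensitive bug such tools catch (a hypothetical Python snippet, nothing to do with Novamente's actual codebase):

```python
def risk_level(readings):
    """Classify the last sensor reading; contains a deliberate latent bug."""
    for r in readings:
        level = "high" if r > 0.9 else "low"
    # Latent bug: if `readings` is empty, `level` is never assigned and
    # this return raises UnboundLocalError. A static analyzer reports the
    # uninitialized path without executing it; the fix is to initialize
    # `level` before the loop.
    return level

print(risk_level([0.2, 0.95]))  # prints "high"
# print(risk_level([]))         # would crash on exactly the path a tool flags
```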

Edited by progressive, 13 August 2006 - 07:38 PM.


#175 Guest_prospero_*

  • Lurker
  • 0

Posted 16 August 2006 - 02:53 PM

I think we should devolve and avoid a Singularity.


It would require the application of advanced technology to avoid a Singularity to begin with. You would need to forbid AI research, brain enhancement research, and even genetic engineering of humans. People will always build technology, but it would require rulers with even greater technology to prevent everyone from advancing indefinitely. So those against a Singularity should work to build technology to take over the world and prevent it from ever happening. (Not that you have a chance of succeeding.)


Maybe I meant that evolution has an end at some point. Once we are seeded throughout the universe, we would obviously create lower life forms to support ourselves, for relativity's sake. As I read some people like Moravec, the more intelligent something becomes outside the bounds of a few planets, the less relevant it could become too. That also sounds like a god. I don't mean that we would recreate a world like ours in history and become toymasters; maybe not. It would seem scary, though, or redundant. Space is like a giant garden, but how far do we go? A lot of those planets out there need to grow, but how much pain and suffering would we allow? Again, that's more godlike talk, but is that what we are becoming?

Also, at what point will this AGI being reach about the intelligence of an ape - or higher? Will Novamente consider allowing it to own itself at some point? How do we know that they aren't hiding such a being for testing as well?
To me, once we allow something to become that big, it either becomes a god or becomes so irrelevant that it might choose to kill itself off. I think I am more into regular AGI, as in smaller redundant systems like servers. Something more scaled down that needs to be controlled by humans to be effective. I see humans as near an evolutionary dead end: once we have immortality and the ability to switch back and forth between solid and non-solid states, we will relatively remain like we are now, as a middle ground.

I am not against the metaphysical, but I do still think we need the physical universe - or maybe it's just a waste bin for Dark Matter, and the Dark Matter Universe is where all the action is at.

#176 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 19 August 2006 - 11:37 PM

I see humans as near an evolutionary dead end: once we have immortality and the ability to switch back and forth between solid and non-solid states, we will relatively remain like we are now, as a middle ground.


I have to disagree; how can you substantiate that claim? If I were able to place some sort of mechanical augmentation device in my brain that allowed me to retrieve memories extremely fast with crystal clarity, to me that would count as augmentation as well as evolution. Now let's say, 7 years later, I improve that device so it scans through the memories I am retrieving, looks for connections and relationships between memories that I'm not noticing, and relays them to my stream of thought... 4 years later I modify it so it analyzes my thought patterns and keeps me from retrieving memories that are not relevant to the problem (preventing distraction)... I can keep improving this thing, and I count this, or even dumping it all in at once, as a sort of evolution.

If you meant "dead end" in strictly a biological sense, I would still have to disagree, every day we are subjected to mathematics, coordination complexities, and complex social and linguistic webs... if we were to halt all technological advancement in our lives and we lived like this for 200,000 years.... I think my (great*6500) grand children would be very well equipped for the tasks that the last 6500+ generations had done before them.

For instance, the right leg (assuming the USA) would probably be stronger as an evolutionary trait, due to 6500+ generations driving and pressing the gas pedal with their right foot. People might be able to read Times New Roman extremely fast, as their eyes would be better suited to bright screens, and they would probably have a disposition toward reading that particular font. People would most likely have "a feeling" for how to drive without ever having driven before, such as judging speed, corners and traffic maneuvers better.

So, no matter how you meant it, I still find that claim false and unsupported, because thinking trillions of times faster (being in a non-solid state, as you put it) would allow you to come up with some pretty fantastic creations, including some pretty damn good mind-enhancing toys.

#177 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 August 2006 - 08:51 AM

Maybe I meant that evolution has an end at some point. Once we are seeded throughout the universe, we would obviously create lower life forms to support ourselves, for relativity's sake.


Complete assumption, pulled out of nowhere.

As I read some people like Moravec, the more intelligent something becomes outside the bounds of a few planets, the less relevant it could become too.


Moravec can be kinda crazy and wrong at times. "The more intelligent something becomes outside the bounds of a few planets"... heh. Why stop there? Or conversely, why does a superintelligence need to use so much matter to perform its computations? The answer to all of these questions is Mu.

It would seem scary, though, or redundant. Space is like a giant garden, but how far do we go? A lot of those planets out there need to grow, but how much pain and suffering would we allow? Again, that's more godlike talk, but is that what we are becoming?


Stop trying to put yourself in the shoes of a superintelligence. It's like a dust molecule trying to put itself in the shoes of a university physics department.

I think I am more into regular AGI as in smaller redundant systems like servers. Something more scaled down that needs to be controlled by humans to be effective.


Go for it!

#178 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 22 August 2006 - 11:28 PM

I was wondering today... what if all of a sudden Google, Microsoft, Apple... or any company or group of people started coming out with extremely out-of-this-world products, things that defy our current understanding of physics, or are simply too complex to comprehend as a whole, things that would seem like magic... it could be a signal that AGI has been created and is being kept "under wraps" to make an easy profit. Though... you might also want to watch for that company laying off massive numbers of its employees :)

I think if Novamente doesn't succeed first, the above will eventually happen somewhere. What do you guys think?

#179 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 23 August 2006 - 12:16 AM

I was wondering today... what if all of a sudden Google, Microsoft, Apple... or any company or group of people started coming out with extremely out-of-this-world products, things that defy our current understanding of physics, or are simply too complex to comprehend as a whole, things that would seem like magic... it could be a signal that AGI has been created and is being kept "under wraps" to make an easy profit. Though... you might also want to watch for that company laying off massive numbers of its employees :)

I think if Novamente doesn't succeed first, the above will eventually happen somewhere. What do you guys think?

Some people think that Google will be first to AI, as evidenced by the two previous threads about articles on it:
http://www.imminst.o...ST&f=11&t=11282
http://www.imminst.o...=ST&f=47&t=9213

I personally think Google has a good shot at it. Even if they focused only a small percentage of their total research budget/personnel on it, it would still be a massive development effort. Plus, they seem to be sufficiently forward-thinking (more so than most companies) to be the ones to think of it. If I were to bet on one company whose primary goal is not specifically building AGI, Google would be it.


#180 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 23 August 2006 - 03:29 AM

Yeah, that kinda reinforces the idea for me as well... And personally I could sleep at night knowing the most powerful intelligence on the planet is in the hands of Google... I think [huh]



