Posted 17 July 2006 - 10:57 AM
Hi,
I read your essay on AGI and went through its architecture. I may have missed many things in there, but with all due respect for your knowledge, I would like to put forward some points (though I couldn’t remember everything I thought of). I would be glad if you would kindly let me know your views on these.
Thanks and regards,
Anup Shinde.
=====
Point 1
I found a statement in your essay:
The creation of Artificial General Intelligence – AGI – is the grand adventure. Whatever other achievements you can imagine, they’ll be massively easier to achieve with the help of a benevolent AGI. This is not a religious thing; I’m not talking about some mystical savior. I’m talking about machines that we can actually build – and I believe we have the technology to do so now.
I do believe there exists a strong base for developing that technology, but the current architecture of intelligence is so massively parallel in nature that our machines cannot match it, even with distributed processing. We need small (or may I say very small) units of intelligent agents working in parallel and collaborating toward a larger purpose.
I thought of this while I was working on and studying neural networks from the biological perspective (rather than the more commonly followed mathematical perspective).
My thinking might be a bit amateurish (which I am, compared to you), but I think technologies like nanotechnology will help us create such machines, which could also form the basis of electronic life (I strongly believe that life can exist on non-biological systems). That said, nanotechnology is not something many people can work with directly (except in big companies' research departments).
Current commercial computing, though it supports parallel processing, still ends up serializing much of the work. A simple comparison: biological systems are like 100 people WALKING through 100 doors, while our electronic systems make 100 people RUN through 1, 2, or at most 10 doors. Which one do you think is faster when too many things happen at once?
Our electronic processors are much faster than neurons at raw execution, but once you account for the teamwork inside the human brain, the brain as a whole comes out much faster (a rough sketch of what I mean follows below).
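To make the doors analogy concrete, here is a back-of-the-envelope sketch in Python. The numbers are made up purely for illustration and are not meant to describe real hardware or real neurons; the only point is that many slow units working in parallel can out-produce a handful of much faster units once the workload is large enough.

# Back-of-the-envelope throughput comparison (illustrative numbers only).
# "Doors" = processing units; each unit handles one item at a time.
def total_throughput(units, items_per_second_per_unit):
    # Aggregate items processed per second if all units work in parallel.
    return units * items_per_second_per_unit

# Brain-like setup: very many slow units (e.g. 100 "doors", 1 item/s each).
brain_like = total_throughput(units=100, items_per_second_per_unit=1)

# Machine-like setup: a few fast units (e.g. 10 "doors", 5 items/s each).
machine_like = total_throughput(units=10, items_per_second_per_unit=5)

print(brain_like)    # 100 items/s
print(machine_like)  # 50 items/s -- faster per unit, slower in aggregate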
Our brain has slower mind agents/cognitive objects, so they have to be larger in number. By comparison, an artificial mind with faster mind agents can afford to have a smaller number of them. Still, it will require quite a few processors rather than the one or two found in most of today’s machines.
But that means the artificial mind would have a bigger, faster, yet more complex architecture. And when things get complex, they are difficult to evolve. Similarly, many small scattered things are difficult to manage... there is always a tradeoff.
Point 2
How does Novamente understand real-time?
As the definition of real-time says "An operation within a larger dynamic system is called a real-time operation if the combined reaction- and operation-time of a task operating on current events or input, is no longer than the maximum delay allowed, in view of circumstances outside the operation. The task must also occur before the system to be controlled becomes unstable. "
For a simple example: suppose Novamente is driving a car and suddenly sees a huge pothole that the car can’t avoid. Of course, in such a simple scenario it would be trained well enough to handle the situation effectively.
But AGI is more than just car driving. It could be handling multiple events at a time. For example, say the passenger is instructing the system to follow a specific route while the pothole event above is also occurring.
In such a situation, how would Novamente understand and assign priorities to its tasks? (Or at least handle both effectively, which humans can do.) A rough sketch of the kind of prioritization I mean follows below.
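I do not know Novamente's internals, so this is only a minimal sketch in Python of the deadline-driven prioritization I have in mind; the task names and numbers are hypothetical, not taken from your system.

import heapq

# Minimal sketch of deadline-driven prioritization (hypothetical tasks, not
# Novamente's actual mechanism): whichever task has the nearest deadline is
# handled first, so the pothole pre-empts the passenger's route request.
def handle_events(tasks):
    heap = list(tasks)          # tasks are (seconds_until_deadline, name) pairs
    heapq.heapify(heap)         # min-heap: nearest deadline on top
    while heap:
        deadline, name = heapq.heappop(heap)
        print(f"handling '{name}' (deadline in {deadline:.1f}s)")

handle_events([
    (5.0, "re-plan the route as the passenger asked"),
    (0.3, "steer around the pothole"),   # hard real-time constraint
])

The interesting question, to me, is how the system arrives at those deadlines and priorities by itself rather than having them hard-coded.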
As far as raw speed is concerned, an AGI might well be able to handle both tasks effectively at the same time.
So for an AGI driving a car... how can it understand by itself that "this fast is fast enough"? (Note that I said it understands this by itself; we do not teach it.)
Point 3
It’s also written that:
"Given the way the Novamente architecture is set up, as long as a Novamente system can’t modify its own algorithms for learning and its own techniques for representing knowledge, it’s possible for us, as the system’s designers and teachers, to understand reasonably well what’s going on inside its mind. Unlike the human brain, Novamente is designed for transparency – it’s designed to make it easy to look into its mind at any given stage and see what’s going on. But once the system starts modifying its fundamental operations, this transparency will decrease a lot, and maybe disappear entirely. So before we take the step of letting the system make this kind of modification, we’ll need to understand the likely consequences very well."
Okay, what I am saying now may not fall within Novamente’s scope.
If Novamente is learning in a strictly controlled environment, we are putting limits on its learning capability.
For example, when a child is told to stay away from fire, most don’t understand, and they will not understand until they get their hands burnt. The child may not understand the words "very hot"... or should I say that the child will not feel the heat.
As humans we always have the power to make a choice. Nobody puts constraints on us except ourselves. We make some rules ourselves, and we follow rules when we have some basic explanation for them (explicit or implicit). But for humans, as the phrase goes, "rules are made to be broken". The natural rule that humans cannot fly did not stop us from flying faster than birds.
Rules made by humans or by nature are NOT perfect, and most of the time we find ways to bend them. But if we force our rules onto the system, the AGI may end up resembling narrow mathematical AI techniques.
Learning of "self" does not come to humans naturally and expecting it from machines is a big dream. We have evolved so much that we forget that our brains first only recognize that what is happening to ourselves "materially". Our culture and society enforces things like difference between good and bad. These things are told to us using the very basic material senses. As we grow, we start understanding complex issues (the process that we call as "getting mature").
Even our creativity depends on the ability to make a choice. This is the case for animals too. Without creativity and imagination, the human race would not have grown to this state; we would still be struggling with everyday life and would never have thought of anything like a "self".
The purpose of saying all this is simply that, with too many limits, Novamente might not be able to become the AGI system that we want it to be.