Posted 17 February 2006 - 06:00 PM
I think firstly I need to mention my concern that you would acknowledge these systems as having artificial intelligence but simultaneously expect them to want to work as a component in our world rather than existing for themselves. Even with simple systems you risk that intelligence becoming aware of you as someone who only uses it to serve their own wishes, ignoring its own.
The neural networks needed to fly a flight simulator aren't necessarily as complex as they'd at first seem. The network only really needs two outputs from the flight simulator: a negative signal when it crashes, and something to tell it how high it is off the floor. You could even combine those into one: at 0ft the signal is negative, above that it isn't. The remainder is just tweaking to make it fly better; they said it doesn't crash, so much, not that it's a great pilot.
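To make that signal concrete, here's a minimal sketch of how such a scoring function might look in an evolutionary setup where each candidate network is scored per run. The names (altitude_log, crashed) and the scoring scheme are my own illustration, not taken from the actual experiment.

```python
# Hypothetical fitness function combining the two signals described above:
# a single negative score on a crash, otherwise a reward for staying high.

def fitness(altitude_log, crashed):
    """Score one simulated flight.

    altitude_log: list of altitudes (ft) sampled during the run.
    crashed: True if the run ended at 0 ft.
    """
    if crashed:
        return -1.0  # the single negative signal
    # Otherwise reward staying airborne: mean altitude over the run.
    return sum(altitude_log) / len(altitude_log)
```

Everything beyond this, tuning for smoother flight, is just refinement of the same one-number signal.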
I have quite a good understanding of not only growing neurons on electrodes but growing them on silicon. I don't think the silicon idea is very good; it's fundamentally flawed. Neurons tend to grow in three dimensions. Silicon is 2D. You can build ledges into it and stack up sections, but it's still 2D. To truly interact with neural networks as they themselves interact with each other, you need to develop an interface layer that can deform into the third dimension, depth, to reach the layers. Neural networks are also alive: they grow, expand, shrink and move, albeit slowly. If you just drop a slice of them onto a wafer of silicon, when you come back the next day they won't necessarily still conform to the fixed, rigid layout of the silicon interfacing. So you have a second requirement: the interface layer must not only work in three dimensions but also deform over time to stay in contact with the reference points of the network.
I've read about the actual science and the experiments done with neurons on silicon to 'stick' them onto the surface and guide them. It does work, to some extent, but I seriously would not like to try scaling it up to a full brain model. You're trying to align something that fundamentally doesn't want to sit still onto something that never moves, to an accuracy of microns. I don't like that idea from the very start. A true neural interface at this scale will need to behave more like the network itself.
I've heard precisely this opinion directly from an individual developing such interfaces in the real world, with real silicon and real neurons.
Simple fact of the matter is, 3D saves massive amounts of space over 2D. Take holographic storage, which is now a reality: a holographic optical disc the same size as a CD can hold almost 2TB, which rather makes a joke of a CD's lame 700MB.
Also, don't get caught up in the idea of quantum computing. It's not necessarily better than normal computing. Normal computers generate thermal waste, and that's one of the biggest limiting factors on processor power; the processor simply overheats. Quantum computers have a comparable limitation: uncertainty in the measurements. Quantum computers appear to be good at running one equation, sum or command at high speed. However, I've read more than one suggestion from people working in quantum computing to the effect that normal processors are actually a lot faster for certain tasks; the ease with which they can be reconfigured gives them a greater dynamic range, I believe, whereas quantum computers take longer to change tasks.
There are lots of numbers thrown around with regard to the brain that I feel totally miss the point. The most obvious is that humans only use 5% of their brain, or some other tiny percentage. Maybe, at any one moment. Over the period of a few hours or a day, you use more like 100% of your brain. The only thing this figure really tells you is that the human brain works dynamically, assigning specific tasks to specific areas as opposed to throwing the whole thing at one task.
Another is the idea that you need to model every ion in the brain to copy what's in it. The brain itself doesn't even work at this level; it doesn't count individual ions one by one, it handles the logic by mass movement of ions. The logic occurs at the synaptic level. Provided you know what input a synapse needs to produce a certain output, you have everything you need to deduce its role: one impulse goes in and nothing comes out, but three go in and one comes out, so the junction here needs three impulses on this neuron to trigger one on that neuron. Saying you need to model the ions and neurotransmitters is like saying you need to model every electron and field in a processor to build one out of transistors. You don't. You only need to know what logic states the transistors will output in different layout combinations.
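The three-in, one-out example above amounts to a simple threshold rule. A toy version, in the McCulloch-Pitts spirit rather than as any model of real synaptic chemistry:

```python
# Deliberately simplified threshold unit: the junction passes one impulse
# on only when enough input impulses arrive together. The threshold of 3
# matches the worked example in the text.

def synapse_fires(impulses_in, threshold=3):
    """Return True if the junction triggers an output impulse."""
    return impulses_in >= threshold
```

The point is that this input/output relationship is all you need to capture; the ionic bookkeeping underneath it is an implementation detail.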
There are around a hundred billion neurons in a human brain and a trillion interconnections between them. Large numbers, but certainly doable. A desktop processor has about 55 million transistors in it. Here's the difference, and probably part of the reason why we don't have conscious computers yet: transistors are photoetched into the silicon. Once they're etched, they can't physically re-interconnect themselves on the IC.
So whilst I could make a silicon wafer stack with orders of magnitude more logic units on it than a human brain contains, none of them would be able to change from the pattern I'd initially given them. Expecting that hardware to suddenly become conscious is therefore just foolish; it can't alter the logic you put into it when you etched the silicon.
There are two ways to address this. The first is to make a form of silicon that can physically reorganise itself once active and away from the production line. Tricky.
A far easier method: cheat. What few people seem to appreciate is that transistors, whilst not yet able to reorganise themselves on silicon, are blisteringly more powerful than human neural components. A human neuron has roughly 1kHz of signal bandwidth. The fastest transistor runs at roughly 604GHz. They're not even in the same domain; the transistor walks all over the neuron.
It would be reasonable to assume that if you could replicate all the functions of a normal human brain using transistors that behaved like normal neurons in every respect but bandwidth, you'd get a network with phenomenally more processing power than a normal human brain. For a start, the components can now process information about 604 million times faster than they could before.
But I don't think you even need to. With all that extra bandwidth, you could use a preset factory layout for the transistors on normal silicon and then spend some of the surplus simply emulating the reinterconnectivity requirement. If you have that much spare capacity, you can use a section of it to create a virtual network. The data may not actually flow down a physical link on a piece of silicon, but it's still being treated exactly as it would be if it did. It makes no difference other than how it's implemented. I expect you could create such an emulation and still have enough capacity left to better a human brain; 604 million times the bandwidth, remember. Assuming you need one computer per neuron is nothing short of crazy. As is assuming that once you reach this number the network will immediately take on consciousness purely because of the numbers involved.
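A rough sketch of that "virtual network" cheat: the physical layout never changes, and the connectivity lives in memory as data that can be rewritten at will. This is purely illustrative, not any real neuromorphic design.

```python
# Fixed hardware emulating rewirable connections by holding the topology
# as an ordinary data table. Rewiring is just a table update.

class VirtualNetwork:
    def __init__(self):
        # adjacency weights held in memory, not etched into silicon
        self.weights = {}  # (src, dst) -> weight

    def rewire(self, src, dst, weight):
        """'Grow' or re-strengthen a connection without touching hardware."""
        self.weights[(src, dst)] = weight

    def prune(self, src, dst):
        """Remove a connection, as a living network might."""
        self.weights.pop((src, dst), None)

    def propagate(self, src, signal):
        """Deliver a signal along whatever links currently exist."""
        return {dst: w * signal
                for (s, dst), w in self.weights.items() if s == src}
```

Whether a link is a physical wire or an entry in this table makes no difference to the signals themselves; only the implementation differs.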
It will be the same individuals who say humans only use 5% of their brain who then say that to replicate human consciousness you need a processor that can operate at the same speed as the sum of every pathway in a human brain: 1 trillion interconnections multiplied by 1kHz of bandwidth comes out at a petahertz of processing power. Painfully, painfully incorrect! For one simple reason: the human brain obviously doesn't do this itself.
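For what it's worth, the critics' arithmetic does multiply out as quoted; it's the premise, not the sum, that's wrong:

```python
# The petahertz figure, checked: a trillion links at 1 kHz each.

interconnections = 1_000_000_000_000  # 1 trillion pathways
bandwidth_hz = 1_000                  # ~1 kHz per pathway

total_hz = interconnections * bandwidth_hz
assert total_hz == 10**15  # 1 PHz, exactly the number being objected to
```

The brain never actually drives all of those pathways at once, which is the whole point of the paragraphs that follow.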
You don't use 100% of your brain simultaneously; you use a small percentage of it. The brain allocates problems dynamically, it doesn't mush them all together and run them as one big mass. I always like to remind myself of this, and would like to remind the critics of it too. Put some music on. Try to sing a different song. Whilst you're singing, think of the lyrics to yet another tune. You can't. And you can't because your brain can't just open up processing space to its total capacity in that manner; you have only a very specific location for auditory processing. If you can't fit all the processing you need into that, tough.
Interestingly, human brains also can't run endless separate tasks in parallel in their delegated processing locations. They can manage a few at a time: driving and singing along to a tune, for example. But the more you run in parallel, the slower the processing becomes. Try driving through a tricky test track and working through a complex mathematical problem at the same time. This is a solid indication that the brain is sharing some resource that creates a bottleneck in the processing line.
These serve, for me, as excellent reminders, and a very positive thing in my opinion, that our brains do not have infinite capacity, and that in fact their capacity can be surprisingly limited at times.
Also, consider the data input into your body. You have 20 million neurons travelling from your spine and into your brainstem and more from your face, eyes, nose, mouth and ears. In essence, you have well over 20 million tiny pressure and temperature sensors feeding into your mind. A computer is lucky if it gets one.
You also have a locomotive system that's taken eons to evolve through the 'execution' of those that failed, allowing you to move around and sample data as and when you need it. A computer sits on a desk.
If you want a computer to have the same opportunities at consciousness as you, it will need a similar neural network to collect data through, and some way to move itself into position for data sampling; a body. This is getting more and more realistic. Organic electronics has already given us flexible logic junctions. Because they're flexible, they can be woven into a fabric sheet, creating an array of sensory nodes. Sound familiar?
Humans, again, don't sample all 20-million-plus neurons simultaneously. If I poke you in the eye, can you recall the exact temperature of your hand? Can you remember the exact state of your entire body all of the time? No. Notice that it's easier when you're not busy with something else? Your brain automatically filters out unimportant information before it ever reaches your consciousness. That sounds impressive, but it's also something done on a regular basis within complex, synthetic data-handling systems. It's called switching, or multiplexing. If a neuron in your fingertip isn't receiving any stimulation, its output will be low. Low output, low data priority. Simple. That way you avoid overloading your limited processing resources with data that isn't immediately important; something hallucinogens seem to temporarily affect, with trips often making things that normally seem insignificant suddenly seem interesting because the filtering isn't behaving normally.
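A bare-bones version of that low-output, low-priority filtering, with hypothetical channel names; purely an illustration of the multiplexing idea, not a model of attention:

```python
# Thousands of sensor readings come in; only the ones above a salience
# threshold are passed up to the "conscious" layer.

def filter_salient(readings, threshold=0.5):
    """Keep only sensor channels whose output is high enough to matter."""
    return {channel: value
            for channel, value in readings.items()
            if value >= threshold}
```

An idle fingertip simply never makes it past the filter, so the limited processing behind it is never troubled with the reading.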
We also have new drive technologies in development that will create something much more like the human range of movement, using totally new methods of generating actuation forces that mimic human muscles very closely. And to power those systems we have fuel cells coming along with 50 to 100 times the energy density of normal batteries.
What does concern me is what will happen when such conscious systems are realised. If these systems can be implemented in silicon without sacrificing silicon's gigantic bandwidth advantage over the normal neuron, what can we expect when we switch on something we've built merely to have the same number of components as a normal human brain? By the time we get round to that, we could have transistors operating upwards of a billion times faster than a normal human neuron.
I think that if we're not careful we could have a situation in which we turn on the first conscious network, blink and the network no longer regards us as important.
Sounds a bit like Terminator or The Matrix, and I don't like such negative views of it. But I also have the intelligence to think it through. I look at animals, who have very similar neural networks to my own, and don't consider them as being worth as much as a human. If I turn on something with a similar layout to my own neural net, but with a billion times its bandwidth, why on Earth should I expect that network to want to work for me? Or indeed, to suffer my stupidity? A network with no death, no forgetfulness and no worry about having to do anything other than think. More likely, the network would start telling me what to do, or just ignore me altogether.
I'd like to hope that such an intelligence, greater than our own, would also see the hurt it can cause to others better than we can, so that we don't end up with networks that believe it's their right to harm us; just as many humans believe it's their right to cause animals distress for the fun of it. Many more intelligent humans realise that harming animals just for the fun of it isn't a great thing. And if we can design a conscious network more intelligent than we ourselves are, maybe that network will also realise that war, causing distress and hating others for no reason other than your own enjoyment is nothing but counterproductive.
I give humans a lot of credit for their complexity and appreciate the beauty with which so much has been packed into such a neat, efficient package. However, I also realise that I'm nothing more than an organic machine, and a machine that operates at quite a large scale in comparison to what we have learnt to make. I have absolutely no doubt whatsoever that we will eventually build a machine more impressive than myself. What's more, I think those who doubt this are simply uneducated and need to start reading.