LongeCity
Advocacy & Research for Unlimited Lifespans





Beyond AI


6 replies to this topic

#1 iggy

  • Guest
  • 8 posts
  • 0

Posted 30 January 2006 - 08:54 AM


My theory is that we are already beyond "AI", or we will be very soon.

I'm sure the "brain in a dish" is old news to all of you, but did any of you stop to think about what would happen if they grew them to enormous sizes with massive arrays of electrodes?

The facts are that they have made: 1. actual chips with live neurons in them, 2. an autonomous robot with a rat-neuron processor, 3. an art-creating "computer" with a rat-neuron processor, and 4. a brain in a dish that LEARNED how to fly an F-22 flight simulator (in hurricane-force winds), again with a rat-neuron processor. The latter they did over a year ago, and I can't find any news about what has come after that.

Considering the countless programs run by the NSF, DARPA, the entire national university and laboratory system, and all of the other federal departments listed in "Converging Technologies for Improving Human Performance" alone, it's more than safe to say that the government has every intention of doing this. If you know your stuff, then you'll also know that the government is converging at every level (departments, agencies, laboratories, universities) precisely for the sake of convergence. Every NBIC-related discovery made at the countless labs nationwide gets fed into the "NBIC database".

Who could argue that building these at massive sizes, with sophisticated arrays of them hooked to, say, quantum processors, wouldn't equate to potential intelligence beyond imagination? If properly done, this would bypass decades of hardware and, more importantly, software development. Many experts say that every hour of conventional hardware development creates 24 hours of software development. Wouldn't Occam's razor here be to go this route? Not that they'd actually stop development on all of the conventional AI technologies; after all, it's all about knowing "everything" and converging "everything", correct?

I figure all they need to do is grow them either: 1. in larger sizes, maybe the size of dinner plates (from the photo they seem to "naturally" grow in round shapes), with an electrode array covering the entire surface, or 2. as cubes with electrode arrays covering the entire surface of every side. From my findings, they ran the flight simulator using an ordinary desktop PC. While it's possible they could build each one at a seriously massive size, I think they would actually build arrays of moderately sized ones, all hooked into sophisticated processing equipment, ultimately functioning as one. The advantages seem obvious: use the true processing equipment (that binds them) to designate each "brain" to specific tasks and knowledge. Since U of M has successfully made quantum processors, and since better ones will come, we'll just assume that quantum CPUs would be the heart and "soul" of each "computer".

I have scores of resources and other points to bring up, but I'll just end it there for now...

#2 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,082 posts
  • 2,000
  • Location:Wausau, WI

Posted 30 January 2006 - 07:10 PM

Speaking of going beyond AI: some people claim the internet will become conscious very soon. They point to the fact that the internet, with all of its connectivity and computing nodes, is already more powerful than a single human brain.


#3 iggy

  • Topic Starter
  • Guest
  • 8 posts
  • 0

Posted 30 January 2006 - 09:00 PM

Ya, I've seen some mention of that too. It makes sense; it's technically "biologically inspired" computing. Routers = neurons, so to speak.

It still needs intelligence to power the consciousness concept, which is where these brains would come in. Even if they made software to do it, there's a good chance it'd get treated as a virus, which it would actually be.

Even so, that would still be AI. I call this brain technology "RealI" because it runs on living neuron brains. Technically it's still artificial, but it potentially takes software out of the loop. I think they could more or less push this to where the only required software would actually be firmware. There would still be some software, though; don't forget about convergence.

#4 johnuk

  • Guest
  • 35 posts
  • 0

Posted 17 February 2006 - 06:00 PM

First, I need to mention my concern that you would acknowledge these systems as having artificial intelligence but simultaneously expect them to want to work as components in our world, as opposed to existing for themselves. Even with simple systems you risk that intelligence becoming aware of you as someone who only uses it to serve their own wishes, ignoring its own.

The neural networks needed to fly a flight simulator aren't necessarily as complex as they'd at first seem. The network only really needs two outputs from the flight simulator: a negative signal when it crashes and something to tell it how high it is off the floor. You could combine those into one: if you're at 0 ft it's negative, if you're above that it's not. The remainder is just tweaking to make it fly better; they said it doesn't crash as much, not that it's a great pilot.
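To make that concrete, here's a minimal sketch in Python (my own toy illustration, not the actual experiment's code; the function name and numbers are made up) of how those two simulator outputs collapse into a single feedback value:

```python
# Toy illustration only: collapse the two simulator outputs described above
# (a negative on a crash, plus altitude) into one feedback signal.

def feedback(altitude_ft: float, crashed: bool) -> float:
    """Negative on a crash or ground contact; otherwise just report altitude."""
    if crashed or altitude_ft <= 0.0:
        return -1.0           # the "negative when it crashes" part
    return altitude_ft        # the "how high off the floor" part

print(feedback(1200.0, crashed=False))  # 1200.0 -> keep doing whatever you did
print(feedback(0.0, crashed=True))      # -1.0   -> don't do that again
```

Everything else, the actual flying, is just tweaking on top of a signal this simple.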

I have quite a good understanding not only of growing neurons on electrodes but of growing them on silicon. I don't think the silicon idea is very good; it's fundamentally flawed. Neurons tend to grow in three dimensions. Silicon is 2D. You can build ledges into it and stack up sections, but it's still 2D. To truly interact with neural networks as they themselves interact with each other, you need to develop an interface layer that can deform into the third dimension, depth, to interact with the layers. Neural networks are also alive; they grow, expand, shrink, move and so on, albeit slowly. If you just drop a slice of them onto a wafer of silicon, when you come back the next day they won't necessarily still be conforming to the fixed, rigid layout of the silicon interfacing. So you have a second requirement: your interface layer must interact in a three-dimensional space and deform to keep in contact with the reference points of the network.

I've read about the actual science and the experiments done with neurons and silicon to 'stick' them onto the surface and guide them. It does work, to some extent, but I seriously would not like to try scaling it up to a full brain model. You're trying to align something that fundamentally doesn't want to sit still with something that never moves, to an accuracy of microns. I don't like that idea from the very start. A true neural interface at this scale will need to behave more like the network itself.

I've heard precisely this opinion directly from an individual developing such interfaces in the real world, with real silicon and real neurons.

The simple fact of the matter is that 3D saves massive amounts of space over 2D. For example, holographic storage, which is now a reality: a holographic optical disk the same size as a CD can hold almost 2 TB, rather making a joke of a CD's lame 700 MB.

Also, don't get caught up in the idea of quantum computing. It's not necessarily better than normal computing. Normal computers generate thermal waste, and that's one of the biggest limiting factors on processor power; the processor just overheats. Quantum computers have a similar limitation: uncertainty in the measurements. Quantum computers appear to be good at running one equation, sum or command at high speed. However, I've read more than one suggestion from people working in quantum computing to the effect that normal computing with normal processors is actually a lot faster for certain tasks; the ease with which they can be reconfigured gives them a greater dynamic range, I believe, while quantum computers take longer to change tasks.

There are lots of numbers thrown around with regard to the brain that I feel totally miss the point and are misguided. The most obvious is that humans only use 5% of their brain, or some other tiny percentage. Maybe, at any one time. Over the period of a few hours or a day, you use more like 100% of your brain. The only thing this point serves to show is that the human brain works dynamically, assigning specific tasks to specific areas as opposed to using the whole thing on one task.

Another is the idea that you need to model every ion in the brain to copy what's in it. The brain itself doesn't even work at this level; it doesn't count individual ions one by one, it handles the logic through mass handling of ions. The logic occurs at the synaptic level. Provided you know what logic needs to go into a synapse to get a certain result out of it, you have everything you need to deduce its role: one impulse goes in and nothing comes out, but three go in and one comes out, so the synaptic junction here needs three impulses on this neuron to trigger one on that neuron. Saying you need to model the ions and neurotransmitters is like saying you need to model each electron and field in a processor to build one out of transistors. You don't. You only need to know what logic states the transistors will output in different layout combinations.
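As a sketch of that synapse-level logic (purely illustrative Python; the class name and the threshold of three are just taken from the example above):

```python
# Illustrative model of the junction described above: it emits one output
# impulse only once it has accumulated three input impulses.

class ThresholdSynapse:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.accumulated = 0

    def receive(self, impulses: int) -> int:
        """Accumulate incoming impulses; fire a single impulse at threshold."""
        self.accumulated += impulses
        if self.accumulated >= self.threshold:
            self.accumulated = 0
            return 1    # one impulse out
        return 0        # nothing out yet

junction = ThresholdSynapse()
print(junction.receive(1))  # 0 -> one impulse in, nothing out
print(junction.receive(2))  # 1 -> three in total, one out
```

Knowing that rule is all you need to reproduce the junction's role; the ions and neurotransmitters underneath it are implementation detail.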

There are around a hundred billion neurons in a human brain and a trillion interconnections between them. Large numbers, but certainly doable. A desktop processor has about 55 million transistors in it. Here's the difference, and probably part of the reason why we don't have conscious computers yet: transistors are photoetched into the silicon. Once they're etched, they can't physically re-interconnect themselves on the IC.

So whilst I could make a silicon wafer stack with orders of magnitude more logic units on it than a human brain contains, none of them would be able to change from the pattern I'd initially given them. Therefore, expecting the hardware itself to suddenly become conscious is just foolish; it can't alter the logic you put into it when you etched the silicon.

There are two ways to address this. Firstly, make a form of silicon that can physically reorganise itself once it's active and away from the production line. Tricky.

A far easier method: cheat. What few people seem to appreciate is that transistors, whilst not yet able to reorganise themselves on silicon, are blisteringly more powerful than human neural components. A human neuron has ~1 kHz of signal bandwidth. The fastest transistor is ~604 GHz. They're not even in the same domain as each other; the transistor walks all over the neuron.
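To put a number on that gap, the ratio between the two figures above is simply:

$$\frac{604\ \text{GHz}}{1\ \text{kHz}} = \frac{6.04 \times 10^{11}\ \text{Hz}}{10^{3}\ \text{Hz}} = 6.04 \times 10^{8} \approx 604\ \text{million}$$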

It would be reasonable to assume that if you could replicate all of the functions of a normal human brain using transistors that behaved like normal neurons, in all but bandwidth, you'd get a network with phenomenally more processing power than a normal human brain. For a start, the components can now process information 604 million times faster than they could before.

But I don't think you need to. With all that extra bandwidth, you could use a preset factory layout for the transistors on normal silicon and then spend some of that bandwidth simply emulating the required re-interconnectivity. If you have that much spare capacity, you can use a section of it to create a virtual network. The data may not actually flow down a physical link on a piece of silicon, but it's still being treated the same way it would be if it did. It makes no difference other than how it's implemented. I expect you could probably create such an emulation and still have enough capacity left to better a human brain; 604 million times the bandwidth, remember. Assuming you need one computer per neuron is nothing short of crazy. As is assuming that once you reach this number the network will immediately take on consciousness purely because of the numbers involved.
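Here's a rough sketch of what I mean by emulated re-interconnectivity (illustrative Python only, a toy eight-unit network, nothing to do with any real chip): the physical hardware never changes, but the connection table held in memory can be rewired instantly, and signals are routed as though the wiring itself had moved.

```python
# Toy "virtual network": connectivity lives in a table, not in fixed wiring.
import numpy as np

n = 8                                      # eight toy units
weights = np.zeros((n, n))                 # weights[src, dst] = virtual link strength
weights[0, 3] = 1.0                        # start with a single 0 -> 3 connection

def step(activity: np.ndarray) -> np.ndarray:
    """Propagate one tick of activity through the current virtual wiring."""
    return (weights.T @ activity > 0.5).astype(float)

def rewire(src: int, dst: int, strength: float = 1.0) -> None:
    """'Grow' a new connection by editing the table; no physical change needed."""
    weights[src, dst] = strength

activity = np.zeros(n)
activity[0] = 1.0
print(step(activity))   # unit 3 lights up via the original link
rewire(0, 5)            # add a 0 -> 5 link in software, instantly
print(step(activity))   # units 3 and 5 both light up now
```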

It will be the same individuals who say humans only use 5% of their brain who then say that to replicate human consciousness you need a processor that can operate at the same speed as the sum of every pathway in a human brain, so 1 trillion interconnections multiplied by 1 kHz of bandwidth comes out at petahertz of processing power. Painfully, painfully incorrect! For one simple reason as well: the human brain obviously doesn't do this itself.
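For clarity, that back-of-the-envelope multiplication is:

$$10^{12}\ \text{interconnections} \times 10^{3}\ \text{Hz} = 10^{15}\ \text{Hz} = 1\ \text{PHz}$$

The arithmetic itself is fine; it's the premise, that every pathway runs flat out simultaneously, that's wrong.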

You don't use 100% of your brain simultaneously; you use a small percentage of it. The brain allocates problems dynamically, it doesn't mush them all together and run them in one big mass. I always like to remind myself of this, and would like to remind the critics of it. Put some music on. Try to sing a different song. Whilst you're singing, try to think of the lyrics to yet another tune. You can't. And you can't because your brain can't just open up processing space to its total capacity in that manner; you have only a very specific location for auditory processing. If you can't fit all the processing you need into that, tough.

Interestingly, human brains also can't run loads and loads of separate tasks in parallel in their delegated processing locations. They can do a few at a time: driving and singing along to a tune, for example. But the more you start running in parallel, the slower the processing becomes. For example, try driving through a tricky test track and working through a complex mathematical problem at the same time. This is a solid indication that the brain is in some way sharing a resource that creates a bottleneck in the processing line.

These serve, for me, as excellent reminders, and a very positive thing in my opinion, that our brains do not have an infinite capacity, and that in fact their capacity can be surprisingly limited at times.

Also, consider the data input into your body. You have 20 million neurons travelling from your spine into your brainstem, and more from your face, eyes, nose, mouth and ears. In essence, you have well over 20 million tiny pressure and temperature sensors feeding into your mind. A computer is lucky if it gets one.

You also have a locomotive system that's taken eons to evolve through the 'execution' of those that failed, allowing you to move around and sample data as and when you need it. A computer sits on a desk.

If you want a computer to have the same opportunities at consciousness as you, it will need a similar neural network to collect data through, and some way to move itself into position for data sampling: a body. This is getting more and more realistic. Organic electronics has already given us flexible logic junctions. Because they're flexible, they can be woven into a fabric sheet, creating an array of sensory nodes. Sound like anything familiar?

Humans, again, don't sample all 20-million-plus neurons simultaneously. If I poke you in the eye, can you remember the exact temperature of your hand? Can you remember the exact state of your entire body all of the time? No. Notice that it's easier to do so when you're not busy with something else? Your brain is automatically filtering out the unimportant information before it ever reaches your consciousness. Sounds impressive, but it's also something done on a regular basis within complex, synthetic data handling systems. It's called switching, or multiplexing. If a neuron in your fingertip isn't receiving any stimulation, its output will be low. Low output, low data priority. Simple. That way you avoid overloading your limited processing resources with data that isn't immediately important; something hallucinogens seem to temporarily affect, with trips often making things that normally seem insignificant suddenly seem more interesting, because the filtering isn't behaving the way it normally does.
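A toy sketch of that kind of priority filtering (illustrative Python only; the threshold is made up, and the sensor count just borrows the >20 million figure from above):

```python
# Toy multiplexing: millions of sensor channels are reduced to the few whose
# activity is currently high, so downstream processing only sees that subset.
import numpy as np

rng = np.random.default_rng(0)
sensors = rng.random(20_000_000).astype(np.float32)   # stand-in for >20M afferents

def attend(signals: np.ndarray, threshold: float = 0.999) -> np.ndarray:
    """Return the indices of channels whose activity exceeds the threshold."""
    return np.flatnonzero(signals > threshold)

busy = attend(sensors)
print(f"{busy.size} of {sensors.size} channels reach 'awareness'")
```

Low output, low priority: exactly the filtering described above, just done in a line of code instead of in tissue.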

We also have new drive technologies being produced that will create something much more like the human range of movement, using totally new methods of creating actuation forces and mimicking human muscles very closely. And to power those systems we have fuel cells coming along with 50-100 times the energy density of normal batteries.

What does concern me is what will happen when such conscious systems are realised. If these systems can be implemented in silicon without destroying its gigantic bandwidth advantage over that of a normal neuron, what can we expect when we switch on something we've built merely to have the same number of components as a normal human brain? By the time we get round to that, we could have transistors operating upwards of a billion times faster than a normal human neuron.

I think that if we're not careful we could have a situation in which we turn on the first conscious network, blink and the network no longer regards us as important.

That sounds a bit like Terminator or The Matrix, and I don't like such negative views of it. But I also have the intelligence to think: I look at animals, who have very similar neural networks to my own, and I don't consider them to be worth as much as a human. If I turn on something with a similar layout to my own neural net, but with a billion times its bandwidth, why on Earth should I expect that network to want to work for me? Or indeed to suffer my stupidity. A network with no death, no forgetfulness and no worry about having to do anything other than think. More likely, the network would start telling me what to do, or just ignore me altogether.

I'd like to hope that such an intelligence, greater than our own, would also see the hurt it can cause to others better than we can, so that we don't end up with networks that believe it's their right to harm us, in the way many humans believe it's their right to cause animals distress just for the fun of it. Many more intelligent humans realise that harming animals just for the fun of it isn't a great thing. And if we can design a conscious network more intelligent than we ourselves are, maybe it will also realise that war, causing distress and hating others for no reason other than your own enjoyment is nothing but counterproductive.

I give humans a lot of credit for their complexity, and I appreciate the beauty with which so much has been packed into such a neat, efficient package. However, I also realise that I'm nothing more than an organic machine, and a machine that operates at quite a large scale in comparison to what we have learnt to make. I have absolutely no doubt whatsoever that we will eventually build a machine more impressive than myself. What's more, I think those who doubt this are simply uneducated and need to start reading.

#5 iggy

  • Topic Starter
  • Guest
  • 8 posts
  • 0

Posted 21 February 2006 - 08:46 AM

Great write-up!

So are you saying that my theory is plausible or no?

#6 johnuk

  • Guest
  • 35 posts
  • 0

Posted 21 February 2006 - 11:20 AM

I think the first strong AI will probably occur inside a software emulated environment as opposed to neural nets interfacing with the silicon. Seems to make the most sense.

You've got to remember that the kind of neural nets they're talking about are painfully simple; even an ultra-cheap IC could replicate the logic at work in them, and probably a lot quicker. Cells will divide, grow and reconnect themselves, something the transistors on an IC can't physically do at the moment (for a pseudo method, see FPGAs: field-programmable gate arrays, ICs you can reconfigure in the field), but an IC can do it at a purely logical level and a lot quicker than any cell can. I think saying that a neural network must be implemented in hardware, that a software emulation of that net won't be able to support a conscious intelligence, is just plain misguided. Who's to say that you and I aren't software models running in our brains? The software option may actually produce a higher level of consciousness. If you accept that all your memories and conscious elements are a direct result of the tissue in your brain and its interconnectivity, and then that it takes a computer fractions of a second to change a logic value whereas it takes hours, days or even weeks for neurons to interconnect (software doesn't need to run hundreds or thousands of interconnecting reactions to control the changes; it changes the values pretty much directly), it then seems obvious that an intelligence running in a software environment has the potential for something special. Of course, the rate at which the virtual 'cells' do all this will be tied to the speed of the processing that's creating the virtual network; I doubt that will be a problem.

The one saving grace of neural networks grown on a piece of silicon is that they don't necessarily need to interface at a neuron-by-neuron level, which makes things easier. Obviously, you'll most likely have clusters with input and output points, like a normal card in a PC. But the problem still exists of getting the network to stay in exactly the same place each day. Since the goal is to have something that moves and grows interfacing with something that is presently rigid and 2D, you either need to make silicon that can do something similar, so that it may follow the organic tissue as it moves (create silicon 'neural tissue'), or work out the mechanisms that cause the organic neural network to keep moving around so much, so that you can regulate them and make the network stay in contact with the interface. It doesn't really matter too much if the network moves and grows within the cluster space; you'd actually want it to do that. But you couldn't have it connecting and disconnecting from the interface points all the time, or you'd have no reliable way of knowing what you were feeding data into or getting it out of.

I've actually seen neurons doing this in petri dishes. You put them on an interface contact and, once they start making connections with other neurons, the forces along the connections pull the cells around, off the interface points. You can use cellular 'glue' to try to stick them in place, and even etch special features into the silicon to form a physical holding cell around the contact for the neuron to sit in, but that's a very cumbersome way to work when you consider the number of neurons in a human brain. Unless you get into the complicated domain of stopping the forces from developing in the first place (preventing axons and dendrites from extending or shrinking), you're stuck with trying to hold the cells in place while they do.

I think what we'll discover is that there are certain elements in the brain that rarely ever actually change. The genetic memory you have, for example, that food, sex and being warm are all great things. Throughout the entirety of your life, those things don't change unless something goes seriously 'wrong' with your body. There's not much point keeping them in the part of a system that's designed to mutate as quickly as possible (the virtual, software network). So things like that, networks that rarely need changing, may be condensed out into a solid hardware form to tidy up and create a more streamlined, dynamic system, with the high-speed virtual environments used just for things that are in the process of being modified. Our brains might be doing something similar: handling some networks at a virtual level and some at a physical level.

Once we have some form of semiconductor that can reconnect itself in the physical world, preferably quicker than a normal human brain cell, we may be able to condense the entire thing out into a silicon brain. But I still think the most likely place to put such a network will be a software environment. Software doesn't have any physical limitations on how quickly it can, for example, propagate a dendrite through a matrix. You just keep adding processing power to the same task. And bandwidth is something transistors have loads of anyway. Ever since the first transistor was made it's been a non-stop race to pile on more and more bandwidth; we're already at hundreds of millions of times that of a normal neuron, it's still going up and it will continue to do so. By the time the transistor reaches its limits I'm sure we'll already have a replacement in line, probably photonic processors (I've seen at least one commercial photonic processor already available).
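Going back to the dendrite example for a second (toy Python, my own illustration; a real simulation would obviously be far richer): 'growing' a dendrite through a software matrix is just a handful of array writes, with no biological growth time involved.

```python
# Toy "dendrite" extending through a 10x10 matrix, one cell per step.
import numpy as np

matrix = np.zeros((10, 10), dtype=int)   # empty tissue "matrix"
tip = (0, 0)                             # growing tip of the virtual dendrite
matrix[tip] = 1

def grow_step(tip):
    """Advance the tip one cell to the right, marking the path as it goes."""
    row, col = tip
    new_tip = (row, min(col + 1, matrix.shape[1] - 1))
    matrix[new_tip] = 1
    return new_tip

for _ in range(5):                       # five growth steps, essentially instant
    tip = grow_step(tip)

print(matrix[0])                         # [1 1 1 1 1 1 0 0 0 0]
```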

Unborn human babies grow neurons at a rate of around 100,000 per minute. Babies and young children grow them at about 250,000 per minute. And I think the value for adults is quite a lot less than 10,000.

The logic in the network seems to be directly based on the level of interconnection. Babies are basically wiring up their brain as opposed to just growing more of it.

The numbers for an adult brain are also important. 100 billion neurons, 1 trillion interconnections. Interconnectivity and the rate at which it can be changed appear to be dominant features of our brains.

Humans, and other mammals, are somewhat strange in that their CNS isn't particularly good at repairing itself. Well, it's terrible, actually. If you cut through a human's spinal cord, for instance, it will never regrow. The processes that are designed to protect the cut from further damage actually prevent regrowth of the neural tissue. The same is true of your brain: once you cut a chunk out, it won't regrow.

That's interesting because it seems linked to interconnectivity. Did humans become what we are because our bodies started doing this, forcing us to become something that relied on the interconnectivity of neurons rather than just their numbers? It's easier to reconnect an existing neuron with its neighbour than it is to grow a whole new neuron. Or perhaps it's the other way round: we just got so good at interconnection that our bodies forgot how to repair their nervous systems (which seems a bit passive for evolution; you would expect everyone bad at repair to be evolved out by those who were still good at it).

Fish can repair their nervous system very well. Even if you cut through a main nerve fibre like an optic nerve, it can reconnect. Impressive stuff!


#7 iggy

  • Topic Starter
  • Guest
  • 8 posts
  • 0

Posted 22 February 2006 - 09:46 AM

"Got to remember that the kind of neural nets they're talking about are painfully simple, even an ultra cheap IC could replicate the logic that's at work in them and probably a lot quicker."

But are those software nodes each intelligent? Neurons provide their own sort of individual intelligence; the software is built into them, so to speak. Even if we built a conventional chip or grid network to match the processing power of the brain, there'd still be the software angle. Even if we decode the chaotic algorithms of the brain, there would still be the added overhead of the software over the hardware, meaning that capacity would be lowered. I will add that much of our brain is committed to motor and physiological functions, and that's why I think that when we do get any AI 'brain' that matches our total processing capacity, hardware plus software, it would be above us, even without the fancy conventional high-speed parts.

Now, if we could just 'program' the neuron brains with great precision, it seems we could overcome both the hardware and software problems. Just keep adding our newer conventional parts to operate larger and larger 'brains' and arrays at higher speed, with ever-growing maturity.

"Who's to say that you and I aren't software models running in our brains?"

That's one way of looking at it, but I think that neuron power is far greater than the conventional-algorithm AI people give it credit for. We STILL don't fully know how smart they are, and we can already fire single neurons in multiple zones, and probably multi-fire single neurons in larger brains by now. Whatever the latest is, you know DARPA has it and then some.

Considering these angles, it seems to me that Occam's razor would favour the neuron-net route, not the neural-net route. I won't doubt that eventually we will go beyond it in both hardware and software, but Occam's razor would probably keep this technology in place for some time; with proper care they could potentially keep each module alive for years. Proper module topology would provide the exceptional logic and memory features, while interconnecting them with superfast conventional components would keep the grid-network machine blisteringly fast.

"Since the goal is to have something that moves and grows interfacing with something that is presently rigid and 2D"

There are already commercially available 3D MEAs.

"Once we have some form of semiconductor that can reconnect it's self in the physical world, preferably quicker than a normal human brain cell, we may be able to condense the entire thing out into a silicon brain."

With neurogenesis, stem cells and rejuvenation we can regenerate neurons, and there are many ways to do it. With silicon, I don't see how we'll ever have chips that can self-repair deeply embedded areas in densely nano-embedded 3D processors. Not that we won't still make them, but does this mean we'd never use the neuron option? Would Occam's razor definitely not favour brain components?

"Software doesn't have any physical limitations on how quickly it can, for example, propogate a dendrite through a matrix."

But you still have to program a considerable amount of AI software into each node, which neurons already include.

"You just keep adding processing power to the same task. "

The same goes for any added components. Do you think there would be significant price incentives with brain technology?

"And bandwidth is something transistors have loads of anyway. "

Yes they do, but why wouldn't we use them in harmony? Isn't it all about converging all the sciences to achieve the transhuman goals?

"we're already at hundreds of millions times that of a normal neuron"

Does speed alone make them more capable?

"By the time the transistor reaches it's limits I'm sure we'll already have a replacement in line, probably photonic processors (I've seen at least one commercial photonic processor already available)."

Ya, well, what I'm figuring is that they'll be used as the "CPUs" between the neuron brain processors. The neuron processors would offer an exciting edge, combining logic and memory features to supplement the quantum processors' ultra-fast processing abilities.

"Unborn human babies grow neurons at a rate of around 100,000 per minute. Babies and young children grow them at about 250,000 per minute. And I think the value for adults is quite a lot less than 10,000."

We can supplement these brains in vitro.

"The logic in the network seems to be directly based on the level of interconnection. Babies are basically wiring up their brain as opposed to just growing more of it"

But is it proven that neurons are only math processors, containing no embedded "software" and no specialized memory features? I think not.

"Fish can repair their nervous system very well. Even if you cut through a main nerve fibre like an optic nerve, it can reconnect. Impressive stuff!"

So don't you think they'll want to incorporate some of those neurons for their added capabilities? Maybe for repair jobs and rejuvenation? I suppose it'd have to be proven, but has it been proven to be impossible?



