LongeCity
Advocacy & Research for Unlimited Lifespans

The Singularity Keystone



#1 bascule

  • Guest
  • 11 posts
  • 0

Posted 04 January 2006 - 06:08 PM


"The Neocortical Column (NCC) marks the quantum leap from reptiles to mammals and therefore constitutes the birth of mammalian intelligence and the emergence of human cognitive capabilities.

The NCC seems to have been such a highly successful microcircuit design, that it was repeatedly copied to become almost 80% of the human brain (millions of columns were added)."


This quote comes from the Blue Brain Project, which intends to use the world's 9th most powerful supercomputer to create the most realistic cellular-level model of the neocortical column to date.

Daniel Dennett, one of the pre-eminent thinkers on consciousness, reckons that consciousness must come about through a cumulative, distributed effect. Imagine a thriving society of pattern analyzers who like to communicate with each other by posting little bits of "thoughtstuff" (phenomenological objects, or "phenoms" to the Dennett-initiated) on a sort of community bulletin board, which all the other pattern analyzers can read and respond to. Each pattern analyzer picks from the available phenoms and looks for whatever patterns it chooses to specialize in. And specialize they do: just like little people, they each have specific types of patterns they "like" to look for, tastes which they develop over time. So when a common pattern is discovered that enough of the specialists like, it gets reposted throughout the bulletin board of your brain, with added input from more and more specialists as the idea develops.

This mimics human societal behavior, in which we figure out higher-level concepts by listening to other people's ideas and contributing back our own deductions. In collective human behavior, the role of the phenom is played by the meme, a "thought virus" which passes from person to person. In either case, the phenoms or memes which are replicated by the greatest number of individuals dominate the individual or collective thought process. In humans, the dominant phenoms control our behavior; in society, a tool like Google Zeitgeist can show what the collective consciousness is "thinking" about.
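To make the bulletin-board picture concrete, here is a minimal toy sketch in Python. The specialists, their "tastes", and the phenom strings are all invented for illustration; this is my own caricature of the dynamic, not anything from Dennett or Blue Brain.

    class Specialist:
        """A pattern analyzer that 'likes' one kind of pattern."""
        def __init__(self, name, keyword):
            self.name = name
            self.keyword = keyword

        def react(self, phenom):
            # Repost the phenom with this specialist's contribution
            # appended whenever it contains the pattern it looks for.
            if self.keyword in phenom:
                return phenom + "+" + self.name
            return None

    def run_board(board, specialists, rounds=3):
        for _ in range(rounds):
            reposts = []
            for phenom in board:
                for s in specialists:
                    r = s.react(phenom)
                    if r is not None:
                        reposts.append(r)
            board.extend(reposts)
        # The phenom elaborated by the most specialists "dominates".
        return max(board, key=lambda p: p.count("+"))

    specialists = [Specialist("edges", "line"), Specialist("faces", "eye"),
                   Specialist("words", "eye")]
    print(run_board(["line eye"], specialists))

The only point of the toy is the dynamic itself: phenoms that more specialists respond to get reposted more often, and the most-elaborated one ends up dominating the board.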

Now, I don't want to mischaracterize Dennett; he's quite adamant that consciousness cannot be localized to a specific part of the brain, and that the entire brain works for the benefit of conscious processing, making consciousness an inseparable quality of every part of the brain. I, on the other hand, am going to be a little more brash than Dennett and say that the most logical seat of consciousness is the part that has diverged so drastically in humans compared to the rest of our common ancestral heritage. It only makes sense that the "specialists" Dennett talks about are the neocortical columns, and that consciousness arises through their collective action.

If this is the case, then what the Blue Brain Project is building a mathematical model of is essentially the atomic unit of consciousness: a universal pattern analyzer which can work collaboratively with millions of slightly varied copies of itself to correct its own mistakes and deficiencies, and which together comprise a society whose members can share the patterns that they individually see (and recognize that others see the same patterns), and also correct each other's mistakes if a pattern doesn't actually exist.

Once we have a mathematical model of this atomic unit of universal pattern analysis, we won't need a computer the size of Blue Brain to model it. I don't mean to sound myopic, but it seems far more likely that we could already model tens of thousands of them, in real time, on a modern-day home PC. Blue Brain is, in effect, an emulator for the computer that the NCC's "specialist" program runs on, and once we have that program, the computational requirements will drastically decrease. The mathematical model that Blue Brain (or a project with similar methodology, if Blue Brain doesn't pan out) produces will be the hot commodity among AI researchers, who I can only predict will begin building programs which model large communities of artificial NCCs. I really believe that once we can do that, AI researchers will finally be able to fill in the gaps themselves, since they will at last have a surefire base framework to operate from, with the "hard stuff" already in place.
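Whether "tens of thousands" is plausible depends entirely on how far the detailed model can be distilled. Purely as a sketch of the arithmetic, with every number below an assumption chosen to show what it would take for the claim to hold:

    # Pure back-of-envelope arithmetic; every figure is an assumption.
    flops_per_column_update = 10_000   # assumed cost of a *distilled* column model
    updates_per_second = 100           # assumed update rate of the "specialist"
    pc_flops = 1e10                    # ~10 GFLOPS, a generous home PC (assumed)

    cost_per_column = flops_per_column_update * updates_per_second  # 1e6 FLOP/s
    print(int(pc_flops // cost_per_column), "columns in real time") # 10000

If the distilled model turns out to be orders of magnitude more expensive per update, the count drops accordingly.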

Furthermore, once we understand how the neocortical column works, we can begin to learn how to "talk" to columns with electronic hardware. We can use the mathematical model to extract the phenomenological messages that the columns are communicating, and then apply a (Bayesian) classification algorithm to begin divining their meaning. From this we can build a "language map" of how the neocortex communicates internally. Once we have this, we can begin using computers to generate and inject phenomenological objects into the workspace of consciousness. When we can do this, and have a bidirectional interface directly into human consciousness, we've successfully created a direct neural interface (DNI), perhaps the ultimate form of Intelligence Amplification (IA) as predicted by Vernor Vinge.
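As a sketch of what that classification step could look like: suppose each extracted message has already been reduced to a set of binary features (say, which columns were active). A naive Bayes classifier, the simplest Bayesian option, could then guess a meaning label. The features, labels, and training examples below are entirely invented.

    from collections import defaultdict
    import math

    def train(examples):
        # examples: list of (set of active columns, meaning label)
        label_counts = defaultdict(int)
        feature_counts = defaultdict(lambda: defaultdict(int))
        for features, label in examples:
            label_counts[label] += 1
            for f in features:
                feature_counts[label][f] += 1
        return label_counts, feature_counts

    def classify(features, label_counts, feature_counts, vocab):
        total = sum(label_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in label_counts.items():
            lp = math.log(n / total)          # prior P(label)
            for f in vocab:
                # Laplace-smoothed P(feature active | label)
                p = (feature_counts[label][f] + 1) / (n + 2)
                lp += math.log(p if f in features else 1 - p)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    examples = [({"c1", "c2"}, "face"), ({"c1", "c3"}, "face"),
                ({"c4", "c5"}, "word"), ({"c4", "c6"}, "word")]
    vocab = {f for feats, _ in examples for f in feats}
    model = train(examples)
    print(classify({"c1", "c5"}, *model, vocab))   # classifies as "face"

Real column activity would obviously need a far richer feature encoding, but the "divining meaning" step itself can start this simple.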

So, following the successful extraction of a mathematical model of the NCC’s operation, I would predict that both strong AI and DNI are in the not-too-distant future. And if you believe all of this, then the Singularity could happen in a decade’s timespan, or less…

This is a syndicated meme

#2 johnuk

  • Guest
  • 35 posts
  • 0

Posted 18 February 2006 - 12:25 AM

Makes me wonder if the guys running these programs have stopped to consider what they'll do if they accidentally create something that knows what it is.

I've also come to take such atomically accurate models with a pinch of salt. If all that was required was the modelling being done, we'd have a conscious one by now. I can only conclude that somewhere they're missing the 'bigger picture' of what's going on.

I think a much better way to understand consciousness would be to take all the money out of these programs and put it into neural interfacing.

Once you have a direct connection to each neuron in a human brain, you can just sit back and watch what happens on the interface, deciphering the layout and operation by direct observation as opposed to guessing at it with modelling. If your observations aren't accumulating fast enough, you could force them by injecting some logic at points in the network: when you stimulate one neuron, you just wait and see what happens to the others.

As the sequences of impulses begin to mount up, which I suspect they will do extremely rapidly, you will also begin producing links between neurons; e.g. this neuron is never active unless these are. You then assign a probability to the link, almost zero on the first observations. As you collect the huge volumes of data you inevitably will, the possible links between neurons will either be reinforced or subtracted from, giving you a map of the interconnectivity.
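As a toy sketch of that probability map (the "recording" below is randomly generated; real interface data would replace it, and everything else is an illustrative assumption):

    import random

    # Hypothesize a link whenever neuron B fires just after neuron A,
    # then reinforce or subtract from that link's probability as
    # evidence accumulates.
    random.seed(0)
    N, STEPS, LEARN = 6, 2000, 0.01
    link_prob = {}   # (a, b) -> estimated probability that a drives b

    def record_step(prev_spikes, spikes):
        for a in prev_spikes:
            for b in range(N):
                if b == a:
                    continue
                p = link_prob.get((a, b), 0.0)   # almost zero at first
                target = 1.0 if b in spikes else 0.0
                link_prob[(a, b)] = p + LEARN * (target - p)

    # Fake ground truth: neuron 3 fires whenever neuron 0 fired last step.
    prev = set()
    for _ in range(STEPS):
        spikes = {n for n in range(N) if random.random() < 0.1}
        if 0 in prev:
            spikes.add(3)
        record_step(prev, spikes)
        prev = spikes

    strongest = max(link_prob, key=link_prob.get)
    print(strongest, round(link_prob[strongest], 2))   # (0, 3) stands out

After enough observations the planted link (0, 3) rises well above the background links, which hover around the chance firing rate.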

You could stimulate neural activity non-invasively by just stimulating the person in the outside world. For instance, whilst you're recording the activity of every neuron in their brain, take them to concerts, have them doing things that keep them mentally busy.

The best neural interfaces at the moment are rigid 2D silicon wafers. There's no way one of those is ever going past the surface of your brain without having to slice into it. The field is in desperate need of a better interface layer.

Then there's the most worrying problem of all: organic neural networks have a tendency to move, grow and reorganise themselves. Not on a mass scale, but it still occurs. Your probability software would need to account for new neurons growing during observation. These would show up as probability errors: neurons that once only responded to one particular stimulation would now stop or start responding to others.
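One way those "probability errors" might be flagged, sketched under assumed thresholds: freeze a baseline once a link estimate has been stable for a while, then treat sustained drift away from it as a candidate sign of growth or rewiring.

    STABLE_AFTER, DRIFT = 500, 0.3   # both thresholds are arbitrary

    class Link:
        def __init__(self):
            self.p = 0.0
            self.updates = 0
            self.baseline = None   # frozen estimate once considered stable

        def observe(self, fired_together, learn=0.01):
            target = 1.0 if fired_together else 0.0
            self.p += learn * (target - self.p)
            self.updates += 1
            if self.updates == STABLE_AFTER:
                self.baseline = self.p
            # Sustained departure from the baseline = probability error,
            # i.e. a candidate sign of growth or reorganisation.
            return self.baseline is not None and abs(self.p - self.baseline) > DRIFT

    link = Link()
    for i in range(2000):
        rewired = link.observe(fired_together=(i < 1000))  # link dies halfway
        if rewired:
            print("possible rewiring at observation", i)
            break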

Hopefully, once you have a connection with enough of the tissue, capturing these new neurons directly wouldn't be a problem; they could be deduced by software observation of how the network is responding.

This route also solves two problems at once: you understand how your brain works and, hey, you just happen to have a direct connection to it. Trying to model it in a computer in the normal way means you're guessing at more of it, and you still lack the capacity to communicate directly with your brain.

A number of things about the whole idea of modelling a brain like this seem wrong to me. It just seems like the harder of the two paths, the more likely to be incorrect, and it also breeds the mindset of "it's a computer, switch it on and off when you want to, have it run our machines for us and do all the boring stuff we don't want to".

Things like the CCortex project are being run by companies. I seriously doubt they want to create a new consciousness purely as a labour of love. Essentially they want something that thinks for itself, and brings all the benefits of being able to do so, but costs nothing to employ. Irresponsible would be a good word, I guess.
