  LongeCity
              Advocacy & Research for Unlimited Lifespans





Blue Brain


54 replies to this topic

#1 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 04 March 2008 - 05:09 PM


I've never heard of it, but it looks like these guys have got good stuff going on concerning brain simulation. They say that in 10 years "this computer will be talking to us".


Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?

A computer simulation of the upper layer of a rat brain neocortical column. Here neurons light up in a “global excitatory state” of blues and yellows. Courtesy of Dr. Pablo de Heras Ciechomski/Visualbiotech

In the basement of a university in Lausanne, Switzerland, sit four black boxes, each about the size of a refrigerator and filled with 2,000 IBM microchips stacked in repeating rows. Together they form the processing core of a machine that can handle 22.8 trillion operations per second. It contains no moving parts and is eerily silent. When the computer is turned on, the only thing you can hear is the continuous sigh of the massive air conditioner. This is Blue Brain.

The name of the supercomputer is literal: Each of its microchips has been programmed to act just like a real neuron in a real brain. The behavior of the computer replicates, with shocking precision, the cellular events unfolding inside a mind. “This is the first model of the brain that has been built from the bottom-up,” says Henry Markram, a neuroscientist at Ecole Polytechnique Fédérale de Lausanne (EPFL) and the director of the Blue Brain project. “There are lots of models out there, but this is the only one that is totally biologically accurate. We began with the most basic facts about the brain and just worked from there.”

Before the Blue Brain project launched, Markram had likened it to the Human Genome Project, a comparison that some found ridiculous and others dismissed as mere self-promotion. When he launched the project in the summer of 2005, as a joint venture with IBM, there was still no shortage of skepticism. Scientists criticized the project as an expensive pipedream, a blatant waste of money and talent. Neuroscience didn’t need a supercomputer, they argued; it needed more molecular biologists. Terry Sejnowski, an eminent computational neuroscientist at the Salk Institute, declared that Blue Brain was “bound to fail,” for the mind remained too mysterious to model. But Markram’s attitude was very different. “I wanted to model the brain because we didn’t understand it,” he says. “The best way to figure out how something works is to try to build it from scratch.”

The Blue Brain project is now at a crucial juncture. The first phase of the project—“the feasibility phase”—is coming to a close. The skeptics, for the most part, have been proven wrong. It took less than two years for the Blue Brain supercomputer to accurately simulate a neocortical column, which is a tiny slice of brain containing approximately 10,000 neurons, with about 30 million synaptic connections between them. “The column has been built and it runs,” Markram says. “Now we just have to scale it up.” Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. “If we build this brain right, it will do everything,” Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? “When I say everything, I mean everything,” he says, and a mischievous smile spreads across his face.

Henry Markram is tall and slim. He wears jeans and tailored shirts. He has an aquiline nose and a lustrous mop of dirty blond hair that he likes to run his hands through when contemplating a difficult problem. He has a talent for speaking in eloquent soundbites, so that the most grandiose conjectures (“In ten years, this computer will be talking to us.”) are tossed off with a casual air. If it weren’t for his bloodshot, blue eyes—“I don’t sleep much,” he admits—Markram could pass for a European playboy.

But the playboy is actually a lab rat. Markram starts working around nine in the morning, and usually doesn’t leave his office until the campus is deserted and the lab doors are locked. Before he began developing Blue Brain, Markram was best known for his painstaking studies of cellular connectivity, which one scientist described to me as “beautiful stuff…and yet it must have been experimental hell.” He trained under Dr. Bert Sakmann, who won a Nobel Prize for pioneering the patch clamp technique, allowing scientists to monitor the flux of voltage within an individual brain cell, or neuron, for the first time. (This involves piercing the membrane of a neuron with an invisibly sharp glass pipette.) Markram’s technical innovation was “patching” multiple neurons at the same time, so that he could eavesdrop on their interactions. This experimental breakthrough promised to shed light on one of the enduring mysteries of the brain, which is how billions of discrete cells weave themselves into functional networks. In a series of elegant papers published in the late 1990s, Markram was able to show that these electrical conversations were incredibly precise. If, for example, he delayed a neuron’s natural firing time by just a few milliseconds, the entire sequence of events was disrupted. The connected cells became strangers to one another.

When Markram looked closer at the electrical language of neurons, he realized that he was staring at a code he couldn’t break. “I would observe the cells and I would think, ‘We are never going to understand the brain.’ Here is the simplest possible circuit—just two neurons connected to each other—and I still couldn’t make sense of it. It was still too complicated.”

Cables running from the Blue Gene/L supercomputer to the storage unit. The 2,000-microchip Blue Gene machine is capable of processing 22.8 trillion operations per second—just enough to model a 1-cubic-mm column of rat brain. Courtesy of Alain Herzog/EPFL

Neuroscience is a reductionist science. It describes the brain in terms of its physical details, dissecting the mind into the smallest possible parts. This process has been phenomenally successful. Over the last 50 years, scientists have managed to uncover a seemingly endless list of molecules, enzymes, pathways, and genes. The mind has been revealed as a Byzantine machine. According to Markram, however, this scientific approach has exhausted itself. “I think that reductionism peaked five years ago,” he says. “This doesn’t mean we’ve completed the reductionist project, far from it. There is still so much that we don’t know about the brain. But now we have a different, and perhaps even harder, problem. We’re literally drowning in data. We have lots of scientists who spend their life working out important details, but we have virtually no idea how all these details connect together. Blue Brain is about showing people the whole.”

In other words, the Blue Brain project isn’t just a model of a neural circuit. Markram hopes that it represents a whole new kind of neuroscience. “You need to look at the history of physics,” he says. “From Copernicus to Einstein, the big breakthroughs always came from conceptual models. They are what integrated all the facts so that they made sense. You can have all the data in the world, but without a model the data will never be enough.”

Markram has good reason to cite physics—neuroscience has almost no history of modeling. It’s a thoroughly empirical discipline, rooted in the manual labor of molecular biology. If a discovery can’t be parsed into something observable—like a line on a gel or a recording from a neuron—then, generally, it’s dismissed. The sole exception is computational neuroscience, a relatively new field that also uses computers to model aspects of the mind. But Markram is dismissive of most computational neuroscience. “It’s not interested enough in the biology,” he says. “What they typically do is begin with a brain function they want to model”—like object detection or sentence recognition—“and then try to see if they can get a computer to replicate that function. The problem is that if you ask a hundred computational neuroscientists to build a functional model, you’ll get a hundred different answers. These models might help us think about the brain, but they don’t really help us understand it. If you want your model to represent reality, then you’ve got to model it on reality.”

Of course, the hard part is deciphering that reality in the first place. You can’t simulate a neuron until you know how a neuron is supposed to behave. Before the Blue Brain team could start constructing their model, they needed to aggregate a dizzying amount of data. The collected works of modern neuroscience had to be painstakingly programmed into the supercomputer, so that the software could simulate our hardware. The problem is that neuroscience is still woefully incomplete. Even the simple neuron, just a sheath of porous membrane, remains a mostly mysterious entity. How do you simulate what you can’t understand?

Markram tried to get around “the mystery problem” by focusing on a specific section of a brain: a neocortical column in a two-week-old rat. A neocortical column is the basic computational unit of the cortex, a discrete circuit of flesh that’s 2 mm long and 0.5 mm in diameter. The gelatinous cortex consists of thousands of these columns—each with a very precise purpose, like processing the color red or detecting pressure on a patch of skin, and a basic structure that remains the same, from mice to men. The virtue of simulating a circuit in a rodent brain is that the output of the model can be continually tested against the neural reality of the rat, a gruesome process that involves opening up the skull and plunging a needle into the brain. The point is to electronically replicate the performance of the circuit, to build a digital doppelganger of a biological machine.

Felix Schürmann, the project manager of Blue Brain, oversees this daunting process. He’s 30 years old but looks even younger, with a chiseled chin, lean frame, and close-cropped hair. His patient manner is that of someone used to explaining complex ideas in simple sentences. Before the Blue Brain project, Schürmann worked at the experimental fringes of computer science, developing simulations of quantum computing. Although he’s since mastered the vocabulary of neuroscience, referencing obscure acronyms with ease, Schürmann remains most comfortable with programming. He shares a workspace with an impressively diverse group—the 20 or so scientists working full-time on Blue Brain’s software originate from 14 different countries. When we enter the hushed room, the programmers are all glued to their monitors, fully absorbed in the hieroglyphs on the screen. Nobody even looks up. We sit down at an empty desk and Schürmann opens his laptop.

In Markram’s laboratory, state-of-the-art equipment allows for computer-controlled, simultaneous recordings of the tiny electrical currents that form the basis of nerve impulses. Here, a technique known as “patch clamp” provides direct access to seven individual neurons and their chemical synaptic interactions. The patch clamp robot—at work 24 hours a day, seven days a week—helped the Blue Brain team speed through 30 years of research in six months. Inset, a system integrates a bright-field microscope with computer-assisted reconstruction of neuron structure. The entire setup is enclosed inside a “Faraday cage” to reduce electromagnetic interference and mounted on a floating table to minimize vibrations. Courtesy of Alain Herzog/EPFL

The computer screen is filled with what look like digitally rendered tree branches. Schürmann zooms out so that the branches morph into a vast arbor, a canopy so dense it’s practically opaque. “This,” he proudly announces, “is a virtual neuron. What you’re looking at are the thousands of synaptic connections it has made with other [virtual] neurons.” When I look closely, I can see the faint lines where the virtual dendrites are subdivided into compartments. At any given moment, the supercomputer is modeling the chemical activity inside each of these sections so that a single simulated neuron is really the sum of 400 independent simulations. This is the level of precision required to accurately imitate just one of the 100 billion cells—each of them unique—inside the brain. When Markram talks about building a mind from the “bottom-up,” these intracellular compartments are the bottom. They are the fundamental unit of the model.
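
(A toy illustration, in Python, of what a multi-compartment simulation amounts to; this is not the project's code, and every parameter below is made up for illustration. Each compartment is treated as a small patch of leaky membrane, electrically coupled to its neighbours, and the whole set is stepped forward in time together.)

import numpy as np

# Toy multi-compartment ("cable") neuron: each compartment is a leaky RC patch
# coupled to its neighbours by an axial conductance. All values are illustrative.
N_COMP = 400        # compartments per neuron (the figure quoted in the article)
DT = 0.025          # time step, ms
C_M = 1.0           # membrane capacitance, uF/cm^2
G_LEAK = 0.1        # leak conductance, mS/cm^2
E_LEAK = -65.0      # leak reversal potential, mV
G_AXIAL = 2.0       # coupling between neighbouring compartments, mS/cm^2

v = np.full(N_COMP, E_LEAK)          # membrane potential of every compartment, mV

def step(v, i_inject):
    """Advance every compartment by one time step (forward Euler)."""
    axial = np.zeros_like(v)
    axial[1:] += G_AXIAL * (v[:-1] - v[1:])    # current from the previous compartment
    axial[:-1] += G_AXIAL * (v[1:] - v[:-1])   # current from the next compartment
    i_leak = G_LEAK * (E_LEAK - v)
    return v + DT * (i_leak + axial + i_inject) / C_M

i_inject = np.zeros(N_COMP)
i_inject[0] = 5.0                    # inject current into one end of the cable
for _ in range(400):                 # 10 ms of simulated time
    v = step(v, i_inject)
print(round(v[0], 2), round(v[200], 2))   # the injected end depolarises most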

But how do you get these simulated compartments to act in a realistic manner? The good news is that neurons are electrical processors: They represent information as ecstatic bursts of voltage, just like a silicon microchip. Neurons control the flow of electricity by opening and closing different ion channels, specialized proteins embedded in the cellular membrane. When the team began constructing their model, the first thing they did was program the existing ion channel data into the supercomputer. They wanted their virtual channels to act just like the real thing. However, they soon ran into serious problems. Many of the experiments used inconsistent methodologies and generated contradictory results, which were too irregular to model. After several frustrating failures—“The computer was just churning out crap,” Markram says—the team realized that if they wanted to simulate ion channels, they needed to generate the data themselves.
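
(To make "programming an ion channel" concrete: a standard textbook way to write one down is the Hodgkin-Huxley formalism, in which voltage-dependent rate functions open and close the channel's gates. The sketch below uses the classic squid-axon sodium and potassium constants, not the rat channel kinetics the Blue Brain team measured; it is only meant to show the shape of the calculation.)

import math

# Classic Hodgkin-Huxley channel kinetics (squid-axon constants, mV and ms units).
# Textbook values, NOT the rat channel data measured by the project.
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def rates(v):
    """Voltage-dependent opening/closing rates for the m, h and n gates."""
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Integrate one patch of membrane; returns the number of spikes fired."""
    v, m, h, n, spikes, above = -65.0, 0.05, 0.6, 0.32, 0, False
    for _ in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        i_na = G_NA * m**3 * h * (v - E_NA)    # sodium current through open channels
        i_k = G_K * n**4 * (v - E_K)           # potassium current
        i_l = G_L * (v - E_L)                  # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        if v > 0 and not above:
            spikes += 1                        # count upward crossings of 0 mV
        above = v > 0
    return spikes

print(simulate())   # a constant 10 uA/cm^2 drive produces a regular spike train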

That’s when Schürmann leads me down the hall to Blue Brain’s “wet lab.” At first glance, the room looks like a generic neuroscience lab. The benches are cluttered with the usual salt solutions and biotech catalogs. There’s the familiar odor of agar plates and astringent chemicals. But then I notice, tucked in the corner of the room, a small robot. The machine is about the size of a microwave, and consists of a beige plastic tray filled with a variety of test tubes and a delicate metal claw holding a pipette. The claw is constantly moving back and forth across the tray, taking tiny sips from its buffet of different liquids. I ask Schürmann what the robot is doing. “Right now,” he says, “it’s recording from a cell. It does this 24 hours a day, seven days a week. It doesn’t sleep and it never gets frustrated. It’s the perfect postdoc.”

The science behind the robotic experiments is straightforward. The Blue Brain team genetically engineers Chinese hamster ovary cells to express a single type of ion channel—the brain contains more than 30 different types of channels—then they subject the cells to a variety of physiological conditions. That’s when the robot goes to work. It manages to “patch” a neuron about 50 percent of the time, which means that it can generate hundreds of data points a day, or about 10 times more than an efficient lab technician. Markram refers to the robot as “science on an industrial scale,” and is convinced that it’s the future of lab work. “So much of what we do in science isn’t actually science,” he says, “I say let robots do the mindless work so that we can spend more time thinking about our questions.”

According to Markram, the patch clamp robot helped the Blue Brain team redo 30 years of research in six months. By analyzing the genetic expression of real rat neurons, the scientists could then start to integrate these details into the model. They were able to construct a precise map of ion channels, figuring out which cell types had which kind of ion channel and in what density. This new knowledge was then plugged into Blue Brain, allowing the supercomputer to accurately simulate any neuron anywhere in the neocortical column. “The simulation is getting to the point,” Schürmann says, “where it gives us better results than an actual experiment. We get the same data, but with less noise and human error.” The model, in other words, has exceeded its own inputs. The virtual neurons are more real than reality.

A simulated neuron from a rat brain showing “spines”—tiny knobs protruding from the dendrites that will eventually form synapses with other neurons. Pyramidal cells such as these (so-called because of their triangular shape) comprise about 80 percent of cerebral cortex mass. Courtesy of BBP/EPFL

Every brain is made of the same basic parts. A sensory cell in a sea slug works just like a cortical neuron in a human brain. It relies on the same neurotransmitters and ion channels and enzymes. Evolution only innovates when it needs to, and the neuron is a perfect piece of design.

In theory, this meant that once the Blue Brain team created an accurate model of a single neuron, they could multiply it to get a three-dimensional slice of brain. But that was just theory. Nobody knew what would happen when the supercomputer began simulating thousands of brain cells at the same time. “We were all emotionally prepared for failure,” Markram says. “But I wasn’t so prepared for what actually happened.”

After assembling a three-dimensional model of 10,000 virtual neurons, the scientists began feeding the simulation electrical impulses, which were designed to replicate the currents constantly rippling through a real rat brain. Because the model focused on one particular kind of neural circuit—a neocortical column in the somatosensory cortex of a two-week-old rat—the scientists could feed the supercomputer the same sort of electrical stimulation that a newborn rat would actually experience.

It didn’t take long before the model reacted. After only a few electrical jolts, the artificial neural circuit began to act just like a real neural circuit. Clusters of connected neurons began to fire in close synchrony: the cells were wiring themselves together. Different cell types obeyed their genetic instructions. The scientists could see the cellular looms flash and then fade as the cells wove themselves into meaningful patterns. Dendrites reached out to each other, like branches looking for light. “This all happened on its own,” Markram says. “It was entirely spontaneous.” For the Blue Brain team, it was a thrilling breakthrough. After years of hard work, they were finally able to watch their make-believe brain develop, synapse by synapse. The microchips were turning themselves into a mind.

But then came the hard work. The model was just a first draft. And so the team began a painstaking editing process. By comparing the behavior of the virtual circuit with experimental studies of the rat brain, the scientists could test out the verisimilitude of their simulation. They constantly fact-checked the supercomputer, tweaking the software to make it more realistic. “People complain that Blue Brain must have so many free parameters,” Schürmann says. “They assume that we can just input whatever we want until the output looks good. But what they don’t understand is that we are very constrained by these experiments.” This is what makes the model so impressive: It manages to simulate a real neocortical column—a functional slice of mind—by simulating the particular details of our ion channels. Like a real brain, the behavior of Blue Brain naturally emerges from its molecular parts.

In fact, the model is so successful that its biggest restrictions are now technological. “We have already shown that the model can scale up,” Markram says. “What is holding us back now are the computers.” The numbers speak for themselves. Markram estimates that in order to accurately simulate the trillion synapses in the human brain, you’d need to be able to process about 500 petabytes of data (peta being a million billion, or 10 to the fifteenth power). That’s about 200 times more information than is stored on all of Google’s servers. (Given current technology, a machine capable of such power would be the size of several football fields.) Energy consumption is another huge problem. The human brain requires about 25 watts of electricity to operate. Markram estimates that simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion. But if computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he’ll be able to model a complete human brain on a single machine in ten years or less.
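
(For readers who want to check the arithmetic, here is a quick back-of-envelope in Python that uses only figures quoted in the article: roughly 10,000 neurons and 22.8 trillion operations per second for one column, and roughly 100 billion neurons in a human brain. Naive linear scaling ignores long-range connectivity, so treat the result as a floor.)

# Naive scaling from one neocortical column to a whole human brain,
# using only the figures quoted in the article.
column_neurons = 10_000
column_ops_per_s = 22.8e12      # ops/s needed for one column
brain_neurons = 100e9           # "100 billion cells" quoted earlier

scale = brain_neurons / column_neurons          # ~10 million columns' worth of neurons
brain_ops_per_s = column_ops_per_s * scale
print(f"{scale:.0e} x one column -> {brain_ops_per_s:.1e} ops/s")   # ~2.3e20 ops/s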

For now, however, the mind is still the ideal machine. Those intimidating black boxes from IBM in the basement are barely sufficient to model a thin slice of rat brain. The nervous system of an invertebrate exceeds the capabilities of the fastest supercomputer in the world. “If you’re interested in computing,” Schürmann says, “then I don’t see how you can’t be interested in the brain. We have so much to learn from natural selection. It’s really the ultimate engineer.”

An entire neocortical column lights up with electrical activity. Modeled on a two-week-old rodent brain, this 0.5 mm by 2 mm slice is the basic computational unit of the brain and contains about 10,000 neurons. This microcircuit is repeated millions of times across the rat cortex—and many times more in the brain of a human. Courtesy of BBP/EPFL; rendering by Visualbiotech

Neuroscience describes the brain from the outside. It sees us through the prism of the third person, so that we are nothing but three pounds of electrical flesh. The paradox, of course, is that we don’t experience our matter. Self-consciousness, at least when felt from the inside, feels like more than the sum of its cells. “We’ve got all these tools for studying the cortex,” Markram says. “But none of these methods allows us to see what makes the cortex so interesting, which is that it generates worlds. No matter how much I know about your brain, I still won’t be able to see what you see.”

Some philosophers, like Thomas Nagel, have argued that this divide between the physical facts of neuroscience and the reality of subjective experience represents an epistemological dead end. No matter how much we know about our neurons, we still won’t be able to explain how a twitch of ions in the frontal cortex becomes the Technicolor cinema of consciousness.

Markram takes these criticisms seriously. Nevertheless, he believes that Blue Brain is uniquely capable of transcending the limits of “conventional neuroscience,” breaking through the mind-body problem. According to Markram, the power of Blue Brain is that it can transform a metaphysical paradox into a technological problem. “There’s no reason why you can’t get inside Blue Brain,” Markram says. “Once we can model a brain, we should be able to model what every brain makes. We should be able to experience the experiences of another mind.”

When listening to Markram speculate, it’s easy to forget that the Blue Brain simulation is still just a single circuit, confined within a silent supercomputer. The machine is not yet alive. And yet Markram can be persuasive when he talks about his future plans. His ambitions are grounded in concrete steps. Once the team is able to model a complete rat brain—that should happen in the next two years—Markram will download the simulation into a robotic rat, so that the brain has a body. He’s already talking to a Japanese company about constructing the mechanical animal. “The only way to really know what the model is capable of is to give it legs,” he says. “If the robotic rat just bumps into walls, then we’ve got a problem.”

Installing Blue Brain in a robot will also allow it to develop like a real rat. The simulated cells will be shaped by their own sensations, constantly revising their connections based upon the rat’s experiences. “What you ultimately want,” Markram says, “is a robot that’s a little bit unpredictable, that doesn’t just do what we tell it to do.” His goal is to build a virtual animal—a rodent robot—with a mind of its own.

But the question remains: How do you know what the rat knows? How do you get inside its simulated cortex? This is where visualization becomes key. Markram wants to simulate what that brain experiences. It’s a typically audacious goal, a grand attempt to get around an ancient paradox. But if he can really find a way to see the brain from the inside, to traverse our inner space, then he will have given neuroscience an unprecedented window into the invisible. He will have taken the self and turned it into something we can see.

A close-up view of the rat neocortical column, rendered in three dimensions by a computer simulation. The large cell bodies (somas) can be seen branching into thick axons and forests of thinner dendrites. Courtesy of Dr. Pablo de Heras Ciechomski/Visualbiotech

Schürmann leads me across the campus to a large room tucked away in the engineering school. The windows are hermetically sealed; the air is warm and heavy with dust. A lone Silicon Graphics supercomputer, about the size of a large armoire, hums loudly in the center of the room. Schürmann opens the back of the computer to reveal a tangle of wires and cables, the knotted guts of the machine. This computer doesn’t simulate the brain, rather it translates the simulation into visual form. The vast data sets generated by the IBM supercomputer are rendered as short films, hallucinatory voyages into the deep spaces of the mind. Schürmann hands me a pair of 3-D glasses, dims the lights, and starts the digital projector. The music starts first, “The Blue Danube” by Strauss. The classical waltz is soon accompanied by the vivid image of an interneuron, its spindly limbs reaching through the air. The imaginary camera pans around the brain cell, revealing the subtle complexities of its form. “This is a random neuron plucked from the model,” Schürmann says. He then hits a few keys and the screen begins to fill with thousands of colorful cells. After a few seconds, the colors start to pulse across the network, as the virtual ions pass from neuron to neuron. I’m watching the supercomputer think.

Rendering cells is easy, at least for the supercomputer. It’s the transformation of those cells into experience that’s so hard. Still, Markram insists that it’s not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that’s just a matter of massive correlation—the supercomputer should be able to reverse the process. It should be able to take its map of the cortex and generate a movie of experience, a first person view of reality rooted in the details of the brain. As the philosopher David Chalmers likes to say, “Experience is information from the inside; physics is information from the outside.” By shuttling between these poles of being, the Blue Brain scientists hope to show that these different perspectives aren’t so different at all. With the right supercomputer, our lucid reality can be faked.

“There is nothing inherently mysterious about the mind or anything it makes,” Markram says. “Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don’t know why you wouldn’t be able to generate a conscious mind.” At moments like this, Markram takes on the deflating air of a magician exposing his own magic tricks. He seems to relish the idea of “debunking consciousness,” showing that it’s no more metaphysical than any other property of the mind. Consciousness is a binary code; the self is a loop of electricity. A ghost will emerge from the machine once the machine is built right.

And yet, Markram is candid about the possibility of failure. He knows that he has no idea what will happen once the Blue Brain is scaled up. “I think it will be just as interesting, perhaps even more interesting, if we can’t create a conscious computer,” Markram says. “Then the question will be: ‘What are we missing? Why is this not enough?’”

Niels Bohr once declared that the opposite of a profound truth is also a profound truth. This is the charmed predicament of the Blue Brain project. If the simulation is successful, if it can turn a stack of silicon microchips into a sentient being, then the epic problem of consciousness will have been solved. The soul will be stripped of its secrets; the mind will lose its mystery. However, if the project fails—if the software never generates a sense of self, or manages to solve the paradox of experience—then neuroscience may be forced to confront its stark limitations. Knowing everything about the brain will not be enough. The supercomputer will still be a mere machine. Nothing will have emerged from all of the information. We will remain what can’t be known.



Blue Brain

Edited by maestro949, 24 April 2009 - 10:20 AM.


#2 JohnDoe1234

  • Guest
  • 1,097 posts
  • 154
  • Location:US

Posted 04 March 2008 - 05:37 PM

Yeah, that is some pretty amazing stuff right there (22.8 trillion operations per second to model 1 cubic mm of brain...)

I hope they're learning a lot!


#3 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 04 March 2008 - 05:45 PM

After only a few electrical jolts, the artificial neural circuit began to act just like a real neural circuit. Clusters of connected neurons began to fire in close synchrony: the cells were wiring themselves together. Different cell types obeyed their genetic instructions. The scientists could see the cellular looms flash and then fade as the cells wove themselves into meaningful patterns. Dendrites reached out to each other, like branches looking for light. "This all happened on its own," Markram says. "It was entirely spontaneous." For the Blue Brain team, it was a thrilling breakthrough. After years of hard work, they were finally able to watch their make-believe brain develop, synapse by synapse. The microchips were turning themselves into a mind.


Impressive what they've accomplished in such a short timeframe.

"If we build this brain right, it will do everything"

dishes, laundry, your job. It will build a larger and more complicated version of itself that we can't understand. It's time for the technophobes to get scared, very scared. This could unfold much faster than anyone thinks ;)

#4 Liquidus

  • Guest
  • 446 posts
  • 2
  • Location:Earth

Posted 04 March 2008 - 06:04 PM

Isn't the blue brain project more or less the ultimate attempt to reverse engineer the structure of the human brain so that:

1) The brain is understood in its entirety, opening up doors never before thought possible

2) Refining the definition of consciousness

3) Having relevant knowledge to duplicate consciousness for AI purposes and beyond.

I've followed this project loosely, and I'm glad it's catching a bit of steam.

#5 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 04 March 2008 - 07:25 PM

Seed Magazine published an article on it... surprisingly, though i purchased the magazine, the whole article can be seen online for free (see link above)

I'm not sure if they're basing the assumption that the Project will talk back to them on the belief that if one replicates the brain exactly then the resulting system (if given the right variables) will result in consciousness. I realize they're trying to study consciousness by building a model, but the idea of attaining consciousness through simply modeling a system seems strange to me (especially when the medium is inorganic). Whether consciousness is nothing more than the resultant interactions within the brain is the question i'm wondering about. Can everything be simply described as complex mechanisms? I suppose that's the nature of science, but it's shocking that consciousness can be described in the same manner one explains away advanced theoretical physics problems.

Does anyone know if we have a theory using neurobiology as to the causes of consciousness?

maestro949 Edit: Removed duplicate link per poster's request.

Edited by maestro949, 04 March 2008 - 07:51 PM.


#6 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 04 March 2008 - 10:33 PM

This is a pretty inspiring piece.

I'm awed at the concept of a huge supercomputer being used to control a rat robot ;)

#7 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 05 March 2008 - 10:09 AM

Isn't the blue brain project more or less the ultimate attempt to reverse engineer the structure of the human brain so that:

1) The brain is understood in its entirety, opening up doors never before thought possible

2) Refining the definition of consciousness

3) Having relevant knowledge to duplicate consciousness for AI purposes and beyond.

I've followed this project loosely, and I'm glad it's catching a bit of steam.


Indeed it's a worthy read and an excellent example of one of the frontiers of science. I just found it humorous how the author felt the need to interject a smattering of frankenscience comments, which only scare people. And then there are the quips like "The soul will be stripped of its secrets", which will only elicit angry responses from the theists.

#8 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 05 March 2008 - 01:59 PM

They're welcome to advance their knowledge of the soul in their own way ... it's just a race!

#9 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 05 March 2008 - 07:00 PM

Shrugs... if people are afraid that the soul will be stripped of its secrets, then they have uncertainties in their religion/belief systems. If the soul, as per most beliefs, resides within another essence that isn't material, and consciousness is impacted by this essence, then they shouldn't fear any examination of consciousness; more importantly, they shouldn't intervene in any pursuits of science. Shouldn't we be happy that we have the tools to understand this concept, this phenomenon, rather than fearful? Regardless, I think consciousness, and by that i mean awareness, is a little bit trickier than assuming it will magically appear by just modeling it using a supercomputer, but that's just my personal opinion.

I'm sure you can ask it logical commands much the same as you would input commands into a computer, and it will announce the programmed response to any inquiry: using certain components to interpret the request into its language, processing the interpretation somewhere else, and then storing this as an experience to call upon in working memory when another is deemed similar. Yet who can say this isn't consciousness already? One needs a higher layer that will be observing as these processes are occurring. By this train of thought i wonder: is consciousness merely the observing of internal and external processes in order to draw upon them for further analysis, or is it something more? If the requirements are simply a talking machine that is aware (of processes), then i don't see the difficulty, and it will not be emulating consciousness in its entirety. If most of it is below the conscious level, then how do we know we are emulating it? If you're talking about belief systems, as determined by cognition and experiences, manipulating the computer's behavior (how it runs and develops algorithms), then that gets interesting, but someone can say a computer is already doing that (minus developing algorithms by itself).

I suppose we'll see the progress over the next 10 years and make our opinions then. But I'm sure we'll learn a lot in the process.

Thanks Maestro for editing the comment ;)

Edited by mysticpsi, 05 March 2008 - 07:31 PM.


#10 gashinshotan

  • Guest
  • 443 posts
  • -2

Posted 06 March 2008 - 01:41 AM

Seed Magazine published an article on it... surprisingly, though i purchased the magazine, the whole article can be seen online for free (see link above)

I'm not sure if they're basing the assumption that the Project will talk back to them in the belief that if one replicates exactly the brain then the resulting system (if given the right variables) will result in consciousness. I realize they're trying to study consciousness by building a model, but the ideal of attaining consciousness through simply modeling a system seems strange to me (especially when the medium is inorganic). Is consciousness nothing more that the resultant interactions within the brain is the question i'm wondering. Can everything be simply described as complex mechanisms? I suppose that's the nature of science, but its shocking that consciousness can be described in the same manner one explains away advanced theoretical physics problems.

Does anyone know if we have a theory using neurobiology as to the causes of consciousness?


DUDE NEUROBIOLOGY IS THE CAUSE OF CONSCIOUSNESS. PERIOD.

This has been proven by thousands of studies already and can easily be proven to yourself by chugging a beer or smoking a cig - anything you do physically that changes your mood only indicates that consciousness is physical. We can cut out every single emotion, feeling, desire, etc etc from the brain with lobotomies and drug use. So how can consciousness be anything but physical?

I'M TIRED OF RETARDS CLAIMING CONSCIOUSNESS IS ANYTHING BUT PHYSICAL. PLEASE STOP.

maestro949 edit: reduced unnecessarily large font size

Edited by maestro949, 06 March 2008 - 05:29 PM.


#11 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 06 March 2008 - 03:18 PM

Seed Magazine published an article on it... surprisingly, though i purchased the magazine, the whole article can be seen online for free (see link above)

I'm not sure if they're basing the assumption that the Project will talk back to them in the belief that if one replicates exactly the brain then the resulting system (if given the right variables) will result in consciousness. I realize they're trying to study consciousness by building a model, but the ideal of attaining consciousness through simply modeling a system seems strange to me (especially when the medium is inorganic). Is consciousness nothing more that the resultant interactions within the brain is the question i'm wondering. Can everything be simply described as complex mechanisms? I suppose that's the nature of science, but its shocking that consciousness can be described in the same manner one explains away advanced theoretical physics problems.

Does anyone know if we have a theory using neurobiology as to the causes of consciousness?

maestro949 Edit: Removed duplicate link per poster's request.

DUDE NEUROBIOLOGY IS THE CAUSE OF CONSCIOUSNESS. PERIOD.

This has been proven by thousands of studies already and can easily be proven to yourself by chugging a beer or smoking a cig - anything you do physically that changes your mood only indicates that consciousness is physical. We can cut out every single emotion, feeling, desire, etc etc from the brain with lobotomies and drug use. So how can consciousness be anything but physical?

I'M TIRED OF RETARDS CLAIMING CONSCIOUSNESS IS ANYTHING BUT PHYSICAL. PLEASE STOP.


Dude, first i wasn't saying anything otherwise; i was saying that simply designing the brain in a frame other than biological might not result in consciousness, simply expressing my doubts about an electronic reproduction of the brain. I'm not saying this with certainty, i just think that electronics don't have the same plasticity as the organic material within the brain. And when i asked for the neurobiological theory of consciousness, i wasn't saying that it doesn't exist; i was saying i want to read it for knowledge of what we've attained so far in that area, because it interests me, and i would be grateful to anyone that knows anything about it. I have stated the same thing you stated in your later paragraph as proof of this, and it's probably the most compelling argument for consciousness being a result of biology. When i say it's shocking, i'm simply amazed that we've gotten this far, and it's a strange sensation to know that the consciousness we experience daily, which is so emotionally charged and awe filled, can be attributed to organic mechanics.

Perhaps you mistook me when i said most beliefs consist of the soul being in another essence; i was stating that most religious beliefs place the soul in a non-material essence, and i made a point of typing "they" to show it's not my own belief. For "they" does not include the individual by whom the word is spoken.

shrugs, if people are afraid that the soul will be stripped away its secrets then they have uncertainties in their religion/belief systems, if the soul, as per most beliefs, resides within another essence that isn't material, and consciousness is impacted by this essence, then they shouldn't fear any examination of consciousness, more importantly they shouldn't intervene in any pursuits of science. Shouldn't we be happy that we have the tools to understand this concept, this phenomenon, rather than fearful. Regardless, I think consciousness, and by that i mean awareness, is a little bit tricker than assuming it will magically appear by just modeling it using a supercomputer, but that's just my personal opinion.

...


Now it's very likely you read a couple of my sentences and misinterpreted them, since what i was saying seemed to come out of ignorance and brought annoyance, in which case i can't blame you, since most people will take on beliefs that don't adhere to any reason or experience out of a desire for comfort. But can you blame them for attempting to bring something eternal to a mortal existence? Plus the argument was weak, since even Descartes assumed a priori that the pineal gland was the seat of the soul, which would allow the biological and some eternal non-material essence to interact. Of course pure dualism is silly, but if you want to degrade a popular thought pattern you'll have to use a better argument.

So please do not be so quick to be critical, especially in such a big font; it's rather bothersome.

Edited by mysticpsi, 06 March 2008 - 04:07 PM.


#12 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 07 March 2008 - 06:50 AM

and it's a strange sensation to know that the consciousness we experience daily, which is so emotionally charged and awe filled, can be attributed to organic mechanics.


It sure is.. and that in itself is awe inspiring... that a collection of neurons and/or patterns of computational information processing can give rise to the emergent phenomenon of consciousness.. but then... we've known the brain is responsible for consciousness for many a year now.. I think the sheer recursivity of looking at ourselves looking at ourselves is what brings us that sense of awe and that the ephemeral attention fovea that is the focusing of awareness we experience as "I" can get dizzy when it turns itself inward only to realize it's not really there without the energy coursing through its substrate like the charging of an electrical distribution grid..

Truly amazing that "I" exist at the whim of a neural net..

KP

#13 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 08 March 2008 - 02:56 AM

and it's a strange sensation to know that the consciousness we experience daily, which is so emotionally charged and awe filled, can be attributed to organic mechanics.


It sure is.. and that in itself is awe inspiring... that a collection of neurons and/or patterns of computational information processing can give rise to the emergent phenomenon of consciousness.. but then... we've known the brain is responsible for consciousness for many a year now.. I think the sheer recursivity of looking at ourselves looking at ourselves is what brings us that sense of awe and that the ephemeral attention fovea that is the focusing of awareness we experience as "I" can get dizzy when it turns itself inward only to realize its not really there without the energy coursing through its substrate like the charging of an electrical distribution grid..

Truly amazing that "I" exist at the whim of a neural net..

KP


I'm glad someone shares my awe :) Just imagining that this selected experience that i am analyzing and storing is something that is more internally based than it is external, that reality itself is a portrait of the internal mechanics, is sheer brilliance. I think these areas of science are more brilliant and awe filled than any "theory" of creation based on religion. It shows the genius that went into, and goes into, every second of every day.

As far as the Blue Brain, i still don't understand why they believe creating a duplicate without the same material will result in consciousness. Sure it will allow you to better understand the neuro mechanics but it won't result in anything else in my opinion, since i think it's missing a language.

Edited by mysticpsi, 08 March 2008 - 02:58 AM.


#14 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 08 March 2008 - 04:03 AM

I'm glad someone shares my awe :) Just imagining that this selected experience that i am analyzing and storing is something that is more internally based then it is external, that reality itself is a portrait of the internal mechanics is sheer brilliance. I think these areas of science are more brilliant and awe filled than any "theory" of creation based on religion. It shows the genius that went into, and goes into, every second of every day.

Wow, I was just going to say that, but then I was listening to Richard Dawkins on the radio today...

As far as the Blue Brain, i still don't understand why they believe creating a duplicate without the same material will result in consciousness. Sure it will allow you to better understand the neuro mechanics but it won't result in anything else in my opinion, since i think it's missing a language.

At current levels of computational power, I suppose it would have the "consciousness" of an insect or so. There is no reason for a simulation of a brain not to have consciousness. It's sort of like a unix emulator running on top of another OS... Would it seem more plausible if instead of a simulation running on a general purpose computer, they built an electronic device with the same interconnectedness and signalling behavior as a brain? Or if instead of implementing it in silicon, they used wet chemistry? In the end, it's a bunch of wires and switches... that can share your awe.

Although a simulated brain may be missing a language, there's no reason it couldn't learn one. But then you'd have those "Noooo, don't power me down!" moments when it was time for PM.

#15 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 08 March 2008 - 08:42 AM

This got me thinking...

Instead of building a giant supercomputer to do this, could we simply build a distributed recurrent neural net using internet protocols? From there we could train it to help us with biological complexity, e.g. teach it the known proteomic and metabolic pathway datasets. It might be slow and fairly useless at first, but as it grew and the more we taught it, it would improve its ability to predict. I googled around a bit and, surprisingly, couldn't find any distributed AI projects. Perhaps there is something inherently difficult with a distributed neural net? Is there an ai@home in the future?

#16 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 09 March 2008 - 06:00 AM

This got me thinking...

Instead of building giant a supercomputer to do this, could we simply build a distributed recurrent neural net using internet protocols? From there we could train it to help us with biological complexity, e.g. teach it the known proteomic and metabolic pathway datasets. It might be slow and fairly useless at first but as it grew and the more we taught it, it would improve it's ability to predict. I googled around a bit and surprisingly, couldn't find any distributed AI projects. Perhaps there is something inherently difficult with a distributed neural net? Is there an ai@home in the future?

The lack of such a thing probably has something to do with the insanely slow intercommunication rate. Protein folding via molecular dynamics is kind of unique in that the ratio of crunching on the node to intercommunication is huge.

#17 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 09 March 2008 - 11:43 AM

The lack of such a thing probably has something to do with the insanely slow intercommunication rate. Protein folding via molecular dynamics is kind of unique in that the ratio of crunching on the node to intercommunication is huge.


That's what I was thinking. In protein folding they break the problem up into many small pieces. I don't see how you could do that with neuron simulation. Your simulations would have to just run very slowly. Perhaps a task more appropriate for Internet2.
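
(A rough sketch of the latency argument in Python; every number here is an assumption for illustration, not a measurement. A tightly coupled neural simulation has to exchange spikes every time step, so over the open internet each step costs at least one round trip, whereas a folding-style work unit crunches for hours between exchanges.)

# All numbers below are assumptions chosen only to illustrate the ratio.
dt_biological_ms = 0.1               # simulation time step (assumed)
internet_rtt_ms = 50.0               # home-broadband round-trip latency (assumed)
folding_work_per_exchange_s = 3600.0  # a folding@home-style work unit (assumed)

# Latency-bound slowdown for a neural net: at least one round trip per time step.
slowdown = internet_rtt_ms / dt_biological_ms
print(f"neural net over the internet: >= {slowdown:.0f}x slower than real time")

# A folding-style workload barely notices the same latency, because each node
# computes for hours between exchanges.
overhead = (internet_rtt_ms / 1000.0) / folding_work_per_exchange_s
print(f"folding-style workload: {overhead:.1e} fraction of time lost to latency")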

#18 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 09 March 2008 - 05:37 PM

Markram has no current plans for a distributed computing project. That said, the idea is totally viable, but would probably be slower than a supercomputer.

Though the transient nature of PC contributions could well be used to mimic "tip of the tongue" phenomena. :|o The AI wouldn't be able to 'remember' a specific detail until someone's PC spit out the answer.

#19 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 09 March 2008 - 06:17 PM

Markram has no current plans for a distributed computing project. That said, the idea is totally viable, but would probably be slower than a supercomputer.


Perhaps more suitable once a quantity of neurons is preferred over the speed of their interactions...

Though the transient nature of PC contributions could well be used to mimick "tip of the tongue" phenomena. :|o The AI wouldn't be able to 'remember' a specific detail until someone's PC spit out the answer.


Heh, this raises the problem of PCs going online and offline :)

Clearly some redundancy would be needed.

#20 PWAIN

  • Guest
  • 1,288 posts
  • 241
  • Location:Melbourne

Posted 11 March 2008 - 04:01 AM

I must admit, I had not heard of Blue Brain but from what I have read so far, it certainly looks like an excellent piece of work with a decent chance of success.

I think that this is the closest thing I have read about, to creating the singularity. If anything happening around the world today has a chance, this has got to be it.

I am really excited about this and have been thinking about it for days. There is still a long way to go but it certainly looks like something that will reach its ultimate goal. Ultimately, once they achieve human level, then they can go beyond, making the 'brain' bigger or faster or both. I foresee some real ethical concerns when a human level of intelligence is created - religious types are likely to attack this.

As for a distributed version, I wonder about the reasons behind the decision not to go for this at the moment. It may simply be due to a lack of resources and not wanting to waste too much energy in this direction. The question is then whether they would be willing to let an outside team have access to their code and data to develop an open source version for people to run. I guess that depends on whether they see this as a way to make money or are more interested in the pure research.

Another obstacle to a distributed version could be IBM, who are currently providing the hardware and may not want competition - the Blue Brain project gives them a certain air of scientific validity.

Whatever happens, I really hope that this project continues and achieves its goals. I believe that it puts us on the path to the singularity - which is the first time I have felt that way.

#21 PWAIN

  • Guest
  • 1,288 posts
  • 241
  • Location:Melbourne

Posted 11 March 2008 - 04:25 AM

A look at

http://fah-web.stanf...y?qtype=osstats

shows that 1323 TFlops can be achieved with a distributed system. I imagine it would be quite a bit more if Blue Brain were distributed, as it is a really really cool project and a lot of people would want to be a part of it. Use of the PS3 seems to have had the most impact, and if other hardware can be tapped (Xbox?), then even better.

#22 forever freedom

  • Topic Starter
  • Guest
  • 2,362 posts
  • 67

Posted 11 March 2008 - 05:03 PM

There's also this approach to creating AI through Second Life, which is also very interesting and maybe promising.

#23 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 11 March 2008 - 06:01 PM

There's also this approach to creating AI through Second Life, which is also very interesting and maybe promising.


Whoa. Did anyone else just click that and surprisingly not get an article about Novamente?

#24 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 11 March 2008 - 07:30 PM

So who here is going to spearhead the research effort to build an open source distributed AI? I want a feasibility study on my desk by March 31. :|o

#25 PWAIN

  • Guest
  • 1,288 posts
  • 241
  • Location:Melbourne

Posted 13 March 2008 - 02:01 AM

So who here is going to spearhead the research effort to build an open source distributed AI? I want a feasibility study on my desk by March 31. :p


It would be fairly useless unless the basic data and program structure are available from the guys at the Blue Brain project. It comes down to whether they are willing to release that information. Starting from scratch would be too hard for an open source effort IMO.

#26 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 13 March 2008 - 07:08 AM

There's also this approach to creating AI through Second Life, which is also very interesting and maybe promising.


This character will then collect data on your personal beliefs, likes and dislikes, favorite books and movies. After a large collection of data is stored, it will analyze it using psychological evaluating algorithms to maintain a better understanding of you. Then after enough information is compiled, a variety of corporations will send to your email inbox incredibly alluring advertisements that will be so convincing you might not be able to refuse them and go bankrupt if it wasn't for your junk filter... the day is saved... thanks to Captain Planet...


I probably shouldn't be so cynical :p... for those searching for that quote, it's not in there...

It's great though that they're pursuing it and also thanks for the link, quite enjoyable to know people are pursuing this and we'll be able to see it in action rather than rely on the words.

Regardless, I'm going to maintain my position that consciousness is difficult to emulate if not impossible, and will hold this opinion until i see it happening, though this is primarily dependent on what your definition of consciousness and general artificial intelligence is.

#27 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 13 March 2008 - 03:30 PM

So who here is going to spearhead the research effort to build an open source distributed AI? I want a feasibility study on my desk by March 31. :p


It would be fairly useless unless the basic data and program structure are available from the guys at the Blue Brain project. It comes down to whether they are willing to release that information. Starting from scratch would be too hard for an open source effort IMO.


I don't think we'd want to follow their architecture, as they're using shared memory, which will not work in a distributed model. The brain doesn't have a central memory bank but rather distributes its memory across the network. They also use time-dependent algorithms, which we'd want to avoid.
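
(To make the point about time-dependent algorithms concrete: the usual alternative to advancing every neuron on a fixed global clock is an event-driven scheme, where spikes are queued as timestamped events and a node only does work when an event arrives. A minimal sketch with a made-up two-neuron loop and made-up delays:)

import heapq

# Event-driven update loop: no global time step, just timestamped spike events.
events = []                                   # (delivery_time_ms, target_neuron)
heapq.heappush(events, (0.0, 0))              # seed: neuron 0 receives an event at t=0

delay_ms = {0: 1.5, 1: 2.0}                   # axonal delay from each neuron (made up)
target = {0: 1, 1: 0}                         # who each neuron projects to (made up)
last_spike = {0: -1e9, 1: -1e9}

t_end = 20.0
while events:
    t, nrn = heapq.heappop(events)
    if t > t_end:
        break
    # crude refractory rule: a neuron fires when an event reaches it,
    # unless it already spiked within the last 1 ms
    if t - last_spike[nrn] > 1.0:
        last_spike[nrn] = t
        heapq.heappush(events, (t + delay_ms[nrn], target[nrn]))
        print(f"t={t:4.1f} ms  neuron {nrn} fires")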

Edited by maestro949, 13 March 2008 - 03:31 PM.


#28 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 02 April 2008 - 09:17 PM

Perhaps someone can explain this to me... but i thought the main way that we program AI is through creatively designing code that takes into account whatever navigation would work best when handling a problem.

The only way that I myself (and you'll have to excuse my naivety in this issue, since i haven't yet designed very complex code) would use something like a simulation (or rather a real observation) of neural networks when faced with a real-world problem is to understand the dynamics and try to reflect them in a computer language that will mimic that behavior, if it is found to be better than the other options.

For instance, linguistics in dealing with text processing, or depth perception (cues) in dealing with real world objects and navigation... Our technology is a reflection of this method.

What i'm basically saying is: where's the software component (rather, the interpretation, the higher layer) in all of this? Simply designing a piece of hardware that mimics brain dynamics isn't capturing it in its entirety. Wouldn't the software aspect only be our best guess?

It could just be that Seed Magazine provided a very narrow and non-technical article when explaining the concepts behind the Blue Brain project...


Also Maestro949... where did you get that information about the central memory bank and the time-dependent algorithms...

Considering the precision required in setting up a series of neurons in harmony, what other option do you have but to have the algorithms as a function of time?

#29 PWAIN

  • Guest
  • 1,288 posts
  • 241
  • Location:Melbourne

Posted 03 April 2008 - 12:06 AM

Perhaps someone can explain this to me... but i thought the main way that we program AI is through creatively designing code that take into account whatever navigation would work best when handling a problem.

The only way that I myself, and you'll have to excuse my naiveness in this issue since i haven't yet designed very complex codes, would use something like a simulation (or rather a real observation) of neural networks when introduced with a real world problem, is understand the dynamics and try to reflect that into a computer language that will mimic that behavior if it is found to be better than the other options.

For instance, linguistics in dealing with text processing, or depth perception (cues) in dealing with real world objects and navigation... Our technology is a reflection of this method.

What i'm basically saying is where's the software component (rather the interpretation, higher layer) in all of this? Simply designing a piece of hardware that mimics brain dynamics isn't capturing it in its entireties. Wouldn't the software aspect only be our best guesses?


I am trying to make out what you are trying to say. Are you discussing interfacing? Is your question how we will communicate with the simulation, how it will see, etc.?

Or are you saying that the simulation should have some type of coded routines to control things? I just can't quite see what you are getting at. Statements like:

"take into account whatever navigation would work best when handling a problem"

Are cryptic to me - please elaborate.


#30 mentatpsi

  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 03 April 2008 - 03:03 AM

Statements like:
"take into account whatever navigation would work best when handling a problem"

Are cryptic to me - please elaborate.


my apologies....

let me take the sentence more into context:

"but i thought the main way that we program AI is through creatively designing code that take into account whatever navigation would work best when handling a problem"


This point isn't relevant for the discussion; i seem to have missed the part where they intend to have the Blue Brain's neural circuitry be the full AI (programmed in the hardware)...

but if you're still interested in what i meant:
when designing code, the overall foundation is an algorithm designed to maneuver through data and interpret it... this maneuvering is what i called navigation.

For instance, a simple example that i recently designed in Python is a conversion of a table, pasted and formatted as a txt file, into an output containing if-then commands that could then be made into a program through a write-file type function. The original purpose was to design this for a TI-83 program, but i later found the method to be obsolete. Moving through the rows and columns and converting them into text containing if-then commands is to me navigation, because you need to tell the program how to navigate through data.
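
(A hypothetical reconstruction of the kind of script described above, for anyone curious; the file names, column meanings, and the exact if-then syntax are all made up, since the original code wasn't posted. It reads a whitespace-separated two-column table from a text file and writes one if-then line per row.)

# Hypothetical sketch: convert a two-column text table into if-then commands.
def table_to_if_then(in_path="table.txt", out_path="program.txt"):
    lines = []
    with open(in_path) as f:
        header = f.readline().split()          # e.g. ["X", "Y"]
        for row in f:
            cells = row.split()
            if len(cells) < 2:
                continue                        # skip blank or short lines
            x, y = cells[0], cells[1]
            lines.append(f"If {header[0]}={x}:Then:{header[1]}={y}:End")
    with open(out_path, "w") as out:
        out.write("\n".join(lines) + "\n")

# example usage (assuming "table.txt" exists alongside the script):
# table_to_if_then("table.txt", "program.txt")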

I am trying to make out what you are trying to say. Are you discussing interfacing? Is you question how will we communicate with the simulation, how will it see etc?

Or are you saying that the simulation should have some type of coded routines to control things?


I’m assuming you’re speaking of this:

What i'm basically saying is where's the software component (rather the interpretation, higher layer) in all of this? Simply designing a piece of hardware that mimics brain dynamics isn't capturing it in its entireties. Wouldn't the software aspect only be our best guesses?


You can consider interfacing as the general gist of my question... and when i say this i mean that interfacing would lend more validity to their major goal of designing a consciousness...

i have already answered my question as I was replying to you. The main issue i had wasn't interfacing with humans... but how the hardware was going to cause thought, which i have already answered in my own (erased) reply... my main argument was this Rosetta stone from an exact biological state to an exact thought... i thought of it as a crucial component towards designing consciousness, but if they're just going to mimic another brain (as long as it's already developed) then they really don't need this in order to get a consciousness up and running... what they need is an insane amount of funding....

Edited by mysticpsi, 03 April 2008 - 03:07 AM.




