Artificial brain '10 years away'



#31 exapted

  • Guest
  • 168 posts
  • 0
  • Location:Minneapolis, MN

Posted 26 September 2009 - 08:30 PM

By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap

See pages 79-81. They say that a "Manhattan Project" spending a billion USD (which seems a bit low to me) could achieve the computational capacity to emulate an individual brain at the level of electrophysiological models of cells by 2014. Then we would still have to consider scanning and image processing, the other bottlenecks. Maybe computational capacity will not be the bottleneck at all, because we might find that we can improve on the computational efficiency of the human brain.


I know this doesn't exactly count as a "Manhattan Project", but the amount of money going into neuroscience, brain modeling, computers/software, AGI/AI, networking, robotics, narrow AI, etc. has got to be way more than a billion every year. World GDP back in 2007 was 54 trillion, and I would guess at least a trillion of that goes into AI-related fields and technologies.

Good point. I was basically just interpreting the predictions made in the paper.

But maybe, if a few billion per year were spent on a supercomputer built specifically for whole brain emulation, the architecture would be tailored to that application and would be years ahead of any other supercomputer for it.

#32 okok

  • Guest
  • 340 posts
  • 239

Posted 28 September 2009 - 02:28 AM

I think that it's silly to try to build an ultra-giga-supercomputer to run what is surely a grotesquely inefficient simulation. It would be better to figure out the appropriate abstractions and/or use custom hardware to emulate low-level parts of the brain.


Exactly. So I'm not at all sure that building an AGI necessarily goes by way of full-scale reverse-engineering of the brain, as Henry Markram implies, taking the high ground a bit in distinguishing Blue Brain from the AI approach: (original article)

Question 12: Have you collaborated with any members of the AI community? Is your project affecting the AI field?

HM: No, Blue Brain adopts a philosophy that is pretty much 180 degrees opposite to the philosophy in AI. In my view, AI is an extreme form of engineering and applied math where you try to come up with a God formula to create magical human powers. If you want to go into AI, I think you have to realize you are making the assumption that your formula will have to capture 11 billion years of evolutionary intelligence. In most cases, AI researchers do not even know what a neuron is, let alone how the brain works, but then they don't need to because they are searching for something else. I don't blame them for trying because, if you want to build clever devices today, it is much easier to ignore the brain - it is just too complex to harvest the technology of the brain. Look at speech recognition today – the best ones out there don't use neural principles. Having said that, we all know how inadequate the current devices are and that is just because AI can't even come close to what the brain can do. Blue Brain is not trying to build clever devices, it is a biological project that will reveal systematically the secret formulas operating, but Blue Brain models and simpler derivative models will gradually replace all of AI.


I don't know how contrived their examples are, but the demonstrations by smartaction look promising. On the founding company's site they explain why they're confident the AI winter is ending. At the very least it shows there is progress in simulating cognitive traits - though nothing is said about qualia and emotions.
These are all separate traits, which sometimes fail to be treated as such, maybe because evolution produced an ad hoc system where motor function, affect and cognition are closely tied together. So obtaining computable cognition is more a question of finding the right algorithms - be it by simulating giant wetware or by using it. Where I see computing power still being a limit for AGI purposes is in simulating a functioning ontology. What do you think?

Edited by okok, 28 September 2009 - 02:32 AM.



#33 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 28 September 2009 - 04:47 AM

If you want to go into AI, I think you have to realize you are making the assumption that your formula will have to capture 11 billion years of evolutionary intelligence.

I think the first 9 billion, if not the first 10.5 billion, can be safely ignored, or at least subsumed into the general topic of "chemistry".

#34 Esoparagon

  • Guest
  • 227 posts
  • 32
  • Location:Australia

Posted 28 September 2009 - 07:24 AM

People need to keep two things in mind:

1) Accelerating rates of return
2) The Human Genome Project

If the rate of progress for mapping the human genome had continued only at the rate of the first 5 years, it probably wouldn't even be finished yet, but that's not how it works. Most of the mapping was done in the final year or two.

Technology in 10 years will not be the same as it is now, neither will scientific understanding. I personally think it'll take longer than 10 years, but there is a very good possibility that it could happen a lot sooner than we imagine with our linear thinking minds.

#35 exapted

  • Guest
  • 168 posts
  • 0
  • Location:Minneapolis, MN

Posted 29 September 2009 - 04:47 AM

People need to keep two things in mind:

1) Accelerating rates of return
2) The Human Genome Project

If the rate of progress for mapping the human genome had continued only at the rate of the first 5 years, it probably wouldn't even be finished yet, but that's not how it works. Most of the mapping was done in the final year or two.

The Human Genome Project is a fantastic example. They achieved exponential growth by starting the work and continuously improving their methods, and something similar could happen for whole brain emulation. The Whole Brain Emulation roadmap outlines many of the fields that need to be involved, the research cycle, and so on. What's interesting is that certain key fields involved in whole brain emulation are bottlenecks, and there are many different kinds of improvements that could remove some of them: improving the efficiency of the algorithms, finding abstractions in the operation of the brain, building a better scanner, etc. At a certain point the goal of whole brain emulation might become widely known and supported/loved/hated/etc., and at that point people working in various fields would consider their objectives to be part of the larger goal of whole brain emulation.

Technology in 10 years will not be the same as it is now, neither will scientific understanding. I personally think it'll take longer than 10 years, but there is a very good possibility that it could happen a lot sooner than we imagine with our linear thinking minds.

I'm curious whether there is any evidence at all that humans think "linearly". It might be more accurate to say that people have been brainwashed into believing in some "essential" natural divisions: people, machines, medicine, consciousness, life, matter, fire, air, water, etc. Actually there is evidence that people think hyperbolically about the future (that is, they evaluate the future hyperbolically, discounting future benefit according to a hyperbolic function of time), which suggests that people are hardwired to focus on the present and the distant future, kind of like a singularitarian who procrastinates a lot but starts a lot of projects, hoping for a big payoff at the end. It's not always a bad thing I guess, but to me it means there needs to be a focus on how we can tie the immediate future to the distant future in the medium term. I think the Human Genome Project is a perfect example of this, and I think it is important how it's all organized.
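
To make the contrast concrete, here is a minimal Python sketch comparing hyperbolic discounting (the Mazur-style form V = A / (1 + kD)) with ordinary exponential discounting; the discount rates k and r are illustrative assumptions, not values from any study.

# Compare hyperbolic vs. exponential discounting of a future reward.
# The discount rates are illustrative assumptions, not empirical values.
import math

def hyperbolic(value, delay, k=0.1):
    # Mazur-style hyperbolic discounting: V = A / (1 + k * D)
    return value / (1 + k * delay)

def exponential(value, delay, r=0.1):
    # Standard exponential discounting: V = A * exp(-r * D)
    return value * math.exp(-r * delay)

reward = 100.0  # some future benefit, in arbitrary units
for delay in (0, 1, 5, 10, 50, 100):  # delay in years
    print(f"delay {delay:>3} yr: hyperbolic {hyperbolic(reward, delay):7.2f}  "
          f"exponential {exponential(reward, delay):7.2f}")

The hyperbolic curve drops steeply at first and then flattens, so very distant payoffs keep a surprisingly large share of their value relative to the exponential case - which is one way to read the "present plus distant future" pattern described above.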

Regarding "exponential thinking" vs. "linear thinking", I think there is a common sense meaning of "linear thinking", and yes most people think that way about history. But I think it just means that most people simply don't believe that their "paradigms" will be "shifted" so dramatically, and that has very little to do with quantitative progression. Tell some people on the street that the number of calculations performed by processors will continue to increase exponentially for a long time, eventually outstripping biological intelligence, and they might say "oh that makes sense, see you tomorrow". Tell them the social implications, and they will be a lot less likely to accept it.

Edited by exapted, 29 September 2009 - 05:45 AM.


#36 EmbraceUnity

  • Guest
  • 1,018 posts
  • 99
  • Location:USA

Posted 08 October 2009 - 01:34 AM

exapted,

You are spot on about software bloat, miniaturization, new computing paradigms, etc. Another factor is energy efficiency. While you may be right that Moore's law could be outpaced, there are a number of metrics by which all this must be judged. Clearly we can overclock current CPUs far higher than older ones - some can go over 6 GHz in overclocking competitions - but the power consumption and wear on the chips are enormous. So trends based on simple price/performance ratios that don't factor in durability and energy efficiency are dubious.
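
A rough Python sketch of why a raw clock-speed or price/performance trend can mislead: dynamic CMOS power scales roughly as C * V^2 * f, so an overclock that also needs a voltage bump costs efficiency. The capacitance, voltage, and frequency numbers below are made-up illustrations, not measurements of any real chip.

# Rough performance-per-watt comparison under overclocking.
# Dynamic CMOS power is roughly P = C * V^2 * f; all numbers below are
# illustrative assumptions, not measurements of a real chip.

def perf_per_watt(freq_hz, voltage, capacitance=1e-9):
    power = capacitance * voltage**2 * freq_hz   # dynamic power estimate
    return freq_hz / power                       # cycles delivered per joule

stock = perf_per_watt(freq_hz=3.0e9, voltage=1.2)
overclocked = perf_per_watt(freq_hz=6.0e9, voltage=1.5)

print(f"stock       : {stock:.3e} cycles/joule")
print(f"overclocked : {overclocked:.3e} cycles/joule")
print(f"ratio       : {overclocked / stock:.2f}x")  # below 1: efficiency drops

Doubling the clock while raising the voltage gives more raw speed but fewer cycles per joule, which is exactly the trade-off a simple price/performance curve hides.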

Software bloat seems to have been decreasing as a result of Linux and Mac putting pressure on Windows to strip down, and we are seeing some cool new compilers like Clang and new GPGPU frameworks like OpenCL. Yet games are just as demanding as ever, if not more so. With ray tracing, the sky is the limit. Even the new Larrabee chip in development, with its 32 cores, is only barely able to do real-time ray tracing. I have a feeling that even as the "bloat" goes away, it will likely be more than compensated for by new features that will quickly be seen as essential: higher-definition video, stereoscopic 3D, haptics, you name it.

As for new computing paradigms, there are a number of promising avenues. Heterogeneous computing on a single chip, using different specialized cores for specific tasks, could provide immense performance improvements. Another promising approach is using biological circuits that can reconfigure themselves on demand based on new designs that have been downloaded or developed through genetic algorithms.

Yet another exciting new computing paradigm is the memristor. We might see a unified memory based on this, which would be more efficient, denser, and faster than either RAM or HDDs. It may also be especially useful for certain niche applications and algorithms, though less specialized than quantum computers.

I don't think it really matters which motivation is the primary driver of advancement - just as the urge for thinner, brighter displays led to OLEDs, which also happened to be more efficient, flexible, and so on. Perhaps it is even good that scientists are tackling the issues from many fronts.

Of course, even with all of this, our knowledge of the brain, even from a reductionist standpoint, is pretty pitiful, and it is really hard to make any predictions because of this ignorance.

Furthermore, it should go without saying that even when we have a good idea of how the brain works, there really isn't any way of knowing whether the resulting intelligence would have qualia or would be a "p-zombie." Thus, we shouldn't expect to be able to upload ourselves, at least not onto any digital substrate. We would need to do it on another substrate that is homologous to our biological one, and perhaps nothing else is, in which case we would instead need to figure out how to improve our wetware. Artificial brains can be great research assistants, but the ethics around this whole issue are very murky and caution should be paramount.

Edited by progressive, 08 October 2009 - 01:44 AM.


#37 Lauren

  • Guest
  • 58 posts
  • 37
  • Location:Greensboro, NC

Posted 08 October 2009 - 09:05 PM

If they do make an artificial human brain, I hope they treat it as a person.


LOL

Although you do have a point there. Yeah, I'd have to agree that engineering an artificial brain within the next ten years is a complete impossibility at this technological juncture, and it makes Kurzweil's proposals seem almost realistic by comparison. The truth of the matter is that at present, neuroscientists simply do not KNOW enough about the parenchymal (functional) and stromal (structural) mechanisms in the brain, and how these systems coexist in symbiosis. And with every new discovery that neurogeneticists make, the science of neurology undergoes an almost kaleidoscopic paradigm shift, and preconceived theories about the human mind and current findings alike take one of two trajectories: they either undergo successive revision as more nuanced discoveries present themselves, or they are disproven altogether. I prognosticate that science will change so radically within the next 65-100 years that it will be almost unrecognizable to its forebears, retaining only a vestige of its present-day constitution.

It must equally be emphasized that although the pace of scientific progress has accelerated almost exponentially over the past century with the advent of multitudinous technological innovations, it will have to undergo a series of successive innovations spanning more than 60 years before we can even conceive of such a prospect as creating an artificial brain. It is a conceivable development, but the technology to engineer such an entity as a simulated human brain is still in its primitive stages. It will take at least half a century (and even that is a rather liberal estimate) before we can even approximate that stage of technological advancement, and perhaps even longer for Aubrey de Grey's speculations regarding negligible senescence to come to fruition. There is still a distinct possibility of these milestones happening within our lifetime, but such developments will have to stand the test of at least 60 years of scientific research and technological innovations before we can even conceptualize such developments within the domain of reality.

#38 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 09 October 2009 - 03:18 AM

... but such developments will have to stand the test of at least 60 years of scientific research and technological innovations before we can even conceptualize such developments within the domain of reality.

I agree that people here tend to underestimate the time needed for various technological developments, but how do you arrive at 60 years? Should the Singularity actually occur within this timeframe, it's almost definitional that "all bets are off" as far as extrapolative prediction is concerned.

#39 erzebet

  • Guest
  • 195 posts
  • 145
  • Location:Bucharest

Posted 17 December 2009 - 07:14 PM

I am not sure an artificial brain will be made in 10 years, but my view is that we have loads of data and too few theories about the brain, and we get lost in the details.

#40 boundlesslife

  • Life Member in cryostasis
  • 206 posts
  • 11

Posted 09 February 2010 - 01:44 AM


http://news.bbc.co.u...ogy/8164060.stm

There's a 139-page report on brain emulation that may fit with the other resources in this thread. Hope this is not redundant. Although it's several years old, much of the detail may still be useful, since much of what is conceived to be possible remains hypothetical for long periods of time.

Edited by boundlesslife, 09 February 2010 - 01:47 AM.


#41 DaffyDuck

  • Guest
  • 85 posts
  • 11

Posted 11 March 2010 - 05:15 PM

I don't think so. If I weren't so lazy, I'd be up for a long bet. The prediction is crazily optimistic, possibly even topping Ray's singularity prediction. Doesn't the Blue Brain project simulate something like 10k nerve cells? If we are Moore's-law and supercomputing optimists, we can say that the next ~10 years will yield a 1000-fold increase in computing power. To the best of my knowledge, 10k * 1k is not in any way close enough (even if we generously add the last years of progress, as the target was achieved earlier)...


The 10,000-neuron figure almost certainly no longer applies, since the project has upgraded from the Blue Gene/L to the Blue Gene/P (source 2).

I believe they went from 23 teraFLOPS to 560 teraFLOPS. I'm not sure how it scales, but maybe they went from 10,000 neurons to 240,000 neurons. Still a long way to go.
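
If we naively assume the number of simulated neurons scales in proportion to sustained FLOPS, the back-of-envelope in Python looks like this (the 23 and 560 teraFLOPS figures are the ones quoted above):

# Naive proportional scaling from the Blue Gene/L to Blue Gene/P figures above.
# Assumes simulated neuron count scales linearly with sustained FLOPS.
old_flops = 23e12      # ~23 teraFLOPS (Blue Gene/L)
new_flops = 560e12     # ~560 teraFLOPS (Blue Gene/P)
old_neurons = 10_000

scale = new_flops / old_flops
print(f"speedup : {scale:.1f}x")                      # ~24x
print(f"neurons : {old_neurons * scale:,.0f}")        # ~240,000
print(f"fraction of ~100 billion neurons: {old_neurons * scale / 100e9:.1e}")

So even the upgraded machine covers only a few millionths of a whole brain under this assumption.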

I agree that if the software remains as inefficient as it is now, the 10-year estimate is bogus. The only way I can see to reach that goal is chips designed especially for the purpose plus many improvements in the simulation code. Both are feasible given enough money, and I suspect that's what Henry is hinting at.

If I do some simple math:

10,000 neurons = 23*10^12 FLOPS

therefore...
1 neuron = 23*10^8 FLOPS

and for an entire brain...
100 billion neurons = 23*10^19 FLOPS

The IBM Blue Gene/Q to be installed in 2011 runs at 20 petaFLOPS or 20*10^15 FLOPS

Moore's law (Intel says it has at least 20 more years):
2011----20*10^15
2012.5--40*10^15
...
2032----33*10^19

So it should be possible to expand his current simulation to an entire brain by about 2031. But I'm sure specialized hardware and improved software could cut that time down considerably. The other big question is whether his simulation is good enough or whether more complexity will be required.
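
Here is a small Python sketch reproducing the arithmetic above: the per-neuron cost comes from the 23 teraFLOPS / 10,000 neuron figures, and the extrapolation assumes supercomputer FLOPS double every 18 months from the 20-petaFLOPS machine in 2011 (the same assumption as the table).

# Reproduce the back-of-envelope above: per-neuron FLOPS from the Blue Brain
# figures, then Moore's-law doubling (assumed: every 18 months) from a
# 20-petaFLOPS machine in 2011 until the whole-brain budget is reached.
flops_per_neuron = 23e12 / 10_000             # = 2.3e9 FLOPS per neuron
whole_brain_flops = 100e9 * flops_per_neuron  # ~2.3e20 FLOPS for 100e9 neurons

year, flops = 2011.0, 20e15
while flops < whole_brain_flops:
    year += 1.5
    flops *= 2

print(f"whole-brain budget : {whole_brain_flops:.1e} FLOPS")
print(f"reached by about   : {year:.0f} ({flops:.1e} FLOPS available)")

Stepping in whole 18-month doublings lands on 2032, the last row of the table; interpolating between doublings puts the crossover around 2031, consistent with the estimate above.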

Edited by DaffyDuck, 11 March 2010 - 05:38 PM.



#42 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 16 March 2010 - 01:35 PM

10 years will be somewhat impressive, 20 years will be depressing :/

They do seem to have quite a way to go, but hopefully we can advance in both software and hardware to overcome it faster, because we need it.



