  LongeCity
              Advocacy & Research for Unlimited Lifespans





Artificial brain '10 years away'


41 replies to this topic

#1 Reno

  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 22 July 2009 - 08:46 PM



http://news.bbc.co.u...ogy/8164060.stm

#2 ImmortalityFreedom

  • Guest
  • 60 posts
  • -16

Posted 22 July 2009 - 08:49 PM


http://news.bbc.co.u...ogy/8164060.stm


This is great news! I have to contribute to their funding, and I'll try to get more people to contribute as well!


#3 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 22 July 2009 - 09:12 PM

I'd love to find Markram's speech. I went to TED's website but it's nowhere to be found.

#4 kismet

  • Guest
  • 2,984 posts
  • 424
  • Location:Austria, Vienna

Posted 22 July 2009 - 10:37 PM

I don't think so. If I wasn't so lazy, I'd be up for a longbet. The prediction is crazily optimistic, possibly even topping Ray's singularity prediction. Doesn't the Blue Brain project simulate something like 10k nerve cells? If we are Moore and supercomputing optimists, we can say that the next ~10y will yield a 1000-fold increase in computing power. To the best of my knowledge 10k*1k is not in any way close enough (even if we generously add the last years of progress, as the target was achieved earlier)...

Edited by kismet, 22 July 2009 - 10:58 PM.


#5 cyborgdreamer

  • Guest
  • 735 posts
  • 204
  • Location:In the wrong universe

Posted 22 July 2009 - 11:00 PM

If they do make an artificial human brain, I hope they treat it as a person.

#6 okok

  • Guest
  • 340 posts
  • 239

Posted 22 July 2009 - 11:46 PM

Stole this link from another thread here. Highly interesting!

#7 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 23 July 2009 - 12:22 AM

If they do make an artificial human brain, I hope they treat it as a person.

Considering the way persons are treated, I suspect they will treat it better.

#8 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 23 July 2009 - 02:26 AM

10 years is indeed pushing it. But I don't care; we'll get there eventually, in the coming decades.

#9 Reno

  • Topic Starter
  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 23 July 2009 - 06:56 AM

It didn't take them long to develop the software to create these cells. If you watch the video posted above, they didn't just create one nerve cell; they created the foundation software for creating and changing that cell as research mounts. That means all that's really lacking is computing power, and we all know how fast computing power is increasing.

Edited by bobscrachy, 23 July 2009 - 06:56 AM.


#10 n25philly

  • Guest
  • 88 posts
  • 11
  • Location:Holland, PA

Posted 23 July 2009 - 04:59 PM

If memristors become viable, I would say that ten years is very possible.

#11 kismet

  • Guest
  • 2,984 posts
  • 424
  • Location:Austria, Vienna

Posted 23 July 2009 - 08:22 PM

That means all that's really lacking is computing power, and we all know how fast computing power is increasing.

Yes, not fast enough to meet this bold timeline (to put it mildly). Not even close. They'd probably fail even if they had Manhattan Project-type funds.

What hardware are they currently running on?

If they do make an artificial human brain, I hope they treat it as a person.

I hope not. Would be a pretty expensive human being, going into the megawatt or gigawatt hours.

Edited by kismet, 24 July 2009 - 01:08 PM.


#12 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 23 July 2009 - 09:50 PM

I think that BlueBrain is really cool research, but I'd also have to say that Markram is a bit of a P.T. Barnum.

#13 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 23 July 2009 - 09:56 PM

I think that BlueBrain is really cool research, but I'd also have to say that Markram is a bit of a P.T. Barnum.



I agree, but I can't blame him; all scientists need to make their projects look interesting to get their funding. BB really deserves lots of funding.

#14 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 23 July 2009 - 10:00 PM

I agree, but I can't blame him; all scientists need to make their projects look interesting to get their funding. BB really deserves lots of funding.

Yeah, I really want to see it funded too, so I do cut him some slack on the promotion. You're right, everyone fighting for funding has to self-promote to some extent. I just wish Markram wouldn't venture so far out into the speculative end of things, because that could come back to bite him, or could even cause problems with the field of brain simulation in general.

#15 Reno

  • Topic Starter
  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 24 July 2009 - 05:26 AM

Yes, not fast enough to meet this bold timeline (to put it mildly). Not even close. They'd probably fail even if they had Manhattan Project-type funds.

What hardware are they currently running on?


I hope not. Would be a pretty expensive human being, going into the megawatt or gigawatt hours.


I suppose the same could have been said about going to the moon in the '60s. "It won't happen no matter how much money they throw at it." "It's just a pipe dream." Blah blah blah.

Edited by bobscrachy, 24 July 2009 - 05:27 AM.


#16 kismet

  • Guest
  • 2,984 posts
  • 424
  • Location:Austria, Vienna

Posted 24 July 2009 - 01:13 PM

I suppose the same could have been said about going to the moon in the '60s. "It won't happen no matter how much money they throw at it." "It's just a pipe dream." Blah blah blah.

Look, going to the moon was unprecedented, so no one could extrapolate anything. This is simple mathematics; even the biggest tech optimists are fearing for Moore's law. And even assuming Moore's "law", and by extension the "law" that supercomputing power increases 1000-fold every decade (actually 11 years), keeps up, they'd be more than 10^3 to 10^4 times short of their goal: 10k neurons simulated in 2007, ~100 billion needed in 2019 if they want to keep their promise. It's pretty obvious that it would be incredibly difficult even if they had far more resources.

If you don't agree, why don't you just provide a calculation to refute mine? Words are cheap, after all.  ;) Then again, maybe they want to simulate a brain-dead person. That could work in 2019.  :|o So far it all boils down to unjustified Kurzweilian optimism.
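As an illustration, the back-of-the-envelope argument above can be written out in a few lines of Python. The 10k-neuron figure for 2007, the 1000-fold-per-decade supercomputing trend, and the assumption that simulated-neuron count scales linearly with computing power are all taken from this discussion, not from any authoritative source:

```python
# Back-of-the-envelope check of the scaling argument above.
# Assumptions (from the thread, not authoritative): ~10,000 neurons
# simulated in 2007, ~1000x supercomputing growth per decade, and
# simulated-neuron count scaling linearly with computing power.

neurons_2007 = 10_000            # neocortical column simulation, 2007
growth_per_decade = 1_000        # optimistic supercomputing trend
human_brain_neurons = 100e9      # ~100 billion neurons in a human brain

# 2007 -> 2019 is 1.2 decades, so apply 1000^1.2 ~= 3981x growth.
neurons_2019 = neurons_2007 * growth_per_decade ** 1.2

shortfall = human_brain_neurons / neurons_2019
print(f"Projected neurons simulable in 2019: {neurons_2019:.2e}")
print(f"Shortfall vs. a whole brain: ~{shortfall:.0f}x")
```

Under these assumptions the 2019 projection lands in the tens of millions of neurons, a shortfall of roughly 10^3, consistent with the "more than 10^3 to 10^4 times short" claim.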

Edited by kismet, 24 July 2009 - 01:18 PM.


#17 Reno

  • Topic Starter
  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 24 July 2009 - 07:51 PM

Look, going to the moon was unprecedented, so no one could extrapolate anything. This is simple mathematics; even the biggest tech optimists are fearing for Moore's law. And even assuming Moore's "law", and by extension the "law" that supercomputing power increases 1000-fold every decade (actually 11 years), keeps up, they'd be more than 10^3 to 10^4 times short of their goal: 10k neurons simulated in 2007, ~100 billion needed in 2019 if they want to keep their promise. It's pretty obvious that it would be incredibly difficult even if they had far more resources.

If you don't agree, why don't you just provide a calculation to refute mine? Words are cheap, after all.  ;) Then again, maybe they want to simulate a brain-dead person. That could work in 2019.  :|o So far it all boils down to unjustified Kurzweilian optimism.


Words are cheap. Just meet me back here in 10 years. 2019 will be here before you know it.

Edited by bobscrachy, 24 July 2009 - 07:52 PM.


#18 kismet

  • Guest
  • 2,984 posts
  • 424
  • Location:Austria, Vienna

Posted 24 July 2009 - 07:57 PM

Words are cheap. Just meet me back here in 10 years. 2019 will be here before you know it.

I know they are (calculations are slightly less cheap, though). OTOH that's what longbets.org is for; everyone can test hir Kurzweilian optimism there (including the man himself, who's obviously betting on the prestigious 2049. Warren Buffett also has a bet running). I would bet - if I wasn't as lazy as stated - that both Kurzweilian bets* are off.  :|o Anyone else, feel free to set up a bet. *an artificial brain in 2019, or a machine passing the Turing test in 2049 (the latter is still doable, though)


Edited by kismet, 24 July 2009 - 08:08 PM.


#19 n25philly

  • Guest
  • 88 posts
  • 11
  • Location:Holland, PA

Posted 24 July 2009 - 08:17 PM

As I mentioned earlier, look up memristors: a new technology we are just learning to produce. Memristance has been known of for 40 years, but no one knew what it was, because the effect only becomes strong as circuits get smaller. These are key because memristors work the same way as the neurons in our brains do.

http://en.wikipedia.org/wiki/Memristor

It's going to be a few years before they start to show up in commercial devices, but once they do they will be perfect for creating a robotic brain, and for nanotechnology. With today's technology it would take a computer the size of a city, with its own nuclear power plant, to simulate the human brain. Memristors would cut down the amount of circuitry and allow the technology to work just like the brain, instead of relying on algorithms that merely act like it, which would also save time on the programming side as well.
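For readers who want something more concrete than the qualitative description above, here is a minimal Python sketch of the standard linear-drift (HP-style) memristor model. The parameter values below are illustrative placeholders only, not taken from any real device:

```python
# Minimal linear-drift (HP-style) memristor model. The resistance depends
# on how much charge has flowed through the device, which is the loose
# analogy with history-dependent synapses drawn in the post above.
# All parameter values are illustrative placeholders, not device data.

R_ON, R_OFF = 100.0, 16_000.0    # fully-doped / undoped resistance (ohms)
MU_V = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)
D = 1e-8                         # device thickness (m)

def simulate(voltages, dt, x0=0.1):
    """Integrate the state variable x in [0, 1] under applied voltages."""
    x, currents = x0, []
    for v in voltages:
        m = R_ON * x + R_OFF * (1.0 - x)      # instantaneous memristance
        i = v / m
        x += (MU_V * R_ON / D**2) * i * dt    # linear dopant drift
        x = min(max(x, 0.0), 1.0)             # hard boundary condition
        currents.append(i)
    return x, currents

# A sustained positive bias drives x up, lowering the resistance: the
# device "remembers" the charge that has passed through it.
x_final, currents = simulate([1.0] * 1000, dt=1e-3)
print(f"state after positive bias: {x_final:.3f}")
```

The point of the model is the memory effect itself: the same applied voltage produces a larger current later in the run, because the accumulated charge has lowered the resistance.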

#20 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 24 July 2009 - 08:49 PM

I know they are (calculations are slightly less cheap, though). OTOH that's what longbets.org is for; everyone can test hir Kurzweilian optimism there (including the man himself, who's obviously betting on the prestigious 2049. Warren Buffett also has a bet running). I would bet - if I wasn't as lazy as stated - that both Kurzweilian bets* are off.  :|o Anyone else, feel free to set up a bet. *an artificial brain in 2019, or a machine passing the Turing test in 2049 (the latter is still doable, though)


Then when do you think we're more likely to create an AI as smart as a human?

#21 Delorean

  • Guest
  • 78 posts
  • 23

Posted 24 July 2009 - 09:23 PM

I know they are (calculations are slightly less cheap, though). OTOH that's what longbets.org is for; everyone can test hir Kurzweilian optimism there (including the man himself, who's obviously betting on the prestigious 2049. Warren Buffett also has a bet running). I would bet - if I wasn't as lazy as stated - that both Kurzweilian bets* are off. :|o Anyone else, feel free to set up a bet. *an artificial brain in 2019, or a machine passing the Turing test in 2049 (the latter is still doable, though)


2029 = Pass Turing test

2045 = Singularity

#22 Forever21

  • Guest
  • 1,918 posts
  • 122

Posted 25 July 2009 - 12:51 AM

Can we upload ourselves to it?

Might come in handy when your body (and head) dies.

#23 EmbraceUnity

  • Guest
  • 1,018 posts
  • 99
  • Location:USA

Posted 25 July 2009 - 01:22 AM

Then when do you think we're more likely to create an AI as smart as a human?


Nick Bostrom calculated that we would have enough computing power to simulate a human brain by the end of the century, given conservative estimates of computational neurobiology and assuming Moore's Law holds constant. Even with less conservative estimates, we are still talking about many decades.

On the other hand, a good number of AI researchers seem to think AGI is not currently limited by hardware.

If you are hoping for superintelligence in a decade or two, it seems your only hope is with AGI. Though, this is a wildcard. There are no guarantees it will happen and no guarantees it will be a good thing.

I have a feeling the simulation of a human brain itself would only be half the battle... and there are numerous things that could be screwed up in the simulation. After that, you have to embody the thing, and then teach it all over again... which could very well take as long as raising a child, or even longer considering the simulation will likely be incredibly slow at first.... granted it would increase over time.

However, by the time we could get a reasonably fast simulation going, we are probably talking another 50 years on top of the numerous decades needed, and even then unless they are simulating Einstein's brain, a functional superintelligence wouldn't have been created because if you simulated me at hyperspeed you still wouldn't achieve recursive improvement. I would look at the schematics of my brain and say... I GIVE UP. Even after 1000 subjective years.

It is probably for this reason that Bostrom is so concerned with Existential Risk. It is reasonable to assume that many people alive now will live long enough to see the middle of next century, as long as we avert catastrophe.

Edited by progressive, 25 July 2009 - 01:27 AM.


#24 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 02 August 2009 - 12:02 PM

Saw this at New Scientist the other day.

WE HUMANS have let loose something extraordinary on our planet - a third replicator - the consequences of which are unpredictable and possibly dangerous.

What do I mean by "third replicator"? The first replicator was the gene - the basis of biological evolution. The second was memes - the basis of cultural evolution. I believe that what we are now seeing, in a vast technological explosion, is the birth of a third evolutionary process. We are Earth's Pandoran species, yet we are blissfully oblivious to what we have let out of the box.


Last year Google announced that the web had passed the trillion mark, with more than 1,000,000,000,000 unique URLs. Many countries now have nearly as many computers as people, and if you count phones and other connected gadgets they far outnumber people. Even if we all spent all day reading this stuff it would expand faster than we could keep up.


Gadgets like phones and PCs are already using 15 per cent of household power and rising (New Scientist, 23 May, p 17); the web is using over 5 per cent of the world's entire power and rising. We blame ourselves for climate change and resource depletion, but perhaps we should blame this new evolutionary process that is greedy, selfish and utterly blind to the consequences of its own expansion.


It is a good read, but nothing really new to the crowd here. The most interesting thing is that more people are thinking about the ramifications of our technological progress. More people are becoming aware of the extreme promise and peril that comes with creating systems, intelligence, or replicators more complex than us. As little as 2 or 3 years ago, this type of discussion was too fringe to appear in popular science-type publications.

#25 jb7756

  • Guest
  • 3 posts
  • 0

Posted 09 August 2009 - 01:28 PM

Ten years for real AI? That seems credible to me!

See this demo: http://www.smartaction.com/demos/
This system could pass the “commercial Turing test”!

The creator of this system gives it 8 years to reach the human level: http://www.accelerat...toward-real-ai/

And it is not necessary to mimic the human brain in detail (as the Blue Brain project does); it is enough to reproduce its main functions with existing technology. Reproducing the human brain in detail is not efficient, because current computer hardware is not optimised to simulate neurons. You have to adapt to the existing hardware, and the demo above shows how effective that could be.


#26 exapted

  • Guest
  • 168 posts
  • 0
  • Location:Minneapolis, MN

Posted 26 September 2009 - 11:14 AM

Then when do you think we're more likely to create an AI as smart as a human?


Nick Bostrom calculated that we would have enough computing power to simulate a human brain by the end of the century, given conservative estimates of computational neurobiology and assuming Moore's Law holds constant. Even with less conservative estimates, we are still talking about many decades.

On the other hand, a good number of AI researchers seem to think AGI is not currently limited by hardware.

If you are hoping for superintelligence in a decade or two, it seems your only hope is with AGI. Though, this is a wildcard. There are no guarantees it will happen and no guarantees it will be a good thing.

I have a feeling the simulation of a human brain itself would only be half the battle... and there are numerous things that could be screwed up in the simulation. After that, you have to embody the thing, and then teach it all over again... which could very well take as long as raising a child, or even longer considering the simulation will likely be incredibly slow at first.... granted it would increase over time.

However, by the time we could get a reasonably fast simulation going, we are probably talking another 50 years on top of the numerous decades needed, and even then unless they are simulating Einstein's brain, a functional superintelligence wouldn't have been created because if you simulated me at hyperspeed you still wouldn't achieve recursive improvement. I would look at the schematics of my brain and say... I GIVE UP. Even after 1000 subjective years.

It is probably for this reason that Bostrom is so concerned with Existential Risk. It is reasonable to assume that many people alive now will live long enough to see the middle of next century, as long as we avert catastrophe.

According to the paper you linked to (p. 81), it is quite likely that there will be sufficient computing power to emulate an individual human brain in real-time by mid-century, assuming that an electrophysiological model of the brain is sufficient, and no other level separations are discovered (no abstractions to reduce hardware requirements). I would think that in, say, 20 years, we might discover a few abstractions that would help us to emulate a brain without emulating the electrophysiology to such detail.
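A crude Python extrapolation along the lines discussed above. The ~10^22 FLOPS requirement for electrophysiological-level emulation is an order-of-magnitude figure attributed here to the Sandberg/Bostrom roadmap, and the ~1 petaflop/s baseline for 2009 and the 1000x-per-decade trend are assumptions carried over from earlier in the thread, so treat the result as a sketch, not a prediction:

```python
# Rough check of the "by mid-century" claim above. Assumptions: top
# supercomputers deliver ~10^15 FLOPS around 2009, supercomputing power
# grows ~1000x (10^3) per decade, and electrophysiological-level whole
# brain emulation needs ~10^22 FLOPS (order-of-magnitude figure taken
# here from the Sandberg/Bostrom roadmap discussion).

import math

flops_2009 = 1e15
target_flops = 1e22

decades_needed = math.log10(target_flops / flops_2009) / 3
year_reached = 2009 + 10 * decades_needed

print(f"decades needed: {decades_needed:.2f}")
print(f"capacity reached around: {year_reached:.0f}")
```

Under these assumptions the raw capacity arrives around the early 2030s, comfortably before mid-century, which is consistent with the roadmap's "quite likely by mid-century" framing since real-time emulation of one individual brain demands more than bare peak FLOPS.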

I'll say something about that paper even though you've read it, since maybe others haven't. In one of the appendices, the authors (Anders Sandberg and Nick Bostrom) wonder whether the exponential rate of improvement in computational capacity will continue, despite evidence of faster-than-exponential improvement and a few significant new computing-hardware paradigms currently being researched. Their idea is that software bloat drives hardware development to some extent, and if it ever stopped it would slow down the feedback loop. They also wonder whether miniaturization will change priorities in hardware development.

I can't remember exactly what Ray Kurzweil's estimate was for when we would have the hardware capacity to emulate an individual human brain, but I think he said something in the 2020s. Please correct me if you know otherwise, since I left that book in another country.

I can appreciate both estimates. The more conservative estimate is an upper-bound and a proof-of-concept. Ray's estimate perhaps assumes we will be able to reverse engineer the human brain well enough (for example, find some level separation so that we don't need electrophysiological models of neurons) that we don't need such advanced hardware to emulate the human brain.

I personally think something like Novamente could be pretty clever within 10-20 years. I agree with you that AGI is the wildcard here.

Edited by exapted, 26 September 2009 - 11:59 AM.


#27 exapted

  • Guest
  • 168 posts
  • 0
  • Location:Minneapolis, MN

Posted 26 September 2009 - 12:17 PM

By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap

See pages 79-81. They say that a "Manhattan Project" spending a billion USD (that seems a bit low to me) could achieve the computational capacity to emulate an individual brain at the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottleneck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.

Edited by exapted, 26 September 2009 - 12:25 PM.


#28 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 26 September 2009 - 12:29 PM

By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap

See pages 79-81. They say, if there were a "Manhattan Project" spending a billion USD (that seems a bit low to me), it could achieve the computational capacity to emulate an individual brain to the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottle-neck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.


I know this doesn't exactly count as a "Manhattan Project", but the amount of money going into neuroscience, brain modeling, computers/software, AGI/AI, networking, robotics, narrow AI, etc. has got to be way more than a billion every year. World GDP back in 2007 was 54 trillion, and I would guess at least a trillion goes into AI-related fields and technologies.

#29 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 26 September 2009 - 05:21 PM

By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap

See pages 79-81. They say, if there were a "Manhattan Project" spending a billion USD (that seems a bit low to me), it could achieve the computational capacity to emulate an individual brain to the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottle-neck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.


I know this doesn't exactly count as a "Manhattan Project", but the amount of money going into neuroscience, brain modeling, computers/software, AGI/AI, networking, robotics, narrow AI, etc. has got to be way more than a billion every year. World GDP back in 2007 was 54 trillion, and I would guess at least a trillion goes into AI-related fields and technologies.



I agree. A lot is invested in the field, and the returns are solid.


Creating a "Manhattan Project" just to build a computer with a lot of computational capacity seems like a big waste of money, considering the extremely rapid rate at which the money invested would lose value, and the high likelihood that we wouldn't get many results from it. Better to wait until computing power gets to the level where we don't need to spend an obscene amount to build a supercomputer with enough capacity to simulate a human brain. This link is interesting, as the power of the human brain is estimated at 10^16 operations per second, which is 10 petaflops.
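The unit conversion in that last sentence, plus a rough ETA under the thread's own growth assumption (~1 petaflop/s available in 2009, ~1000x per decade), can be checked in a few lines. Both inputs are assumptions from this discussion, not measurements:

```python
# Sanity-check the figure above: 10^16 ops/sec expressed in petaflops,
# and a crude estimate of when such a machine arrives. Assumptions
# (from the thread): ~1 petaflop/s in 2009, ~1000x growth per decade.

import math

brain_ops = 1e16                 # estimated brain throughput cited above
petaflop = 1e15
print(brain_ops / petaflop, "petaflops")   # -> 10.0 petaflops

current = 1e15                   # ~1 PFLOPS machine, 2009
# 1000x per decade means 10x roughly every 3.3 years.
years = 10 * math.log10(brain_ops / current) / 3
print(f"~{years:.1f} years until a 10-petaflop machine")
```

So on this trend the "wait for cheaper hardware" strategy only requires waiting a handful of years for the raw capacity, which supports the point about a crash program being poor value.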


#30 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 26 September 2009 - 05:58 PM

By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap
See pages 79-81. They say that a "Manhattan Project" spending a billion USD (that seems a bit low to me) could achieve the computational capacity to emulate an individual brain at the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottleneck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.

I know this doesn't exactly count as a "Manhattan Project", but the amount of money going into neuroscience, brain modeling, computers/software, AGI/AI, networking, robotics, narrow AI, etc. has got to be way more than a billion every year. World GDP back in 2007 was 54 trillion, and I would guess at least a trillion goes into AI-related fields and technologies.

I'm not sure that we should count every dollar related in some way to computers or software as being connected to AI. I think that it's silly to try to build an ultra-giga-supercomputer to run what is surely a grotesquely inefficient simulation. It would be better to figure out the appropriate abstractions and/or use custom hardware to emulate low-level parts of the brain. Someone has already built such hardware, but I've lost track of who it was.



