
http://news.bbc.co.u...ogy/8164060.stm
Posted 22 July 2009 - 08:49 PM
Posted 22 July 2009 - 09:12 PM
Posted 22 July 2009 - 10:37 PM
Edited by kismet, 22 July 2009 - 10:58 PM.
Posted 22 July 2009 - 11:00 PM
Posted 23 July 2009 - 12:22 AM
> If they do make an artificial human brain, I hope they treat it as a person.

Considering the way persons are treated, I suspect they will treat it better.
Posted 23 July 2009 - 02:26 AM
Posted 23 July 2009 - 06:56 AM
Edited by bobscrachy, 23 July 2009 - 06:56 AM.
Posted 23 July 2009 - 04:59 PM
Posted 23 July 2009 - 08:22 PM
> That means all that's really lacking is the computer power, and we all know how fast computer power is increasing.

Yes, not fast enough to meet this bold timeline (to put it mildly). Not even close. They'd probably even fail if they had Manhattan Project-type funds.

> If they do make an artificial human brain, I hope they treat it as a person.

I hope not. It would be a pretty expensive human being, running into the megawatt or gigawatt hours.
Edited by kismet, 24 July 2009 - 01:08 PM.
Posted 23 July 2009 - 09:50 PM
Posted 23 July 2009 - 09:56 PM
I think that BlueBrain is really cool research, but I'd also have to say that Markram is a bit of a P.T. Barnum.
Posted 23 July 2009 - 10:00 PM
> I agree but I can't blame him, all scientists need to make their projects look interesting to get their funding. BB really deserves lots of funding.

Yeah, I really want to see it funded too, so I do cut him some slack on the promotion. You're right, everyone fighting for funding has to self-promote to some extent. I just wish Markram wouldn't venture so far out into the speculative end of things, because that could come back to bite him, or could even cause problems for the field of brain simulation in general.
Posted 24 July 2009 - 05:26 AM
> Yes, not fast enough to meet this bold timeline (to put it mildly). Not even close. They'd probably even fail if they had Manhattan Project-type funds.
What hardware are they currently running on?
> I hope not. It would be a pretty expensive human being, running into the megawatt or gigawatt hours.
Edited by bobscrachy, 24 July 2009 - 05:27 AM.
Posted 24 July 2009 - 01:13 PM
> I suppose the same could have been said about going to the moon in the 60s. "It won't happen no matter how much money they throw at it." "It's just a pipe dream." Blah blah blah.

Look, going to the moon was unprecedented, so no one could extrapolate anything. This is simple mathematics; even the biggest tech optimists are fearing for Moore's law. And even assuming Moore's "law", and by extension the "law" that supercomputing power increases 1000-fold every decade (actually 11 years), keeps up, they'd still be more than 10^3 to 10^4 times short of their goal: 10k neurons simulated in 2007, ~100 billion needed in 2019 if they want to keep their promise. It's pretty obvious that it would be incredibly difficult even if they had far more resources.
Edited by kismet, 24 July 2009 - 01:18 PM.
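The arithmetic in the post above is easy to check. A minimal sketch, using only the poster's own assumptions (about 10^4 neurons simulated in 2007, ~10^11 neurons in a human brain, supercomputing power growing ~1000x per decade):

```python
# Back-of-the-envelope check of the shortfall argument.
# All inputs are the poster's assumptions, not measured data:
#   - ~1e4 neurons simulated in 2007
#   - ~1e11 neurons in a human brain, target date 2019
#   - supercomputing power grows ~1000x per decade
simulated_2007 = 1e4
target_neurons = 1e11
years = 2019 - 2007
growth_per_year = 1000 ** (1 / 10)  # ~2x per year

capacity_2019 = simulated_2007 * growth_per_year ** years
shortfall = target_neurons / capacity_2019

print(f"projected capacity in 2019: ~{capacity_2019:.1e} neurons")
print(f"shortfall factor: ~{shortfall:.0f}x")  # lands between 10^3 and 10^4
```

Even granting sustained 1000x-per-decade growth, the projection reaches only about 4x10^7 neurons in 2019, a factor of roughly 2,500 short of a whole brain, consistent with the 10^3 to 10^4 range claimed in the post.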
Posted 24 July 2009 - 07:51 PM
> Look, going to the moon was unprecedented, so no one could extrapolate anything. This is simple mathematics; even the biggest tech optimists are fearing for Moore's law. And even assuming Moore's "law", and by extension the "law" that supercomputing power increases 1000-fold every decade (actually 11 years), keeps up, they'd still be more than 10^3 to 10^4 times short of their goal: 10k neurons simulated in 2007, ~100 billion needed in 2019 if they want to keep their promise. It's pretty obvious that it would be incredibly difficult even if they had far more resources.
If you don't agree, why don't you just provide a calculation to refute mine? Words are cheap, after all. Then again, maybe they want to simulate a brain-dead person. That could work in 2019.
So far it all boils down to unjustified Kurzweillian optimism.
Edited by bobscrachy, 24 July 2009 - 07:52 PM.
Posted 24 July 2009 - 07:57 PM
> Words are cheap. Just meet me back here in 10 years. 2019 will be here before you know it.

I know they are (calculations are slightly less cheap, though). OTOH that's what longbets.org is for; everyone can test their Kurzweilian optimism there (including the man himself, who is obviously betting on the prestigious 2049. Warren Buffett also has a bet running). I would bet, if I weren't as lazy as stated, that both Kurzweilian bets* are off.

I don't think so. If I weren't so lazy, I'd be up for a longbet.

Anyone else, feel free to set up a bet. *Artificial brain in 2019, or a machine passing the Turing test in 2049 (the latter is still doable, though).
Edited by kismet, 24 July 2009 - 08:08 PM.
Posted 24 July 2009 - 08:17 PM
Posted 24 July 2009 - 08:49 PM
> I know they are (calculations are slightly less cheap, though). OTOH that's what longbets.org is for; everyone can test their Kurzweilian optimism there (including the man himself, who is obviously betting on the prestigious 2049. Warren Buffett also has a bet running). I would bet, if I weren't as lazy as stated, that both Kurzweilian bets* are off.
>
> Anyone else, feel free to set up a bet. *Artificial brain in 2019, or a machine passing the Turing test in 2049 (the latter is still doable, though).
Posted 24 July 2009 - 09:23 PM
> I know they are (calculations are slightly less cheap, though). OTOH that's what longbets.org is for; everyone can test their Kurzweilian optimism there (including the man himself, who is obviously betting on the prestigious 2049. Warren Buffett also has a bet running). I would bet, if I weren't as lazy as stated, that both Kurzweilian bets* are off.
>
> Anyone else, feel free to set up a bet. *Artificial brain in 2019, or a machine passing the Turing test in 2049 (the latter is still doable, though).
Posted 25 July 2009 - 12:51 AM
Posted 25 July 2009 - 01:22 AM
Then when do you think we're more likely to create an AI as smart as a human?
Edited by progressive, 25 July 2009 - 01:27 AM.
Posted 02 August 2009 - 12:02 PM
WE HUMANS have let loose something extraordinary on our planet - a third replicator - the consequences of which are unpredictable and possibly dangerous.
What do I mean by "third replicator"? The first replicator was the gene - the basis of biological evolution. The second was memes - the basis of cultural evolution. I believe that what we are now seeing, in a vast technological explosion, is the birth of a third evolutionary process. We are Earth's Pandoran species, yet we are blissfully oblivious to what we have let out of the box.
Last year Google announced that the web had passed the trillion mark, with more than 1,000,000,000,000 unique URLs. Many countries now have nearly as many computers as people, and if you count phones and other connected gadgets they far outnumber people. Even if we all spent all day reading this stuff it would expand faster than we could keep up.
Gadgets like phones and PCs are already using 15 per cent of household power and rising (New Scientist, 23 May, p 17); the web is using over 5 per cent of the world's entire power and rising. We blame ourselves for climate change and resource depletion, but perhaps we should blame this new evolutionary process that is greedy, selfish and utterly blind to the consequences of its own expansion.
Posted 09 August 2009 - 01:28 PM
Posted 26 September 2009 - 11:14 AM
> Then when do you think we're more likely to create an AI as smart as a human?

According to the paper you linked to (p. 81), it is quite likely that there will be sufficient computing power to emulate an individual human brain in real time by mid-century, assuming that an electrophysiological model of the brain is sufficient and no other level separations are discovered (no abstractions to reduce hardware requirements). I would think that in, say, 20 years, we might discover a few abstractions that would help us emulate a brain without modelling the electrophysiology in such detail.
Nick Bostrom calculated that we would have enough computing power to simulate a human brain by the end of the century, given conservative estimates of computational neurobiology and assuming Moore's Law holds constant. Even with less conservative estimates, we are still talking about many decades.
On the other hand, a good number of AI researchers seem to think AGI is not currently limited by hardware.
If you are hoping for superintelligence in a decade or two, it seems your only hope is with AGI. Though, this is a wildcard. There are no guarantees it will happen and no guarantees it will be a good thing.
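For a rough sense of where decade-scale estimates like the ones above come from, here is a toy Moore's-law projection. Every specific number is my own illustrative assumption, not a figure from the thread or the paper: ~1e15 FLOPS for a top supercomputer circa 2009, ~1e22 FLOPS as an electrophysiology-level emulation requirement (roughly the scale the Sandberg & Bostrom roadmap discusses), and a doubling every two years:

```python
import math

# Toy projection: when does hardware reach an assumed
# brain-emulation requirement? All three inputs are assumptions.
flops_2009 = 1e15        # assumed: top supercomputer, circa 2009
flops_needed = 1e22      # assumed: electrophysiology-level emulation
doubling_years = 2.0     # assumed: Moore's-law doubling time

doublings = math.log2(flops_needed / flops_2009)
crossover = 2009 + doublings * doubling_years
print(f"~{doublings:.0f} doublings; crossover around {crossover:.0f}")
```

With these inputs the crossover lands around mid-century; shortening the doubling time or lowering the requirement by a few orders of magnitude pulls it decades earlier, which is why estimates in this thread range from 2019 to the end of the century.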
I have a feeling the simulation of a human brain itself would only be half the battle, and there are numerous things that could be screwed up in the simulation. After that, you have to embody the thing and then teach it all over again, which could very well take as long as raising a child, or even longer, considering the simulation will likely be incredibly slow at first (granted, it would speed up over time).

However, by the time we could get a reasonably fast simulation going, we are probably talking another 50 years on top of the numerous decades needed. And even then, unless they are simulating Einstein's brain, a functional superintelligence wouldn't have been created: if you simulated me at hyperspeed, you still wouldn't achieve recursive self-improvement. I would look at the schematics of my brain and say: I GIVE UP. Even after 1000 subjective years.
It is probably for this reason that Bostrom is so concerned with Existential Risk. It is reasonable to assume that many people alive now will live long enough to see the middle of next century, as long as we avert catastrophe.
Edited by exapted, 26 September 2009 - 11:59 AM.
Posted 26 September 2009 - 12:17 PM
Edited by exapted, 26 September 2009 - 12:25 PM.
Posted 26 September 2009 - 12:29 PM
By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap
See pages 79-81. They say that a "Manhattan Project" spending a billion USD (that seems a bit low to me) could achieve the computational capacity to emulate an individual brain at the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottleneck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.
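Another way to gauge whether a billion dollars is low is to project the price of compute rather than peak capability. A sketch, where every number is an illustrative assumption of mine (the required FLOPS, the 2009 price per GFLOPS, and the price-halving interval), not a figure from the roadmap:

```python
import math

# When could $1B buy the hardware for an assumed emulation workload?
# All inputs below are illustrative assumptions.
flops_needed = 1e22          # assumed emulation requirement
usd_per_gflops_2009 = 0.10   # assumed 2009 price of sustained compute
halving_years = 1.5          # assumed price-halving interval
budget_usd = 1e9             # the "Manhattan Project" budget

cost_2009 = (flops_needed / 1e9) * usd_per_gflops_2009
halvings = math.log2(cost_2009 / budget_usd)
year = 2009 + halvings * halving_years
print(f"2009 cost: ${cost_2009:.0e}; $1B suffices around {year:.0f}")
```

Under these assumptions the budget works out around the early-to-mid 2020s rather than 2014, but the answer swings by a decade for modest changes in the assumed price curve.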
Posted 26 September 2009 - 05:21 PM
> By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap
>
> See pages 79-81. They say that a "Manhattan Project" spending a billion USD (that seems a bit low to me) could achieve the computational capacity to emulate an individual brain at the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottleneck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.
I know this doesn't exactly count as a "Manhattan Project", but the amount of money going into neuroscience, brain modeling, computers/software, AGI/AI, networking, robotics, narrow AI, etc. has got to be way more than a billion every year. World GDP back in 2007 was $54 trillion, and I would guess at least a trillion of that goes into AI-related fields and technologies.
Posted 26 September 2009 - 05:58 PM
> By the way I think everyone in this thread should check out the following paper by neuroscientist Anders Sandberg and philosopher Nick Bostrom, both at Oxford: Whole Brain Emulation: A Roadmap
>
> See pages 79-81. They say that a "Manhattan Project" spending a billion USD (that seems a bit low to me) could achieve the computational capacity to emulate an individual brain at the level of electrophysiological models of cells by 2014. Then we should consider scanning and image processing, the other bottleneck. Maybe computational capacity will not be the bottleneck, because we might find that we can improve on the computational efficiency of the human brain.

> I know this doesn't exactly count as a "Manhattan Project", but the amount of money going into neuroscience, brain modeling, computers/software, AGI/AI, networking, robotics, narrow AI, etc. has got to be way more than a billion every year. World GDP back in 2007 was $54 trillion, and I would guess at least a trillion of that goes into AI-related fields and technologies.

I'm not sure that we should count every dollar related in some way to computers or software as being connected to AI. I think that it's silly to try to build an ultra-giga-supercomputer to run what is surely a grotesquely inefficient simulation. It would be better to figure out the appropriate abstractions and/or use custom hardware to emulate low-level parts of the brain. Someone has already built such hardware, but I've lost track of who it was.