
20 Petaflops? On Its Way.


16 replies to this topic

#1 Putz

  • Guest, F@H
  • 55 posts
  • 0
  • Location:Providence, RI

Posted 03 February 2009 - 05:46 AM


http://blog.wired.co...ercomputer.html

Breaking news: IBM has been contracted to build this 20-petaflop beast for the Department of Energy. Moore's law is still going strong.

#2 kismet

  • Guest
  • 2,984 posts
  • 424
  • Location:Austria, Vienna

Posted 03 February 2009 - 12:04 PM

As always, the Lawrence Livermore National Laboratory (LLNL) will deliver bleeding-edge supercomputing. :) Unfortunately - as always - it will be used to simulate their nuclear missiles and such. Stockpile stewardship for the win....


#3 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 03 February 2009 - 03:01 PM

The announcement precisely matches Ray Kurzweil's forecast in his 2005 book The Singularity is Near that "supercomputers will achieve my more conservative estimate of 10^16 cps [computations per second] for functional human-brain emulation by early in the next decade." (20 petaflops is 2*10^16 cps.) - Ed.
( http://www.kurzweila.....html?id=10072 )



Nice, eh? But the software will take much longer, unfortunately.

#4 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,047 posts
  • 2,003
  • Location:Wausau, WI

Posted 03 February 2009 - 07:29 PM

Folding@home will probably beat Livermore to the 20 petaflop barrier. It is already nearing 5 petaflops. You can be part of the revolution.

#5 Putz

  • Topic Starter
  • Guest, F@H
  • 55 posts
  • 0
  • Location:Providence, RI

Posted 03 February 2009 - 08:12 PM

As always, the Lawrence Livermore National Laboratory (LLNL) will deliver bleeding-edge supercomputing. :) Unfortunately - as always - it will be used to simulate their nuclear missiles and such. Stockpile stewardship for the win....


Yeah, though we must keep in mind that, thankfully, supercomputing has put an end to real nuclear tests. Hopefully the nuclear physics testing will finish quickly (perhaps giving us new insight into atomic energy and fusion applications), and the machine will be turned over to cellular simulation, protein folding, or other more technologically applicable tasks.

#6 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 21 February 2009 - 10:29 AM

Folding@home will probably beat Livermore to the 20 petaflop barrier. It is already nearing 5 petaflops. You can be part of the revolution.


I predict a big spike in FAH output over the next few years:

http://en.wikipedia..../Larrabee_(GPU)
http://www.theinquir...playstation-gpu

GPUs will become more powerful and easier to program. Since FAH is mostly driven by hardware enthusiasts, the spike will come pretty soon.

#7 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 21 February 2009 - 10:33 AM

Folding@home will probably beat Livermore to the 20 petaflop barrier. It is already nearing 5 petaflops. You can be part of the revolution.


I predict a big spike in FAH output over the next few years:

http://en.wikipedia..../Larrabee_(GPU)
http://www.theinquir...playstation-gpu

GPUs will become more powerful and easier to program. Since FAH is mostly driven by hardware enthusiasts, the spike will come pretty soon.

The great thing about F@H (and all of the DC projects, I suppose) is that they automatically upgrade themselves as people update their hardware.

#8 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,047 posts
  • 2,003
  • Location:Wausau, WI

Posted 21 February 2009 - 01:49 PM

Also, the Pande Lab is in negotiations to get folding@home pre-installed on many different software and hardware platforms. That way, all a user has to do is click an icon to start folding. When I interviewed Pande, he couldn't give me any specifics, just that the Lab is in communication with some companies.

Edited by Mind, 21 February 2009 - 01:51 PM.


#9 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 22 February 2009 - 09:27 AM

Also, the Pande Lab is in negotiations to get folding@home pre-installed on many different software and hardware platforms. That way, all a user has to do is click an icon to start folding. When I interviewed Pande, he couldn't give me any specifics, just that the Lab is in communication with some companies.

And that was a great interview, too, Mind. I learned several things about the project that I had not known before.

#10 Lurker

  • Guest
  • 87 posts
  • -0
  • Location:California

Posted 02 April 2009 - 03:32 PM

Nice, eh? But the software will take much longer, unfortunately.


And memory latency and size do not scale with Moore's law.

From my understanding, the two big current trends in computer engineering are reducing power consumption and concurrent programming to take advantage of multi-core processor designs.
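
As a purely illustrative sketch of that second trend (not tied to any particular project, with a made-up workload), Python's multiprocessing module is one way to spread independent work units across all available cores:

# Illustrative only: spreading independent work units across CPU cores.
from multiprocessing import Pool, cpu_count

def work_unit(n):
    # stand-in for one real computation (e.g. a simulation step)
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [100_000] * 64                      # 64 independent work units
    with Pool(processes=cpu_count()) as pool:  # one worker process per core
        results = pool.map(work_unit, jobs)
    print(len(results), "units finished on", cpu_count(), "cores")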

#11 Mariusz

  • Guest
  • 164 posts
  • 0
  • Location:Hartford, CT

Posted 02 April 2009 - 04:06 PM

Also, the Pande Lab is in negotiations to get folding@home pre-installed on many different software and hardware platforms. That way, all a user has to do is click an icon to start folding. When I interviewed Pande, he couldn't give me any specifics, just that the Lab is in communication with some companies.


Recently, when I installed one of the newest drivers for my client's ATI card, it asked me to install FAH!

Unfortunately, FAH keeps crashing on his Core i7 after just a few minutes.

Mariusz

#12 Mariusz

  • Guest
  • 164 posts
  • 0
  • Location:Hartford, CT

Posted 02 April 2009 - 04:15 PM

Nice, eh? But the software will take much longer, unfortunately.


And memory latency and size do not scale with Moore's law.

From my understanding, the two big current trends in computer engineering are reducing power consumption and concurrent programming to take advantage of multi-core processor designs.


Yeah, if the CPU manufacturers were able to double CPU clock speed every 18 months, we would already be waiting for 32 GHz CPUs. :D
Unfortunately, they were "only" able to double the number of transistors. :-D
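
(A rough back-of-the-envelope check, assuming a ~3.2 GHz chip in early 2004 as the baseline and the usual 18-month doubling, neither of which is a figure from the thread itself:)

# Hypothetical check: clock speed if it had kept doubling every 18 months.
base_ghz = 3.2                 # assumed baseline: a ~3.2 GHz CPU in early 2004
months_elapsed = 5 * 12        # early 2004 -> early 2009
projected_ghz = base_ghz * 2 ** (months_elapsed / 18)
print(f"{projected_ghz:.0f} GHz")   # -> 32 GHz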

Mariusz

#13 harris13.3

  • Guest
  • 87 posts
  • 6

Posted 27 April 2009 - 05:49 AM

That was true until about 2003/2004. Then excess heat and thermal leakage forced Intel to move toward multi-core designs.

#14 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 28 April 2009 - 02:31 AM

From my understanding, the two big current trends in computer engineering are reducing power consumption and concurrent programming to take advantage of multi-core processor designs.




2009 and we're up to 16 cores (later this year). From this article about AMD's "Bulldozer". Terrible name.

It all sounds pretty impressive on paper. But how fast will [AMD's new] 16-core [Bulldozer] chip be in practice? Well, according to AMD, Bulldozer is designed to be nothing less than "the highest performing single and multi-threaded compute core in history".

Cloud computing is also being pushed by the IBM/Google alliance. One area they are focusing on is informatics. Some large grants are flowing in that direction. Should be interesting to see how it plays out.

#15 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 28 April 2009 - 04:08 AM

From my understanding, the two big current trends in computer engineering are reducing power consumption and concurrent programming to take advantage of multi-core processor designs.




2009 and we're up to 16 cores (later this year). From this article about AMD's "Bulldozer". Terrible name.

It all sounds pretty impressive on paper. But how fast will [AMD's new] 16-core [Bulldozer] chip be in practice? Well, according to AMD, Bulldozer is designed to be nothing less than "the highest performing single and multi-threaded compute core in history".




Did you see the date of the article? July 2007. Now the 16-core processor isn't expected until 2011, unfortunately.

#16 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 28 April 2009 - 08:38 AM

From my understanding, the two big current trends in computer engineering are reducing power consumption and concurrent programming to take advantage of multi-core processor designs.




2009 and we're up to 16 cores (later this year). From this article about AMD's "Bulldozer". Terrible name.

It all sounds pretty impressive on paper. But how fast will [AMD's new] 16-core [Bulldozer] chip be in practice? Well, according to AMD, Bulldozer is designed to be nothing less than "the highest performing single and multi-threaded compute core in history".



Did you see the date of the article? July 2007. Now the 16-core processor isn't expected until 2011, unfortunately.


Good catch. I wonder what the hold-up is, or whether it's just AMD substituting future hype since it's falling behind Intel. Intel will have an 8-core Xeon shortly (16 logical threads) designed for 4-socket platforms, meaning 64 threads (still 2 threads/core).


#17 Luke Parrish

  • Guest
  • 140 posts
  • 31
  • Location:Salem, OR

Posted 28 May 2009 - 12:45 AM

Great stuff, but I had to laugh about this:

By almost any standard, the new computer will be staggering. It will have 1.6 million processing cores, 1.6 petabytes of memory, 96 racks and 98,304 computing nodes. Yet, the new computer will have a much smaller footprint at 3,400 square feet than the current fastest computer’s 5,200 square feet. And it will be much more energy efficient than its predecessors, only drawing 6 megawatts of power a year. That’s about how much energy 500 American homes use in the same period.


A watt, kilowatt, or megawatt is a rate of energy consumption, not a unit of energy. It is possible to draw one megawatt for 3.6 seconds (1/1,000th of an hour) at a cost of one kilowatt-hour. Depending on where you live, that costs roughly 10 cents. If you want 6 megawatts, you can get the same amount of energy by compressing it into 1/6th of the time... 0.6 seconds.

Put another way, one kilowatt-hour is 3.6 megawatt-seconds or 1/1000th of a megawatt-hour. Or you could say 10 cents can power 500 homes or a 20-petaflop computer for about 0.6 seconds. Sounds cheap, but try paying for a whole year... :-D
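
To put a rough number on it, here is the same arithmetic carried out for a full year at the quoted 6 MW draw, assuming the ~10 cents/kWh rate mentioned above (an assumed average rate, not a figure from the article):

# Rough annual energy bill for a machine drawing a constant 6 MW,
# at an assumed average rate of $0.10 per kWh.
power_kw = 6 * 1000                     # 6 MW expressed in kilowatts
hours_per_year = 24 * 365               # 8,760 hours
energy_kwh = power_kw * hours_per_year  # 52,560,000 kWh per year
cost_usd = energy_kwh * 0.10
print(f"{energy_kwh:,} kWh/year -> ${cost_usd:,.0f}/year")
# -> 52,560,000 kWh/year -> $5,256,000/year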



