  LongeCity
              Advocacy & Research for Unlimited Lifespans





Geometric exponential growth?


21 replies to this topic

#1 brandonreinhart

  • Guest
  • 67 posts
  • 0

Posted 04 October 2006 - 07:03 PM


Assuming Moore's Law continues and transistor density keeps rising exponentially, and given that vendors are now moving to symmetric multiprocessing in consumer machines, does this mean we are seeing the start of a geometric (doubly exponential) curve in computing power? Reference this article from Slashdot:

"Yorkfield Extreme Edition based on the 45nm Penry core architecture will meet heads-on with AMD Altair based on the 65nm K8L core in Q3 2007 as reported by VR-Zone. Due to its advanced 45nm process technology, Yorkfield XE is able to pack a total of 12MB L2 cache (2 x 6MB L2) and still achieving a much smaller die size and higher clock speed of 3.43-3.73Ghz. Yorkfield will feature Penryn New Instructions (PNI) or more officially known as SSE4 with 50 more new instructions. Yorkfield XE will pair up nicely with the Bearlake-X chipset supporting DDR3 1333, PCI Express 2.0 and ICH9x coming in the Q3 '07 timeframe as well."


Quad-core CPUs in 2007, with even denser transistor packing on each core. Are the predictions of computing power per $1,000 too conservative? If multi-processor fabrication and integration techniques further depress the price of computing power, could we see human-brain-level power on the desktop before 2030?

#2 brandonreinhart

  • Topic Starter
  • Guest
  • 67 posts
  • 0

Posted 04 October 2006 - 07:09 PM

http://www.xbitlabs....0602090104.html

AMD Preps “4x4” Octa-Core for Enthusiasts.
The Triumph of the Skill: AMD 4x4

Category: CPU

by Anton Shilov

[ 06/02/2006 | 09:03 AM ]


Advanced Micro Devices said Thursday it would offer a platform that would allow users to run two processors with four processing engines in desktop computers. This is arguably the first time that computer enthusiasts can use two processors on an appropriate platform, and certainly the first time that performance-demanding users can enjoy the advantage of as many as eight processing cores.

“AMD announced plans for a new enthusiast platform codenamed ‘4x4’ that will extend AMD’s long-standing commitment to those consumers who demand the highest-performing PCs,” an AMD statement claimed.

The announcement means that AMD is planning to allow the use of enthusiast-class processors in 2-way configurations, something which both market leader Intel and AMD have avoided in the past. No Athlon or Pentium processors currently support technology that allows the chips to work in pairs.

AMD has already been rumored to be preparing a desktop processor that contains four processing engines, however, so far this has not been confirmed. Today the company says it will allow computer enthusiasts to use two dual-core processors to achieve ultimate performance in applications that take advantage of multi-core systems. Moreover, quad-core chips will also be able to work in pairs, it seems.

“The 4x4 platform features a four-core, multi-socket processor configuration uniquely possible via AMD’s direct connect architecture,” the statement reads.

Additionally, AMD said it would develop microprocessors featuring four processing engines for performance-minded enthusiasts.

“The 4x4 platform will be designed to be upgraded to eight total processor cores when AMD launches quad-core processors in 2007. Project 4x4 represents system-level enthusiast enhancements and is designed for ultimate multi-tasking performance across gaming, digital video, processor-intensive and heavily threaded applications,” the firm said.


Dual CPU Quad-Core! The OCTABRAIN!


#3 brandonreinhart

  • Topic Starter
  • Guest
  • 67 posts
  • 0

Posted 04 October 2006 - 07:24 PM

There has been a lot of discussion lately in the game industry about developing efficient code for SMP architectures of more than two cores. It's hard. Really, really hard. Imagine a typical metal-tooth jacket zipper. That's a dual core. Now imagine that zipper somehow had, like, 8 other toothy lines that all had to zip together in a particular weave... then you're getting closer to something like the PlayStation 3.

Code that is poorly written or doesn't handle the multiple cores properly can actually run slower than on a single core, because the code can stall waiting for other cores to finish up. Or it can put too much traffic on the bus, and the cores end up idle waiting for tasks and data to get to them. It's a hard enough problem that a human can't reasonably solve it by hand. Smarter compilers that understand SMP and can structure code to take advantage of hyper-pipelined and specialized cores are the solution...and writing those is hard too.
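Here's a minimal sketch of what that stalling can look like (my own toy C++ example, not code from any real engine): the "contended" version makes every core fight over one shared lock, so adding cores mostly adds waiting; the "partitioned" version gives each core its own chunk of the work and only combines results at the end.

#include <iostream>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

// Every element update grabs a shared lock, so the threads serialize and
// spend most of their time waiting -- often slower than one core alone.
long long contendedSum(const std::vector<int>& data, int threads) {
    long long total = 0;
    std::mutex m;
    std::vector<std::thread> pool;
    size_t chunk = data.size() / threads;
    for (int t = 0; t < threads; ++t) {
        pool.emplace_back([&, t] {
            size_t begin = t * chunk;
            size_t end = (t == threads - 1) ? data.size() : begin + chunk;
            for (size_t i = begin; i < end; ++i) {
                std::lock_guard<std::mutex> lock(m);
                total += data[i];
            }
        });
    }
    for (auto& th : pool) th.join();
    return total;
}

// Each thread works on private data and the results are merged once,
// so the cores actually run independently.
long long partitionedSum(const std::vector<int>& data, int threads) {
    std::vector<long long> partial(threads, 0);
    std::vector<std::thread> pool;
    size_t chunk = data.size() / threads;
    for (int t = 0; t < threads; ++t) {
        pool.emplace_back([&, t] {
            size_t begin = t * chunk;
            size_t end = (t == threads - 1) ? data.size() : begin + chunk;
            long long local = 0;
            for (size_t i = begin; i < end; ++i) local += data[i];
            partial[t] = local;
        });
    }
    for (auto& th : pool) th.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    std::vector<int> data(10000000, 1);
    std::cout << contendedSum(data, 4) << " " << partitionedSum(data, 4) << "\n";
}

Both functions return the same answer; the difference is only in how much time the cores spend blocked on each other.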

That's also why hearing about advances in other parts of computer architecture is exciting. Carbon nanotube RAM might make accessing stored data faster. Optical computing could speed up everything between the CPU and the RAM, meaning data would get to the CPU faster. All that means those ravenous CPUs can spend more time working and less time waiting on the slower parts of the machine.

Seems like we are able to crush hardware issues left and right. Software is the hairy part.

#4 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 05 October 2006 - 07:09 AM

Software is the hairy part. Yup

3rd parties have already done some limited benchmarking of the Core 2 Extreme "Quadro". And surprise, it's being released next month.

#5 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 05 October 2006 - 12:08 PM

It's a natural tendency to write algorithms the way you think: a single-threaded, step-by-step approach. That has to change to take advantage of the next generation of multi-core and distributed computing.

#6 knite

  • Guest
  • 296 posts
  • 0
  • Location:Los Angeles, California

Posted 05 October 2006 - 11:25 PM

^^^ which is funny because don't our brains actually think like the ultimate multicore?

#7 Centurion

  • Guest
  • 1,000 posts
  • 19
  • Location:Belfast, Northern Ireland

Posted 05 October 2006 - 11:33 PM

I suppose that depends on whether you're talking about the dynamics of the brain's physical operations (hardware, in a manner of speaking) or the thought processes themselves (software?).

#8 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 05 October 2006 - 11:38 PM

The irony is that games used to be fairly resistant to multi-threading, but today's games are wide open to it. The physics engines could be coded to take advantage of dozens if not hundreds of cores. The graphics engines could similarly parallelize, to some extent. Even AI could probably parallelize, depending on the engine. Intel recently announced their plans to eventually reach 80 cores (don't recall if it was serious or a joke, but let's assume they were serious), and I don't think current games would have a problem with that, if properly rewritten. Games from the 80's couldn't handle that many cores, because of the amount of sequential logic.

As we continue going forward, games will most likely become more and more parallelizable, so that by the time we see 1024-core machines (e.g., an eight-CPU PC with 128 cores per CPU), the games will be ready to handle it.

There are obviously some things that need to stay in sequence, but the number of things that need to happen in sequence is very small compared to the processing rate of a single core. As long as a single core can process the sequential events (triggers, really) in order, the whole system should be able to keep up.
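A rough sketch of that split, with invented names (assuming C++ threads, not taken from any actual engine): the per-entity physics fans out across however many cores are available, while the small ordered trigger queue stays on one thread.

#include <algorithm>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Entity { float pos, vel; };

// Each entity's update is independent, so any range of entities can be
// handed to any core.
void integrateRange(std::vector<Entity>& world, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        world[i].pos += world[i].vel * dt;
}

void simulateFrame(std::vector<Entity>& world, std::queue<std::string>& triggers, float dt) {
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = world.size() / cores + 1;
    std::vector<std::thread> workers;

    // Parallel part: split the entities evenly across the available cores.
    for (unsigned c = 0; c < cores; ++c) {
        size_t begin = c * chunk;
        size_t end = std::min(world.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&world, begin, end, dt] { integrateRange(world, begin, end, dt); });
    }
    for (auto& w : workers) w.join();

    // Sequential part: triggers must fire in order, but there are few of
    // them per frame, so a single core keeps up easily.
    while (!triggers.empty()) {
        // handleTrigger(triggers.front());  // placeholder for real game logic
        triggers.pop();
    }
}

int main() {
    std::vector<Entity> world(100000, Entity{0.0f, 1.0f});
    std::queue<std::string> triggers;
    triggers.push("door_open");
    simulateFrame(world, triggers, 0.016f);
}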

#9 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 05 October 2006 - 11:48 PM

By the way, getting back to the original question of this thread, I don't necessarily see the increase in cores as something that is pushing up the exponential rate, at least in the short term. The various chip manufacturers have hit something of a barrier in speed, at least with current chip manufacturing techniques. In 1999-2000, chip speeds went from about 500 MHz to about 1 GHz. 2 GHz took a while to follow, as did 3 and 4, but we've been stuck for a while. The efforts to increase the depth of CPU pipelines, and now to add cores, are efforts to continue increasing the overall throughput of a processor at the expected rate predicted by Moore's Law, even though clock speed has been stagnant for a couple years.

Once new chip manufacturing techniques allow us to start pushing clock speeds back up, and as software (even OS software) is redesigned to take advantage of multiple cores, I expect that the exponential growth of Moore's Law will continue, and may even slightly accelerate. But I don't foresee a huge acceleration.

Just my two cents. I haven't paid close attention to the PC/CPU industry since 2000, so I'm mostly ignorant of the situation. Perhaps we are already exceeding Moore's law. As far as I can tell, we're more or less still on schedule.

Edit: By the way, there are various versions of Moore's Law. Some deal with transistor density, some deal with transistor count per dollar, some deal with clock speed, some deal with instructions per unit time, some deal with instructions per dollar, etc., etc. I'm mainly focussed on instructions per unit time and instructions per dollar, when I think of Moore's Law. But I don't think this is the historical version of the "law".

#10 garethnelsonuk

  • Guest
  • 355 posts
  • 0

Posted 06 October 2006 - 12:10 PM

Regarding software vs hardware:

It is entirely possible that massive advances could come simply from more efficient programming. The problem, though, is that human programmers tend to need a lot of abstraction just to understand what is going on inside the computer, and fully optimised code is very difficult to read (or to write by hand, for that matter). Some of the greatest hackers in the world, well known for writing fast and efficient code, still can't produce even a simple chatbot (let alone a superhuman AI) in fully optimised assembly. Perhaps the focus does need to shift from better hardware to better software and better programmers. Optimising compilers and other tools can help, but only so much - they're always inferior to manual optimisation.

#11 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 06 October 2006 - 12:52 PM

^^^ which is funny because don't our brains actually think like the ultimate multicore?


While, yes, our minds as a whole are doing many tasks in parallel to maintain biological function, active consciousness (i.e. focus) is fairly single-threaded. Try watching a few television shows at a time, or reading a book while listening to an audio book. You can context-switch between the two fairly quickly, and even train your mind to do so rapidly enough that you appear to absorb both in a multi-threaded fashion, but you are essentially still context switching. Try adding a 3rd, 4th, 5th stream. It doesn't scale.

Most engineers will start with a complex task, model it, and then break it up into subroutines which execute in a linear fashion - not unlike a cook following a recipe or a farmer preparing and planting crops. The challenge with making software multi-threaded is that sometimes you simply can't parallelize many of the tasks, the same way a farmer can't prepare the soil, plant the seeds and apply the fertilizer simultaneously - not without stepping back, abstracting the entire process and analyzing the problem from many angles, looking for creative efficiencies. After doing such an exercise the farmer might come up with a tractor that makes one pass across acres of farm, rototilling, seeding and fertilizing in one shot. The same has to be done with software. Simple routines that solve simple problems can be hacked out in no time. Multithreaded applications that tackle complex problems can take months if not years to model, abstract and design.

#12 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 October 2006 - 05:39 PM

Optimising compilers and other tools can help but only so much - they're always inferior to manual optimisation.

Actually, I doubt we need to worry about hand-optimizing low-level code. If you take the best compiler-optimized machine code and have a human optimize it further, you'll get anywhere from 5% to 100% better performance (sometimes 0%). So at most, you'll double the program's speed.

20 years from now, when computers are 100 times faster (give or take), and compilers are even more efficient than they are now, a human might achieve anywhere from a 1% to 20% increase over the compiler's code. That's 1%-20% compared to the 100-fold (10,000%) increase due to hardware.

Now, a poor programmer might write something that can be optimized within the higher-level language, but I don't think we'll need to learn assembly language to actually do it. I think assembly language is useful to learn, just to know what's going on, but I don't think we'll be using it much in the future. So long as we can write code to take advantage of as many cores (and GPUs!) as possible, the compilers can do the rest. Better high-level code design, not low-level optimization, is where we need to focus our software efforts.

#13 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 06 October 2006 - 05:45 PM

Most engineers will start with a complex task, model it, and then break it up into subroutines which execute in a linear fashion - not unlike a cook following a recipe or a farmer preparing and planting crops. The challenge with making software multi-threaded is that sometimes you simply can't parallelize many of the tasks, the same way a farmer can't prepare the soil, plant the seeds and apply the fertilizer simultaneously - not without stepping back, abstracting the entire process and analyzing the problem from many angles, looking for creative efficiencies. After doing such an exercise the farmer might come up with a tractor that makes one pass across acres of farm, rototilling, seeding and fertilizing in one shot. The same has to be done with software. Simple routines that solve simple problems can be hacked out in no time. Multithreaded applications that tackle complex problems can take months if not years to model, abstract and design.

Yes, you will always have tasks that must be done in order. But within tasks, you will usually be able to divide up work.

In the farmer's example, let's say you have to prepare the soil, then plant the seeds, then apply fertilizer. Well, you could have 10 farmers prepare the soil (parallel!), then have the same 10 farmers plant the seeds, then the same 10 farmers apply fertilizer. In a large field you'll have multiple rows of crops, so the individual tasks, which must be performed sequentially, can each be broken into parallelizable chunks of work.
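In code, the same idea looks something like this (a toy C++ sketch with invented names, nothing canonical): each stage is still a sequential barrier, but within a stage the rows are dealt out to a pool of worker threads.

#include <functional>
#include <thread>
#include <vector>

// Run `job` on every row, dealing the rows out to `workers` threads.
void forEachRow(int rows, int workers, const std::function<void(int)>& job) {
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w) {
        pool.emplace_back([=, &job] {
            for (int r = w; r < rows; r += workers)  // interleave rows across workers
                job(r);
        });
    }
    for (auto& t : pool) t.join();  // barrier: the stage ends when every row is done
}

int main() {
    const int rows = 1000, farmers = 10;
    // The three stages still happen in order, but each stage is parallel inside.
    forEachRow(rows, farmers, [](int r) { /* prepare soil on row r */ });
    forEachRow(rows, farmers, [](int r) { /* plant seeds on row r  */ });
    forEachRow(rows, farmers, [](int r) { /* fertilize row r       */ });
}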

In video games, you have to calculate the physics, update the universe, then draw the graphics.

But the physics engine can be made parallelizable, as can the drawing of the graphics.

Most of the programs that crunch large amounts of data do so on highly parallelizable tasks. There are rare exceptions, but for the most part, many forms of software can be made parallelizable with the right approach. It just takes more forethought. But with multiple cores becoming the norm, this type of forethought will have to become the norm. And once you've done it a few times, it becomes more and more natural to figure out how a seemingly serial set of tasks can be broken down into independent, parallelizable threads, with a few events being used to synchronize the parts that have to be in order.

#14 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 07 October 2006 - 01:08 AM

I agree, most software will need to be redesigned for distributed computing models, but there are still many problems that are NP-complete and can only be tackled with heuristics such as genetic algorithms. Unfortunately, many of the important biological challenges fall into this category, such as large-scale structure prediction.
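For readers unfamiliar with the approach, here is a very small genetic-algorithm sketch in C++ (purely illustrative and invented here; the fitness() function is a stand-in for the expensive scoring or energy function a real structure-prediction code would use, which is where all the hard work actually lives):

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>

using Genome = std::vector<int>;   // toy encoding of a candidate solution

double fitness(const Genome& g) {
    // Placeholder objective: count of set bits. Swap in a real scoring function.
    return std::count(g.begin(), g.end(), 1);
}

Genome crossover(const Genome& a, const Genome& b) {
    size_t cut = std::rand() % a.size();
    Genome child(a.begin(), a.begin() + cut);
    child.insert(child.end(), b.begin() + cut, b.end());
    if (std::rand() % 10 == 0)                         // occasional mutation
        child[std::rand() % child.size()] ^= 1;
    return child;
}

int main() {
    const size_t popSize = 50, genomeLen = 64, generations = 200;
    std::vector<Genome> pop(popSize, Genome(genomeLen));
    for (auto& g : pop) for (auto& bit : g) bit = std::rand() % 2;

    for (size_t gen = 0; gen < generations; ++gen) {
        // Keep the fitter half, refill the rest by crossing over survivors.
        std::sort(pop.begin(), pop.end(),
                  [](const Genome& x, const Genome& y) { return fitness(x) > fitness(y); });
        for (size_t i = popSize / 2; i < popSize; ++i)
            pop[i] = crossover(pop[std::rand() % (popSize / 2)],
                               pop[std::rand() % (popSize / 2)]);
    }
    std::cout << "best fitness: " << fitness(pop.front()) << "\n";
}

The search gives no optimality guarantee; it just tends to find good candidates, which is exactly why it gets used on NP-hard problems where exact methods don't scale.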

#15 Brian

  • Guest
  • 29 posts
  • 1
  • Location:Earth

Posted 10 October 2006 - 05:46 AM

20 years from now, when computers are 100 times faster


Just a nitpick Jay, but you're way too low on your estimate there. For an example go back and look 20 years ago at what we had:

http://en.wikipedia..../Motorola_68020

That chip was a sped up and tweaked version of:

http://en.wikipedia..../Motorola_68000

Which had around 70k transistors.

Ok, now here we are roughly 20 years later:

http://www.anandtech...doc.aspx?i=2795

Scroll down to the 2nd table, and you can see the new Intel Core 2 Duo has 291 million transistors, actually less than the older Pentium D you can see there. But Intel is going to be releasing a "quad core" version of this before the end of this year, so let's call it around 600 million transistors on the best available consumer chip currently.

Doing the math, that's around a factor of 10,000x more transistors over the past 20 years, plus of course this new chip operates in the gigahertz range, more than 100 times the clock rate of an old 68020. If you multiply the clock-rate increase by the transistor increase, that's around a factor of 1 million times "better" over the past 20 years.

Less than 10 years from now Intel may have 1280-core chips, or they'll come up with something even better to use up all those upcoming transistors on. But anyway, "100 times better" in 20 years is an unrealistically wimpy estimate.
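Just to make the back-of-the-envelope arithmetic explicit (using the rough figures quoted above plus an assumed ~16 MHz 68020 and a ~2.93 GHz Core 2; none of these are precise specs):

#include <iostream>

int main() {
    double transistors1986 = 70e3;    // ~68000-class chip, per the post
    double transistors2006 = 600e6;   // quad-core Core 2 estimate from the post
    double clock1986 = 16e6;          // assumed 16 MHz 68020
    double clock2006 = 2.93e9;        // assumed 2.93 GHz Core 2 Extreme

    double transistorFactor = transistors2006 / transistors1986;  // ~8,600x
    double clockFactor = clock2006 / clock1986;                   // ~180x
    std::cout << "combined factor: " << transistorFactor * clockFactor << "\n";
    // Prints roughly 1.6e+06 -- in line with the "around a factor of 1 million" above.
}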

#16 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 10 October 2006 - 06:27 AM

Just a nitpick, Brian, but transistor flip rate (transistor count times clock rate) per chip isn't a very useful indicator of progress. For better or worse, a transistor today doesn't do the same work as a transistor of yesteryear, because of the huge changes in CPU structure. First of all, a large number of those transistors are cache, so they don't really count in the same way. Second, you can't compare transistor counts directly if what really matters is instruction and data throughput. Today's CPUs process about 5-10 times as many instructions per clock cycle (despite a 100- to 1000-fold increase in transistors), and a heck of a lot more in the case of multiplications and floating-point operations. But the clock speed itself is perhaps 100 times faster. So at best, we're looking at a 500-1000 times increase in instruction/data throughput. Special-purpose processors can improve on that somewhat: witness the GPU. But in general, a transistor did a lot more work back then.

Multi-core and Multi-CPU designs can increase throughput further, but that's always been the case: that's how supercomputers are made.

Regarding my original point, what matters is the approximate order of magnitude. There's really very little difference between saying 100 times better in 20 years and 1000 times better in 20 years: it's 2 versus 3 orders of magnitude, an error of 50% in the exponent. But we won't see a 1,000,000-times improvement per dollar in that timeframe.

#17 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 10 October 2006 - 06:40 AM

At any rate, I'm not sure where CPU technology is going in the short term. So far, multi-core philosophy seems to be the industry's way of overcoming a temporary limit they've hit in increasing speed and/or density. If core count and clock speed could both continue to increase exponentially, then we wouldn't see anything faster than exponential growth (it wouldn't be doubly exponential). It would just be Moore's Law with a shorter cycle, maybe 12-18 months instead of 18-24. With an 18-month doubling cycle, instead of the typical 24, we could see about a 100-fold increase in processing power in a decade, pushing out to a 10,000-fold increase in 20 years. I don't see that rate being sustained the entire time, but perhaps at times it will exceed that rate.

By the way, when I made my original 100-fold estimate, I was thinking of a 2-year doubling period, but when I did the math in my head, I used 3 years for some odd reason. Short circuit somewhere, I guess. A three-year doubling in price-performance would mean about ten years for a ten-fold increase, and 20 years for a 100-fold increase. A two-year doubling period would allow a 1000-fold increase in 20 years.

Anyway, the point being, if I had done the math the way I'd meant to (in my head, anyway), I would have said "20 years from now, when computers are 1000 times faster (give or take)"... And like I said earlier, I was off by 50%. My bad.
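For anyone checking that arithmetic, the whole spread comes straight out of the doubling-period formula (a trivial sketch, nothing vendor-specific):

#include <cmath>
#include <iostream>

int main() {
    // growth factor over 20 years = 2^(20 / doubling period in years)
    const double periods[] = {1.5, 2.0, 3.0};
    for (double period : periods) {
        std::cout << "doubling every " << period << " years -> "
                  << std::pow(2.0, 20.0 / period) << "x in 20 years\n";
    }
    // Roughly 10,000x for 18 months, ~1,000x for 2 years, ~100x for 3 years.
}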

#18 arrogantatheist

  • Guest
  • 56 posts
  • 1

Posted 10 October 2006 - 08:57 AM

jaydfox, that is a brilliant analogy for turning what seems to be an ordered task into one that can take advantage of parallel processing.

You guys are right that it's hard to estimate performance improvements now, because performance depends on what application you are running. The supercomputer rankings seem decent, since they look at GFLOPS. I read that Intel recently quoted one of its chips in GFLOPS too; I think they say the Core 2 Duo is around 25 GFLOPS.

#19 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 10 October 2006 - 01:13 PM

With greater access to distributed and supercomputing resources, application design and development, and the network and hardware architectures behind them, are becoming the bottleneck. I think we need to forget about Moore's Law and GFLOPS as metrics, the same way we don't worry about disk space, and pay more attention to metrics that really matter, e.g. the number of atoms simulated per application run in x hours. Graphing that statistic over time would be more telling as to the benefits of computational growth. I could cash in my 401k and build a fairly robust supercomputer. But what the hell would I run on it?

#20 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 17 January 2007 - 11:10 PM

I was going to start a new thread but this will do. I was wondering
what is being considered to replace silicon in about 10 years. (The
silicon atomic barrier concern is what I'm thinking about. Sorry if
this was recently addressed here.)

-Stephen

#21 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 18 January 2007 - 01:08 AM

I was going to start a new thread but this will do. I was wondering
what is being considered to replace silicon in about 10 years. (The
silicon atomic barrier concern is what I'm thinking about. Sorry if
this was recently addressed here.)

-Stephen

Molecular electronics.


#22 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 18 January 2007 - 01:40 AM

I was going to start a new thread but this will do. I was wondering
what is being considered to replace silicon in about 10 years. (The
silicon atomic barrier concern is what I'm thinking about. Sorry if
this was recently addressed here.)

-Stephen

Molecular electronics.



I see it here:

http://en.wikipedia....lar_electronics

It seems that they are still uncertain what is going to happen here, correct?
Many technologies seem promising.

-Stephen



