NVIDIA VP Declares CPU Scaling, Moore's Law, Dead

Bill Dally, chief scientist at NVIDIA, has written an article at Forbes alleging that traditional CPU scaling and Moore's Law are dead, and that parallel computing is the only way to maintain historic performance scaling. With six-core processors now available for $300, Dally's remarks are certainly timely, but his conclusions are a bit premature.

Will The Real Moore's Law Please Stand Up And/Or Die Already?


Moore's original representation of his now-famous law.

Dally claims Moore's Law is dead because "CPU performance no longer doubles every 18 months." This is little more than a straw man; Moore's Law states that the number of transistors that can be built on a chip for minimal cost doubles roughly every two years. The assumption that there's a 1:1 correlation between additional transistors and additional performance neglects significant sections of Moore's work.

Dally's larger point is that we've reached the effective limit of serial computing and must switch to parallel computing, aka GPU computing, to return to historical performance. He writes: "Every three years we can increase the number of transistors (and cores) by a factor of four. By running each core slightly slower, and hence more efficiently, we can more than triple performance at the same total power. This approach returns us to near historical scaling of computing performance."
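His arithmetic is worth sanity-checking. Under the usual simplified model of dynamic power, where per-core power scales roughly with the cube of clock speed once voltage is lowered along with frequency, quadrupling the core count at a fixed power budget buys closer to 2.5x throughput than 3x; the gap only closes if the simpler cores are also credited with extra per-transistor efficiency. The sketch below is our own back-of-the-envelope model, not Dally's math, and both the power exponent and the assumption of perfectly parallel work are ours.

```python
# Back-of-the-envelope check of Dally's scaling claim, under the common
# simplification that dynamic power ~ C * V^2 * f and that voltage scales
# roughly linearly with frequency, giving per-core power ~ f^3.
# The exponents and perfect-parallelism assumption are illustrative only.

def throughput_at_fixed_power(core_multiplier, power_exponent=3.0):
    """Relative throughput when core count grows `core_multiplier`-fold
    while total chip power is held constant.

    Each core's power budget shrinks by `core_multiplier`, so its clock
    falls to (1/core_multiplier)**(1/power_exponent) of the original.
    Throughput is assumed to scale with cores * clock (perfectly
    parallel work).
    """
    per_core_clock = (1.0 / core_multiplier) ** (1.0 / power_exponent)
    return core_multiplier * per_core_clock

if __name__ == "__main__":
    for exponent in (2.0, 3.0):
        gain = throughput_at_fixed_power(4, power_exponent=exponent)
        print(f"4x cores, power ~ f^{exponent:.0f}: {gain:.2f}x throughput")
    # The textbook cubic relationship lands near 2.5x, so the "more than
    # triple" figure hinges on assumptions this simple model leaves out.
```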

Dally characterizes modern multi-core CPU designs as akin "to putting wings on a train," and claims conventional CPUs consume too much power per instruction executed to continue scaling at historical levels. Switching to parallel computing will be difficult thanks to entrenched standard practices, the sheer number of serial programs that need to be converted, and a scarcity of programmers trained in parallel programming techniques, but in his view, it's the only solution. We're not so optimistic.

The Myth of Hard Work

Dally's explanation of the current state of parallel programming presupposes that the only thing standing between us and vast multicore processor arrays is hard work and funding. This is oversimplified almost to the point of being disingenuous. First, not all programs can be parallelized, and second, parallelism itself inevitably hits a point of diminishing marginal return. GPU computing, with its vast banks of processors, excels precisely in those rare cases where programs can scale almost linearly to take advantage of more processing cores.
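Amdahl's law is the usual way to put a number on that diminishing return: whatever fraction of a program remains stubbornly serial caps the speedup no matter how many cores are thrown at it. The quick sketch below uses made-up serial fractions purely for illustration.

```python
# Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores).
# The serial fractions below are illustrative, not measured values.

def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup for a program whose non-parallelizable
    portion takes `serial_fraction` of the single-core run time."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    for serial in (0.05, 0.25, 0.50):
        for cores in (4, 16, 256):
            print(f"{serial:.0%} serial, {cores:3d} cores: "
                  f"{amdahl_speedup(serial, cores):6.2f}x")
    # Even at 5% serial code the ceiling is 20x regardless of core count;
    # at 50% serial, 256 cores barely manage 2x.
```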

Taking advantage of parallel cores in the manner Dally suggests in consumer products requires that we literally reinvent the wheel. If the goal is to use small, simple processors, code compilers would have to be designed from the ground up to handle the complexity of spinning threads to at least dozens of tiny cores. We can't accurately predict what performance might look like in these sorts of systems because we haven't even invented the tools we'd need to build them.
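To get a feel for the scale of the problem, here is roughly what "spinning threads" across many cores looks like when the programmer does it by hand today; the workload, chunking scheme, and worker count are our own toy example, not anything Dally proposes.

```python
# Toy example of manually fanning independent work out across many cores
# with a standard process pool; everything here is illustrative only.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Independent chunk of work: sum of squares over a half-open range."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 32              # pretend we have 32 tiny cores
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)          # last chunk absorbs any remainder

    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

Even this embarrassingly parallel loop forces the programmer to choose the decomposition and the worker count by hand; doing the same automatically for arbitrary serial code is precisely the tooling that doesn't yet exist.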

AMD and Intel seem to collectively have a better idea. Technologies like Turbo Boost, which increases the speed of one or two cores while turning the others off, provide performance the consumer can take advantage of immediately. It's currently thought that Moore's Law will hit an immutable barrier sometime around 2021, but it still has quite a ways to go. No one is denying the tremendous performance of GPU computing in the right areas, but Dally's report on the death of Moore's Law is greatly exaggerated.
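The appeal of the Turbo Boost approach is easy to see with the same sort of toy model: once a workload has a meaningful serial portion, a couple of boosted cores can finish ahead of a wider array of slower ones. The clock multiplier and serial fraction below are our own illustrative assumptions, not AMD or Intel specifications.

```python
# Toy comparison of "few fast cores" vs. "many slow cores" for a workload
# that is only partly parallel. The 1.3x boost multiplier and 60% serial
# fraction are illustrative assumptions, not vendor figures.

def run_time(serial_fraction, cores, clock):
    """Relative run time: the serial part runs on one core, the parallel
    part is spread across `cores` cores, and everything is scaled by the
    relative `clock` speed."""
    parallel_fraction = 1.0 - serial_fraction
    return (serial_fraction + parallel_fraction / cores) / clock

if __name__ == "__main__":
    serial = 0.60                                   # 60% of the work is serial
    boosted = run_time(serial, cores=2, clock=1.3)  # two cores, boosted clock
    wide    = run_time(serial, cores=6, clock=1.0)  # six cores, base clock
    print(f"2 boosted cores: {boosted:.2f}  |  6 base-clock cores: {wide:.2f}")
    # For this mostly-serial workload the boosted pair finishes sooner,
    # which is why turbo-style features pay off for typical desktop code.
```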
Comments

I hope I live to see Moore's Law die, and to see the whole reinventing-the-wheel thing. It should be interesting to see everything we know change over the next decade.


I've always thought of technology as being somewhat exponential. But honestly, if something so incredibly groundbreaking occurs that it overturns Moore's Law, then I'm fine with that too =D


Nvidia has been saying the same thing for the last 5 years, haven't they?


Yes they have, Bob, and they've probably been thinking it for considerably longer than that. I think they spout it when they're either getting no attention or have run into a brick wall of sorts, and I would say that's kind of where they're at right now. Their main cash cow seems to be giving no milk. Yes, their mini processor market is doing fine, and their old GPUs are probably still doing OK too. But their new ones are energy hogs that gain a minimal amount over the competitor's GPU (the only one really left being ATI) while using more energy, creating more heat, and being arguably barely any better than the competition's last-generation technology.

So it is time to start throwing it out there again, I guess. I am not in any way an Nvidia hater, but I do think they've always been overpriced. I have used their equipment in the past on several occasions. Recently (the last three generations) I have seen no reason to, as ATI's equipment has done as well as or better than theirs while using fewer resources and less energy. Heck, the 5K series are energy misers that run cool and perform flawlessly from what I've seen.


I feel a little like I'm just along for the ride. I like the way things are progressing though, as my two current systems play all of my games just as well as I could want them to. (games being the most taxing thing that I do with them so far)

So I don't really care how they improve computers, as long as they do, and as long as they remain an affordable luxury.


Moore's Law, Murphy's Law, etc.; I've always preferred Cole's Law. (Finely chopped cabbage in a creamy dressing.)

Dally's comments are self-serving to say the least. He merely left it unsaid (or said, but edited out in the HH story) that powerful subsidiary processors, like ATi's and nVidia's graphics processors, bear the brunt of modern computing-- and that we're all essentially running parallel processors already if we have a graphics card. And especially if we have more than one.

There is, indeed, a limit to Moore's Law: once you have to shrink transistors down so small that they're single molecules, you can't go any farther, and there's a more conservative limit set by heat dissipation. The thing is, I believe it to be irrelevant for the consumer, gamer, or even enthusiast. While we do try to get every last bit of performance out of a computer, the great majority of the time it's sitting idle; that 6-core Phenom is doing as much work as a single-core Pentium. Even contemporary games don't exercise CPUs to their fullest; they're mostly GPU-intensive rendering of relatively simple algorithms. (But since programmers have lost the cleverness of the early coders, who would brag of saving twelve bytes or ten microseconds, those simple algorithms run slowly and often clumsily.)


"Dally suggests in consumer products requires that we literally reinvent the wheel"

Surely he doesn't suggest that we *literally* reinvent the wheel...


KOlhovsky:

"Dally suggests in consumer products requires that we literally reinvent the wheel"

Surely he doesn't suggest that we *literally* reinvent the wheel...

Well, it's a figure of speech, isn't it?

Anyways, Intel themselves have made consumer processors as small as 22nm. Within the next 10 years or so, Moore's Law will have been broken as we reach 0nm processors.

I don't entirely agree with him yet. I mean, sure, they've made transistors smaller, but can they make it work for the consumer and industrial markets? I mean, if they're able to sell 0nm CPUs to those markets, then they've truly broken Moore's Law.


While it's interesting to surmise that Moore's Law is dead or soon will be, we never really know what the future holds. Remember when the Commodore 64 was out: "Who needs more than [64K of memory]?" Then the 8088 was released, and we had 5.25" disk drives, 10-megabyte hard drives, and systems capable of addressing 3 gigs of RAM (but limited to much less at the time). No one then could have imagined that a number of years down the road, we would have HP calculators capable of doing the calculations required to run the Space Shuttle, or laptops with the computing power of 10 ENIACs.

 

My take is, we never know what's around the corner. Until I see the general trend line on the Moore's Law graph actually drop, or more likely plummet, I won't believe it's dead.

 

Food for thought though: do we allow photonic or quantum computing an "in" to Moore's Law?
