
NVIDIA GeForce GTX 480: GF100 Has Landed


Performance Summary: NVIDIA has created a powerful GPU in the GF100, as our performance data for the new GeForce GTX 480 has shown. Generally speaking, versus the single-GPU powered Radeon HD 5870, the GeForce GTX 480 is on average roughly 5% - 10% faster, give or take a few percentage points depending on the test, which makes it the fastest single-GPU card on the market in most, though not all, of our tests. The GeForce GTX 480 held its largest leads in the DX11-based Unigine Heaven benchmark and in Tom Clancy's H.A.W.X. Unfortunately for NVIDIA, however, the Radeon HD 5870 is cheaper to produce, consumes less power, is quieter, and it costs about 20% less ($399 vs. $499). And AMD also has the dual-GPU powered Radeon HD 5970 in its arsenal, which remains the fastest single graphics card available for most current game titles.

The GeForce GTX 480's performance lead over the Radeon HD 5870 increases when the cards are paired up in a dual-GPU SLI configuration. With their current drivers, NVIDIA-powered cards offered better performance scaling in multi-GPU configurations, which resulted in larger performance gains for the GeForce GTX 480. With that in mind, however, a dual-card Radeon HD 5970 quad-CrossFireX configuration was still the fastest setup overall.

Depending on your perspective, today will either be considered a great victory or perhaps a crushing defeat for NVIDIA. On one hand, the company has produced what is undoubtedly the most powerful and complex graphics processor in the world. The 3-billion transistor GF100 is a very capable chip, both in terms of gaming and compute performance, and NVIDIA owns the single-GPU performance crown once again. The GeForce GTX 480 is faster than the Radeon HD 5870 overall, and its forward-thinking design lays the foundation for future generations of NVIDIA processors, much as the G80 did for the previous generation. On the other hand, the GeForce GTX 480 is late to market, the GPU consumes a lot of power and hence generates a lot of heat, even with "only" 480 of its 512 shader cores enabled, and its performance lead doesn't exactly jibe with its roughly 25% price premium.


The NVIDIA GeForce GTX 480 Reference Card

Although the company is announcing the cards tonight at the PAX event taking place in Boston, MA, widespread e-tail availability of both GeForce GTX 480 and GTX 470 cards, at prices of $499 and $349 respectively, won't happen until the week of April 12, 2010. Questions linger as to how many GF100-based graphics cards will ultimately hit store shelves, but NVIDIA tells us plenty are on the way. NVIDIA claims, "We are building 10s of thousands of units for initial availability, and this will ensure our partners have ample volume for what is the most anticipated GPU launch ever." If you're an NVIDIA fan and have been waiting for their next-gen GPU, your wait is almost over.

Having spent some quality time with the GeForce GTX 480, we can't help but expect that the card, as we have shown it to you here today, will not be NVIDIA's flagship for an extended period of time. The true potential of the Fermi architecture hasn't been fully realized just yet. We suspect a re-worked GF100 is on tap that will have all 512 of its cores enabled and will hopefully hit higher clocks with lower power consumption. We are only speculating at this point, of course, but we can't help but feel that the GeForce GTX 480 isn't the card NVIDIA really wanted to launch to take on AMD's finest, and that its successor is priority #1 within the company. The GeForce GTX 480 is an extremely potent product; it's just not the game changer some may have expected.

Pros:
  • Relatively Fast
  • DirectX 11 Support
  • PhysX + CUDA Support
  • Great SLI Scaling

Cons:
  • High Power Consumption
  • Hot and Can Be Loud
  • Late To Market
  • Only Slightly Faster Than 5870, For Much More Money

 


And here I was, just about to complain that it's Friday and there's no news on the GTX 480 :-P


Shouldn't "High Power Consumption" be labeled under "Hot"? ;)

 

*ducks*


I upgraded my gateway m685e workstation from vista to windows 7. Later, I downgraded the video card drivers downloaded from windows update to the vista compatible drivers on the nvidia website. Rumor is the vista compatible drivers were not powering the fan after a reboot resulting in many dead gpus. I am blind and have already lost an IT job for lack of a working laptop.


canyon1985:

I upgraded my gateway m685e workstation from vista to windows 7. Later, I downgraded the video card drivers downloaded from windows update to the vista compatible drivers on the nvidia website. Rumor is the vista compatible drivers were not powering the fan after a reboot resulting in many dead gpus. I am blind and have already lost an IT job for lack of a working laptop.

I'm not sure if there's an off-topic question in there or what, but if you updated/downgraded your drivers via Windows Update, then you never got the bad (196.75) drivers.

If what I read is true, Microsoft never had them available for download via Windows Update; you would have had to manually download them from nVidia to get them.


Meh, I can't believe that after all the hype it's only marginally faster than an ATI card; it should have been $50 less for that kind of performance. I'm thinking this is part of a marketing scheme by NVIDIA to get people to buy 2 cards and run them in SLI, so buyers get world-class performance and NVIDIA gets more money. Then again, I might be crazy.

I'm also surprised by the fact that it makes more heat and consumes more power. I wonder what they were thinking when they made the card; at the least they could have done an Intel thing and made it more efficient so that it'd leapfrog over ATI, just like the Core 2 Duo did with AMD's processors back in 2006.

And a card with all 512 cores unlocked... I guess it'll be 10-20 fps faster than the ATI cards out right now, but damn, are they going to need a cooler that can cool this card effectively.

I'm going to wait for the GeForce GTX 470 review before I make up my mind, but I think NVIDIA has a GeForce FX on its hands.


I'm pretty surprised by the results... IMHO the GTX 480 is meh, and I'm gonna buy ATI this time.


I think it is lacking in some areas due to drivers. It's a brand new piece of hardware.

Comparing it to the previous gen of Nvidia cards, it's hugely impressive. A single-chip GTX 480 is faster than a dual-chip GTX 295. On top of that, in some benchmarks it can nearly hang with the 5970. When paired in SLI it really shines.

In single-card mode in the HAWX benchmark it was 20% faster than the 5870, and in dual-card mode it was 40-60% faster than a pair of 5870s. On top of that, a pair of 5970s is only 20% faster while having twice as many chips to work with.

More evidence of the potential for driver optimization becomes apparent here as well, since scaling from single card to dual picked up 90% at one resolution, but only 70% at the other. With some optimization Nvidia could squeeze out a few more frames.
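(A quick sketch of the scaling arithmetic above, assuming hypothetical frame rates purely for illustration; these are not figures from the review.)

def sli_scaling_percent(single_fps, dual_fps):
    # Percentage gain from adding a second card; 100% would be perfect scaling.
    return (dual_fps / single_fps - 1.0) * 100.0

# Made-up frame rates, for illustration only:
print(round(sli_scaling_percent(60.0, 114.0), 1))  # 90.0 -> ~90% scaling at one resolution
print(round(sli_scaling_percent(80.0, 136.0), 1))  # 70.0 -> ~70% scaling at the other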

When a dual chip Nvidia card comes out.... it'll be a beast.

Also, pay close attention to the Unigine Heaven benches, as they can be a bit deceiving. While it looks like the 480 is only slightly faster than the 5870s, it is in actuality nearly twice as fast. On top of which, it was faster than the 5970.

I think DX11 is where it's at for this card. This thing will probably, kindly, hand the ATI card its a$$ in DX11 games with tessellation. And as we all know, DX11 is the future. So I don't mind if it's not the best at DX10, as long as it's great at playing the games of the future :-)


Acarzt,

I'll hit the rest of this in the morning, but I'm going to respond to one thing you wrote now. It's physically impossible for NVIDIA to build a dual-GPU GTX 480 on a 40nm process. Board power for the single-GPU flavor of the card is 250W. The maximum amount of power you can feed a PCI-Express card is 300W. NVIDIA isn't going to break the PCI-Express specification (they'd get no support from motherboard vendors, all of whom would have to specifically vet their motherboards against a non-standard component).

It's possible that these first parts are terrible examples of 40nm, so yes, NVIDIA could re-spin the silicon. Realistically, that won't happen for at least six months; NVIDIA has already re-spun it (and TSMC's yields aren't exactly fabulous even now). Farther out, Fermi will probably go through a die shrink at some point in 2011 once 28nm tech is ramping, but that's a year away.

Sure, NV could opt to build a svelte mid-range dual-GPU, but that wouldn't be a dual GTX 480. It's very likely that we'll see NVIDIA roll a full 512-core Fermi and I think it's a good bet that they'll build a Fermi 2.0 on an improved 40nm die, but you can only rearchitect so much. If we look at the GTX 285 / GTX 295 as indicative, NV would need to pull the power consumption of a GTX 480 down by 50-60W to have enough room in the PCI-E spec to build a dualie. The only time you see leaps that massive on the same process, maybe, is if you compare A0 hardware to the mature, high-yield silicon you're building 1-2 years later.
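(To put rough numbers on the power-budget argument above: a back-of-the-envelope sketch in Python. The 75W slot, 75W 6-pin and 150W 8-pin limits come from the PCI-Express specification; the 250W figure is the GTX 480 board power quoted in the comment.)

# Maximum power a PCI-Express card can draw under the spec:
SLOT_W      = 75    # delivered through the PCI-E slot itself
SIX_PIN_W   = 75    # one 6-pin auxiliary connector
EIGHT_PIN_W = 150   # one 8-pin auxiliary connector
spec_ceiling_w = SLOT_W + SIX_PIN_W + EIGHT_PIN_W   # 300W total

GTX_480_BOARD_POWER_W = 250                         # single-GPU board power
naive_dual_gpu_w = 2 * GTX_480_BOARD_POWER_W        # 500W if you simply doubled it

print(spec_ceiling_w, naive_dual_gpu_w)  # 300 500 -> roughly 200W over the ceiling,
                                         # which is the gap being described above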


Joel, that's a good point I did not even think of. Beyond the spec problem you describe, a pair of these cards in SLI would already be pulling around 500 watts, so forget Tri-SLI. It would also totally flood the PCI-E bus on anything but an LGA 1366 board, and maybe an AM3 board; I don't know if AM3 even has enough bandwidth either, not to mention it would be $1,500 for the video cards alone with Tri-SLI. Then you get to the PSU: in standard SLI the cards would be pulling 500 watts, and in Tri-SLI they'd be pulling 750 watts, which you could get from nothing less than a server power block that would not fit in a standard case anyway (and would probably cost at least $500 more). On top of that, I doubt the PCI-E bus would even have the bandwidth available to actually run it, whereas with two 5970s it is still fine (largely because of the reduced power draw and, of course, heat).


Well, I never expected a dual-GPU card based on the GTX 480. That would be pretty awesome, but ridiculous to expect.

Even the GTX 295 isn't based off the 280; it's essentially two 260s on one board. And with that, it's a good bit faster than any other single-card offering of its generation from Nvidia.

I was expecting a Fermi equivalent of the 260 to be part of the dual-GPU card. I would expect a 40-50% increase over the current single-GPU 480. I don't think that would be unreasonable to expect :-)

I also don't expect this any time soon, lol. The GTX 295 didn't come out until some time after the GTX 280 launched. If I'm remembering the timeline correctly, I think it actually came out after the refresh of those cards.
