NVIDIA GeForce GTX 580: A New Flagship Emerges
Date: Nov 09, 2010
Author: Marco Chiappetta
Introduction and Specifications

Even before the GF100 GPU-based GeForce GTX 480 officially arrived, a myriad of news reports and rumors swirled claiming the cards would be hot, loud, and consume a lot of power, not to mention, be late to market. Unfortunately for NVIDIA, in the end, all of those things ended up being true to some degree. In all fairness, the GeForce GTX 480 did end up being the fastest single-GPU available, and things have only gotten better with recent driver releases, but it’s no secret that the GeForce GTX 480 wasn’t everything NVIDIA had hoped it would be.

Of course, NVIDIA knew that well before the first card ever hit a store shelf. And it turns out the company got to work on a revision of the GPU and the card itself that would attempt to address the concerns with the GF100 and, in turn, the GeForce GTX 480. The fruit of NVIDIA’s labor is the product we’re going to show you here today, the GF110-based GeForce GTX 580.

Its name suggests the GeForce GTX 580 is a next-gen product, but make no mistake, the GF110 GPU powering the card is largely unchanged from the GF100 in terms of its features. However, refinements have been made to the design and manufacture of the chip, along with its cooling solution and PCB. The end product is a higher-performing, lower-power card that also happens to be much quieter than its predecessor.

Take a look at the GeForce GTX 580’s specs below and then move on for the deep dive, complete with a full suite of tests in both single and dual-card configurations on the pages ahead...

NVIDIA's GeForce GTX 580 Exposed

NVIDIA GeForce GTX 580
Specifications & Features


Looking at the above features and specifications, it's obvious that the new GeForce GTX 580 is very similar, if not virtually identical, to the GeForce GTX 480, which was released a few months back. In fact, the GF100 GPU (GTX 480) and GF110 (GTX 580) share the very same architecture and feature set. As such, we'd strongly recommend checking out our coverage of the GeForce GTX 480 launch for the full scoop on what NVIDIA's high-end DirectX 11-class GPU can do, because we're not going to re-hash it all here. With that said, the GF110 is a refinement of the GF100 design, and some changes have been made to the ROPs and to some of the transistors used elsewhere in the chip.

Like the GF100, the GF110 is comprised of roughly 3 billion transistors and is manufactured using TSMC's 40nm process node. The GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. Remember though, only 480 cores are exposed on the GeForce GTX 480--on the GF110 powering the GTX 580, all 512 CUDA cores are enabled. The reference GPU clock is 772MHz, up from 700MHz on the GTX 480. The shader clock on the 580 is also increased to 1544MHz (1401 on GTX 480) and the memory clock is similarly increased from 924MHz on the GTX 480 to 1001MHz on the GTX 580. The combination of additional CUDA cores and higher frequencies alone will result in increased performance over the GTX 480, but we're also told that some enhancements have been made to the GF100's ROPs as well, which result in better Z-Cull performance. Details of those changes weren't made readily available, however.
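For those curious, the on-paper gains from the quoted core counts and clocks are easy to work out. The sketch below is our own back-of-the-envelope math, not an NVIDIA figure; it assumes 2 FLOPs per CUDA core per shader clock (a fused multiply-add) and GDDR5's four data transfers per command clock:

```python
# Back-of-the-envelope throughput math from the specs quoted above.

def shader_gflops(cores, shader_mhz):
    # Assumes 2 FLOPs (one FMA) per core per shader clock
    return cores * shader_mhz * 2 / 1000.0  # GFLOPS

def mem_bandwidth_gbs(bus_bits, mem_mhz):
    # GDDR5 moves 4 bits per pin per command clock
    return bus_bits / 8 * mem_mhz * 4 / 1000.0  # GB/s

gtx480 = shader_gflops(480, 1401)   # ~1345 GFLOPS
gtx580 = shader_gflops(512, 1544)   # ~1581 GFLOPS
print(f"Shader uplift: {gtx580 / gtx480 - 1:.1%}")  # Shader uplift: 17.6%

bw480 = mem_bandwidth_gbs(384, 924)    # ~177 GB/s
bw580 = mem_bandwidth_gbs(384, 1001)   # ~192 GB/s
print(f"Bandwidth uplift: {bw580 / bw480 - 1:.1%}")  # Bandwidth uplift: 8.3%
```

In other words, the extra cores and clocks alone account for roughly a 17-18% compute advantage over the GTX 480 before any of the ROP or Z-Cull tweaks are considered.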

In addition to the aforementioned items, NVIDIA also tells us that they have been working closely with foundry partner TSMC and have modified the transistors used in some parts of the chip. Whereas the GF100 used TSMC's fastest switching, and also "leakiest", transistors throughout, the GF110 uses a combination of high-speed and lower-speed transistors, to somewhat reduce current leakage in the chip.

Along with the changes at the chip level, NVIDIA has also made some tweaks to the cooler design, the PCB and the power delivery circuitry on the GeForce GTX 580. Unlike the GeForce GTX 480 which used a huge GPU cooler with multiple heat-pipes, the GeForce GTX 580 employs a newly designed Vapor Chamber cooler. Vapor Chamber coolers are not new, but the custom Vapor Chamber used on the GTX 580 is better equipped to handle the intense heat output and temperature fluctuations of a high-end GPU.  When used in conjunction with a newly designed adaptive fan controller and fan, the end-game is a cooling solution on the GTX 580 that is significantly quieter than the GTX 480. We should also point out that the fan shroud on the GeForce GTX 580 has been optimized as well and now features a sharp drop-off on one end to aid in better air-flow into the fan, when two cards are butted up close together running in an SLI configuration.

The GeForce GTX 580 also sports a new hardware monitoring feature that monitors current on each of the card's 12V rails and dynamically adjusts voltage to keep total power in check. Currently, this feature works in conjunction with the card's drivers and detects only two applications, Furmark and OCCT. These two applications employ workloads that are known to push the power consumption of some graphics cards so high that they'll operate outside of their thermal and power envelopes. To protect the card in these situations, power to the card can be managed using this new hardware monitoring feature.

The NVIDIA GeForce GTX 580


The GeForce GTX 580 reference design looks very much like a cross between the GeForce GTX 470 and 285. The cooler is certainly smaller than that found on the GTX 480, and it lacks the protruding heat-pipes found on the 480 as well. The entire front side of the card is also encased in a shroud, much like previous designs, which gives it a nice, clean look.


NVIDIA GeForce GTX 580 Reference Design

The GeForce GTX 580’s PCB measures 10.5”. As you can see in the pictures above, the back-side of the PCB is exposed, but short of the PCI Express edge connector the front side is all fan shroud and fan from tip to tail. As we mentioned on the previous page, the tail end of the fan shroud has a sharp, angled drop-off that allows for better inward air flow for the top card when used in multi-card configurations, where the PEG slots are next to one another. And the top-front of the shroud, along with half of the case bracket, is vented to allow heated air to be mostly expelled from a system.

The outputs on the GeForce GTX 580 are identical to those found on the GTX 480—two dual-link DVI connectors are adjacent to a mini-HDMI connector at the top of the case bracket. Please note, despite having three outputs, only two can be used at any given time to drive displays. For three monitor NVIDIA Surround gaming configurations, like its predecessors, two cards must be used.

Two PCI Express power connectors are present on the GTX 580, one 8-pin and one 6-pin. The card also has dual SLI edge connectors to support two-, three-, or four-way SLI configurations.


The GeForce GTX 580's Vapor Chamber Cooling Solution

With the GeForce GTX 580 disassembled, we can take a much better look at the cooler’s inner-workings and the GPU itself. The GTX 580’s Vapor Chamber cooler has a smooth copper base, fused to a dense array of thin, aluminum cooling fins. The Vapor Chamber cooler assembly is basically a large, rectangular block where air from the barrel-type fan can easily flow from one end to the other. In comparison to the elaborate (and massive) coolers we’ve seen on some recent overclocked cards, the GTX 580’s Vapor Chamber seems somewhat small, but it definitely gets the job done. Without question, the GTX 580 is quieter than the GTX 480. In fact, there’s no comparison in real-world use. The GTX 580 is downright quiet next to the GTX 480. It still gets relatively hot though, especially in comparison to AMD’s latest cards.

The GeForce GTX 580 GPU and Heat-Plate

You’ll also notice with the card disassembled that the long, metal retention plate that holds the fan also acts as a heat spreader for the GTX 580’s on-board memory and some other components. The overall design of the GTX 580’s cooling solution, dare we say, seems elegant in comparison to some of the triple-slot, oversized monstrosities that have hit the market recently.

Dominating the center of the PCB is the GPU itself, flanked by memory on three sides. The actual GPU die is hidden under the metal heat-spreader, but the sheer size of the chip hints at the big, honkin’ 3-billion transistor slab of silicon underneath.

Test Setup & Unigine Heaven v2.1

How We Configured Our Test Systems: We tested the graphics cards in this article on a Gigabyte GA-EX58-UD5 motherboard powered by a Core i7 965 quad-core processor and 6GB of OCZ DDR3-1333 RAM. The first thing we did when configuring the test system was enter the system BIOS and set all values to their "optimized" or "high performance" default settings. Then we manually configured the memory timings and disabled any integrated peripherals that wouldn't be put to use. The hard drive was then formatted, and Windows 7 Ultimate x64 was installed. When the installation was complete we fully updated the OS and installed the latest hotfixes, along with the necessary drivers and applications.

HotHardware's Test Systems
Core i7 Powered

Hardware Used:
Core i7 965 (3.2GHz)
Gigabyte EX58-UD5 (X58 Express)

Radeon HD 5850 (2)
Radeon HD 5870 (2)
Radeon HD 6850 (2)
Radeon HD 6870 (2)
GeForce GTX 460 (2)
GeForce GTX 470 (2)
GeForce GTX 460 OC (EVGA)
GeForce GTX 470 OC (Galaxy)
GeForce GTX 480 (2)
GeForce GTX 480 OC (Gigabyte)
GeForce GTX 580 (2)

6GB OCZ DDR3-1333
Western Digital Raptor 150GB
Integrated Audio
Integrated Network

Relevant Software:
Windows 7 Ultimate x64
DirectX June 2010 Redist
ATI Catalyst v10.10d
NVIDIA GeForce Drivers 262.99

Benchmarks Used:

Unigine Heaven v2.1
3DMark Vantage v1.0.1
FarCry 2
Just Cause 2
Alien vs. Predator
Left 4 Dead 2*
Enemy Territory: Quake Wars v1.5*

* - Custom benchmark

Unigine Heaven v2.1 Benchmark
Synthetic DirectX 11 Gaming

Unigine Heaven

The Unigine Heaven Benchmark v2.1 is built around the Unigine game engine. Unigine is a cross-platform, real-time 3D engine with support for DirectX 9, DirectX 10, DirectX 11, and OpenGL. The Heaven benchmark--when run in DX11 mode--makes comprehensive use of tessellation technology and advanced SSAO (screen-space ambient occlusion), and it also features volumetric cumulonimbus clouds generated by a physically accurate algorithm and a dynamic sky with light scattering.

NVIDIA's Fermi-architecture based derivatives are partially known for their excellent geometry and tessellation capabilities, hence their strong performance in the Unigine Heaven benchmark. This test features an extreme tessellation load that's much more taxing than what is used in today's DX11 games, but it offers a clear view as to the relative tessellation performance of the GPUs we tested here. As you can see, the GeForce GTX 580 clearly leads the pack, easily besting even AMD's dual-GPU powered Radeon HD 5970.

3DMark Vantage Performance

Futuremark 3DMark Vantage
Synthetic DirectX Gaming

3DMark Vantage

The latest version of Futuremark's synthetic 3D gaming benchmark, 3DMark Vantage, is specifically bound to Windows Vista-based systems because it uses some advanced visual technologies that are only available with DirectX 10, which isn't available on previous versions of Windows.  3DMark Vantage isn't simply a port of 3DMark06 to DirectX 10, though.  With this latest version of the benchmark, Futuremark has incorporated two new graphics tests, two new CPU tests, and several new feature tests, in addition to support for the latest PC hardware.  We tested the graphics cards here with 3DMark Vantage's Extreme preset option, which uses a resolution of 1920x1200 with 4x anti-aliasing and 16x anisotropic filtering.

3DMark Vantage paints an interesting picture. As you can see, in comparison to any single-GPU powered AMD-based graphics card, the GeForce GTX 580 is simply a monster. It outperforms the Radeon HD 5870 by a wide margin. The dual-GPU powered--and year old, we might add--Radeon HD 5970, however, pulled well ahead of the GTX 580, mostly due to its strong performance in GPU Test 1.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   88.0%   90.9%   74.8%   76.4%   67.6%   91.3%    90.2%    76.2%    78.2%

The tables turn in our multi-GPU tests. Due to superior scaling, the GeForce GTX 580 SLI configuration is the fastest of the bunch, outperforming even the four GPUs that comprise the Radeon HD 5970 CrossFire configuration and decimating the pair of Radeon HD 5870s.
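For clarity, the "% Increase" figures in these tables compare each dual-card configuration against a single card of the same model. A minimal sketch of that calculation (the FPS figures below are hypothetical, purely for illustration):

```python
# The "% Increase" scaling metric: dual-card score vs. single-card score.

def scaling_pct(single_fps: float, dual_fps: float) -> float:
    """Percent gained by adding the second card."""
    return (dual_fps / single_fps - 1.0) * 100.0

# Hypothetical example: a card scoring 50 FPS alone and 89.1 FPS in SLI
print(f"{scaling_pct(50.0, 89.1):.1f}% scaling")  # 78.2% scaling
```

Perfect scaling would be 100%; anything above roughly 80% is excellent, while low figures usually point to a CPU bottleneck or a driver profile that isn't splitting the workload well.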

ET: Quake Wars Performance

Enemy Territory: Quake Wars
OpenGL Gaming Performance

Enemy Territory:
Quake Wars

Enemy Territory: Quake Wars is based on a radically enhanced version of id's Doom 3 engine and viewed by many as Battlefield 2 meets the Strogg, and then some.  In fact, we'd venture to say that id took EA's team-based warfare genre up a notch or two.  ET: Quake Wars also marks the introduction of John Carmack's "Megatexture" technology that employs large environment and terrain textures that cover vast areas of maps without the need to repeat and tile many smaller textures.  The beauty of megatexture technology is that each unit only takes up a maximum of 8MB of frame buffer memory.  Add to that HDR-like bloom lighting and leading edge shadowing effects and Enemy Territory: Quake Wars looks great, plays well, and works high-end graphics cards vigorously.  The game was tested with all of its in-game options set to their maximum values, with soft particles enabled, in addition to 4x anti-aliasing and 16x anisotropic filtering.


Roughly the same trend we witnessed in our 3DMark Vantage testing plays out in our custom ET:QW tests. Here, the GeForce GTX 580 is clearly the fastest GPU in the group when tested at the "lower" 1920x1200 resolution. But with the resolution cranked up to 2560x1600, the dual-GPU powered Radeon HD 5970 is able to retake the lead.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   37.9%   56.3%   34.6%   49.3%   15.4%   59.7%    58.2%    28.2%    21.7%

Once again, however, due to superior performance scaling on the part of the GeForce GTX 580 SLI configuration, it is able to hold onto first place--regardless of resolution--in our multi-GPU ET:QW benchmark tests.

FarCry 2 Performance

FarCry 2
DirectX Gaming Performance

FarCry 2

Like the original, FarCry 2 is one of the more visually impressive games to be released on the PC to date. Courtesy of the Dunia game engine developed by Ubisoft, FarCry 2's game-play is enhanced by advanced environment physics, destructible terrain, high resolution textures, complex shaders, realistic dynamic lighting, and motion-captured animations.  We benchmarked the graphics cards in this article with a fully patched version of FarCry 2, using one of the built-in demo runs recorded in the "Ranch" map.  The test results shown here were run at various resolutions with 4X AA enabled.

As we've seen previously, in comparison to other single-GPU powered cards, the new GeForce GTX 580 is a powerhouse. In the FarCry 2 benchmark, the GTX 580 easily bests all of the other single-GPU powered cards and significantly outpaces the Radeon HD 5800 and 6800 series cards. The Radeon HD 5970, however, leads the pack.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   84.7%   95.8%   62.0%   69.8%   21.7%   77.3%    71.5%    48.1%    30.8%

Things tighten up in the multi-GPU FarCry 2 tests and the new GTX 580 SLI config is able to surpass the Radeon HD 5970 CrossFire setup at 1920x1200, but at the higher resolution, a pair of AMD's most powerful cards (at least currently), take the lead.

Left 4 Dead 2 Performance

Left 4 Dead 2
DirectX Gaming Performance

Left 4 Dead 2

Like its predecessor, Left 4 Dead 2 is a co-operative, survival horror, first-person shooter that pits four players against numerous hordes of Zombies. Like Half Life 2, the game uses the Source engine, however, the visuals in L4D 2 are far superior to anything seen in the Half Life universe to date. The game has much more realistic water and lighting effects, more expansive maps with richer detail, more complex models, and the list goes on and on. We tested the game at various resolutions with 4x anti-aliasing and 16x anisotropic filtering enabled and all in game graphical options set to their maximum values.

All of the cards we tested are able to easily handle Left 4 Dead 2, as is evidenced by the relatively high framerates across the board. Regardless, the GeForce GTX 580 once again puts up the strongest performance of any single-GPU card. The Radeon HD 5970, however, couldn't be touched.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   39.8%   63.8%   23.9%   46.1%   1.0%    47.4%    26.1%    16.9%    10.1%

With the exception of the GeForce GTX 470 and 460 SLI configurations, the rest of the multi-GPU setups are CPU-bound in L4D2 and all produce framerates in the 140FPS range.

Tom Clancy's H.A.W.X. Performance

Tom Clancy's H.A.W.X.
DirectX Gaming Performance

Tom Clancy's H.A.W.X.

Tom Clancy's H.A.W.X. is an aerial warfare video game that takes place during the time of Tom Clancy's Ghost Recon Advanced Warfighter.  Players have the opportunity to take the throttle of over 50 famous aircraft in both solo and 4-player co-op missions, and take them over real world locations and cities in photo-realistic environments created with the best commercial satellite data provided by GeoEye.  We used the built-in performance test at two resolutions with all quality settings set to their highest values, using the DX10.1 code path for both the Radeons and GeForce 400/500 series cards.

The GeForce GTX 580 engaged its afterburners in the H.A.W.X. benchmark. In this test, it takes two AMD GPUs (i.e. the Radeon HD 5970) to hang with the GTX 580. All of the single-GPU powered cards in AMD's current line-up seem almost quaint in comparison to the GTX 580.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   92.9%   96.1%   72.5%   72.1%   58.2%   96.0%    84.8%    65.2%    46.1%

The same trend we saw in the single-card tests rings true in our SLI / CrossFire testing, except this time around it takes four AMD GPUs to outpace two GeForce GTX 580s.

Just Cause 2 Performance

Just Cause 2
DX10.1 Gaming Performance

Just Cause 2

Just Cause 2 was released in March 2010, from developers Avalanche Studios and Eidos Interactive. The game makes use of the Avalanche Engine 2.0, an updated version of the similarly named original. It is set on the fictional island of Panau in southeast Asia, and you play the role of Rico Rodriguez. We benchmarked the graphics cards in this article using one of the built-in demo runs called The Concrete Jungle.  The test results shown here were run at various resolutions and settings. This game also supports a few CUDA-enabled features, but they were left disabled to keep the playing field level.

With the sole exception being the overclocked GeForce GTX 480, no other single-GPU card comes close to the performance of the GeForce GTX 580 in the Just Cause 2 benchmark. The dual-GPU powered Radeon HD 5970 is markedly faster than the GTX 580 here, though.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   89.7%   90.1%   67.4%   71.0%   26.7%   59.6%    50.1%    33.0%    20.8%

The GeForces seem to hit a wall at around the 61 FPS mark in JC2, when using SLI. We ensured that v-sync was disabled both in game and in the drivers, and tried different combinations of application control, etc., but didn't see the kind of performance boost we expected here. As such, the Radeon HD 6870 and 5870 CrossFire configurations are able to overtake the GTX 580 SLI rig.

Alien vs. Predator Performance

Alien vs. Predator
DirectX 11 Gaming Performance

Alien vs. Predator

The Alien vs. Predator benchmark makes use of the advanced tessellation, screen space ambient occlusion and high-quality shadow features, available with DirectX 11. In addition to enabling all of the aforementioned DirectX 11 related features offered by this benchmark, we also switched on 4X anti-aliasing along with 16X anisotropic filtering to more heavily tax the graphics cards being tested.

The same performance trend we've seen in the last few game tests plays out again in Alien vs. Predator. In this game, the new GeForce GTX 580 is, once again, the best performing single-GPU. The dual-GPU Radeon HD 5970, however, is clearly the fastest overall.

             6870    6850    5870    5850    5970    GTX 460  GTX 470  GTX 480  GTX 580
% Increase   97.8%   97.8%   83.4%   84.8%   0.4%    92.6%    89.7%    90.0%    89.3%

Multi-GPU testing with AvP yields some interesting results. It seems this game doesn't scale past two GPUs (at least with the current driver), as the quad-GPU Radeon HD 5970 CrossFire configuration showed no performance increase when using two cards. The GeForce GTX 580, however, showed huge gains and smoked the competition.

Total System Power Consumption

Before bringing this article to a close, we'd like to cover a few final data points--namely power consumption and noise. Throughout all of our benchmarking and testing, we monitored how much power our test system was consuming using a power meter. Our goal was to give you all an idea as to how much power each configuration used while idling and while under a heavy workload. Please keep in mind that we were testing total system power consumption at the outlet here, not just the power being drawn by the graphics cards alone.

Total System Power Consumption
Tested at the Outlet

NVIDIA claimed reduced power consumption and increased performance with the new GeForce GTX 580, and that's exactly what we saw in our tests, although the differences were not dramatic. The GTX 580 used less power both at idle and under load than our reference GeForce GTX 480, which was an early sample we received when the product first launched. The newer, overclocked, non-reference GeForce GTX 480 (the Gigabyte GeForce GTX 480 SoC), however, also used less power than the reference card, despite a much higher clocked GPU and memory. It seems that with some maturity and refined power delivery circuitry, the power consumption of the GeForce GTX 480 can be brought down as well. While there are improvements here, the power consumed by NVIDIA's 3-billion transistor GPU is simply on another level in comparison to AMD's latest products.

While we're on the subject of power consumption, we should also talk a bit about heat and noise. First, noise. To put it simply, the GeForce GTX 580 doesn't make much noise at all. The new cooler design employed on the GTX 580 is virtually silent when the card is idling at the Windows desktop, and under load we'd say it is about as loud as a stock Radeon HD 5870, which is to say it is not very loud at all. In terms of noise, the GeForce GTX 580 is a clear improvement over the GTX 480. The GeForce GTX 580 still gets nice and hot, though. Idle GPU temperatures hovered in the 45°C range, with peak temps that broke the 90°C mark.

Our Summary and Conclusion

Performance Summary: Before we get to the actual performance summary, we have to give NVIDIA's software engineers some much deserved recognition. When the GeForce GTX 480 first launched, it was clearly the fastest single-GPU available, but we found its performance advantages over the then 7-month old Radeon HD 5870 to be relatively minor (5-15%) given the GTX 480's higher price point. In these last few months, however, NVIDIA's software engineers have tuned their drivers significantly and given the GeForce GTX 480 such a measurable performance boost, that its lead over the Radeon HD 5870 has more than doubled in some games. Good job, NVIDIA.

With that said, summarizing the GeForce GTX 580's performance is simple. To put it bluntly, the GeForce GTX 580 is without question the fastest single-GPU card on the planet at this point. If we disregard the odd, CPU-bound test, the GeForce GTX 580 is between 30% and 50% faster than AMD's current flagship single-GPU card, the Radeon HD 5870, in actual game tests. Take synthetic tests like Unigine into account and the GTX 580 can be up to twice as fast. In comparison to the Radeon HD 5970, however, the GeForce GTX 580's performance isn't quite as strong. In fact, despite being about a year old, the Radeon HD 5970 is still the fastest graphics card overall.

The GeForce GTX 580 is a clearly superior product to the GeForce GTX 480 it supplants at the top of NVIDIA's current GPU line-up. In virtually every category, the GTX 580 is preferable to the GTX 480; power consumption, performance, noise, form factor--you name it. The GeForce GTX 580 is simply a better product, period. And at this point in time there is no other single-GPU that can touch it in terms of performance.

It's not all sunshine and roses, however. While the GTX 580's power consumption is technically lower than the 480's, it still uses a lot of power relative to AMD's offerings, and as such, it still pumps out a lot of heat too. In terms of efficiency, the GF110 GPU at the heart of the GeForce GTX 580 is a definite improvement over the GF100 in the GeForce GTX 480, but it's not nearly as efficient as AMD's offerings. We're sure some of you will also be perplexed by NVIDIA's decision to name this new card the GTX 580, when it's technically a refinement of current-gen technology. Perhaps GeForce GTX 490 would have been a better choice?  NVIDIA's position is that since this card offers clear improvements in terms of performance, power, and noise, over the GTX 480 it was deserving of the 580 moniker. We're somewhat indifferent to the branding ourselves--the enthusiasts who are interested in cards like this are usually very informed about what they're buying. NVIDIA could call it the Premiere Pixel-Pushin' PCB From Palookaville and still sell a ton of 'em. Regardless, at least the GeForce GTX 580 is clearly faster than the GTX 480. We can't say the same for a Radeon HD 6870 to HD 5870 comparison.


The GeForce GTX 580 is set to drop right into the market segment currently occupied by the GeForce GTX 480 and it will be priced at around $499. (Update: as of 10AM today, we are unable to find a card for $499, but NewEgg has them for $559. Hopefully NVIDIA will get that cleaned up very soon.) For a time, the GeForce GTX 580 and GTX 480 will co-exist in NVIDIA's product stack, but pricing on the current crop of GTX 480s will likely be headed south. And NVIDIA plans to eventually replace some current GTX 400 series offerings with GTX 500 series parts. GTX 580 cards should be available immediately in limited quantities, while production ramps up in the coming weeks.

At $499, the GeForce GTX 580's price is about on par with the Radeon HD 5970, which can be had for approximately $469 - $599 depending on the model and brand. While we found the 5970 to be faster overall and its power consumption is lower, the GTX 580 is still attractive at this price point for a few reasons. First, it uses only a single GPU, so there are no multi-GPU scaling issues to worry about with new games. And the GeForce GTX 580 also offers support for PhysX, 3D Vision, and the array of CUDA-enabled applications out there. AMD does have support for Eyefinity configurations with a single-card though, so pick your proprietary feature of choice.

Ultimately, the release of the GeForce GTX 580 is all good. NVIDIA has a better performing, quieter, and somewhat lower-power card at the top of their product stack, and it's arriving at the same price point as the GeForce GTX 480. Over the last few days, the impending arrival of the GTX 580 also resulted in lower prices for the Radeon HD 5970--another good thing for consumers. So for now, NVIDIA has extended their leadership position in the single-GPU space, with the fastest single-GPU powered graphics card out there. NVIDIA's position may be short lived, however. It's no secret AMD is prepping the Radeon HD 6900 series, in both single-GPU (Cayman) and dual-GPU (Antilles) flavors. We can't say with any certainty just yet that Cayman will re-take the single-GPU performance crown, but it's a safe bet that the dual-GPU powered Antilles product will be the fastest single graphics card when it arrives. Time will tell.


Pros:
  • Excellent Performance
  • Quieter Than GTX 480
  • Lower-Power Than GTX 480
  • PhysX, 3D Vision, CUDA Support
  • Arrives At Same Price As The GTX 480 (hopefully)

Cons:
  • Still Uses A Lot Of Power
  • Doesn't Beat The 5970
  • Relatively Hot Running

Content Property of HotHardware.com