NVIDIA GeForce GTX 680 Review: Kepler Debuts
Date: Mar 22, 2012
Section: Graphics/Sound
Author: Marco Chiappetta
Introduction to the GeForce GTX 680
We’ve been hearing about NVIDIA’s Kepler architecture since about September 2010. It was at that year’s NVIDIA GPU Technology Conference that company CEO Jen-Hsun Huang first publicly disclosed that Kepler would offer massive performance-per-watt improvements over Fermi and previous-gen architectures, and that GPUs based on Kepler would arrive in 2011. Well, the launch date has obviously slipped. As for how Kepler’s power efficiency actually shapes up, we’ll get to that a little later.

The rumor mill kicked into high gear over the last few months, and has been consistently churning out more and more Kepler scuttlebutt (whether true or false) coincident with Radeon HD 7000 series launches. Today though, we can put the rumors to rest. We’ve had a new Kepler-based GeForce GTX 680 in hand for a couple of weeks now and can finally reveal all of the juicy details.

First up, we have some specifications and a little back story. And on the pages ahead, we’ll dive a little deeper and give you the full scoop on Kepler, its new features and capabilities, and of course the GeForce GTX 680’s features and performance.


The NVIDIA GeForce GTX 680, GK104 "Kepler" Based Graphics Card

NVIDIA GeForce GTX 680
Specifications & Features


The GeForce GTX 680’s main features and specifications are listed in the table above. Before we get into the specifics of the card and its GPU, however, we want to direct your attention to a few past HotHardware articles that lay the foundation for what we’ll be showing you here.


GK104 "Kepler" GPU Die Shot

Although the GeForce GTX 680 is built around a new GPU based on a new architecture, the Kepler-based GK104 at the heart of the card leverages technologies first introduced on previous-generation NVIDIA products. As such, we’d recommend checking out these articles for more detailed coverage of many of NVIDIA’s existing technologies that carry over to the new GeForce GTX 680:

In our Fermi and GF100 architecture previews we discuss Fermi’s architecture and detail Fermi’s CUDA cores and Polymorph and Raster engines, among other features. In our GeForce GTX 480 coverage, we dig a little deeper into Fermi, and discuss the first graphics card based on the technology. Our GeForce GTX 580 coverage details the GF110, the more-refined re-spin of the GF100 GPU. And in our 3D Vision Surround and 3D Vision 2 articles, we cover NVIDIA’s multi-monitor and stereoscopic 3D technologies, which are both very much a part of the GeForce GTX 680.
 

Kepler Architecture and the GK104 GPU

As we’ve mentioned, the GK104 GPU powering the GeForce GTX 680 is based on NVIDIA’s new Kepler architecture. Kepler, however, is not a complete redesign from the ground up. Although it is much more power efficient and higher performing than Fermi by a number of key metrics, Kepler borrows heavily from Fermi’s design.


NVIDIA GK104 GPU Block Diagram

The high-level block diagram above shows the overall structure of the GK104. The chip has an arrangement of four Graphics Processing Clusters (GPCs), each with two Streaming Multiprocessors, dubbed SMX (a Streaming Multiprocessor in Fermi is called an SM). Within each SMX, there is control logic plus 192 CUDA cores, for a total of 1536 CUDA cores per GPU. In the previous-gen GTX 580 (Fermi), there were 32 CUDA cores per SM, duplicated 16 times within the chip. With the GK104, there are 192 CUDA cores per SMX, duplicated 8 times. The structure results in 6x the number of cores per SM(X) and 3x the total number of cores versus the GeForce GTX 580.
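For those keeping score at home, the core-count math works out as follows (all figures are the ones quoted above):

```python
# Sanity-checking the GK104 vs. GF110 core counts described above.
gpcs = 4
smx_per_gpc = 2
cores_per_smx = 192
gk104_cores = gpcs * smx_per_gpc * cores_per_smx  # 4 * 2 * 192 = 1536

sms_fermi = 16
cores_per_sm = 32
gf110_cores = sms_fermi * cores_per_sm            # 16 * 32 = 512

print(cores_per_smx // cores_per_sm)  # 6  -> 6x the cores per SM(X)
print(gk104_cores // gf110_cores)     # 3  -> 3x the cores overall
```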


A Close-Up Of A Single SMX In The GK104 GPU

In terms of its other features, the GK104 has a total of 128 texture units and 32 ROPs. There is 512K of L2 cache on-die, and the GPU interfaces with the GeForce GTX 680’s 2GB of GDDR5 memory over a 256-bit interface. It supports DirectX 11 (not 11.1) and features a PCI Express 3.0 host interface. There are eight geometry units in the chip (Polymorph Engine 2.0) and four raster units (one per GPC). According to NVIDIA, the Polymorph 2.0 engines offer double the primitive and tessellation performance per SM of Fermi.

In addition to the different GPC and SM arrangement, with Kepler NVIDIA also minimized the hardware control logic in the chip to bring the transistor count down. Kepler also operates on a single clock domain—the shaders/CUDA cores are no longer clocked at 2x the frequency of the rest of the chip, as they were in Fermi.

With the GK104, the sum total of all of these changes is a 3.54 billion transistor chip with a die size of about 294 square millimeters, manufactured on TSMC’s 28nm process node. If you’re keeping track, that’s about 770M fewer transistors than AMD’s Tahiti GPU in the Radeon HD 7900 series, and a significantly smaller die as well (294mm² vs. 365mm²).
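Spelled out, the comparison looks like this (Tahiti’s transistor count here is AMD’s published Radeon HD 7900 series figure):

```python
# The GK104 vs. Tahiti comparison above, spelled out.
gk104_transistors = 3.54e9
tahiti_transistors = 4.31e9  # AMD's published figure for Tahiti
print((tahiti_transistors - gk104_transistors) / 1e6)  # ~770 (million fewer)

gk104_die_mm2, tahiti_die_mm2 = 294, 365
print(gk104_die_mm2 / tahiti_die_mm2)  # ~0.81, i.e. roughly 81% the die area
```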

NVIDIA GeForce GTX 680

On the surface, the new GeForce GTX 680 looks much like its brethren in the GeForce GTX 400 and 500 series, but there are many changes introduced at the board level as well. The GeForce GTX 680’s cooler sports a number of new features, too.


The NVIDIA GeForce GTX 680 Graphics Card - Front and Back

Let’s get the specifications covered first. Reference GeForce GTX 680 cards will have a base GPU clock speed of 1006MHz, with a Boost clock of 1058MHz. If you’re asking yourself what a “Boost clock” is, don’t fret, we’ll cover that on the next page—for now, just think of it as Turbo Boost for GPUs. GeForce GTX 680 cards will have 2GB of GDDR5 memory, linked to the GPU over a 256-bit interface, with an impressive 6008MHz effective data rate. The result is a peak of 192.26GB/s of memory bandwidth. And the GeForce GTX 680’s peak texture fillrate is 128.8GT/s.
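Both of those peak figures fall straight out of the clock and interface specs; a quick check:

```python
# Deriving the GTX 680's quoted peak figures from its base specifications.
mem_rate_hz = 6008e6       # 6008MHz effective GDDR5 data rate
bus_bytes = 256 / 8        # 256-bit memory interface, in bytes per transfer
base_clock_hz = 1006e6     # 1006MHz base GPU clock
texture_units = 128

print(mem_rate_hz * bus_bytes / 1e9)        # 192.256 -> 192.26 GB/s bandwidth
print(base_clock_hz * texture_units / 1e9)  # 128.768 -> 128.8 GT/s fillrate
```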

Based on NVIDIA’s track record the last few years, you may think that a card that’s seemingly as powerful as the GeForce GTX 680 requires a ton of power, but that’s not the case. Reference GeForce GTX 680s have a TDP of “only” 195 watts and require a pair of 6-pin PCI Express power connectors. For reference, the GeForce GTX 580 has a TDP of 244 watts.


The GeForce GTX 680's Cooler and GPU Exposed

Despite having lower power requirements, NVIDIA still put significant resources into keeping the GeForce GTX 680 cool and quiet. The fan on the GeForce GTX 680’s cooler reportedly features acoustic dampening material which lowers its pitch and minimizes whine. The heatsink itself features a densely packed array of aluminum fins with a high-efficiency embedded heatpipe and heavy copper base to more efficiently wick heat from the GPU. And the heatsink is cut at an angle and pushed back from the case bracket to allow air to more easily pass through the heatsink and escape through the vents in the bracket. The end result is a card that’s quieter than the GeForce GTX 580, which we found to run relatively cool as well.


The GeForce GTX 680's Case Bracket and Outputs

In terms of its output configuration, the GeForce GTX 680 has two DL-DVI outputs, a single HDMI 1.4a output (with 4K monitor support), and a single DisplayPort 1.2 output. More importantly, the card supports up to four active displays—previous GeForces could only run two displays simultaneously. Being able to power four displays means the GeForce GTX 680 can drive a multi-monitor 3D Vision Surround setup from a single card.
 

New Features: TXAA, GPU Boost, Adaptive VSync and More

In addition to introducing a new graphics card based on a new GPU, NVIDIA is also unveiling a number of new features and capabilities arriving with the GeForce GTX 680, namely GPU Boost, Adaptive VSync, TXAA, NVENC, and bindless textures.


NVIDIA's GPU Boost Technology

As you can probably surmise by its name, GPU Boost is somewhat akin to the Turbo Boost and Turbo Core technologies in today’s Intel and AMD processors. Like those technologies, GPU Boost automatically raises GPU clock speeds to increase performance. GPU Boost essentially looks at the power being consumed by games / applications and adjusts the GPU’s clocks accordingly, taking into account environmental conditions like GPU temperature.

Traditionally, GPU clock speeds were set based on the most stressful applications and conditions. Set clocks that way, however, and you leave power and performance on the table with applications that don’t consume as much power. NVIDIA looked at the gap between peak power and the power actually used by a given application and called it the “Boost opportunity”. With GPU Boost, in games and applications that run at a lower power profile, clocks can be raised automatically and dynamically to increase performance.

As we’ve mentioned, the GeForce GTX 680 has a 1006MHz base clock. The majority of the time while gaming, however, it will be clocked higher than that. Reference cards have a default GPU Boost clock of 1058MHz. And if you’re playing a game that doesn’t result in significant GPU power consumption, the GeForce GTX 680 will likely be running closer to its peak GPU Boost clock than its base clock most of the time.
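NVIDIA hasn’t published the control algorithm behind GPU Boost, but conceptually it behaves like a simple feedback loop around the board’s power target. A rough sketch of the behavior described above—the step size and temperature limit here are our own illustrative values, not NVIDIA’s:

```python
# Hypothetical sketch of a GPU Boost-style feedback loop (illustrative only).
BASE_MHZ = 1006         # guaranteed base clock
POWER_TARGET_W = 195    # the reference board's power target (its TDP)
STEP_MHZ = 13           # illustrative clock-step granularity

def next_clock(clock_mhz, board_power_w, gpu_temp_c, temp_limit_c=97):
    """Raise the clock while there is power and thermal headroom;
    otherwise back off, but never below the base clock."""
    if board_power_w < POWER_TARGET_W and gpu_temp_c < temp_limit_c:
        return clock_mhz + STEP_MHZ             # take the "Boost opportunity"
    return max(clock_mhz - STEP_MHZ, BASE_MHZ)  # over budget: step back down
```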

We should also point out that GPU Boost doesn’t preclude overclocking. It can’t be disabled, but users will still be able to overclock their GPUs—they’ll just have to account for the Boost clock, which will always be some percentage above the base clock. NVIDIA also tells us that GPU Boost (at least initially) will not be available in mobile GeForce 600 series parts.


NVIDIA's Adaptive VSync Technology

The next new feature arriving with Kepler is dubbed Adaptive VSync. With standard VSync enabled, provided the GPU has the necessary horsepower, a game will typically run locked at 60Hz to prevent on-screen tearing. If the game suddenly slows down and framerates need to drop below 60Hz, however, standard VSync forces the framerate down to 30Hz (1/2 speed), and it only jumps back up to 60Hz when the performance is available again. The huge dips and jumps in framerate that occur with VSync enabled usually result in annoying stuttering.

With Adaptive VSync technology, however, when a game has to step down its framerate, VSync is automatically and temporarily disabled. The result is that framerates can decrease and increase gradually, without the sharp drop-off to 30Hz, which makes for a smoother overall experience.
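The per-frame decision implied by Adaptive VSync is simple enough to sketch. This is our illustration of the behavior on a 60Hz display, not NVIDIA’s driver code; swap_buffers() is a hypothetical stand-in for a real swap call:

```python
# Minimal sketch of Adaptive VSync's per-frame decision (illustrative only).
REFRESH_HZ = 60

def swap_buffers(vsync: bool) -> None:
    pass  # stand-in for the actual swap-interval + buffer-swap calls

def present_frame(frame_time_ms: float) -> None:
    if frame_time_ms <= 1000.0 / REFRESH_HZ:
        swap_buffers(vsync=True)   # GPU is keeping up: lock to 60Hz
    else:
        swap_buffers(vsync=False)  # GPU fell behind: tear briefly instead
                                   # of collapsing to 30Hz
```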


TXAA vs. 8XAA - Click For An Enlarged View

Also coming with Kepler are some new anti-aliasing features. First off, FXAA will now be available through the driver control panel, so users can override in-game options and use FXAA with most games. NVIDIA is also introducing a new anti-aliasing mode dubbed TXAA.

For now, TXAA will be available on Kepler-based cards only. Although we weren’t able to test it just yet (TXAA is coming with a future driver release), NVIDIA claims TXAA delivers roughly 8X MSAA image quality with the performance hit of roughly 2X MSAA. And TXAA2 offers image quality and jaggie reduction beyond 8X MSAA at roughly the speed of 4X MSAA.

TXAA essentially combines the benefits of FXAA and MSAA in a smaller memory footprint. TXAA applies an FXAA-like resolve filter, which is most effective across high-contrast edges in an image. But with TXAA, NVIDIA jitters the pixel offsets to effectively provide more samples than are actually stored in memory. TXAA2 adds a temporal component, which further improves image quality.
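NVIDIA hasn’t detailed TXAA’s actual sample pattern, but the jittering idea itself is common to temporal anti-aliasing techniques in general: nudge each frame’s sample position by a sub-pixel offset so that, over time, more unique samples are seen than are ever stored at once. A generic sketch:

```python
# Generating per-frame sub-pixel jitter offsets with a Halton sequence, a
# common choice in temporal AA. Illustrative of the general technique only;
# TXAA's real pattern is NVIDIA's own.
def halton(index: int, base: int) -> float:
    """Low-discrepancy sequence used here for well-spread jitter values."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Sub-pixel (x, y) offsets in [-0.5, 0.5) for the first four frames
jitter = [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, 5)]
print(jitter)
```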


NVENC Video Encoding Engine

NVIDIA has also incorporated a dedicated hardware video encoding engine into Kepler. The feature is called NVENC, and it is capable of encoding 1080p HD video at speeds of 4x to 8x real-time. NVENC can also encode H.264 High Profile Level 4.1 video (the Blu-ray standard) and supports Multiview Video Coding (MVC) for stereoscopic 3D video. What's different about NVENC versus previous GPU encoding solutions is that it doesn't leverage the shader cores; NVENC is dedicated hardware with the sole purpose of encoding video.


NVIDIA's Kepler - Bindless Textures

Which brings us to bindless textures. With pre-Kepler GPU architectures, shaders were limited to using up to 128 simultaneous textures. With Kepler, the number of unique textures available to shaders at run time is increased significantly—according to NVIDIA, over 1 million simultaneous textures can be used—though this capability is not supported in DX11.

Test System and Unigine Heaven v2.5

How We Configured Our Test Systems: We tested the graphics cards in this article on an Asus P9X79 Deluxe motherboard powered by a Core i7-3960X six-core processor and 16GB of G.SKILL DDR3-1600 RAM. The first thing we did when configuring the test system was enter the system UEFI, set all values to their "optimized" or "high performance" default settings, and disable any integrated peripherals that wouldn't be put to use. The hard drive was then formatted and Windows 7 Ultimate x64 was installed. When the installation was complete, we fully updated the OS and installed the latest DirectX redist, along with the necessary drivers, games, and benchmark tools.

HotHardware's Test System
Intel Core i7 Powered

Hardware Used:
Intel Core i7-3960X
(3.3GHz, Six-Core)
Asus P9X79 Deluxe
(Intel X79 Express)

Radeon HD 7950
Radeon HD 7970
Radeon HD 6970
GeForce GTX 580/OC
GeForce GTX 580 3GB
GeForce GTX 590
GeForce GTX 680

16GB OCZ DDR3-1600
Western Digital Raptor 150GB
Integrated Audio
Integrated Network

Relevant Software:
Windows 7 Ultimate x64
DirectX April 2011 Redist
ATI Catalyst v12.2b
NVIDIA GeForce Drivers 300.99

Benchmarks Used:

Unigine Heaven v2.5
3DMark 11
Batman: Arkham City
Just Cause 2
Alien vs. Predator
Metro 2033
Lost Planet 2
Dirt 3

Unigine Heaven v2.5 Benchmark
Pseudo-DirectX 11 Gaming


Unigine Heaven

Unigine's Heaven Benchmark v2.5 is built around the Unigine game engine. Unigine is a cross-platform, real-time 3D engine, with support for DirectX 9, DirectX 10, DirectX 11, and OpenGL. The Heaven benchmark--when run in DX11 mode--also makes comprehensive use of tessellation technology and advanced SSAO (screen-space ambient occlusion). It also features volumetric cumulonimbus clouds generated by a physically accurate algorithm and a dynamic sky with light scattering.


The new GeForce GTX 680 kicked some major tail in the Unigine Heaven benchmark. NVIDIA's latest flagship put up scores about 23% higher than the Radeon HD 7970 and 49% higher than the GeForce GTX 580. Only the dual-GPU powered GeForce GTX 590 was able to put up a higher score, and even then it was only by a couple of percentage points.
 

3DMark 11 Performance

Futuremark 3DMark11
Synthetic DirectX Gaming


Futuremark 3DMark11

The latest version of Futuremark's synthetic 3D gaming benchmark, 3DMark11, is specifically bound to Windows Vista and Windows 7-based systems due to its DirectX 11 requirement, which isn't available on previous versions of Windows. 3DMark11 isn't simply a port of 3DMark Vantage to DirectX 11, though. With this latest version of the benchmark, Futuremark has incorporated four new graphics tests, a physics test, and a new combined test. We tested the graphics cards here with 3DMark11's Extreme preset option, which uses a resolution of 1920x1080 with 4x anti-aliasing and 16x anisotropic filtering.

The GeForce GTX 680 performed very well in Futuremark's 3DMark11 benchmark as well. Here, the GeForce GTX 680 outpaced the Radeon HD 7970 by about 16.6% and the GeForce GTX 580 by over 50%. The only cards that were able to outrun the GeForce GTX 680 in 3DMark11 were the dual-GPU powered GeForce GTX 590 and Radeon HD 6990.
 

Just Cause 2 Performance

Just Cause 2
DX10.1 Gaming Performance


Just Cause 2

Just Cause 2 was released in March '10 by developer Avalanche Studios and publisher Eidos Interactive. The game makes use of the Avalanche Engine 2.0, an updated version of the similarly named original. It is set on the fictional island of Panau in Southeast Asia, and you play the role of Rico Rodriguez. We benchmarked the graphics cards in this article using one of the built-in demo runs called Desert Sunrise. The test results shown here were run at various resolutions and settings. This game also supports a few CUDA-enabled features, but they were left disabled to keep the playing field level.

Just Cause 2 ran very well on the GeForce GTX 680. In this test, NVIDIA's new flagship beat the Radeon HD 7970 by 14.2% at 2560 and 26% at 1920, and it was about 45% faster than the GeForce GTX 580. Once again, however, the dual-GPU powered cards had somewhat of an edge.
 

Lost Planet 2 Performance

Lost Planet 2
DirectX 11 Gaming Performance


Lost Planet 2

A follow-up to Capcom’s Lost Planet: Extreme Condition, Lost Planet 2 is a third-person shooter that takes place on E.D.N. III, ten years after the storyline of the first title. We ran the game’s DX11 mode, which makes heavy use of DX11 tessellation, displacement mapping, and soft shadows. There are also areas of the game that use DX11 DirectCompute for things like wave simulation in areas with water. This is one game engine that looks significantly different in DX11 mode when you compare certain environmental elements and character rendering against the DX9 mode. We used the Test B option built into the benchmark tool, with all graphics options set to their High Quality values.

Lost Planet 2 has always favored NVIDIA's GPUs, so it's no surprise that the GeForce GTX 680 beats the Radeon HD 7970 by between 15.5% and 19.46% here. And versus the GeForce GTX 580, the new GTX 680 comes out on top by between 36% and 38.6%, depending on the resolution.

The GeForce GTX 680, however, is able to clearly beat the dual-GPU Radeon HD 6990 in this test and even manages to pull ahead of the GeForce GTX 590 by a bit at the lower resolution.
 

Metro 2033 Performance

Metro 2033
DirectX 11 Gaming Performance


Metro 2033

Metro 2033 is your basic post-apocalyptic first-person shooter with a few rather unconventional twists. Unlike most FPS titles, there is no health meter to measure your level of ailment; rather, you’re left to deal with life, or the lack thereof, more akin to the real world, with blood spatter on your visor and your heart rate and respiration level as indicators. The game is loosely based on a novel by Russian author Dmitry Glukhovsky. Metro 2033 boasts some of the best 3D visuals currently available on the PC platform, including a DX11 rendering mode that makes use of advanced depth of field effects and character model tessellation for increased realism. This title also supports NVIDIA PhysX technology for impressive in-game physics effects. We tested the game at resolutions of 1920x1200 and 2560x1600, with adaptive anti-aliasing and in-game image quality options set to their High Quality mode, and with DOF effects disabled.

The new GeForce GTX 680 nudges past the Radeon HD 7970 by about 5% to 7% in the Metro 2033 benchmark, depending on the resolution. In comparison to the GeForce GTX 580, the more powerful GeForce GTX 680 pulls ahead by roughly 37%. It was only the dual-GPU powered Radeon HD 6990 that was clearly faster than the GeForce GTX 680 here; we'd say it was a wash versus the GeForce GTX 590.
 

Batman: Arkham City Performance

Batman: Arkham City
DirectX Gaming Performance


Batman: Arkham City

Batman: Arkham City is a sequel to 2009’s Game of the Year winning Batman: Arkham Asylum. This recently released sequel, however, lives up to and even surpasses the original. The story takes place 18 months after the original game. Quincy Sharp, the onetime administrator of Arkham Asylum, has become mayor and convinced Gotham to create "Arkham City" by walling off the worst, most crime-ridden areas of the city and turning the area into a giant open-air prison. The game has DirectX 9 and 11 rendering paths, with support for tessellation, multi-view soft shadows, and ambient occlusion. We tested in DX11 mode with all in-game graphical options set to their maximum values, at various resolutions.

The GeForce GTX 680 continued its winning ways in the Batman: Arkham City benchmark. In this test, the GeForce GTX 680 outpaced the Radeon HD 7970 by 15.6% and 18.7% at 2560x1600 and 1920x1200, respectively. And at the same resolutions, the GeForce GTX 680 was able to beat the reference GeForce GTX 580 by between 34% and 37.6%. AMD still hasn't gotten CrossFire scaling working properly in this game, so the Radeon HD 6990 gets smoked, and the GTX 680 is even able to squeak past the GeForce GTX 590.
 

Dirt 3 Performance

Dirt 3
DirectX 11 Gaming Performance


Dirt 3

Dirt 3 is the latest in a string of great racing games from Codemasters. Like its predecessor, 2009's Dirt 2, this game sports impressive visuals with DX11 support. “Ultra” settings for shadow effects, tessellation, and post-processing elements like depth of field become available to the gamer, and in turn crank up the workload on the graphics subsystem. The game engine also makes use of multi-core processors for higher performance on top-end systems. We tested the game configured with its Ultra graphics options and 4X anti-aliasing, at resolutions of 1920x1200 and 2560x1600.

We saw more of the same in the Dirt 3 benchmark. In this game, the new GeForce GTX 680 outperformed the Radeon HD 7970 by 17.6% to 23.8%, depending on the resolution. Versus the GeForce GTX 580, those deltas climbed to 43.9% and 47.6%. The GeForce GTX 680 was also able to clearly outpace the Radeon HD 6990 here; the dual-GPU powered GeForce GTX 590, however, still had an edge overall.
 

Alien vs. Predator Performance

Alien vs. Predator
DirectX 11 Gaming Performance


Alien vs. Predator

The Alien vs. Predator benchmark makes use of the advanced tessellation, screen-space ambient occlusion, and high-quality shadow features available with DirectX 11. In addition to enabling all of the aforementioned DirectX 11 related features offered by this benchmark, we also switched on 4X anti-aliasing along with 16X anisotropic filtering to more heavily tax the graphics cards being tested.

Performance took a turn to AMD town in the Alien vs. Predator benchmark. In this test, the GeForce GTX 680 trailed the Radeon HD 7970 by about 6.5% to 9% depending on the resolution being tested, marking the first time in our testing that AMD's current flagship was able to beat the GeForce GTX 680. Versus the GeForce GTX 580, the newer, more powerful GTX 680 came out on top by up to 18%, but the dual-GPU powered cards were clearly the fastest here.
 

Overclocking the GeForce GTX 680

NVIDIA claimed that the GeForce GTX 680 had some serious headroom for overclocking, so we fired up a beta release of EVGA’s excellent Precision performance tuning utility, which supports the GTX 680, to see just how much additional performance we could wring from the card.

During some conversations we had with a few representatives from NVIDIA, we were told that most GeForce GTX 680 cards would likely be able to hit GPU frequencies around 1.2GHz, with stock cooling. Our testing proved that to be true.


EVGA's Precision Performance Tuning Utility

By cranking up the power target by 20% and increasing the GPU Clock Offset by 100MHz, our GeForce GTX 680 would consistently boost to about 1.19GHz with perfect stability and zero visual artifacts. For giggles, we also cranked the memory clock up by another 60MHz for an additional performance boost.

Overclocking the GeForce GTX 680
Putting The Pedal to the Metal

When all was said and done, we were able to raise the GeForce GTX 680’s peak GPU clock by 184MHz, an increase of about 18.4%. And we’re sure there would be even more performance under the hood with a further increase to the power target and GPU voltage.
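The math behind that claim is simple enough to verify against the base clock:

```python
# Checking the overclocked peak-clock math quoted above.
base_mhz = 1006
observed_peak_mhz = 1190   # the "about 1.19GHz" our card boosted to
gain_mhz = observed_peak_mhz - base_mhz
print(gain_mhz)             # 184 MHz gained over the base clock
print(gain_mhz / base_mhz)  # ~0.183, i.e. the roughly 18.4% increase cited
```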

With that said, while we had the card overclocked, we fired up a couple of benchmarks to see how performance was affected. Ultimately we saw a mild increase in performance in Alien vs. Predator, which is very sensitive to memory bandwidth. Metro 2033 showed a much larger increase in performance, however. In that test, overclocking the GeForce GTX 680 allowed it to pull ahead of the GeForce GTX 590, whereas the stock card could not.
 

Power Consumption, Noise, Temps

Before bringing this article to a close, we'd like to cover a few final data points--namely power consumption and noise. Throughout all of our benchmarking and testing, we monitored acoustics and tracked how much power our test system was consuming using a power meter. Our goal was to give you an idea as to how much power each configuration used while idling and while under a heavy workload. Please keep in mind that we were testing total system power consumption at the outlet here, not just the power drawn by the graphics cards alone.

Total System Power Consumption
Tested at the Outlet

The new GeForce GTX 680 proved to be quite power friendly under both idle and load conditions. With the GeForce GTX 680 idling at the Windows desktop (with the monitor displaying an image), our test machine consumed only 122 watts—11 watts fewer than the GeForce GTX 580 and 5 watts fewer than the Radeon HD 7970.

With the GeForce GTX 680 loaded up, our test system’s power consumption jumped up to only 358 watts, which was among the lowest of the bunch. When running under load conditions, the GeForce GTX 680 consumed 46 fewer watts than the GeForce GTX 580 and 17 fewer watts than the Radeon HD 7970. That’s a big win for NVIDIA after years of more power efficient AMD GPUs.

With the improvements made to the GeForce GTX 680’s PCB and cooling hardware and the power efficiencies inherent to the architecture, it should come as no surprise that the GeForce GTX 680 runs relatively cool and quiet. Our particular card idled at about 42°C and peaked at about 73°C under load, according to EVGA’s Precision utility.

Noise was also a non-issue. At idle, the GeForce GTX 680 is essentially silent and couldn’t be heard above the noise produced by our CPU cooler and PSU. Under load, the card’s fan did spin up to audible levels, but we would not consider the card loud by any means.
 

Our Summary and Conclusion

Performance Summary: NVIDIA made summarizing the GeForce GTX 680’s performance nice and easy. To put it simply, the GeForce GTX 680 is the fastest single-GPU based graphics card we have tested to date. Generally speaking, the GeForce GTX 680 was between approximately 5% and 25% faster than AMD’s Radeon HD 7970, depending on the application, although the Radeon HD 7970 was able to pull ahead in a couple of spots, like Alien vs. Predator. In comparison to NVIDIA’s previous single-GPU flagship, the GeForce GTX 580, the new GTX 680 is between 15% and 50% faster. Versus ultra high-end, dual-GPU powered cards like the Radeon HD 6990 and GeForce GTX 590, the GeForce GTX 680’s performance still looks good, as it was able to outrun those dual-GPU powerhouses on a few occasions.


The NVIDIA GeForce GTX 680 Reference Card

The NVIDIA GeForce GTX 680 is full of all kinds of win. Despite using fewer transistors, having a smaller die, and consuming less power, the GeForce GTX 680 is faster than the AMD Radeon HD 7970 overall. The GK104 GPU is simply more efficient than AMD’s Tahiti in terms of performance per watt and performance per transistor. The design decisions NVIDIA made with the Kepler architecture have clearly paid off.

The GeForce GTX 680 also offers some cool new features. GPU Boost allows the card to take advantage of performance that would have been left untapped with previous-gen architectures, Adaptive VSync smooths out the stuttering sometimes associated with standard VSync, and TXAA should increase image quality without incurring a massive performance hit. The output capabilities of the GeForce GTX 680 also make it possible to run a 3D Vision Surround configuration from a single card. In terms of new features, NVIDIA has also done an excellent job, in our opinion.

Now for what some of you will likely consider the best news of all: NVIDIA has set the MSRP of the GeForce GTX 680 at $499. We dinged AMD for not pushing the price vs. performance envelope much with the Radeon HD 7000 series, so it’s only fair that we give NVIDIA props for doing it with the GeForce GTX 680. While $499 isn’t cheap, it’s much more palatable than the $549+ of the Radeon HD 7970. In addition to shifting the price/performance curve in consumers’ favor, the GeForce GTX 680 should also force AMD to cut the prices of its Radeon HD 7800 and 7900 series cards, another win for consumers.

In the end, we can’t help but like the GeForce GTX 680. The card has virtually everything an enthusiast could ask for at this time. It’s faster, cooler, and quieter than the competition and it offers some cool new features. When NVIDIA briefed us on Kepler and the GeForce GTX 680, they said their goals with this new architecture were to produce a product that was faster, smoother, and richer than the previous generation. We think they pulled it off.

 

  • Great Performance
  • Relatively Cool and Quiet
  • GPU Boost
  • Adaptive VSync
  • TXAA
  • 3D Surround From a Single Card

  • Expensive (Although Priced Aggressively)
  • Additional Considerations for Overclockers



Content Property of HotHardware.com