NVIDIA GeForce 8800 GT - G92 Takes Flight
Date: Oct 29, 2007
Author: Dave Altavilla
Introduction and Specifications

In the GPU business, like any other semiconductor technology, die size, cost and performance-per-watt are the name of the game.  Performance-per-watt is important on multiple levels from a design and manufacturing perspective.  Power-efficient products certainly matter to the consumer, but generally speaking, power-efficient designs also mean higher clock speed headroom and higher yields, which, at the end of the day, drop right to the bottom line.  In terms of very complex, highly scalable processor architectures like CPUs and GPUs, bleeding-edge manufacturing processes are critical to delivering a competitive product with reasonable cost targets, power consumption and profit margins.

Today's NVIDIA GeForce 8800 GT launch is an example of what it takes to bring a highly complex processor architecture to fruition in today's competitive arena.  The new GeForce 8800 GT is an entirely new GPU core; well, sort of.  The NVIDIA G92 GPU core under the hood of the new GeForce 8800 GT is essentially a die-shrink and cut-back of NVIDIA's G80 architecture, with a few enhancements and optimizations.  With this migration to TSMC's 65nm process technology, the new GeForce 8800 GT is targeted at offering solid mid-range performance, lower power consumption and heat, along with a competitive price.  As we said, complex GPUs need top-notch manufacturing processes to be competitive in this game.  Let's have a look at the way NVIDIA thinks it was meant to be played.


NVIDIA GeForce 8800 GT
Features & Specifications

Fabrication:  65nm

Number of Transistors:  754 Million

Core Clock (includes dispatch, texture units and ROPs):  600MHz

Shader Clock (Stream Processors):  1500MHz

Stream Processors:  112

Memory Clock:  900MHz (1800MHz DDR)

Memory Interface:  256-bit

Memory Bandwidth:  57.6 GB/s

Frame Buffer Size:  512MB

Texture Fill Rate (billions of bilinear filtered texels/s):  33.6 GT/s

HDCP Support:  Yes

HDMI Support:

Outputs:  2 x Dual-Link DVI-I, 7-Pin TV Out

Bus Technology:  PCI Express 2.0

Max Board Power:  110 Watts

NVIDIA unified architecture:

Fully unified shader core dynamically allocates processing power to geometry, vertex, physics, or pixel shading operations, delivering up to 2x the gaming performance of prior generation GPUs.

Full Microsoft DirectX 10 Support:
World's first DirectX 10 GPU with full Shader Model 4.0 support delivers unparalleled levels of graphics realism and film-quality effects.

NVIDIA SLI Technology:
Delivers up to 2x the performance of a single graphics card configuration for unequaled gaming experiences by allowing two cards to run in parallel. The must-have feature for performance PCI Express graphics, SLI dramatically scales performance on today's hottest games.

NVIDIA Lumenex Engine:
Delivers stunning image quality and floating point accuracy at ultra-fast frame rates.
16x Anti-aliasing: Lightning fast, high-quality anti-aliasing at up to 16x sample rates obliterates jagged edges.

128-bit floating point High Dynamic-Range (HDR):
Twice the precision of prior generations for incredibly realistic lighting effects - now with support for anti-aliasing.

NVIDIA Quantum Effects Technology:
Advanced shader processors architected for physics computation enable a new level of physics effects to be simulated and rendered on the GPU - all while freeing the CPU to run the game engine and AI.
NVIDIA nView Multi-Display Technology:
Advanced technology provides the ultimate in viewing flexibility and control for multiple monitors.

Dual 400MHz RAMDACs:
Blazing-fast RAMDACs support dual QXGA displays with ultra-high, ergonomic refresh rates - up to 2048x1536@85Hz.

Dual Dual-link DVI Support:
Able to drive the industry's largest and highest resolution flat-panel displays up to 2560x1600.

NVIDIA PureVideo HD Technology:
The combination of high-definition video decode acceleration and post-processing that delivers unprecedented picture clarity, smooth video, accurate color, and precise image scaling for movies and video.

Discrete, Programmable Video Processor:
NVIDIA PureVideo HD is a discrete programmable processing core in NVIDIA GPUs that provides superb picture quality and ultra-smooth movies with low CPU utilization and power.

Hardware Decode Acceleration:
Provides ultra-smooth playback of H.264, VC-1, WMV and MPEG-2 HD and SD movies.

HDCP Capable:
Designed to meet the output protection management (HDCP) and security specifications of the Blu-ray Disc and HD DVD formats, allowing the playback of encrypted movie content on PCs when connected to HDCP-compliant displays.

Spatial-Temporal De-Interlacing:
Sharpens HD and standard definition interlaced content on progressive displays, delivering a crisp, clear picture that rivals high-end home-theater systems.

High-Quality Scaling:
Enlarges lower resolution movies and videos to HDTV resolutions, up to 1080i, while maintaining a clear, clean image. Also provides downscaling of videos, including high-definition, while preserving image detail.

Inverse Telecine (3:2 & 2:2 Pulldown Correction):
Recovers original film images from films-converted-to-video (DVDs, 1080i HD content), providing more accurate movie playback and superior picture quality.

Bad Edit Correction:
When videos are edited after they have been converted from 24 to 25 or 30 frames per second, the edits can disrupt the normal 3:2 or 2:2 pulldown cadences. PureVideo HD uses advanced processing techniques to detect poor edits, recover the original content, and display perfect picture detail frame after frame for smooth, natural looking video.
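To make the cadence idea concrete, here's a simplified sketch of how 3:2 pulldown maps 24fps film frames onto 60 fields-per-second video, and how inverse telecine can recover the original frames by detecting runs in that cadence.  This is purely illustrative arithmetic on frame labels, not NVIDIA's implementation; real inverse telecine works on interleaved top/bottom fields of actual video.

```python
# Illustrative sketch (not NVIDIA's algorithm): 3:2 pulldown and its inverse.

def telecine_32(frames):
    """Repeat film frames in a 3,2,3,2,... field pattern (A A A B B C C C ...)."""
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2  # alternate 3 fields, then 2
        fields.extend([frame] * repeats)
    return fields

def inverse_telecine(fields):
    """Collapse runs of identical fields back into the unique film frames."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

film = ["A", "B", "C", "D"]      # four 24fps film frames
video = telecine_32(film)         # ten fields: 3+2+3+2 = 60Hz cadence
print(video)                      # ['A','A','A','B','B','C','C','C','D','D']
print(inverse_telecine(video) == film)  # True
```

A bad edit in this model is simply a cut that lands mid-run, breaking the 3,2 rhythm, which is why the cadence detector has to re-lock after every edit.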

Video Color Correction:
NVIDIA's Color Correction Controls, such as Brightness, Contrast and Gamma Correction let you compensate for the different color characteristics of various RGB monitors and TVs ensuring movies are not too dark, overly bright, or washed out regardless of the video format or display type.

Integrated SD and HD TV Output:
Provides world-class TV-out functionality via Composite, S-Video, Component, or DVI connections. Supports resolutions up to 1080p depending on connection type and TV capability.

Noise Reduction:
Improves movie image quality by removing unwanted artifacts.

Edge Enhancement:
Sharpens movie images by providing higher contrast around lines and objects.

A quick take of the above specifications shows this GPU has a few more transistors going for it at 754 million, versus other members of the G80 architecture weighing in at 681 million.  This is because NVIDIA has enhanced a few key functional blocks within the GPU engine, which we will get into detail on shortly.  In addition, the 65nm fab process has obviously been kind to the GPU, affording the ability to scale to a robust 600MHz core clock and a 1500MHz shader clock.  Comparatively, a GeForce 8800 GTX GPU runs at 575MHz/1350MHz core and shader clocks.  Let's dig a bit deeper into the 8800 GT architecture, next.
NVIDIA’s G92 Architecture, A New GPU For The GeForce 8800 GT

Below is an easy-to-digest table of specifications for various members of NVIDIA's G80 series of GPUs and their associated reference board build-outs.  Here you'll be able to more easily place where the new GeForce 8800 GT should fall, in terms of overall performance and capabilities.

(GeForce 8800 series)    Ultra          GTX            GTS            GT
GPU Core Clock           612MHz         575MHz         500MHz         600MHz
GPU Shader Clock         1500MHz        1350MHz        1200MHz        1500MHz
Memory Interface Clock   1080MHz        900MHz         800MHz         900MHz
                         (2160MHz DDR)  (1800MHz DDR)  (1600MHz DDR)  (1800MHz DDR)
Fill Rate                39.2 GT/s      36.8 GT/s      24 GT/s        33.6 GT/s
Memory Bandwidth         103.6 GB/s     86.4 GB/s      64 GB/s        57.6 GB/s
Stream Processors        128            128            96             112
Memory Interface         384-bit        384-bit        320-bit        256-bit
Fab Process              90nm           90nm           90nm           65nm
Number of Transistors    681M           681M           681M           754M
Street Price or MSRP     $615 street    $520 street    $380 street    $199 - $249 MSRP

OK, we know what you're thinking: 754M transistors with goosed-up core and memory clocks that compete with a GeForce 8800 GTX, but this is a "mid-range" card.  And you'd be correct.  As the saying goes, today's high-end will be tomorrow's mid-range.  But there's more to the picture here than the numbers really show.  With 112 Stream Processors and a 256-bit memory interface, the new GeForce 8800 GT offers higher overall fill-rate than a GeForce 8800 GTS but slightly less memory bandwidth, though some third-party board partners may offer higher-speed variants versus the reference design.  From a pure fill-rate perspective, the new GeForce 8800 GT should come within striking distance of a GeForce 8800 GTX but at a fraction of the cost, thanks in part to that 65nm die shrink.
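The memory bandwidth figures in the table above fall straight out of the clocks and bus widths, if you want to check the math yourself.  This is our own arithmetic, not an NVIDIA tool: effective DDR rate times bus width, divided by eight bits per byte.

```python
# Sanity-checking the spec-sheet bandwidth numbers (our own arithmetic).

def mem_bandwidth_gbs(ddr_mts, bus_bits):
    """GB/s from effective DDR transfer rate (MT/s) and bus width (bits)."""
    return ddr_mts * 1e6 * bus_bits / 8 / 1e9

print(mem_bandwidth_gbs(1800, 256))  # 8800 GT    -> 57.6 GB/s
print(mem_bandwidth_gbs(1600, 320))  # 8800 GTS   -> 64.0 GB/s
print(mem_bandwidth_gbs(1800, 384))  # 8800 GTX   -> 86.4 GB/s
print(mem_bandwidth_gbs(2160, 384))  # 8800 Ultra -> 103.68 GB/s
```

This makes plain why the GT trails the GTS on bandwidth despite its higher memory clock: the narrower 256-bit interface is the limiting term.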

If the above block diagram looks vaguely familiar, it's because it is.  What you're looking at is the same basic block layout of the NVIDIA G80 architecture, but with only seven groups of 16 Stream Processor units.  7 times 16 equates to 112 total stream processors.  So we have our basic math in check here, but why the odd number of SP clusters?  Good question.  The simple fact is that this new GPU core is more or less a die shrink of the existing G80 minus one SP cluster; at least one of the 8 available on the original G80 core has been disabled.  However, in the immortal words of a cheesy infomercial host: but wait, there's more!  NVIDIA brought new features and engine enhancements to the GeForce 8800 GT and we'll dig into those next.
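The cluster math above, plus a rough peak-shader-throughput estimate, can be sketched as follows.  Note that the 3 FLOPs per SP per clock figure (a MADD plus a MUL dual-issue) is our assumption about the architecture, not a number from the specifications above.

```python
# Cluster math from the block diagram, plus an assumed peak-FLOPS estimate.

clusters, sps_per_cluster = 7, 16
stream_processors = clusters * sps_per_cluster
print(stream_processors)  # 112

shader_clock_ghz = 1.5
flops_per_sp_per_clock = 3  # assumed: MADD (2 FLOPs) + MUL (1 FLOP) per clock
peak_gflops = stream_processors * shader_clock_ghz * flops_per_sp_per_clock
print(peak_gflops)  # 504.0
```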

New Features – PCI Express 2.0, Pure Video HD and TRMS AA

PCI Express 2.0 GPU Debut:

The way to PCI Express 2.0 has been paved by Intel's new X38 chipset, and NVIDIA decided to follow it with the launch of this new GPU.  The PCI Express 2.0 specification calls for a 5 Gigabit/sec SerDes (serializer/deserializer) physical interface.  These new higher-speed links offer twice the available dedicated bi-directional bandwidth, keeping the GPU fed with pre-processed rendering workload from the host "root complex" system CPU.  Though this new high-speed serial interface will become more important down the road, as game engines and new platform designs begin to take advantage of it, think of this new GPU feature as an insurance policy of sorts, offering a level of future-proofing.
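The "twice the bandwidth" claim is easy to verify with back-of-the-envelope math: each lane signals at 5 GT/s, and with the spec's 8b/10b encoding every 10 bits on the wire carry 8 data bits, i.e. one data byte per 10 transfers.  A quick sketch of our own arithmetic:

```python
# PCIe per-direction bandwidth (our arithmetic, per the 8b/10b encoding
# used by PCIe 1.x/2.0: one data byte delivered per 10 line transfers).

def pcie_gbs_per_dir(gt_per_s, lanes):
    return gt_per_s * lanes / 10  # GB/s, per direction

print(pcie_gbs_per_dir(2.5, 16))  # PCIe 1.x x16 -> 4.0 GB/s each way
print(pcie_gbs_per_dir(5.0, 16))  # PCIe 2.0 x16 -> 8.0 GB/s each way
```

So a PCIe 2.0 x16 slot offers 8 GB/s in each direction, exactly double the first-generation interface.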

2nd Generation PureVideo HD:
If you were paying attention to our GeForce 8600 series launch back in April, you would have seen a preview of the second coming of NVIDIA's PureVideo HD video processing engine in that new GPU architecture. 

NVIDIA's VP2 video processing engine offers AES128 decryption for protected content as well as full hardware acceleration, from bitstream processing to motion compensation and decode, for H.264 high-def content.  A diagram from that past launch illustrates NVIDIA's GPU video processing migration over the past few releases.

The original GeForce 8800 series GPUs are only VP1-enabled, but with the later GeForce 8500 and 8600 series GPUs, NVIDIA brought out VP2.  The company has now migrated the new processing engine into the GeForce 8800 GT and will likely do so for roadmap products as well.

TRMS AA - A New Transparency AA Sampling Algorithm:
Anti-aliasing has probably become the darling feature enhancement of graphics processors over the years.  It seems like with every new GPU refresh, a new sampling algorithm is unveiled touting new-found levels of image quality and crispness.  With the new R167 release of NVIDIA's ForceWare drivers, a new transparency AA sampling algorithm has been enabled that offers super-sample-like transparency AA quality at multi-sample AA performance levels.  That is to say, this new TRMS AA (transparency multi-sample AA) setting will have a minimal impact on performance but look as good as super-sample modes.  Below we have taken screen shots from Half Life 2: EP1 in an effort to illustrate the image quality of the available modes for the various transparency AA settings.

No Transparency AA

TRMS AA - Multi-Sample

TRSS AA - Super Sample

Note that Half Life 2: EP2 has its own transparency AA technology built into the engine, so setting transparency AA in the driver control panel isn't worth doing for that game.  However, as you can see in EP1, our chain-link fence looks decidedly more defined and crisp with TRMS AA enabled versus the shot without it, especially in far-reaching areas and along the bottom of these shots.  Finally, as you can see here, the super-sample AA mode doesn't offer much, if any, additional image quality enhancement over TRMS.  Examples of current game engines that can benefit from TRMS are Half Life 2 and HL2: EP1, Oblivion, FarCry and Serious Sam 2.  Examples of games that have built-in transparency AA are Half Life 2: EP2, Company of Heroes, Team Fortress 2, Battlefield 2 and Battlefield 2142.

The GeForce 8800 GT Graphics Card

The GeForce 8800 GT itself is diminutive in size and, thanks to the GPU's 65nm process technology, comes built on a single-slot, 9" PCB that just begs for a multi-GPU SLI configuration.



Though the slim form-factor is certainly a big plus, no pun intended, the small turbine-style fan the reference design employs can definitely get a bit whiny, especially on initial boot-up, where the card is downright annoying.  However, once the card's drivers kick into gear in the OS, the card throttles way back and the fan isn't audible unless the card is under heavy load or thermally challenged due to tight chassis space constraints.  Under serious loads and when things get warm, expect the fan to spin up to tolerable though decidedly noticeable levels.


MSI's retail offering is shown here with virtually no difference in board design versus NVIDIA's reference design.  The card, however, does come factory overclocked to 660MHz core and 1900MHz memory speeds.  Don't expect the acoustics to be any better with this card, but again, the GeForce 8800 GT's whiny little fan isn't what we would call a show-stopper.

Test Setup And 3DMark 06

HOW WE CONFIGURED THE TEST SYSTEMS: We tested all of the graphics cards used in this article on either an Asus Striker Extreme nForce 680i SLI motherboard (NVIDIA GPUs) or an Asus P5E3 Deluxe (ATI GPUs) powered by a Core 2 Extreme QX6850 quad-core processor and 2GB of low-latency Corsair RAM. The first thing we did when configuring the test systems was enter their BIOS and set all values to their "optimized" default settings. Then we manually configured the memory timings and disabled any integrated peripherals that wouldn't be put to use. The hard drive was then formatted, and Windows Vista Ultimate and the latest DX10 update were installed. When the installation was complete, we then installed the latest chipset drivers available, and installed all of the other drivers necessary for the rest of our components.  Auto-Updating and System Restore were also disabled and the hard drive was defragmented.

The HotHardware Test System
Core 2 Extreme Powered

Hardware Used:

Processor -
Core 2 Extreme QX6850 (3GHz)

Motherboards -
Asus Striker Extreme (nForce 680i SLI chipset)
Asus P5E3 Deluxe (X38 Express)

Video Cards -
GeForce 8800 GT (2)
GeForce 8800 GTS (2)
Viper Radeon HD 2900 XT 1GB (2)
Radeon HD 2900 XT (2)

Memory -
2048MB Corsair PC2-6400C3 (2 x 1GB)
2048MB Corsair DDR3 1333 C9 (2 x 1GB)

Audio -
Integrated on board

Hard Drive -
Western Digital "Raptor" 74GB - 10,000RPM - SATA

Relevant Software:

OS -
Windows Vista Ultimate

DirectX -
DirectX 10

Video Drivers -
NVIDIA ForceWare v169.1
ATI Catalyst v7.10

Benchmarks Used:
3DMark06 v1.0.2 (Synthetic DX)
World In Conflict - DX10
Lost Planet - DX10
Bioshock - DX10
Half Life 2: Episode 2 - DX*
Enemy Territory: Quake Wars - OpenGL*

* - Custom Test (HH Exclusive demo)

Performance Comparisons with 3DMark06 v1.0.2
Details: www.futuremark.com/products/3dmark06

3DMark06 is the most recent addition to the 3DMark franchise. This version differs from 3Dmark05 in a number of ways, and includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups that number to 512. 3DMark06 also employs much more lighting and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.

If we look at the individual Shader Model 2.0 and Shader Model 3.0 / HDR tests built into 3DMark06, we see that the new GeForce 8800 GT offers significantly stronger performance than the GTS and Radeon HD 2900 XT in the Shader Model 2.0 test.  In the SM 3.0 test, the 8800 GT finished just behind the 2900 XT, however.

The GeForce 8800 GT's strong performance in the Shader Model 2.0 test propels it to the head of the pack according to 3DMark06's overall score.

In a multi-GPU configuration, the new GeForce 8800 GT performs well ahead of a pair of GeForce 8800 GTS cards and right on par with a Radeon HD 2900 XT CrossFire configuration.  We've also thrown in the result from a single 8800 GTX, to show you how a pair of these relatively inexpensive GTs compares to the more expensive GTX.

Half Life 2: EP2 – Single GPU and SLI

Performance Comparisons with Half-Life 2: Episode 2

Details: www.half-life2.com

Half Life 2:

Episode 2
Thanks to the dedication of hardcore PC gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  And thanks to an updated game engine, gorgeous visuals, and intelligent weapon and level design, Half Life 2 became just as popular.  Episode 2 offers a few minor visual enhancements, including better-looking transparent texture anti-aliasing. These tests were run at resolutions of 1,280 x 1,024, 1,600 x 1,200 and 1,920 x 1,200 with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently.  Color correction and HDR rendering were also enabled in the game engine.  We used a custom recorded timedemo file to benchmark all cards in this test.

In a single card configuration, the new GeForce 8800 GT significantly outpaced both the GeForce 8800 GTS and the Radeon HD 2900 XT in our custom Half Life 2: Episode 2 benchmark.  At the higher resolutions, the GT performed very well in comparison to the 2900 XT in particular, where the 8800 GT was upwards of 26% faster in this game.

 Running in a multi-GPU configuration boosts performance across the board, but the performance trend remains relatively unchanged.  The new GeForce 8800 GT was still faster than the GTS and Radeon, and the single GTX, at the higher resolutions, but the deltas got a little smaller.

ET: Quake Wars – Single GPU and SLI
Performance Comparisons with ET: Quake Wars
Details: www.enemyterritory.com
Enemy Territory: 
Quake Wars
Enemy Territory: Quake Wars is based on id's radically enhanced Doom 3 engine and is viewed by many as Battlefield 2 meets the Strogg, and then some.  In fact, we'd venture to say that id took EA's team-based warfare genre up a notch or two.  ET: Quake Wars also marks the introduction of John Carmack's "MegaTexture" technology, which employs extremely large environment and terrain textures covering vast areas of map levels without the need to repeat and tile many small textures.  The beauty of the megatexture is that each unit only takes up a maximum of 8MB of frame buffer memory.  Add to that HDR-like bloom lighting and leading-edge shadowing effects, and Enemy Territory: Quake Wars looks great, plays hot and works high-end graphics cards vigorously.

In our custom Enemy Territory: Quake Wars benchmark, the GeForce 8800 GT continued its winning ways.  The two GeForce cards were neck and neck, with the GT getting the better of the GTS for the most part.  The Radeon HD 2900 XT, however, finished well behind.

The Radeon HD 2900 XT CrossFire configuration had significant corruption in this game, so we omitted the results from our graph.  We weren't confident everything was being rendered correctly and don't want to post inaccurate results.  If we focus on the GeForce comparison, however, we see a similar trend with the new GT finishing just ahead of the GTS.
World In Conflict – Single GPU and SLI
Performance Comparisons with World In Conflict
Details: www.worldinconflict.com
World In Conflict
The real-time tactical game World In Conflict, from Massive Entertainment, is easily one of the most impressive looking game engines in our battery of graphics benchmarks.  Unfortunately it also puts a huge load on processor system resources and as such is mostly CPU-limited more than anything else.  Regardless, we can't help but include it as a graphics performance metric because the game's visuals are just too impressive.  This game doesn't require blisteringly fast frame-rates to look and play smoothly, however. In fact, frame-rates in the 30 fps range are more than playable and will look fluid.  We set all available image quality features to their highest settings and enabled the game's DX10 features as well as 4X AA and 16X AF at three resolutions, 1280X1024, 1600X1200 and a 1920X1200 wide-screen 16:10 format.

The performance trend we've seen up to this point shifted with our custom World In Conflict benchmark. In this game, the GeForce 8800 GTS took the lead, followed closely behind by the new GeForce 8800 GT.  The Radeon HD 2900 XT, however, had trouble here and wasn't even competitive.

When we switched to a multi-GPU configuration, the performance trend in World in Conflict shifted yet again.  This time around, the GeForce 8800 GT SLI configuration outpaced a pair of GTS cards by a couple of frames per second.  The single GTX card was still a bit faster, and while the Radeons' performance improved in CrossFire mode, they still faltered in this game.

Bioshock – Single GPU and SLI

Performance Comparisons with Bioshock
Details: www.bioshock.com

The latest all-the-rage first-person shooter, Bioshock, by 2K Boston, was first released on the Xbox 360, with a PC version that soon followed.  The game engine is built on a modified Unreal Engine 2.5 and has a DX10 rendering mode that employs some level of image quality enhancement in areas like volumetric fog and smoke.  The game currently doesn't support anti-aliasing in its graphics options control panel, though you can enable AA in your graphics driver, so long as the game is set to run in its DX9 path.  We chose to test with DX10 enabled, with no AA and all other graphics options at their highest settings.

Bioshock proved to be a bit of a strong point for the 8800 GT. In this test, the single GeForce 8800 GT finished a few frames per second ahead of the GeForce 8800 GTS at all resolutions and well in front of the Radeon HD 2900 XT.

In a multi-GPU configuration, the Radeon HD 2900 XT CrossFire configuration showed better scaling than the GeForce cards and the Radeons actually finished in front of the GeForces at the higher resolutions.  The new GT outpaced the older GTS yet again though.

Crysis SP Demo - Single GPU and SLI - 11/7/07 Update

If you're at all into enthusiast computing, the highly anticipated single-player demo of the hot upcoming FPS smash-hit Crysis should require no introduction.  Crytek's game engine visuals are easily the most impressive 3D renderings we've seen on the computer screen to date.  The engine employs some of the latest techniques in 3D rendering, like Parallax Occlusion Mapping, Subsurface Scattering, Motion Blur and Depth-of-Field effects, as well as some of the most impressive use of shader technology we've seen yet.  In short, for those of you that want to skip the technical jib-jab, Crysis is HOT.

Performance Comparisons with Crysis SP Demo
Looks that kill...

Crytek's Crysis in all its glory...


Even with our image quality settings configured without Anti-Aliasing, Crysis serves up an absolutely punishing workload for any graphics card on the market today and certainly amongst the cards we tested here.  We tested with game engine settings all set to "high" instead of "very high", which reduces some of the image post-processing effects workload like motion blur and makes use of slightly less complex lighting and shadowing effects, just to name a few of the compromises.  Regardless, the changes are hardly noticeable in action and as you can see, our graphics test-bed still had a bit of work to do, keeping up.

Our pair of GeForce 8800 GTs in SLI outpaced the single Big Daddy GeForce 8800 GTX card by about 10% in round numbers.  A single GeForce 8800 GT was about 10 - 15% slower than a GeForce 8800 GTX, but the GT ran circles around the AMD-ATI Radeon HD 2900 XT.

Pure Video HD Performance

Finally, we set out to stress the new GeForce 8800 GT's PureVideo HD video processing engine, in terms of both image quality and CPU utilization with our HQV and H.264 playback tests.

HQV - DVD Video Decoding And Playback Quality
Pure Video HD, The Second Coming

HQV is comprised of a sampling of SD video clips and test patterns that have been specifically designed to evaluate a variety of interlaced video signal processing tasks, including decoding, de-interlacing, motion correction, noise reduction, film cadence detection, and detail enhancement. As each clip is played, the viewer is required to "score" the image based on a predetermined set of criteria. The numbers listed below are the sum of the scores for each section. We played the HQV DVD using the latest version of Cyberlink's PowerDVD, with hardware acceleration for AMD AVIVO and NVIDIA PureVideo extensions enabled.

The HQV benchmark has lost much of its luster with us, not only because both ATI's and NVIDIA's respective architectures produce near-perfect scores (130 points is a perfect score here), but because this benchmark leaves a portion of the scoring open to interpretation; depending on who you talk to (Silicon Optix, Intel, NVIDIA, ATI, etc.), you'll likely get a different answer as to how to score a particular product.  Regardless, thanks to AVIVO and PureVideo, the PC is a solid platform for playing most SD content.  You'd have to spend thousands of dollars to get a DVD player that can produce similar results in this benchmark.  Overall, we'd tip our collective hats toward NVIDIA's PureVideo HD output versus AMD's AVIVO, but we've seen the tables turn from launch to launch and, unless you're really a pixel snob, the differences are rather subtle.

H.264 Video Decoding And Playback Performance
Sipping CPU Cycles

We then conducted a test using an H.264-encoded movie trailer clip for "Evan Almighty", which is available for download on Apple's QuickTime website.  The CPU utilization data gathered during these tests was taken from Windows Vista Performance Monitor. The graphs below show the CPU utilization for a GeForce 8800 GT and a Radeon HD 2900 XT using PowerDVD for the QuickTime clip.  We'd caution you to focus on the red color-coded processor utilization line in the following graphs. The other green, yellow and blue lines for disk IO and memory utilization can vary greatly from run to run.

Radeon HD 2900 XT

GeForce 8800 GT

Here we see NVIDIA's PureVideo HD-enabled system with the GeForce 8800 GT mostly on par with AMD's AVIVO technology running on the Radeon HD 2900 XT.  Though the peaks and valleys are significantly more dramatic with the 2900 XT, overall CPU utilization recorded was 14% on average for either system, which is pretty spectacular when you consider the processing requirements for playing back such high-quality digital video content.

Overclocking, Power Consumption and Heat


With any new GPU architecture, especially one built on a new, leading-edge 65nm manufacturing process, we'll always try to push the envelope a bit in an effort to see just how much headroom the card has above and beyond factory specifications.

Overclocking NVIDIA's GeForce 8800 GT
Fast 3D Video Card + Overclocking = Even Faster Card

Ultimately we managed to crank up our GeForce 8800 GT reference card to a blistering 720MHz core speed and a full 2GHz memory interface speed.  The net result is a 20% core speed increase and an 11% memory interface speed increase.  Here are the results.

In both our Lost Planet and ET: Quake Wars tests we realized a 14 - 15% frame rate increase, which loosely correlates to our collective core and memory speed boosts that we achieved while overclocking the product.  Overall we are impressed with the available margin in the GeForce 8800 GT, especially when you consider its rather modest thermal management solution.  More on this next.
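The overclocking percentages quoted above are simple ratios of achieved versus stock clocks, worked out here with our own arithmetic:

```python
# Percentage gains from the overclocking results (stock 600MHz core,
# 1800MHz effective memory; achieved 720MHz core, 2GHz memory).

def pct_gain(new, stock):
    return (new / stock - 1) * 100

print(round(pct_gain(720, 600), 1))    # core:   20.0 %
print(round(pct_gain(2000, 1800), 1))  # memory: 11.1 %
```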

Total System Power Consumption, Operating Temps & Acoustics
It's All About the Watts and Decibels

Below we've graphed total power consumption for the systems we used for testing, along with their associated graphics subsystems.  Our goal was to give you all an idea as to how much power each configuration used. Please keep in mind that we were testing total system power consumption at the outlet here, not just the power being drawn by the video cards alone.

The first notable thing you should observe here is that the nForce 680i-based systems draw significantly more power than the Intel X38-based system.  This is mostly due to the larger number of PCI Express lanes available in the 680i chipset.  Beyond that, the area to focus on is the variance in power consumption from idle to load conditions, which shows worst-case power consumption with the GPUs under full load.  We simply looped 3DMark06's Shader Model 3.0 test and recorded power consumption with a power analyzer drawing from the wall outlet.  The GeForce 8800 GT drew an extra 73 watts under a graphics workload, while in stark contrast, the Radeon HD 2900 XT drew 168 watts more under the same load condition.  One word describes the performance-per-watt variance here: ouch.

Acoustics and Heat:
As we noted earlier, the GeForce 8800 GT's heatsink and fan solution is certainly svelte with its single-slot design, but unfortunately that comes at the cost of heat and noise.  In fact, the 8800 GT is probably the noisiest product to come out of NVIDIA in a long time.  The card will likely only annoy users that are looking for exceptionally quiet computing solutions; for the average usage model, these new cards will be tolerable.  We're hopeful, however, that some third-party board partners will come up with more innovative, quiet cooling solutions.

In terms of its thermal profile, the GeForce 8800 GT runs relatively hot.  We measured 155ºF on the exterior of the heatsink face plate and 145ºF on the backside of the board directly under the GPU, with a digital infrared thermometer.

Performance Analysis and Conclusion
Performance Summary:
Virtually across the board, NVIDIA's new GeForce 8800 GT is as fast as or faster than a 640MB GeForce 8800 GTS card and significantly faster than a Radeon HD 2900 XT, save for a couple of CrossFire/SLI-enabled tests like 3DMark and Bioshock, where driver maturity is likely the limiting factor for the new NVIDIA card. From a pure DX10 gaming perspective, the GeForce 8800 GT ran roughshod over the significantly more expensive Radeon HD 2900 XT and even outscored a GeForce 8800 GTS in more than a few spots, as with our Bioshock single-card testing.  Finally, in terms of HD digital video processing and image quality, the GeForce 8800 GT, with its second-gen PureVideo HD engine, is roughly on par with AMD's AVIVO solution currently.

NVIDIA's second foray into 65nm GPU technology is a resounding success, in our humble opinion.  And how could anyone argue with a graphics product that consumes less power, costs less and performs better than the previous generation?  The proverbial hat-trick of high-tech electronics is what we're looking at when we consider all of the performance metrics we showed you today against the backdrop of price.  NVIDIA's GeForce 8800 GT will be available in retail and e-tail immediately at an MSRP ranging from $199 to $249.  Likely, 256MB cards will weigh in at the lower price point, with 512MB cards closer to the $249 tag.  About the only thing we would mildly take issue with relative to this new product is its thermal management solution, which occasionally reminds you of its whiny existence.
Beyond that, for a reasonable investment, you get a product that will run just about any game engine at high resolution, with support for DirectX 10 and HD digital video processing offload.  In fact, for less than the cost of a single GeForce 8800 GTX, you could buy two of these cards and run them in SLI for even better performance.  The real question mark is how the GeForce 8800 GT will perform versus AMD's soon-to-be-released RV670 mid-range GPU.  We're hearing stories of new 55nm, 700M-transistor Radeon GPUs dancing in like sugarplums just in time for the holiday buying season.  We'll have to see how this all unfolds in the weeks ahead, but for today, it's all NVIDIA's limelight shining down on the new cost- and power-efficient GeForce 8800 GT.

Pros:
  • Solid Mid-Range Performance
  • Decent Overclocker
  • Great Image Quality
  • Great Price/Performance Ratio

Cons:
  • Runs Relatively Hot
  • Fan Can Whine


Content Property of HotHardware.com