NVIDIA's GeForce 7 Update: Introducing the 7900 GTX, 7900 GT & 7600 GT
Date: Mar 09, 2006
Section: Graphics/Sound
Author: Marco Chiappetta and Dave Altavilla
Introduction, Specifications & Related Info

Product refreshes in the graphics world have tended to become rather casual, commonplace occurrences.  A little tweak here, a clock speed bump there, and presto, there's a new SKU to fill the store shelves and something new to look at in the benchmarks.  Historically, refreshes, or "kickers" as they are sometimes known, are often just the result of manufacturing process enhancements, a different bond-out option on the GPU, or even a software modification that unlocks or locks, as the case may be, certain features within the architecture.  NVIDIA's launch of new additions to their GeForce 7 line-up is by no means this sort of refresh, however.  In fact, this launch is about as dramatic a change as one can make to the architecture without completely ripping it up.

Once upon a time, NVIDIA pioneered TSMC's .13 micron manufacturing process and took a few lumps along the way, working out the kinks in their architecture for what was then a leading-edge process geometry. That was a long time ago, however, relatively speaking, and the painful days of the NV30 seem but a distant memory now, with more than a few very successful product launches since then. In addition, the company has significantly improved its execution on delivery of retail product, and its more recent move to .11 micron manufacturing with the introduction of the GeForce 7800 series proved wildly successful, catching ATI flat-footed until they recently launched their Radeon X1900 product. The Radeon X1900 weighs in at 48 pixel shader units strong versus NVIDIA's 24 in the 7800 GTX. So what's NVIDIA's next move?

Logically, one would think the natural migration for NVIDIA would be a more powerful GPU with more rendering resources on board, like more shader pipes or perhaps a few more raster operators to balance things out a bit.  But there's also another way to skin the proverbial cat.  Rather than going back to the drawing board, NVIDIA instead decided to make a move based on manufacturing process technology once again; bold to be sure, when you consider their trials and tribulations of days gone by.  From here, the move would be from .11 micron to .09 micron, or 90 nanometer, technology, once again no small feat in and of itself.

Today's launch is the fruit of NVIDIA's labor at 90nm.  We have full details and performance profiles for you here on three new NVIDIA 90nm-based graphics cards: a new flagship called the GeForce 7900 GTX, a new performance mid-range card dubbed the GeForce 7900 GT, and a new mainstream 12-pipe machine, the GeForce 7600 GT.

NVIDIA's GeForce 7 Series - Now in 90nm
Features & Specifications
NVIDIA CineFX 4.0 Shading Architecture
_Vertex Shaders
- Support for Microsoft DirectX 9.0 Vertex Shader 3.0
- Displacement mapping
- Geometry instancing
- Infinite length vertex programs

_Pixel Shaders
- Support for DirectX 9.0 Pixel Shader 3.0
- Full pixel branching support
- Support for Multiple Render Targets (MRTs)
- Infinite length pixel programs

_Next-Generation Texture Engine
- Accelerated texture access
- Up to 16 textures per rendering pass
- Support for 16-bit and 32-bit floating point formats
- Support for non-power-of-two textures
- Support for sRGB texture format for gamma textures
- DirectX and S3TC texture compression

_Full 128-bit studio-quality floating point precision through the entire rendering pipeline, with native hardware support for 32bpp, 64bpp, and 128bpp rendering modes

API Support
_Complete DirectX support, including the latest version of Microsoft DirectX 9.0 Shader Model 3.0
_Full OpenGL support, including OpenGL 2.0

64-Bit Texture Filtering and Blending
_Full floating point support throughout entire pipeline
_Floating point filtering improves the quality of images in motion
_Floating point texturing drives new levels of clarity and image detail
_Floating point frame buffer blending gives detail to special effects like motion blur and explosions

NVIDIA Intellisample 4.0 Technology
_Advanced 16x anisotropic filtering (with up to 128 Taps)
_Blistering-fast antialiasing and compression performance
_Gamma-adjusted rotated-grid antialiasing removes jagged edges for incredible image quality
_Transparent multisampling and transparent supersampling modes boost antialiasing quality to new levels
_Support for normal map compression
_Support for advanced lossless compression algorithms for color, texture, and z-data at even higher resolutions and frame rates
_Fast z-clear

NVIDIA Digital Vibrance Control (DVC) 3.0 Technology
_DVC color controls
_DVC image sharpening controls

NVIDIA SLI Technology
_Patented hardware and software technology allows two GPUs to run in parallel to scale performance
_Scales performance on over 60 top PC games and applications

NVIDIA UltraShadow II Technology
_Designed to enhance the performance of shadow-intensive games

NVIDIA PureVideo Technology
_Adaptable programmable video processor
_High-definition MPEG-2 and WMV9 hardware acceleration
_Spatial-temporal de-interlacing
_Inverse 2:2 and 3:2 pull-down (Inverse Telecine)
_4-tap horizontal, 5-tap vertical scaling
_Overlay color temperature correction
_Microsoft Video Mixing Renderer (VMR) supports multiple video windows with full video quality and features in each window
_Integrated HDTV output

Composited Desktop Hardware Engine
_Video post-processing
_Real-time desktop compositing
_Accelerated antialiased text rendering
_Pixel shader-driven special effects and animation

Advanced Display Functionality
_Dual integrated 400MHz RAMDACs for display resolutions up to and including 2048x1536 at 85Hz
_Dual DVO ports for interfacing to external TMDS transmitters and external TV encoders
_Full NVIDIA nView multi-display technology capability

Advanced Engineering
_Designed for PCI Express x16
_Designed for high-speed GDDR3 memory



The feature set hasn't changed with the introduction of these new GPUs; it just takes up less die real-estate in .09 micron geometries.  For a more comprehensive look at the main features of the GeForce 7 series, and some details regarding NVIDIA's multi-GPU SLI platform as a whole, we recommend taking a look at a few of our recent articles...

That's a lot of reading, but the information and performance data in those articles will give you all of the background and architectural details necessary to fully understand the new products being announced today. If something on the pages ahead doesn't click, look back to these articles for more detail.

The GeForce 7900 GTX & 7900 GT

A quick glance at the architecture of the new GeForce 7900 series products shows that little has changed in terms of the general block diagram.  That's not to say, however, that the G71 chip itself isn't radically different from its predecessor, the G70, which became known as the GeForce 7800 GTX.  Although parts of the G71 have been re-architected for efficiency and transistor savings, the GeForce 7900 does borrow heavily from the G70.

NVIDIA GeForce 7900 Architecture Quick-Take
The Big Iron

 

GeForce 7900 Series GPU Block Diagram
90nm - 278M transistors

For the GeForce 7900 we're still looking at a 24 pixel pipeline machine, equipped with 8 vertex shader units and 16 ROPs (Raster Operators).  The GPU also still derives its memory bandwidth from a 256-bit interface.  This new GeForce 7 series GPU is built on TSMC's new 90nm fabrication process, which is definitely leading edge by today's standards, but perhaps just not "bleeding edge," where it's oh-so easy to get cut trying to catch a falling knife, so to speak.  We're sure 65nm is on the horizon for the major graphics vendors as well, but for now 90nm process geometries are more than adequate for bringing these highly complex graphics processor cores to market.

Speaking of which, the new GeForce 7900 series has actually lightened its load a bit by dropping a few transistors, the result of design optimizations and tweaks made to the shader engines, ROPs, SLI interface, and Z-Cull logic in this spin of the chip.  The ROPs specifically have been re-architected to improve blending performance, while other portions of the chip were tweaked for efficiency. The result is that the new GeForce 7900 series weighs in at 278 million transistors versus roughly 302 million in the GeForce 7800 GTX.  Looking comparatively at die size versus ATI's new Radeon X1900 series product, you'll also note that the GeForce 7900 is significantly smaller, 196mm2 versus 352mm2 to be exact.  This was part of NVIDIA's approach on this architecture refresh, one we feel should bode very well for them from an overall cost standpoint.

The GeForce 7900 still has only 24 pixel shader engines, which almost sounds a bit meager when you consider the Radeon X1900 XT series comes equipped with 48.  But those 48 shader engines are supported by only 16 Raster Operators in the X1900, and balanced resources, in any architecture, are always critical.  Adding additional Raster Ops along with shader engines would most definitely blow out die size even more, so instead NVIDIA chose to drive the manufacturing process to 90nm, which offers obvious gains in clock speed, die size, thermals, and power consumption.  When you consider how well the GeForce 7800 performs with its 430 - 450MHz core, it doesn't take a rocket scientist to know that a 200MHz increase in clock speed, plus a bit more head-room on memory interface speed, could likely be all that's needed to take the current architecture to the next level.  And imagine what 90nm could do for costs at the same clock speeds.  It becomes clear that NVIDIA may have played this one masterfully, as long as TSMC's process can churn out the volume.
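The clocks-versus-pipes tradeoff is easy to sanity check with some back-of-the-envelope arithmetic. The sketch below uses the clock speeds quoted in this article and deliberately ignores everything (memory bandwidth, drivers, workload) that makes real performance messier; it's a rough illustration, not a benchmark.

```python
# Back-of-the-envelope shader throughput: units x clock, in arbitrary
# relative units. Real-world performance depends on far more than this.

def shader_throughput(units, clock_mhz):
    """Relative theoretical pixel shader throughput."""
    return units * clock_mhz

gf7800_gtx = shader_throughput(24, 430)   # 24 pipes @ 430MHz
gf7900_gtx = shader_throughput(24, 650)   # same 24 pipes @ 650MHz

# The 90nm clock bump alone buys a ~51% theoretical shader uplift.
uplift = gf7900_gtx / gf7800_gtx - 1
print(f"7900 GTX vs 7800 GTX theoretical shader uplift: {uplift:.0%}")
```

By this crude measure, raising the clock on the same 24-pipe design closes a good chunk of the gap that 48 lower-clocked shaders would otherwise imply, without paying the die-size cost.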

A Closer Look At The New GeForce 7900s
One Big and Bad And One Lean and Mean

      

      
NVIDIA GeForce 7900 GTX:
24-Pixel Shader Units / 8 Vertex Shader Units / 16-ROPs /
256-Bit Memory Interface
650MHz Core / 700MHz Vertex / 800MHz (1.6GHz DDR) Memory (256MB) - Pricing $499 - $649

As you can see, the new GeForce 7900 GTX looks just like the 512MB 7800 GTX it usurps as NVIDIA's flagship desktop part. A quick visual inspection did not reveal any differences in the PCB design, and both cards share the same dual-slot cooler with heat-pipes, which first appeared on NVIDIA's high-end Quadro workstation graphics cards. There are some major differences between the 7900 GTX and 7800 GTX, though.  First, the 7900 GTX now features a pair of dual-link DVI outputs, thanks to the two dual-link TMDS transmitters within the G71 GPU. As we mentioned earlier, the GPU is also manufactured on a more advanced 90nm process, which allowed NVIDIA to crank the core clock speed way up on the 7900 GTX. The reference spec calls for a GPU clocked at 650MHz, with a 700MHz geometry clock. And the card's 512MB frame buffer is clocked at 800MHz (1.6GHz DDR), which is actually a bit lower than the 512MB GeForce 7800 GTX's. The GeForce 7900 GTX also features a new 7-pin HDTV output, as opposed to the 9-pin output on previous cards, that supports direct connection of S-Video cables or an HD component output adapter. NVIDIA expects pricing on the 7900 GTX to fall within the $499 to $649 range, depending on the specific card configuration and bundle offered by their board partners. That's a wide range, for sure, but in line with what the market has supported over the past couple of product cycles.

The GeForce 7900 GTX also fully supports NVIDIA's relatively new Quad-SLI technology, which links four GPUs together for increased performance and potentially better image quality when using SLI AA -- up to 32X AA, in fact. At CES we saw what was essentially four GeForce 7800 GTX cards running in Dell's high-end XPS Renegade rig, but at a recent event, NVIDIA was showing off Quad-SLI GeForce 7900 rigs from a handful of strategic partners, including Alienware, Falcon and Voodoo. We hope to bring you a comprehensive look at a Quad-SLI rig in the near future and plan to go more in-depth then. What we can tell you now is that NVIDIA made true Quad-SLI possible by designing a 48-lane, 3-port PCI Express switch, which offers full PCI Express x16 bandwidth to all four GPUs when installed on an nForce4 SLIX16 chipset based motherboard. And to fully harness the power of four GPUs, NVIDIA is introducing a new rendering mode called "AFR of SFR". To put it simply, with AFR of SFR rendering, GPU pairs alternate frames, and each frame is split into two parts, so that each GPU works on roughly half of every other frame.
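Conceptually, AFR of SFR can be sketched as two GPU pairs alternating frames, with each pair splitting its frame into two screen regions. NVIDIA hasn't published the scheduling details, so the particular pairing and top/bottom split below are our illustrative assumptions:

```python
# Conceptual sketch of "AFR of SFR" across four GPUs: frames alternate
# between two GPU pairs (the AFR part), and each pair splits its frame
# into halves (the SFR part). Pairing and split regions are illustrative
# assumptions, not NVIDIA's actual scheduler.

def afr_of_sfr(frame_number):
    """Return (gpu_id, screen_region) assignments for one frame."""
    gpus = (0, 1) if frame_number % 2 == 0 else (2, 3)
    return [(gpus[0], "top half"), (gpus[1], "bottom half")]

for frame in range(4):
    print(f"frame {frame}: {afr_of_sfr(frame)}")
```

The net effect is the one described above: each GPU touches roughly half of every other frame.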

      

      
NVIDIA GeForce 7900 GT:
24-Pixel Shader Units / 8 Vertex Shader Units / 16-ROPs / 256-Bit Memory Interface
450MHz Core / 470MHz Vertex / 660MHz (1.32GHz DDR) Memory (256MB) - MSRP $299 - $399

We suspect this next addition to the GeForce 7 series line-up will be very popular amongst the enthusiast and gaming communities. The new GeForce 7900 GT is essentially a scaled-down 7900 GTX that differs from the new flagship in only a few ways. The 7900 GT and 7900 GTX share the exact same GPU with dual dual-link DVI outputs, but it is clocked at a more conservative 450MHz as per the reference spec. The 7900 GT also sports a 256MB frame buffer, half the size of the GTX's, clocked at 1.32GHz, along with a smaller, less elaborate PCB and a much smaller, single-slot copper cooler. The specifications put the 7900 GT just above the 256MB 7800 GTX in NVIDIA's product line-up, but the pricing scale for the 7900 GT is much more attractive. NVIDIA expects cards to be available for prices between $299 and $399, depending on the frame buffer configuration, clock speeds, and bundles offered by their partners.

The GeForce 7600 GT

 

Take that same leading edge 90nm process technology, crank your clock speeds a bit, and cut down to a 12-pipe machine, and you're looking at a compelling product for the mainstream $200 graphics card market.  You'd also be looking at the new GeForce 7600 GT.

NVIDIA GeForce 7600 Architecture Quick-Take
Mainstream Muscle

GeForce 7600 Series GPU Block Diagram
90nm - 177M transistors

The new GeForce 7600 architecture is set up to fall smack-dab in the sweet spot of the market for volume.  This GPU comes equipped with 12 pixel shader engines, 5 vertex shaders and 8 ROPs.  Its core clock speed is nicely goosed up to 560MHz, and it sports 1.4GHz GDDR3 memory (700MHz DDR) over its 128-bit memory interface.  An interesting note here is that while virtually all other resources were cut in half in comparison to the GeForce 7900 series (shader units and ROPs, for example), NVIDIA decided to leave an extra vertex unit intact, 5 total, versus what may have been perceived as a standard chop-down to 4 in this architecture.  This should most definitely bring performance benefits to this lower cost architecture in the area of geometry processing, when high polygon-count models are being rendered in a scene.  Beyond these simple cut-downs of the core, the architecture is essentially unchanged, and the GeForce 7600 series will offer the same basic feature set as the high end 7900 series, including full support for Shader Model 3.0 rendering and NVIDIA's PureVideo technology.
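Working from the specs above, peak memory bandwidth is simple arithmetic: bus width times effective data rate. The sketch below just formalizes that, and ignores real-world memory efficiency entirely:

```python
# Peak theoretical memory bandwidth from bus width and effective data
# rate. A sanity-check sketch only; sustained bandwidth is always lower.

def bandwidth_gb_s(bus_width_bits, effective_rate_mhz):
    """(bits / 8) bytes per transfer x millions of transfers/sec -> GB/s."""
    return (bus_width_bits / 8) * effective_rate_mhz * 1e6 / 1e9

print(bandwidth_gb_s(128, 1400))  # GeForce 7600 GT:  22.4 GB/s
print(bandwidth_gb_s(256, 1600))  # GeForce 7900 GTX: 51.2 GB/s
```

So the 7600 GT's 128-bit bus at 1.4GHz effective delivers a little under half the peak bandwidth of the 7900 GTX, in keeping with the halved back end.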

Finally, take note of the GeForce 7600's die size captured above.  The chip is obviously highly optimized and we would not be surprised if these cards approached the $150 price range later this year, in certain configurations from the various AIBs.

A Closer Look At The New GeForce 7600
A New Welter-Weight Champ?

      

      
NVIDIA GeForce 7600 GT:
12-Pixel Shader Units / 5 Vertex Shader Units / 8-ROPs /
128-Bit Memory Interface
560MHz Core / 700MHz (1.4GHz GDDR3) Memory (256MB) - Pricing $179 - $229

The GeForce 7600 GT pictured here is a 256MB card, but NVIDIA informed us that 128MB models will likely be available as well. The card sports the same single-slot cooler found on the 7900 GT; however, the similarities basically stop there. The 7600 GT GPU, formerly codenamed G73, does share the same core architecture and overall feature set as its high-powered counterparts, but as we mentioned above, the number of shader units, vertex units, and ROPs has been cut down. The card also differs from the 7900 series in that it features only one dual-link DVI output in conjunction with one single-link DVI output. The 7600 GT, though, doesn't require supplemental power. NVIDIA claims the card requires no more than 70W, which is in line with the 75W a typical PCI Express x16 slot can supply. As is the case with the new 7900s, pricing on the 7600 GT will vary depending on the configuration, clock speeds, and bundle offered by NVIDIA's board partners.

Image Quality: Anti-Aliasing

Prior to benchmarking the new GeForce 7900 and 7600 series cards, we spent some time quickly analyzing their in-game image quality versus a GeForce 7800 GTX and an ATI Radeon X1900 XTX.  First, we used the "background 1" map in Half-Life 2 to get a feel for how each card's anti-aliasing algorithms affected the scene.

Image Quality Analysis: Standard Anti-Aliasing Modes
NVIDIA vs. ATI
NVIDIA GeForce 7900 GTX Screenshots

GeForce 7900 GTX
1280x1024 - No AA

GeForce 7900 GTX
1280x1024 - 2x AA

GeForce 7900 GTX
1280x1024 - 4X AA

GeForce 7900 GTX
1280x1024 - 8xS AA

NVIDIA GeForce 7800 GTX Screenshots

GeForce 7800 GTX
1280x1024 - No AA

GeForce 7800 GTX
1280x1024 - 2x AA

GeForce 7800 GTX
1280x1024 - 4X AA

GeForce 7800 GTX
1280x1024 - 8xS AA

ATI Radeon X1900 XTX Screenshots

Radeon X1900 XTX
1280x1024 - No AA

Radeon X1900 XTX
1280x1024 - 2x AA

Radeon X1900 XTX
1280x1024 - 4x AA

Radeon X1900 XTX
1280x1024 - 6x AA

Overall, not much has changed with the GeForce 7900 GTX in terms of in-game image quality. The G71 supports the same multi-sample anti-aliasing sample patterns as the G70, so there are no discernible image quality differences between the GeForce 7900 GTX and GeForce 7800 GTX in-game, other than the slight differences introduced by moving objects in the scene above.

When compared to ATI's current flagship, though, there is a little more to talk about. When the Radeon X1900 XTX launched, we pointed out a problem with its anti-aliasing quality as it related to micro-detail in the scene. Specifically, micro-detail in fine lines, like in the trees in the shots above, would just disappear. We suspected this was a problem with the initial driver release, though, and we were right. With the latest Catalyst drivers, micro-detail is just fine with the Radeon X1900 XTX. With each successive level of anti-aliasing up to 4X AA enabled, it is extremely difficult to pick out differences between the competing architectures. Differences between 8xS AA and 6X AA, the highest levels offered by NVIDIA and ATI respectively with a single card, are much more pronounced, though. 8xS AA is a combination of 2x super-sampling and 4x multi-sampling, which removes jaggies from within transparent textures and along polygon edges at the cost of more performance, whereas ATI's 6X AA is strictly multi-sampled.

Both companies also offer transparency and adaptive anti-aliasing for removing jaggies from textures with transparent areas, as well as higher AA modes when a pair of cards are used together in SLI or CrossFire configurations. To see how things stack-up with those modes, we recommend taking a look at our Radeon X1900 coverage from a few weeks back.

Image Quality: Anisotropic Filtering

With this next set of screen shots, we followed a similar procedure outlined on the previous page to evaluate the effect of the anisotropic filtering techniques on a given scene. The screen shots below are from Half-Life 2's "background 4" map. We've again compared similar settings using the GeForce 7900 GTX, a GeForce 7800 GTX and a Radeon X1900 XTX. For this set of screen shots, anti-aliasing was disabled to isolate the effect each card's respective anisotropic filtering algorithms had on the images. Pay no mind to all those nasty jaggies!

Image Quality Analysis: Anisotropic Filtering
Standard Quality Aniso
NVIDIA GeForce 7900 GTX Screenshots

GeForce 7900 GTX
1280x1024 - No Aniso

GeForce 7900 GTX
1280x1024 - 4x Aniso

GeForce 7900 GTX
1280x1024 - 8x Aniso

GeForce 7900 GTX
1280x1024 - 16x Aniso

NVIDIA GeForce 7800 GTX Screenshots

GeForce 7800 GTX
1280x1024 - No Aniso

GeForce 7800 GTX
1280x1024 - 4x Aniso

GeForce 7800 GTX
1280x1024 - 8x Aniso

GeForce 7800 GTX
1280x1024 - 16x Aniso

ATI Radeon X1900 XTX Aniso Screenshots

Radeon X1900 XTX
1280x1024 - No Aniso

Radeon X1900 XTX
1280x1024 - 4x HQ Aniso

Radeon X1900 XTX
1280x1024 - 8x HQ Aniso

Radeon X1900 XTX
1280x1024 - 16x HQ Aniso

When perusing the images above, pay special attention to the road and the hill to the lower right, as these areas are where anisotropic filtering has the most impact on in-game image quality. In the first row of screenshots labeled "No Aniso", only trilinear filtering is enabled, which results in the ugly blurring in the road and on the hill. With each successive level of anisotropic filtering applied, though, the blurring goes away, and more detail is visible in the textures. Again, there is no perceptible difference between the images produced on the GeForce 7900 GTX and GeForce 7800 GTX because of the similarities in the architecture, but there are some subtle differences between NVIDIA's and ATI's output. The differences are visible at the furthest point in the road and at about the middle of the slope in the hill. Saying which one is "better" is difficult, but we'd give an edge to ATI here. We should also note that ATI offers a High-Quality aniso mode that is not angle dependent, which offers even better quality than the images shown here.

PureVideo: Features & Performance

Next we took another look at the digital video processing capabilities of the two competing GPU architectures, with "PureVideo" technology at work for NVIDIA and "AVIVO" driving ATI.

WMV-HD Decode Acceleration
PureVideo Performance Explored

To characterize CPU utilization when playing back WMV HD content, we used the Performance Monitor built into Windows XP. Using the data provided by Windows Performance Monitor, we created a log file that sampled the percent of CPU utilization every second, while playing back the 1080p version of the "The Living Sea" video available on Microsoft's WMVHD site. The data was then imported into Excel to create the graph below. The graph shows the CPU utilization for a GeForce 7900 GTX, a GeForce 7800 GTX, and a Radeon X1900 XTX using Windows Media Player 10, patched using the DXVA updates posted on Microsoft's web site (Updates Available Here).
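The post-processing step amounts to averaging the once-per-second samples from the Performance Monitor log. A minimal sketch of that step follows; the CSV layout and column header here are hypothetical, since perfmon's export format varies with how the counter log is configured:

```python
# Average the once-per-second "% Processor Time" samples from a
# Performance Monitor CSV export. Filename and column header are
# hypothetical; adjust them to match your perfmon counter log.
import csv

def average_cpu_utilization(log_path, column="% Processor Time"):
    samples = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                samples.append(float(row[column]))
            except (KeyError, ValueError):
                continue  # skip the malformed rows perfmon sometimes emits
    return sum(samples) / len(samples) if samples else 0.0
```

Charting the result is then a one-liner in any spreadsheet or plotting tool, which is effectively what the Excel step described above does.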



Average CPU Utilization (Athlon 64 FX-55 @ 2.6GHz)

GeForce 7900 GTX - 38.11%
GeForce 7800 GTX - 37.84%
Radeon X1900 XTX - 41.48%

These flagship GPU architectures are roughly on par, CPU utilization-wise, when playing high-def content in Windows Media Player 10.  The two NVIDIA GeForce 7 series cards put up slightly lower utilization levels than the Radeon X1900 XTX, but as you'll see in the following section, this variance is minor, and it's not all about low CPU overhead. And depending on which video is being played, the utilization can change to favor either architecture.

DVD Video Quality: HQV Benchmark
http://www.hqv.com/benchmark.cfm

Next up, we have a new addition to the HotHardware testing arsenal, the HQV DVD video benchmark from Silicon Optix. HQV is composed of a sampling of video clips and test patterns that have been specifically designed to evaluate a variety of interlaced video signal processing tasks, including decoding, de-interlacing, motion correction, noise reduction, film cadence detection, and detail enhancement. As each clip is played, the viewer is required to "score" the image based on a predetermined set of criteria. The numbers listed below are the sum of the scores for each section. We played the HQV DVD using the latest version of NVIDIA's PureVideo Decoder on the GeForce 7900 GTX, and, as recommended by ATI, we played the DVD on the ATI hardware using Intervideo's WinDVD 7 Platinum with hardware acceleration enabled.

When ATI's X1K family of products first launched, their score in this benchmark was below 40 points. With the latest set of Catalyst 6.2 drivers, though, video playback quality is vastly improved, and even more improvements are on the way in the Catalyst 6.4 drivers. The biggest boost to ATI's score comes by way of the eight individual film cadence tests and the noise reduction tests.  As things stand today, NVIDIA's PureVideo technology, from an image quality perspective, doesn't compare favorably to ATI's.  For playing back DVDs, or similar digital video files, ATI's X1K family of cards and the new X1900 XTX are the yardsticks to measure by.

However, we would offer the caveat that you should watch this space closely, as both companies will be releasing driver revisions in the coming months that will reportedly offer significant improvements in image quality. We saw a future NVIDIA driver release up and running, which supported all of the cadence tests, and also did a great job with noise reduction and sharpening in the film detail tests. ATI too is improving their noise reduction algorithm and introducing image sharpening in a future release. We suspect both companies will offer similar performance in this benchmark by the late-April to May time frame.

We should also mention that NVIDIA has been working with partners to enable full PureVideo support with video playback applications that are DXVA compliant. That means applications like WinDVD, PowerDVD, and Nero will benefit from PureVideo, and end-users will no longer have to purchase NVIDIA's decoder separately to experience the benefits of the video engine built into their cards. NVIDIA also offers H.264 acceleration on all cards in the GeForce 7 and GeForce 6 series of products (excluding early NV40s with the first-gen PureVideo engine), which will accelerate HD videos when Blu-ray and HD DVD discs hit the scene in the coming months.

Test Systems & 7600 GT w/ 3DMark06

HOW WE CONFIGURED THE TEST SYSTEMS: We tested our NVIDIA cards on an Asus A8N32-SLI nForce4 SLIX16 chipset based motherboard. The ATI cards, however, were tested on an A8R32-MVP motherboard based on the CrossFire Xpress 3200 chipset. Both systems used the same AMD Athlon 64 FX-55 processor and 1GB of low-latency Corsair XMS RAM. The first thing we did when configuring these test systems was enter each BIOS and load the "High Performance Defaults."  The hard drive was then formatted, and Windows XP Professional with SP2 was installed. When the installation was complete, we installed the latest chipset drivers available, installed all of the other necessary drivers for the rest of our components, and removed Windows Messenger from the system. Auto-Updating and System Restore were also disabled, the hard drive was defragmented, and a 768MB permanent page file was created on the same partition as the Windows installation. Lastly, we set Windows XP's Visual Effects to "best performance," installed all of the benchmarking software, and ran the tests.

The HotHardware Test Systems
AMD Athlon 64 FX Powered

Hardware Used:

Processor - AMD Athlon 64 FX-55 (2.6GHz)

Motherboards - Asus A8N32-SLI (nForce4 SLIX16 chipset)
Asus A8R32-MVP (ATI CrossFire Xpress 3200)

Video Cards - GeForce 7900 GTX (x2)
GeForce 7900 GT (x2)
GeForce 7600 GT (x2)
512MB GeForce 7800 GTX (x2)
256MB GeForce 7800 GTX (x2)
GeForce 7800 GT
Radeon X1900 XTX (x2)
Radeon X1600 Pro (x2)
Radeon X1600 XT

Memory - 1024MB Corsair XMS PC3200 RAM (CAS 2)

Audio - Integrated on board

Hard Drive - Western Digital "Raptor" 36GB - 10,000RPM - SATA

Relevant Software:

Operating System - Windows XP Professional SP2

Chipset Drivers - nForce Drivers v6.85

DirectX - DirectX 9.0c (March Redist)

Video Drivers - NVIDIA Forceware v84.17
ATI Catalyst v6.2

Benchmarks Used:

Synthetic (DX) - 3DMark06 v1.0.2
Synthetic (DX) - Shadermark v2.1 (b130a)
DirectX - X3 Reunion
DirectX - Splinter Cell: Chaos Theory
DirectX - FarCry v1.33*
DirectX - F.E.A.R.
DirectX - Half Life 2*
OpenGL - Quake 4*

* - Custom Test (HH Exclusive demo)

GeForce 7600 GT Performance Comparisons with 3DMark06 v1.0.2
Details: http://www.futuremark.com/products/3dmark06/

3DMark06
Futuremark recently launched a brand-new version of their popular benchmark, 3DMark06. The new version of the benchmark is updated in a number of ways, and now includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups the number of instructions to 512. 3DMark06 also employs much more lighting, and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.
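The weighting idea can be illustrated with a small sketch. The coefficients below are illustrative assumptions, not Futuremark's published formula; the point is only the shape of the tally, where graphics sub-scores and the CPU score are combined so no single component dominates:

```python
# Sketch of a weighted composite score in the spirit of 3DMark06's
# tally: the two graphics sub-scores are averaged, then combined with
# the CPU score via a weighted harmonic mean, so a weak component
# can't hide behind a strong one. All weights here are illustrative
# assumptions, not Futuremark's actual coefficients.

def composite_score(sm2, hdr_sm3, cpu, w_gfx=0.85, w_cpu=0.15):
    graphics = 0.5 * sm2 + 0.5 * hdr_sm3
    return 1.0 / (w_gfx / graphics + w_cpu / cpu)

# Equal sub-scores reproduce themselves:
print(round(composite_score(1000, 1000, 1000)))  # -> 1000
```

A harmonic-style mean is a natural choice for this kind of composite, since a system that is badly CPU-limited or badly GPU-limited is penalized rather than averaged away.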

We broke up the testing in this article into two sections. On the next few pages we'll report on the performance of the new GeForce 7600 GT as it compares to a couple of other cards in roughly the same price range. Later on, we'll focus on the performance of the new GeForce 7900s relative to competing cards from ATI and the GeForce 7800 series. We apologize for not having Radeon X1600 XT CrossFire scores here, but we were unable to secure a second card in time for this article.

As you can see, whether running in a single-card or SLI configuration, the GeForce 7600 GT significantly outpaced either of the X1600s we tested. 7600 GT SLI versus X1600 Pro CrossFire yielded an almost 2000 point advantage for NVIDIA, and in the single card tests the GeForce 7600 GT finished with scores approximately 700 to 1200 points higher than the X1600s.

GeForce 7600 GT Performance: HL2

GeForce 7600 GT Performance Comparisons with Half-Life 2
Details: http://www.half-life2.com/

Half Life 2
Thanks to the dedication of hardcore PC gamers and a huge mod community, the original Half-Life became one of the most successful first person shooters of all time.  So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait until November '04 to get our hands on this classic. We benchmarked Half-Life 2 with a long, custom-recorded timedemo in the "Canals" map that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 and 1,600 x 1,200, first without any anti-aliasing or anisotropic filtering, and then with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently.

 

Like the 3DMark06 results on the previous page, the GeForce 7600 GT swept all of the Half Life 2 benchmarks, regardless of the test configuration. A single GeForce 7600 GT handily outpaced the Radeon X1600 Pro and X1600 XT, and was even faster than X1600 Pro CrossFire by a wide margin. A pair of GeForce 7600 GTs running in SLI mode helps NVIDIA's cause, further outpacing all of the other cards and configurations we tested.

GeForce 7600 GT Performance: FarCry

GeForce 7600 GT Performance Comparisons with FarCry v1.33
Details: http://www.farcry.ubi.com/

FarCry
If you've been on top of the gaming scene for some time, you probably know that FarCry was one of the most visually impressive games released on the PC in 2004. Courtesy of its proprietary engine, dubbed "CryEngine" by its developers, FarCry's game-play is enhanced by Polybump mapping, advanced environment physics, destructible terrain, dynamic lighting, motion-captured animation, and surround sound. Before titles such as Half-Life 2 and Doom 3 hit the scene, FarCry gave us a taste of what was to come in next-generation 3D gaming on the PC. We benchmarked the graphics cards in this article with a custom-recorded demo run taken at the "Catacombs" area checkpoint, at various resolutions without anti-aliasing or anisotropic filtering enabled, and then again with 4X AA and 16X aniso enabled concurrently.

 

It was another clean sweep for the GeForce 7600 GT in our custom FarCry benchmark. Here, regardless of resolution or whether or not any additional pixel processing was used, the GeForce 7600 GT posted framerates much higher than any of the ATI powered cards, sometimes more than doubling the performance. A single GeForce 7600 GT was again able to outpace a pair of X1600 Pros running in a CrossFire configuration, and a pair of 7600 GTs running in SLI mode can't be touched by the competing boards we tested.

GeForce 7600 GT Performance: F.E.A.R.

GeForce 7600 GT Performance Comparisons with F.E.A.R
More Info: http://www.whatisfear.com/us/

F.E.A.R
One of the most highly anticipated titles of 2005 was Monolith's paranormal thriller F.E.A.R. Taking a look at the minimum system requirements, we see that you will need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card of Radeon 9000 or GeForce4 Ti class or better to adequately run the game. Using the full retail release of the game patched to v1.02, we put the graphics cards in this review through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to the maximum values, but with soft shadows disabled (soft shadows and anti-aliasing do not currently work together). Benchmark runs were then completed at resolutions of 1152x864 and 1600x1200, with and without anti-aliasing and anisotropic filtering enabled.

 

Once again, NVIDIA's new budget-minded 12-pipe card takes ATI's counterpart, the Radeon X1600 XT, to task, blowing by it by as much as 55% without AA and aniso enabled and by roughly 18 - 30% with 4X AA and 16X aniso turned on.  The 7600 GT's lead was especially pronounced at the higher 1600x1200 resolution.  Drop in another card for an SLI GeForce 7600 GT setup and even F.E.A.R.'s demanding game engine is playable at 1600x1200 with 4X AA and 16X aniso enabled.  A pair of 7600 GTs approaches 40 fps in this test scenario.

GeForce 7600 GT Performance: Quake 4

GeForce 7600 GT Performance Comparisons with Quake 4
Details: http://www.quake4game.com/

Quake 4
id Software, in conjunction with developer Raven, recently released the latest addition to the wildly popular Quake franchise, Quake 4. Quake 4 is based upon an updated and slightly modified version of the Doom 3 engine, and as such performance characteristics between the two titles are very similar.  Like Doom 3, Quake 4 is also an OpenGL game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows, but unlike Doom 3, Quake 4 features some outdoor environments as well. We ran these Quake 4 benchmarks using a custom demo with the game set to its "High-Quality" mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without anti-aliasing enabled and then again with 4X AA and 8X aniso enabled simultaneously.

 

If you thought the GeForce 7600 GT's dominance was impressive with F.E.A.R., you'll be even more impressed by our Quake 4 benchmarks here.  It's no mystery that NVIDIA hardware simply runs id's Quake and Doom engine games faster; it has seemingly been that way since the beginning of time.  Blame it on inferior OpenGL drivers from ATI or an NVIDIA-optimized game engine from id; regardless, the Green Team's advantage is striking, and a single GeForce 7600 GT puts the hurt on even a pair of Radeon X1600 Pros in CrossFire.  Comparing single card numbers, apples to apples, the GeForce 7600 GT doubles the performance of a Radeon X1600 XT and then some.

Shadermark v2.1 - GeForce 7900 Series

From this point forward in this article, we'll be focusing on the performance of NVIDIA's new GeForce 7900 GTX and 7900 GT versus ATI's Radeon X1900 XTX, in both single card and SLI or CrossFire configurations.  As with any new GPU architecture, we like to take a quick look at shader engine performance with ShaderMark before we dig into more serious benchmarking.

Performance Comparisons with ShaderMark v2.1 (Build 130a)
Strict High-Level Shading Language

Shadermark v2.1
For most of our recent video card-related articles, we've stuck to using games, or benchmarks based on actual game engines, to gauge overall performance. The problem with using this approach exclusively is that some advanced 3D features may not be fully tested, because the game engines currently in use tend not to use the absolute latest features available within cutting-edge graphics hardware. In an effort to reveal raw shader performance, which is nearly impossible to do using only the games on the market today, we've incorporated ToMMTi-System's ShaderMark v2.1 into our benchmarking suite for this article. ShaderMark is a DirectX 9.0 pixel shader benchmark that exclusively uses code written in Microsoft's High Level Shading Language (HLSL) to produce its imagery.

ShaderMark shows these two flagship cards in a virtual dead-heat, with the GeForce 7900 GTX taking the lead ever so slightly, pulling down 13 victories to the Radeon X1900 XTX's 12 (highest scores are listed in BOLD print).  Each GPU seems to have its strong points, however, with ATI seemingly more adept at HDR shader operations as well as Edge Detection and Hatching, while the GeForce 7900 GTX excels in Shadow Mapping, Lighting and Bump Mapping, at least as far as this test is concerned.

3DMark06 v1.02 - GeForce 7900 Series

Performance Comparisons with 3DMark06 v1.0.2
Details: http://www.futuremark.com/products/3dmark06/

3DMark06
Futuremark recently launched a brand-new version of their popular benchmark, 3DMark06. The new version of the benchmark is updated in a number of ways, and now includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups the number of instructions to 512. 3DMark06 also employs much more lighting, and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.

There is a lot of data to cover from this point forward, so we're going to let the results do most of the talking. As you can see here, the GeForce 7900 GTX and 7900 GT are marginally faster than their GeForce 7800 series counterparts, but they fall a bit shy of the mark set by the Radeon X1900 XTX CrossFire rig. In a single card configuration, the GeForce 7900 GTX is the top dog, but in a dual-card setup, the CrossFire system came out on top.

Breaking down the 3DMark06 results reveals that the GeForce 7900 GTX came in with the highest score in the shader model 2.0 tests, in both single and dual card configurations. The 7900 GT also performed well, falling somewhere in between the 256MB and 512MB 7800 GTXs.

The tables turned slightly in 3DMark06's HDR / Shader Model 3.0 tests. This time around, the Radeon X1900 XTX posted the best scores in both single and dual-card configurations, followed by the GeForce 7900 GTX, and the succession of remaining NVIDIA powered cards.

Splinter Cell: CT - GeForce 7900 Series

Performance Comparisons with Splinter Cell: Chaos Theory v1.05
Details: http://www.splintercell3.com/us/

SC: Chaos Theory
Based on a heavily modified version of the Unreal Engine, enhanced with a slew of DX9 shaders, lighting and mapping effects, Splinter Cell: Chaos Theory is gorgeous with its very immersive, albeit dark, environment. The game engine has a shader model 3.0 code path that allows the GeForce 6 & 7 Series of cards, and the new X1000 family of cards, to really shine, and a recent patch has implemented a shader model 2.0 path for ATI's X8x0 generation of graphics hardware. For these tests we enabled the SM 3.0 path on all of the cards we tested. However, High Dynamic Range rendering was disabled so that we could test the game with anti-aliasing enabled (a future patch should enable AA with HDR on the X1K family). We benchmarked the game at resolutions of 1,280 x 1024 and 1,600 x 1,200, both with and without anti-aliasing and anisotropic filtering.

 

Performance was a mixed bag in the Splinter Cell: Chaos Theory benchmark. At a resolution of 1280x1024, the GeForce 7900 GTX put up the highest scores in every test configuration, followed closely by the Radeon X1900 XTX. The GeForce 7900 GT continues to perform very well, besting the 256MB GeForce 7800 GTX. At the higher resolution though, the Radeon X1900 XTX was able to come back and outpace NVIDIA's newest flagship by a couple of frames per second. Clearly, performance between each company's top of the line product is quite similar up to this point, which bodes very well for NVIDIA considering their chip is much smaller and less expensive to manufacture.

X3: Reunion Demo - GeForce 7900 Series

Performance Comparisons with X3: Reunion
Details:http://www.egosoft.com/games/x3/info_en.php

X3: Reunion
Many of today's popular benchmarks are based on First Person Shooters (FPS), so in an effort to mix things up a bit, we have some performance data recorded using Egosoft's X3: Reunion demo. The X3 game engine makes use of DirectX 9.0 class pixel shader and lighting effects to produce the images in the game world. Egosoft recommends a 1.7GHz or higher speed CPU be used with the game, in conjunction with at least 512MB of RAM and a video card with at least a 128MB frame buffer. Gameplay is somewhat reminiscent of Origin's Privateer / Wing Commander games, which is a welcome break from the myriad shooters on the market today.

 

Performance hardly changes from setting to setting and resolution to resolution on all of the 512MB video cards we tested. The GeForce 7900 GTX technically finished with the highest scores here, but only a few frames per second separates NVIDIA's flagship from ATI's in the X3: Reunion benchmark. The GeForce 7900 GT, with its smaller 256MB frame buffer and lower clock speeds, took much larger performance hits moving to higher resolutions and with anti-aliasing and anisotropic filtering enabled, but it's still able to outpace everything other than the 512MB GTXs and the X1900 XTX. Also notice the good scaling of SLI here versus CrossFire. SLI significantly helped performance in some test configurations here, whereas enabling CrossFire actually hindered performance in a couple of tests.

FarCry v1.33 - GeForce 7900 Series

Performance Comparisons with FarCry v1.33
Details: http://www.farcry.ubi.com/

FarCry
If you've been on top of the gaming scene for some time, you probably know that FarCry was one of the most visually impressive games to be released on the PC last year. Courtesy of its proprietary engine, dubbed "CryEngine" by its developers, FarCry's game-play is enhanced by Polybump mapping, advanced environment physics, destructible terrain, dynamic lighting, motion-captured animation, and surround sound. Before titles such as Half-Life 2 and Doom 3 hit the scene, FarCry gave us a taste of what was to come in next-generation 3D gaming on the PC. We benchmarked the graphics cards in this article with a custom-recorded demo run taken in the "Catacombs" area checkpoint, at various resolutions without anti-aliasing or anisotropic filtering enabled, and then again with 4X AA and 16X aniso enabled concurrently.

 

FarCry is essentially CPU limited in almost every test configuration when run on a dual-card SLI or CrossFire enabled system. Only a couple of frames per second separate the multi-GPU configurations from each other, regardless of resolution or whether or not any additional pixel processing is used. If we want to nit-pick, the ATI Radeon X1900 XTX technically posted the best scores versus the 7900 GTX, but its margins of victory were tiny at best. The 7900 GT continues to put up some good numbers, especially in SLI mode, hanging right alongside a pair of X1900XTXs for roughly half the price.

Half Life 2 - GeForce 7900 Series

Performance Comparisons with Half-Life 2
Details: http://www.half-life2.com/

Half Life 2
Thanks to the dedication of hardcore PC gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait until November '04 to get our hands on this classic. We benchmarked Half-Life 2 with a long, custom-recorded timedemo in the "Canals" map that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without any anti-aliasing or anisotropic filtering and with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently.

 

Our custom Half Life 2 benchmark results basically mirror what we saw with FarCry. When testing high-end video cards such as these, Half Life 2 remains CPU limited in just about every test configuration. Game performance goes up with a 512MB video card installed, but the 256MB cards are still able to post framerates of 120+ frames per second, even with 4X anti-aliasing and 16X anisotropic filtering enabled at a resolution of 1600x1200. The GeForce 7900 GTX came in just behind the Radeon X1900XTX in both single and dual-card configurations, but again ATI's margin of victory is miniscule at best. The GeForce 7900 GT continued to be strong, outpacing the 7800 GTX across the board.

F.E.A.R. v1.03 - GeForce 7900 Series

Performance Comparisons with F.E.A.R
More Info: http://www.whatisfear.com/us/

F.E.A.R
One of the most highly anticipated titles of 2005 was Monolith's paranormal thriller F.E.A.R. Taking a look at the minimum system requirements, we see that you will need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card of Radeon 9000 or GeForce4 Ti class or better to adequately run the game. Using the full retail release of the game patched to v1.03, we put the graphics cards in this review through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to the maximum values, but with soft shadows disabled (soft shadows and anti-aliasing do not currently work together). Benchmark runs were then completed at resolutions of 1152x864 and 1600x1200, with and without anti-aliasing and anisotropic filtering enabled.

 

With the latest v1.03 patch installed, F.E.A.R. no longer offered a resolution of 1280x960, and manually editing the game's CFG file for custom resolutions didn't seem to work, so we just dropped the lower resolution down to 1152x864 for the lower-resolution tests here.  Performance in F.E.A.R. is all over the map, but overall NVIDIA's new GeForce 7900s armed with the latest drivers put up better performance than ATI's current top-of-the-line X1900 XTX. In dual-GPU configurations, the 7900 GTX and 7900 GT are both able to outpace the Radeon X1900XTX CrossFire rig at the higher resolution when anti-aliasing and anisotropic filtering were used. In a single card configuration, the Radeon's performance is much more competitive, falling somewhere between the 7900 GTX and GT depending on the test configuration, but SLI seemed to scale better in F.E.A.R.

Quake 4 v1.0.5.2 - GeForce 7900 Series

Performance Comparisons with Quake 4
Details: http://www.quake4game.com/

Quake 4
id Software, in conjunction with developer Raven, recently released the latest addition to the wildly popular Quake franchise, Quake 4. Quake 4 is based upon an updated and slightly modified version of the Doom 3 engine, and as such performance characteristics between the two titles are very similar.  Like Doom 3, Quake 4 is also an OpenGL game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows, but unlike Doom 3, Quake 4 features some outdoor environments as well. We ran these Quake 4 benchmarks using a custom demo with the game set to its "High-Quality" mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without anti-aliasing enabled and then again with 4X AA and 8X aniso enabled simultaneously.

 

Our custom Quake 4 benchmark was all about the GeForce 7900 GTX. In every test configuration but one, the GeForce 7900 GTX posted the highest scores, sometimes outpacing the rival Radeon X1900 XTX by double digit margins depending on the test configuration and whether or not SLI was enabled. The GeForce 7900 GT was also able to outpace the Radeon X1900 XTX in the default tests, but once anti-aliasing was enabled in conjunction with anisotropic filtering, the XTX came out ahead of the GT.

SLI Anti-Aliasing Performance

We didn't perform a comprehensive test routine to assess the performance of all of NVIDIA's SLI AA modes with the GeForce 7900 series, but we did run a couple of tests to get a general idea as to how the various modes perform. NVIDIA informed us that some data is now transmitted across the SLI bridge when SLIAA is enabled on the 7900s, freeing up some bandwidth across the PCI Express interface. Upon first hearing this news, we assumed SLIAA performance could be significantly enhanced, but that did not turn out to be the case.

SLI AA Performance: Half Life 2 & FarCry
Upping the Number of Samples

 

 

SLIAA performance improved by between 4% and 9% with the GeForce 7900 GTXs, versus a pair of 512MB GeForce 7800 GTX cards. The biggest gain came in Half Life 2 when enabling SLI 16XAA, where a pair of GeForce 7900 GTX cards was able to outperform a pair of 512MB 7800 GTXs by almost 9 frames per second. ATI's "Super AA" implementation with CrossFire clearly offers higher performance though, because all of the frame blending is handled by the compositing engine, which allows each Radeon GPU to run at full speed. With NVIDIA's architecture, the GPU itself handles the offsets and blends necessary for SLI AA to function.

Performance at 1920x1200

Next we kicked things up a notch, or three: super high-resolution testing.  We didn't want our fortunate readers capable of gaming at 1920x1200 to go away feeling neglected.

Super High-Resolution Performance - 1920x1200
Extreme HD on Dell's Widescreen LCD

With 3DMark06, we see much of the same performance scaling between the cards at 1920 resolution as we did at 3DMark06's default 1280 res.  However, there is one somewhat interesting observation to make.  Each single card configuration gives up around 1K 3DMarks at 1920 resolution, but the high-end dual 512MB graphics card setups, whether SLI or CrossFire, only give up about 600 3DMarks at 1920 res.  This better high-resolution performance scaling comes courtesy of the additional frame buffer memory and rendering resources available with two cards, versus a single card on its own.

Super high res gaming with X3: Reunion shows almost identical scaling versus 1600x1200, with each card giving up a few frames to accommodate the increase in resolution. Once again, however, ATI's cards take less of a hit at super high res with AA than NVIDIA's cards do, and the GeForce 7900 GTX is bested comfortably by the Radeon X1900 XTX with 4X AA turned on.

7900 vs. 7800 with Similar Clocks

Next we did a quick study on how the new GeForce 7900s would perform if we clocked them down to GeForce 7800 series levels.  Have changes been made in hardware such that there is any appreciable performance increase clock-for-clock in the new GeForce 7900 series architecture?  We'll see if we can answer this next.

Underclocking the New GeForce 7900s
Faster than 7800s clock-for-clock?

 

 

In short, the answer is no. Downclock a GeForce 7900 GTX to GeForce 7800 GTX 512 speeds and you get roughly the same performance.  The same holds true for downclocking a GeForce 7900 GT to 256MB GeForce 7800 GTX levels.  The GeForce 7900 series is obviously all about the die shrink, higher core clock speeds and faster memory.  That's no surprise really.  Where NVIDIA was able to trim transistors in silicon, they didn't gain or lose performance from a clock-for-clock perspective.

Overclocking the new GeForces

As we neared the end of our testing, we spent a little time overclocking the new GeForce 7600 GT and 7900s using the clock frequency sliders available within NVIDIA's Forceware drivers, after enabling the "Coolbits" registry tweak. To find each card's peak core and memory frequencies, we slowly raised the respective sliders until we began to see visual artifacts on-screen while running a game or benchmark. (Note: Forceware v84.11 was used with the GeForce 7900 GTX due to a bug in the v84.17 drivers.)
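For reference, the "Coolbits" tweak is a simple registry addition. A .reg file along the lines below reflects the commonly cited key and value for Forceware drivers of this era; double-check both against your specific driver version before merging it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak]
"CoolBits"=dword:00000003
```

After merging the file and re-opening the driver control panel, the clock frequency sliders should become available.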

Overclocking the GeForce 7900 GTX, 7900 GT, and 7600 GT
(Fast 3D Video Cards) + Overclocking = Even Faster Cards

 

The GeForce 7900GTX is obviously close to its limit with a 650MHz core clock, but who's to say what a few months of manufacturing maturity couldn't bring.  We were able to overclock the 7900GTX to 691MHz core and 1.72GHz memory (864MHz DDR), which may not seem like much versus the card's stock specs of 659/1.6GHz.  However, consider the number of transistors switching here, just under 300 million; this is obviously a very complex chip, so 691MHz was certainly impressive to us.
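To put those overclocks in perspective, the relative headroom is easy to work out. A quick sketch using the clocks quoted above (659MHz/1.6GHz stock versus 691MHz/1.72GHz overclocked):

```python
def headroom_pct(stock_mhz, oc_mhz):
    """Percentage frequency gain of an overclock over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# GeForce 7900 GTX clocks from this review
core = headroom_pct(659, 691)     # core clock gain
mem = headroom_pct(1600, 1720)    # effective memory clock gain

print(f"Core headroom:   {core:.1f}%")   # about 4.9%
print(f"Memory headroom: {mem:.1f}%")    # 7.5%
```

Roughly 5% on the core and 7.5% on the memory, which squares with our sense that the GTX is already near its limit.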

More impressive perhaps, though, was the wonderful headroom we found in the GeForce 7900GT, which hit 580MHz core and 1.5GHz+ memory speeds.  With these gains, an additional 10fps on average was available to us in our Quake 4 test.  This card has enthusiast written all over it, and we're almost certain NVIDIA's board partners will be taking advantage of this, differentiating products with higher tested, out-of-the-box clock speeds.

Power Consumption

Total System Power Consumption & Acoustics
It's All About the Watts and Decibels

We have a few final data points to cover before bringing this article to a close. Throughout all of our benchmarking, we monitored how much power our NVIDIA based test system was consuming using a power meter, and also took some notes regarding its noise output. Our goal was to give you all an idea as to how much power each configuration used and to explain how loud the configurations were under load. Please keep in mind that we were testing total system power consumption here, not just the power being drawn by the video cards alone.

There are some interesting things to note with regard to power consumption. At idle, the 7600 GT used the least amount of power, as expected considering the card's relatively small die and modest specification versus NVIDIA's 24-pipe parts. And under load it was clearly the least demanding card of the lot.

The GeForce 7900 GT also proved to be much more "green" than the similarly configured 256MB GeForce 7800 GTX. The 7900 GT consumed marginally less power at idle than a 256MB 7800 GTX, but under load the 7900 GT was far more energy efficient. Although clocked a bit higher than the 7800 GTX, the new 7900 GT used 41 fewer watts of power under load. NVIDIA's new GeForce 7900 GT is in a class of its own in the performance per watt category.

Things aren't quite as rosy with the GeForce 7900 GTX, however. At idle, the GeForce 7900 GTX actually consumed more power than a 512MB GeForce 7800 GTX. We suspect the 7900 GTX's higher clock speed, in conjunction with the fact that there is likely more leakage with the 90nm G71 core, results in its higher power consumption while idling. Under load though, the GeForce 7900 GTX consumed almost 20 fewer watts than the 512MB GeForce 7800 GTX.

We don't have too much to talk about in regard to each card's acoustic properties, because the sum total of noise produced by our test rig's PSU and CPU cooling fans was louder than the new GeForce cards we tested. The test system's acoustic signature, from only about 6 inches away with the side panel removed, hovered between 65dB and 68dB depending on which card was installed in the system at the time. We did, however, get the impression that the larger fan on the 512MB GeForce 7800 GTX produced a more subdued, lower tone than the smaller fans found on the 7900 and 7600 GTs. When the fans on the 7600 GT and 7900 GT spun up, there was a somewhat noticeable higher-pitched whine emanating from our test system, but by no means would we consider any of these new cards 'loud'.

Our Summary & Conclusion

We're sure your heads are spinning now (as are ours quite frankly) with the level of information overload we've presented you with in the preceding pages.  So we'll break it down for you in more digestible chunks of juicy data-bits.

GeForce 7600GT Performance Summary:
This new budget-minded mid-range card from NVIDIA thoroughly trounced ATI's Radeon X1600 series cards under every test condition, offering significantly better performance in Direct3D based game engines and, of course, in NVIDIA's strong suit, OpenGL, with Quake 4.  Games like FarCry and Half Life 2, where ATI has historically held strong, were a complete loss for ATI's nearest competitor in this price range, oftentimes at deficits that were hard to believe for two competing 12-pixel-pipe architectures. In some cases, like FarCry and Half Life 2, the performance advantage of the GeForce 7600 GT was on the order of 25-50%.  In others, like Quake 4, the 7600 GT more than doubles the performance of the Radeon X1600 XT, and in SLI versus CrossFire, ATI's pain is only greater.

Unfortunately, we'll have to refer you here for a reference on how the GeForce 7600GT's performance matches up versus that of the newly released and more expensive Radeon X1800 GTO.  This new card from ATI came in late to our test process, so it couldn't be included in this article.  However, even with roughly a $50 premium, this new cut-down X1800 offering isn't able to defuse the strength the GeForce 7600GT obviously has, and definitely not that of its next of kin, the GeForce 7900GT, which we'll cover next.

GeForce 7900GT and GeForce 7900GTX Performance Summary:
Our personal favorite card in this showcase, the performance mainstream GeForce 7900GT, actually out-performed NVIDIA's own legacy high-end card, the GeForce 7800 GTX, in every test. It also put up a good showing versus ATI's Radeon X1900 XTX, which today carries an MSRP over $250 higher than the GeForce 7900GT's expected $299 street price.  And in game engines like Quake 4 and F.E.A.R., the 7900GT was actually nip-n-tuck with a Radeon X1900 XTX. We'll speak more on price / performance metrics in a bit, but you simply can't beat the GeForce 7900GT for the money.

The GeForce 7900GTX was able to take the lead more often than not versus a Radeon X1900 XTX, with the exception of Half Life 2 and FarCry, where ATI had a slight edge.  Then of course we have to consider the flip side advantage for NVIDIA in Quake 4, which is much more dramatic, with a 15 - 35% spread in favor of the GF 7900GTX.  Across the board, however, NVIDIA still has work to do on anti-aliasing performance, where ATI clearly suffers much less performance degradation at roughly equivalent sample settings.  Counter-balance this situation once again with the enormous performance advantage NVIDIA has with their SLI configurations, as was obvious in our testing.  In F.E.A.R., for example, a pair of much lower cost GeForce 7900GTs can actually best a pair of high-priced Radeon X1900 XTX cards, and a pair of GeForce 7900GTXs simply lays all competitors to waste.  The same situation largely holds true in Quake 4 testing, but Half Life 2 and the now aging FarCry do still show good promise for ATI CrossFire, with slightly better performance even over a pair of these new monster NVIDIA GeForce 7900GTXs.

So what's the bottom line here? Well, the bottom line is that NVIDIA has taken their already strong G70 GPU architecture, first introduced with the GeForce 7800 GTX, tweaked each block within the GPU to lower the transistor count, incorporated dual-link DVI outputs, and moved manufacturing to a more advanced 90nm process. The end result is three new cards in the GeForce 7 series that offer excellent performance in each of their respective segments of the market, at lower, more reasonable power levels than the previous generation.

If we focus specifically on each new part, the GeForce 7900 GTX offers performance that's clearly superior to NVIDIA's previous flagship and somewhat better than ATI's Radeon X1900 XTX in certain situations. It would be unfair to say either card is the definitive performance leader, however. The cards trade victories depending on the application or specific test configuration. But this situation bodes very well for NVIDIA because the G71 die is significantly smaller than the R580. Even with a much smaller die, comprised of significantly fewer transistors, the G71 competes very favorably with the best ATI currently has to offer. So, not only will NVIDIA likely have higher yields on the G71, but it'll also be less expensive for them to produce relative to R580.  This is physics and you just can't defy the laws. This is a very good position for NVIDIA to be in. And it appears that NVIDIA may pass some of these savings on to consumers, considering some GeForce 7900 GTX cards are expected to sell for about $500.

Things just get better for NVIDIA with the GeForce 7900GT. This card is sure to be popular amongst performance enthusiasts; if street prices settle around $299, as we suspect they will, the GeForce 7900 GT is going to sell extremely well. The card offers excellent performance with a single slot design, at a very aggressive price point. Performance is marginally better than a 256MB GeForce 7800 GTX, and significantly better than the more expensive Radeon X1800XL. In fact, we would suggest that if you're looking at dropping over $500 on a 3D graphics solution, rather than going with one Radeon X1900 XTX, a pair of GeForce 7900GTs for slightly more has a better frame rate per dollar ratio.

The GeForce 7600GT is also in an excellent position. This card easily outpaced ATI's Radeon X1600XT to the point where ATI wasn't even competitive. The GeForce 7600GT is the reason you'll also be reading about the new Radeon X1800 GTO today. ATI simply had to react to this part. Again though, the miniscule die and excellent price performance ratio will surely make the 7600GT a success. And all three of these new cards, of course, fully support NVIDIA's SLI multi-GPU technology, which continually gets better with each new driver release.

      
GeForce 7900s and 7600s in-stock...

NVIDIA is hard-launching all three of the new cards in the GeForce 7 series today, and as you can see they are already in-stock and waiting to be sold at numerous popular retailers. NVIDIA continues to execute very well, and the company shows no sign of slowing down. We have to commend NVIDIA for successfully introducing three exciting new products simultaneously.  It's looking a lot like a hat-trick for the Green Team.

• Higher Performance @ Lower Power
• Aggressive pricing in each segment
• Another Hard Launch!
• Dual Link DVI
• Highly Overclockable GTs
• SLI keeps getting better
• 7900 GT Price / Performance
• Still pricey at the high-end




Content Property of HotHardware.com