ATI Radeon HD 2900 XT - R600 Has Arrived
Date: May 14, 2007
Author: Marco Chiappetta
Intro, Specs and Related Info


Take a look at the red-head standing over there.  If you're a regular reader here at HotHardware, you know exactly who she is.  That's ATI's adventurous and oh-so-curvaceous front-woman, Ruby.  And she's holding that 'Perfect 10' sign for a very good reason.  It's probably not the reason you're thinking of, however.  For a digital personality, she's definitely pretty darn hot; maybe not a 10 in our book, but close nonetheless.  No, she's holding that sign not as a proclamation of her hotness, but rather to give you all a hint as to what ATI has in store for the PC in the coming days, weeks, and months.

Today is the day many PC enthusiasts have been waiting for, and we say this with some hard data to back it up.  In January we ran a poll, and nearly 40% of over 4,000 respondents said they were waiting to see ATI's next-gen R600 architecture before passing judgment on the already-released GeForce 8 Series.  Despite the arrival of a clearly more powerful and significantly more feature-rich GPU architecture, a sizable portion of you decided to wait and see ATI's hand before betting on NVIDIA's G80. Today, we can finally tell you what ATI's been working on for the past few years, and that 'Perfect 10' sign reveals part of the story.

ATI has chosen today, May 14, 2007 - my second wedding anniversary, incidentally - to reveal a line-up of 10 desktop and mobile GPUs, all derived from their R600 architecture. The line-up ranges from sub-$100 entry-level graphics cards to a $399 high-end part that's designed to do battle with NVIDIA's GeForce 8800 GTS.  What about taking on the GeForce 8800 GTX and Ultra, you ask?  Well, the new, AMD-owned ATI is moving in a somewhat different direction now, and at least currently, they don't plan to produce a low-volume, ultra-high-performing part that only a fraction of the enthusiast crowd can afford.  We know, some of you are crestfallen right now; we were too at first.  But don't sweat it.  The arrival of the R600 and its derivatives is a very good thing.  We'll try to better explain on the pages ahead.  For now, here are the specifications of ATI's new flagship graphics card, officially named the Radeon HD 2900 XT.

ATI Radeon HD 2900 XT
Features & Specifications
700 million transistors on 80nm HS fabrication process

512-bit 8-channel GDDR3/4 memory interface

Ring Bus Memory Controller

  • Fully distributed design with 1024-bit internal ring bus for memory reads and writes
  • Optimized for high performance HDR (High Dynamic Range) rendering at high display resolutions

Unified Superscalar Shader Architecture

  • 320 stream processing units
    • Dynamic load balancing and resource allocation for vertex, geometry, and pixel shaders
    • Common instruction set and texture unit access supported for all types of shaders
    • Dedicated branch execution units and texture address processors
  • 128-bit floating point precision for all operations
  • Command processor for reduced CPU overhead
  • Shader instruction and constant caches
  • Up to 80 texture fetches per clock cycle
  • Up to 128 textures per pixel
  • Fully associative multi-level texture cache design
  • DXTC and 3Dc+ texture compression
  • High resolution texture support (up to 8192 x 8192)
  • Fully associative texture Z/stencil cache designs
  • Double-sided hierarchical Z/stencil buffer
  • Early Z test, Re-Z, Z Range optimization, and Fast Z Clear
  • Lossless Z & stencil compression (up to 128:1)
  • Lossless color compression (up to 8:1)
  • 8 render targets (MRTs) with anti-aliasing support
  • Physics processing support

Full support for Microsoft DirectX 10.0

  • Shader Model 4.0
  • Geometry Shaders
  • Stream Output
  • Integer and Bitwise Operations
  • Alpha to Coverage
  • Constant Buffers
  • State Objects
  • Texture Arrays

Dynamic Geometry Acceleration

  • High performance vertex cache
  • Programmable tessellation unit
  • Accelerated geometry shader path for geometry amplification
  • Memory read/write cache for improved stream output performance

Anti-aliasing features

  • Multi-sample anti-aliasing (up to 8 samples per pixel)
  • Up to 24x Custom Filter Anti-Aliasing (CFAA) for improved quality
  • Adaptive super-sampling and multi-sampling
  • Temporal anti-aliasing
  • Gamma correct
  • Super AA (CrossFire configurations only)
  • All anti-aliasing features compatible with HDR rendering

CrossFire Multi-GPU Technology

  • Scale up rendering performance and image quality with 2 or more GPUs
  • Integrated compositing engine
  • High performance dual channel interconnect
Texture filtering features
  • 2x/4x/8x/16x high quality adaptive anisotropic filtering modes (up to 128 taps per pixel)
  • 128-bit floating point HDR texture filtering
  • Bicubic filtering
  • sRGB filtering (gamma/degamma)
  • Percentage Closer Filtering (PCF)
  • Depth & stencil texture (DST) format support
  • Shared exponent HDR (RGBE 9:9:9:5) texture format support

ATI Avivo HD Video and Display Platform

  • Two independent display controllers
    • Drive two displays simultaneously with independent resolutions, refresh rates, color controls and video overlays for each display
    • Full 30-bit display processing
    • Programmable piecewise linear gamma correction, color correction, and color space conversion
    • Spatial/temporal dithering provides 30-bit color quality on 24-bit and 18-bit displays
    • High quality pre- and post-scaling engines, with underscan support for all display outputs
    • Content-adaptive de-flicker filtering for interlaced displays
    • Fast, glitch-free mode switching
    • Hardware cursor
  • Two integrated dual-link DVI display outputs
    • Each supports 18-, 24-, and 30-bit digital displays at all resolutions up to 1920x1200 (single-link DVI) or 2560x1600 (dual-link DVI)
    • Each includes a dual-link HDCP encoder with on-chip key storage for high resolution playback of protected content
  • Two integrated 400 MHz 30-bit RAMDACs
    • Each supports analog displays connected by VGA at all resolutions up to 2048x1536
  • HDMI output support
    • Supports all display resolutions up to 1920x1080
    • Integrated HD audio controller with multi-channel (5.1) AC3 support, enabling a plug-and-play cable-less audio solution
  • Integrated Xilleon HDTV encoder
    • Provides high quality analog TV output (component / S-video / composite)
    • Supports SDTV and HDTV resolutions
    • Underscan and overscan compensation
  • HD decode for H.264/AVC, VC-1, DivX and MPEG-2 video formats
    • Flawless DVD, HD DVD, and Blu-Ray playback
    • Motion compensation and IDCT (Inverse Discrete Cosine Transformation)
  • HD video processing
    • Advanced vector adaptive per-pixel de-interlacing
    • De-blocking and noise reduction filtering
    • Edge enhancement
    • Inverse telecine (2:2 and 3:2 pull-down correction)
    • Bad edit correction
    • High fidelity gamma correction, color correction, color space conversion, and scaling
  • MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264/AVC encoding and transcoding
  • Seamless integration of pixel shaders with video in real time
  • VGA mode support on all display outputs

PCI Express x16 bus interface

OpenGL 2.0 support



We have a plethora of information related to today's launch available on our site that will help you get familiar with ATI's previous GPU architectures and their key features. The Radeon HD 2900 XT and its derivatives in the Radeon HD 2000 family are totally new, but they do have a number of features in common with some members of the Radeon X1K family of products.

If you haven't already done so, we recommend scanning through our CrossFire Multi-GPU technology preview, the Radeon X1950 Pro with Native CrossFire article, the X1K family review, and our NVIDIA GeForce 8800 GTX and GTS launch coverage. In those four pieces, we cover a large number of the features offered by the new Radeon HD 2000 series and explain many of the benefits of DirectX 10. We recommend reading these articles because there is quite a bit of background information in them that'll lay the foundation for what we're going to showcase here today.

Architectural Overview


The new flagship in ATI's line-up is the GPU formerly codenamed R600, found at the heart of the Radeon HD 2900 XT.  Like NVIDIA's G80 and its derivatives, the R600 has a unified architecture that replaces specialized pixel and vertex shaders with an array of stream processors that can be dynamically allocated to handle pixel or vertex shader workloads, in addition to geometry shaders, physics, or any number of other tasks.


This high-level block diagram will give you a bird's-eye view of what the R600 has under its hood. The GPU is comprised of approximately 700 million transistors and is manufactured on an enhanced version of TSMC's 80nm node, dubbed 80 HS, that, according to ATI, allowed them to crank the R600's frequency up to levels they couldn't hit with TSMC's standard 80nm process.

On some levels, the R600 borrows technology from the Radeon X1000 series and the Xenos GPU found in the Xbox 360, but there is plenty of new technology employed in this GPU as well.  The R600 has a new command processor that processes command streams from the graphics driver and can reduce overhead by as much as 30%. There is also a new setup engine that more efficiently prepares data for processing by the stream processing units.  In addition, there are 320 stream processing units, beefed-up texture units, and render back-ends, or ROPs if you prefer.


The 320 individual stream processing units in R600 are arranged in 4 SIMD arrays of 80, with each functional unit arranged as a 5-way superscalar shader processor. In contrast, NVIDIA's G80 has up to 8 groups of 16 (128 total) fully generalized, fully decoupled, scalar stream processors, but keep in mind the SPs in G80 run in a separate clock domain and can be clocked as high as 1.5GHz.  In ATI's R600, each functional SP unit can handle 5 scalar floating-point MAD instructions per clock, and one of the five shader processors (the fatter one in the image above) can also handle transcendentals.  In each shader processor, there is also a branch execution unit that handles flow control and conditional operations, and a number of general-purpose registers to store input data, temporary values, and output data.
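As a quick sanity check, the arrangement described above adds up like so (a worked example; the per-array count of 5-way units is our arithmetic, not an ATI-published figure):

```python
# R600 stream processor organization, per the figures above
simd_arrays = 4          # four SIMD arrays
sps_per_array = 80       # 80 stream processors per array
alus_per_unit = 5        # each functional unit is 5-way superscalar

total_sps = simd_arrays * sps_per_array            # total stream processors
units_per_array = sps_per_array // alus_per_unit   # 5-way units per SIMD array
print(total_sps, units_per_array)  # 320 16
```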

The ring-bus memory controller introduced with the X1000 series of GPUs returns in the Radeon HD 2000 series, but in the high-end model used on the Radeon HD 2900 the internal ring bus width has been doubled to 1024 bits.  If you remember, the Radeon X1800 and X1900 families of GPUs were outfitted with 512-bit internal ring-bus memory controllers.



Externally, the Radeon HD 2900 XT features a 512-bit memory interface comprised of eight 64-bit memory channels. We'll talk more about the actual card a little later, but what we will say now is that in its stock configuration with 512MB of GDDR3 RAM running at 825MHz (1.65GHz DDR), the Radeon HD 2900 XT has 106GB/s of memory bandwidth at its disposal.  That's a lot of bits.
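That bandwidth figure follows directly from the bus width and memory clock; here's the arithmetic as a quick sketch, using the 825MHz stock memory clock quoted in the card details later in the article:

```python
# Peak memory bandwidth = bus width (in bytes) x effective data rate
bus_width_bits = 512
memory_clock_hz = 825e6                   # stock GDDR3 clock
effective_rate_hz = memory_clock_hz * 2   # double data rate

bandwidth_gb_s = (bus_width_bits / 8) * effective_rate_hz / 1e9
print(round(bandwidth_gb_s, 1))  # 105.6, which ATI rounds to ~106GB/s
```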

Architectural Overview (Cont.)


Like ATI's previous flagship Radeon X1950 XTX, the new Radeon HD 2900 XT is equipped with 16 total texture units and 16 ROPs.  Both the texture units and ROPs have been enhanced over the last generation to increase performance and precision. The Radeon HD 2900 XT has 4 groups of 4 texture units.  Each group has 8 texture address processors (32 total) and 20 texture samplers (80 total) that can each fetch a single data value per clock.  Each group also has 4 FP texture filter units (16 total) that can bilinear filter one 64-bit color value per clock, or one 128-bit color value in 2 clocks.  We should also note that the Radeon HD 2600 and HD 2400-based products have texture units with the very same functionality - they just have fewer of them.




The Radeon HD 2900 and 2600 series' texture units feature a new multi-level texture cache design as well.  The units' shared L2 cache stores data retrieved on L1 cache misses (256kB in the HD 2900, 128kB in the HD 2600).  The Radeon HD 2400, however, is equipped with only a single-level vertex / texture cache.  We should also note that all texture units can access both the vertex cache and the L1 texture cache.

The texture units in the Radeon HD 2000 series can bilinear filter 64-bit HDR textures at full speed (~7x faster than the Radeon X1000 series), while 128-bit floating-point textures are filtered at half speed.  Trilinear and anisotropic filtering are supported for all formats, and the high-quality anisotropic filtering mode that returns from the X1000 series has been enhanced to better handle problematic texture filtering cases. Performance and compatibility have been improved to the point that the high-quality aniso mode is now the default setting.

There is also a new shared exponent texture format available (RGBE 9:9:9:5) for 32-bit HDR, and texture resolutions of up to 67 megatexels (8192 x 8192) are supported.  ATI's new texture units can perform up to two texture fetches per clock, per texture unit (1 filtered + 1 unfiltered), with the option to grab 4 unfiltered fetches in place of 1 filtered fetch (Fetch4).

The render back-ends, or ROPs, used in the Radeon HD 2000 series can handle 32 pixels per clock on the HD 2900 XT.  On the Radeon HD 2600 and HD 2400 they can handle 8. The ROPs can render-to-texture more efficiently than previous ATI GPUs, and new MSAA resolve functionality makes Custom Filter AA, or CFAA, possible.  The ROPs also allow the new 128-bit FP and 11:11:10 FP DX10 formats to be displayable, they support up to 8 MRTs, which is double that of the Radeon X1000 series, and they have improved stencil and Z compression (up to 16:1 in standard mode / 128:1 with 8X MSAA) and an improved hierarchical Z buffer.




Another feature found in the Radeon HD 2000 series is borrowed from the Xbox 360's Xenos GPU.  Like Xenos, the HD 2000 series has built-in hardware support for tessellation. Tessellation works by taking a basic polygon mesh and recursively applying a subdivision rule to create a more complex mesh on the fly.  It's best used for amplification of animation data, morph targets, or deformation models, and it gives developers the ability to provide data to the GPU at a coarser resolution.  This saves artists the time it would normally take to create more complex polygonal meshes and reduces the data's memory footprint.  Please note, however, that the HD 2000 series' tessellator functionality is proprietary and requires developers to code for it specifically.  It is already used in some Xbox 360 titles though, like Viva Piñata for example, so developers may be more inclined to use this feature than some other proprietary ones.
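To picture the amplification idea, here's a minimal sketch assuming simple 1-to-4 midpoint subdivision (R600's programmable tessellator supports more elaborate schemes, but the growth principle is the same):

```python
# One-to-four midpoint subdivision: each pass splits every triangle
# into four, so the mesh grows 4x per tessellation level.
def tessellated_count(base_triangles, levels):
    return base_triangles * 4 ** levels

# A coarse 1,000-triangle patch amplified entirely on the GPU:
for level in range(4):
    print(level, tessellated_count(1000, level))
# 0 1000 / 1 4000 / 2 16000 / 3 64000
```

The artist only ever authors (and the system only ever stores) the 1,000-triangle mesh, which is the memory-footprint win described above.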

Tech Demos, GPGPU, Physics and Bundle Info


With the launch of every new GPU inevitably comes an influx of new demos designed to showcase the new hardware's functionality.  With the launch of the Radeon HD 2000 series, ATI has created a number of new tech demos, two of which we'll outline below.





Before you are six images captured from the brand new Ruby: Whiteout demo. By now most of you have probably seen the older Ruby demos, like 'The Assassin'. The new Whiteout demo differs from The Assassin in that Ruby is now much more complex.  In The Assassin, Ruby was comprised of roughly 80K polys and her face had only 4 morph targets. In Whiteout, Ruby is comprised of approximately 200K polys and her face has 129 morph targets. The scenes themselves have been beefed up as well, up from about 1M total polys in The Assassin to 2M in Whiteout, with higher-resolution textures.  The complexity of the mountains in the Ruby: Whiteout demo can also be altered on the fly thanks to the Radeon HD 2000 series' built-in tessellator, a custom shader was programmed for realistic snow accumulation, and the height maps are all procedurally generated.

Another of ATI's new demos was called Stereo Block Builder.  It worked by using a pair of Radeon HD 2900 XT cards to gather data recorded by a pair of cameras and from that data 3D models were constructed from small blocks, similar to Legos.




The images captured by the cameras were rendered in real time on the tiny individual blocks and the blocks could be realistically manipulated in the environment.  The physics calculations necessary for the blocks to move and interact with each other were also performed on the GPU.


ATI also showed us a medical imaging demonstration, and Mike Houston from Stanford University's Folding@Home project was on hand to praise the R600's ability to churn through work units.  If you remember, the R600's 320 stream processors, when running at 740MHz, have a theoretical peak math processing rate of 475 GigaFLOPS, and Mike measured a Radeon HD 2900 XT at up to 470 GigaFLOPS, which is very close to that theoretical maximum.
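That theoretical peak is straightforward to reproduce, assuming a MAD (multiply-add) counts as two floating-point operations per clock:

```python
# Peak shader throughput: each stream processor can issue one MAD
# per clock, and a MAD counts as two FP operations.
stream_processors = 320
engine_clock_hz = 742e6   # the article quotes 740-743MHz depending on page

peak_gflops = stream_processors * engine_clock_hz * 2 / 1e9
print(round(peak_gflops, 1))  # 474.9 - the ~475 GigaFLOPS figure
```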

Half Life 2: Episode 2


Team Fortress 2


As cool as ATI's tech demos are, this last batch of screenshots is sure to please many more of you.  If you remember back a few years, ATI teamed up with Valve and offered vouchers for a free copy of Half Life 2 with some Radeon purchases.  With the Radeon HD 2900 XT, ATI and Valve have teamed up once again to offer not one, but three upcoming titles.  ATI has built the price of Valve's 'Black Box' game bundle into the price of their R600 GPU, and all board partners will be including vouchers for the full version of The Black Box, which is comprised of Half Life 2: Episode 2, Portal, and Team Fortress 2.  All three of these games look great, and the inclusion of the Black Box voucher adds significant value to the Radeon HD 2900 XT.

The Radeon HD 2900 XT


By now, we're sure many of you are itching to see what the Radeon HD 2900 XT looks like, so without further ado we bring you ATI's latest flagship...





The Radeon HD 2900 XT features a dual-slot cooling solution that covers almost the entire front side of the card. It's built almost completely from copper and features multiple heat-pipes connected to its base plate and heatsink fins.  The cooling unit's barrel fan draws air into the shroud, where it is forced through the heatsink and exhausted from the system through vents in the card's mounting plate.

In a stock configuration, the GPU on the Radeon HD 2900 XT is clocked at 743MHz and its 512MB of GDDR3 RAM is clocked at 825MHz (1.65GHz DDR).  At these frequencies, the card has a theoretical peak pixel processing rate of 47.5 Gigapixels/sec and a peak triangle processing rate of 742M triangles/s.  And as we've already mentioned, the Radeon HD 2900 XT also has up to 106GB/s of memory bandwidth thanks to its 512-bit memory interface.

Radeon HD 2900 XT cards are outfitted with two supplemental power connectors, one common 6-pin PCI Express connector and one 8-pin PCI Express power connector.  The card will function properly with a pair of 6-pin power feeds, but users won't be able to overclock their cards unless an 8-pin feed is connected. This supplemental power connector configuration is necessary because a single Radeon HD 2900 XT can consume upwards of 215 watts.

Like the Radeon X1650 XT and X1950 Pro, cards in the Radeon HD 2000 series have native CrossFire support and don't require an external dongle. Updates to ATI's drivers further enhance CrossFire by eliminating the need for application-specific profiles.  According to ATI, upcoming Catalyst drivers will default to AFR mode to accelerate an application right out of the gate.  But the drivers also have a built-in mechanism to detect whether or not an application is AFR compatible, and if it isn't, a different mode will be used.

Radeon HD 2900 XT cards are equipped with dual, dual-link DVI outputs with HDCP support, and cards outfitted with a Theater 200 chip will feature ViVo functionality as well.  We'll talk about another new feature a little later that gives cards in the Radeon HD 2000 series the ability to output digital audio over HDMI through the use of a special adapter.






We'd also like to note that ATI's board partner Sapphire is ready in time for launch with their standard Radeon HD 2900 XT card.  In fact, we used Sapphire's model for all of our single-GPU benchmarks.  Their Radeon HD 2900 XT includes the Valve Black Box game-pack voucher we talked about earlier, in addition to 3DMark06, PowerDVD and PowerDirector, and a driver / utility CD.  There was also a user's manual in the box, along with a ViVo connector, a pair of DVI-to-VGA adapters, a DVI-to-HDMI adapter, a CrossFire bridge connector, and an HD component output dongle.

The bigger news coming out of Sapphire today, however, is about their upcoming Toxic Radeon HD 2900 XT.  Sapphire's Toxic Radeon HD 2900 XT features a self-contained water cooling unit that mounts in two 5.25" drive bays.  Toxic cards will be clocked higher than standard cards and will be equipped with 1GB of frame buffer memory.  The water cooling unit is set up to accommodate a pair of cards operating in a CrossFire configuration and can also work with a CPU water block to cool the processor as well.  The Toxic Radeon HD 2900 XT won't be available until after Computex next month.  We hope to bring you a more detailed look at this card when it becomes available.

The Radeon HD 2600 and 2400


Below we have some pictures of ATI's upcoming Radeon HD 2600 and 2400 Pro and XT cards. If you're counting, including the Radeon HD 2900 XT, that makes five different Radeons so far...





First, let's talk about the Radeon HD 2600 XT and Pro.  As we've already mentioned, the GPU at the heart of the Radeon HD 2600 family of cards is derived from the R600 and has essentially the same feature set. The Radeon HD 2600, however, has only 120 stream processing units, 8 texture units, and 4 ROPs.  Radeon HD 2600 cards are outfitted with a 128-bit memory interface, and they'll be equipped with 256MB of DDR2, GDDR3, or GDDR4 memory.  Memory clock speeds will range from 400MHz to 1.1GHz, and GPU clocks will be between 600MHz and 800MHz depending on the model. As you can see, all of the cards are adorned with single-slot coolers.

Another noteworthy aspect of the Radeon HD 2600 series of GPUs is that they are manufactured on TSMC's 65nm process node. The GPU itself is comprised of roughly 390 million transistors, and cards will consume approximately 45 watts, hence the lack of supplemental power connectors on the boards pictured here.




The Radeon HD 2400 family of cards also comes in XT and Pro flavors.  The Radeon HD 2400 GPU is made up of approximately 180 million transistors and it too is manufactured on TSMC's 65nm node.  The GPU features 40 stream processing units with 4 texture units and 4 render back-ends, and depending on the model it will be clocked between 525MHz and 700MHz. All Radeon HD 2400 series cards also have single-slot coolers, and some will be passively cooled.  Products in the Radeon HD 2400 family also feature a 64-bit memory interface and will be configured with 128MB to 256MB of GDDR3 or DDR2 RAM clocked between 400MHz and 800MHz. Power consumption will be in the 25W range.

Both the Radeon HD 2400 and HD 2600 cards can output audio via HDMI using the same special adapter mentioned earlier. 2400 series cards, however, will be equipped with only a single dual-link DVI output in conjunction with standard VGA and HD video outputs. The HD 2600 will have a dual, dual-link DVI plus HD video output configuration similar to the Radeon HD 2900 XT's.

R600 Mobile Derivatives


The remaining five members of the Radeon HD 2000 series being announced today are all mobile variants designed for various laptop and notebook form factors. Only four of the designs are derived from the R600 architecture, though; the low-end HD 2300 is a DX 9.0c-compliant part built at 90nm.




All of the main features and specifications of the ATI Mobility Radeon HD 2300, 2400 and 2600 mobile GPUs are listed in the table above.  As you'd probably expect, the less powerful models are relegated to mainstream thin-and-light notebooks, while the more powerful units will find their way into performance-thin and gaming-enthusiast, DTR-type notebooks.




As they've done in the past with each new mobile GPU, ATI has improved upon their PowerPlay technology to reduce GPU power consumption and prolong battery life. The latest version of PowerPlay at work in the Mobility Radeon HD 2400 and 2600 series features 100% backbias voltage and new performance-on-demand functionality that disables unused portions of the GPU until they are needed to handle an actual workload.

AVIVO HD and the new UVD


The Radeon HD 2400 and 2600 families of GPUs feature a new AVIVO HD video engine with better performance and more functionality than the version found in the Radeon X1K family of products.

Update: The R600 used in the Radeon HD 2900 XT does not feature the new video engine, but rather the same engine that's found in the R580.  As such, it is not capable of bitstream processing or entropy decoding, and will have higher CPU utilization during playback of encoded titles than the other members of the Radeon HD 2000 family.  With the HD 2900, the GPU's stream processors are used to assist in HD video playback functions.



The new AVIVO HD engine features AMD's Universal Video Decoder technology, which basically offloads every major aspect of HD video processing from the host CPU and performs those functions on the GPU.  In the last-gen AVIVO engine, bitstream processing and entropy decoding were not performed on the GPU, but rather were handled by the system's host CPU.  Now, AVIVO HD with UVD handles every major step in the process, which makes the new GPUs better suited to HD DVD and Blu-Ray video disc playback.  The UVD supports full 40Mbps bit-rates for HD optical disc playback, and decode data is handled internally, removing passes to memory between the decode stages.

Another improvement to the way the Radeon HD 2000 series handles HD content has to do with audio output.  Cards in the Radeon HD 2000 series now feature an integrated HD audio controller that supports 32kHz, 44.1kHz, and 48kHz 16-bit PCM stereo, as well as AC3 (5.1) compressed multi-channel audio streams such as Dolby Digital and DTS. The digital audio is actually output through the cards' DVI connectors and can be accessed through the use of a new DVI-to-HDMI adapter that also carries audio signals.

AVIVO Video Performance


For our first round of performance related tests we took a look at Digital Video processing performance of the two competing core GPU architectures. "PureVideo HD" technology is at work here for NVIDIA and "AVIVO HD" for ATI. 

HD Decode Acceleration
AVIVO Performance Explored

To illustrate CPU utilization while playing back WMV HD content, we used the Performance Monitor built into Windows XP. Using the data provided by Windows Performance Monitor, we created a log file that sampled the percentage of CPU utilization every second while playing back the 1080p versions of a couple of short video clips. One test was conducted using the "Amazing Caves" video that's available for download on Microsoft's WMVHD site. Another test was conducted using the "Evan Almighty" trailer available for download on Apple's QuickTime website. The Amazing Caves video is compressed using MPEG2, while the Evan Almighty clip uses H.264. The CPU utilization data gathered during these tests was then imported into Excel to create the graph below. The graph shows the CPU utilization for a GeForce 8800 GTS and a Radeon HD 2900 XT using Windows Media Player 10 to play back the WMVHD clip and PowerDVD for the QuickTime clip.
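The logging workflow boils down to polling a utilization counter at a fixed interval and averaging the samples. Here's a minimal sketch of that loop with a stand-in data source (Performance Monitor plays this role on Windows; the sample values below are made up for illustration):

```python
import time

def sample_utilization(read_cpu_percent, samples=10, interval=1.0):
    """Poll a CPU-utilization source at a fixed interval and log each reading."""
    log = []
    for _ in range(samples):
        log.append(read_cpu_percent())
        time.sleep(interval)
    return log

# Stand-in counter for illustration; a real run would read the live
# Performance Monitor counter once per second during playback.
fake_counter = iter([18.0, 21.0, 19.5, 22.0, 20.5])
log = sample_utilization(lambda: next(fake_counter), samples=5, interval=0.0)
print(round(sum(log) / len(log), 1))  # 20.2 - average utilization over the run
```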

Both the Radeon HD 2900 XT and GeForce 8800 GTS had no trouble playing back the Amazing Caves video.  The Radeon used somewhat less CPU horsepower, but both offloaded the vast majority of work from the host processor and produced smooth, fluid playback of the video.

The tides turned while playing back the H.264-encoded Evan Almighty trailer. Here, the GeForce 8800 GTS dropped host CPU utilization to about 16.3%, while the Radeon HD 2900 XT dropped it to about 20%.  Playing back the video without hardware acceleration, however, resulted in about 85% of the CPU's resources being used. So, while the GeForce did a slightly better job here, both are a huge improvement over software-only playback.

** Update: Please note that H.264-encoded Blu-Ray or HD DVD titles will likely have higher CPU utilization on the Radeon HD 2900 XT because it does not feature the new UVD.  Unencrypted clips like the trailer we used here aren't as demanding to play on a PC.

DVD Video Quality: HQV Benchmark with PowerDVD 7

Next up, we have the HQV DVD video benchmark from Silicon Optix. HQV is comprised of a sampling of SD video clips and test patterns that have been specifically designed to evaluate a variety of interlaced video signal processing tasks, including decoding, de-interlacing, motion correction, noise reduction, film cadence detection, and detail enhancement. As each clip is played, the viewer is required to "score" the image based on a predetermined set of criteria. The numbers listed below are the sum of the scores for each section. We played the HQV DVD using the latest version of Cyberlink's PowerDVD, with hardware acceleration for the AVIVO and PureVideo extensions enabled.

The HQV benchmark has lost much of its luster with us, not only because both ATI's and NVIDIA's respective architectures produce near-perfect scores (130 points is a perfect score here), but because this benchmark leaves a portion of the scoring open to interpretation; depending on who you talk to (Silicon Optix, Intel, NVIDIA, ATI, etc.), you'll likely get a different answer as to how to score a particular product.  Regardless, thanks to AVIVO and PureVideo, the PC is the premier platform for playing most SD content.  You'd have to spend thousands of dollars to get a DVD player that can produce similar results in this benchmark.

New Anti-Aliasing Modes


As we mentioned earlier, new multi-sample anti-aliasing resolve functionality in the Radeon HD 2000 series GPUs makes another new feature, Custom Filter AA, or CFAA, possible.




In addition to all of the AA features in the Radeon X1000 series GPUs (multisampling, programmable sample patterns, gamma-correct resolve, temporal AA, adaptive supersampling / multisampling, and super AA), Radeon HD 2000 series GPUs support up to 8x multisampling and can now take samples from outside pixel boundaries, apply non-uniform sample weights, and use filter kernels that adapt to the characteristics of each pixel.  An adaptive edge-detect filter is also available that performs an edge detection pass on a rendered image; edge pixels are resolved using more samples along the direction of the edge.




Using the various custom filters results in an increase in the effective number of samples per pixel without increasing the memory footprint. CFAA filters are also software upgradeable and can be used to enhance in-game AA settings for most DirectX 9 titles.  CFAA works in conjunction with all other AA features and is compatible with HDR and stencil shadows as well.
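The custom-filter idea can be pictured with a toy resolve function. This sketch is purely illustrative (a simple tent weighting of our own devising, not ATI's actual CFAA kernels): each sample is weighted by its distance from the pixel center, so samples borrowed from neighboring pixels contribute less, yet the effective sample count rises without storing any extra samples.

```python
# Illustrative tent-filter AA resolve: weight each sample by its
# distance from the pixel center; borrowed neighbor samples count less.
def tent_resolve(samples, radius=1.5):
    """samples: list of (distance_from_center, color) pairs."""
    weighted = [(max(0.0, radius - d), c) for d, c in samples]
    total = sum(w for w, _ in weighted)
    return sum(w * c for w, c in weighted) / total

# Four in-pixel samples plus two borrowed from a neighbor (grayscale):
samples = [(0.25, 1.0), (0.35, 1.0), (0.40, 0.0), (0.30, 0.0),
           (1.10, 1.0), (1.20, 0.0)]
print(round(tent_resolve(samples), 3))  # 0.519 - an edge pixel softened
```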

AA Image Quality Comparisons



To see how the anti-aliasing modes available on the new Radeon HD 2000 series compared to ATI's previous flagship Radeon X1950 XTX and NVIDIA's GeForce 8800 GTS, we snapped a few screenshots using Half Life 2 at various settings with the game configured to run at a resolution of 1280x960 with 16x anisotropic filtering enabled.

Image Quality Analysis: Anti-Aliasing
2900 vs. x1950 vs. 8800
Radeon HD 2900 XT

Radeon HD 2900 XT

Radeon HD 2900 XT 

Radeon HD 2900 XT
16X (WT) AA

Radeon HD 2900 XT 
24X (ED) AA

GeForce 8800 GTS

GeForce 8800 GTS

GeForce 8800 GTS
8xQ AA

GeForce 8800 GTS
16X AA

GeForce 8800 GTS
16xQ AA

Radeon X1950 XTX

Radeon X1950 XTX

Radeon X1950 XTX

Radeon X1950 XTX


When flipping through the screenshots above, please pay special attention to the labels and the file names listed in your browser's address bar, as this will help you keep track of the settings used to capture the image.  Also note that only the 4X anti-aliasing screenshots above will represent a true apples-to-apples-to-apples comparison between the three cards.

If you look through the shots, the first thing you're likely to notice is a slight rendering bug on the HD 2900 that causes a problem with the lighting on some of the buildings and trees. We're confident this will be fixed in a future driver release, so we won't dwell on it. What's more important to focus on are the gradients on the cables that span the top of the screen, and the fine details in the antennas atop the buildings. As the AA levels are increased, all of the cards do a great job of reducing the jaggies.  But if you focus your attention on the lower left portion of the screen, you'll also see that ATI's new CFAA modes blur the scene slightly.

AA Image Quality Comparisons (Cont.)


In addition to the comparisons on the previous page, we thought we'd snap off a few more screenshots with the Radeon HD 2900 XT using Half Life 2 to illustrate most of the other new anti-aliasing modes available with ATI's new flagship GPU architecture.

Image Quality Analysis: Radeon HD Anti-Aliasing
Every Mode Available - Almost





2X AA + Narrow Tent
(Samples - 4X)


4X AA + Narrow Tent
(Samples - 6X)


2X AA + Wide Tent
(Samples - 6X)



4X AA + Wide Tent
(Samples - 8X)

8X AA + Narrow Tent
(Samples - 12X)


8X AA + Wide Tent
(Samples - 16X)

8X AA + Edge Detection
(Samples - 24X)

The sampling of images above represents almost all of the standard anti-aliasing modes available with the new Radeon HD 2900 XT. We say "almost" because these images don't take into account the temporal anti-aliasing modes available or the adaptive modes that also help smooth the jaggies visible in partially transparent textures.

We present these images to you to illustrate the effects of the different modes on a nearly identical frame. If you flip through the images, you'll see that ATI's new modes definitely do a better job as the number of overall samples is increased. Focus your attention on the antennas on top of the building at the upper left of the scene and you'll see what we mean.

But if you focus your attention on the cobblestones in the street, particularly in the lower left portion of the scene, you'll also see that ATI's new custom filtered AA modes blur the textures a bit, even with 16x anisotropic filtering enabled as we have here.  Image quality purists or anisotropic filtering aficionados may be put off by the slightly blurred textures in some of these images, but for the most part the effect is minimal.

While inspecting all of these new anti-aliasing modes, we also ran some benchmarks using Half Life 2: Episode 1 to see just how much of an effect they had on in-game performance.

As you probably expected, as the effective number of samples increases, performance decreases in the game. For these tests, we had EP1 set to a resolution of 1280x1024 with 16X anisotropic filtering enabled. At these settings, this particular game's framerate drops to a point where it becomes noticeably slower (at least to us) once the number of samples hits the 12x mark. This data point is dependent on the game being played, however.

Anisotropic Filtering Modes and Performance


The Radeon HD 2000 series of graphics cards has essentially the same anisotropic filtering abilities as the Radeon X1900 series. The texture units in the HD 2000 series have been tweaked to better handle some problematic texture filtering cases, but real-world performance shouldn't differ significantly from ATI's previous generation.
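For readers unfamiliar with how anisotropic filtering scales its work, here is a rough sketch, a simplification of the generic D3D/OpenGL model rather than ATI's or NVIDIA's exact hardware, of how the degree of anisotropy determines the number of texture probes taken:

```python
import math

# Simplified model: the pixel's footprint in texture space is treated
# as an ellipse. The major/minor axis ratio, clamped to the max aniso
# setting, decides how many trilinear probes are taken along the
# major axis.

def aniso_probes(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Texture-coordinate derivatives per screen pixel -> probe count."""
    len_x = math.hypot(dudx, dvdx)        # footprint extent along screen x
    len_y = math.hypot(dudy, dvdy)        # footprint extent along screen y
    major = max(len_x, len_y)
    minor = max(min(len_x, len_y), 1e-8)  # guard against degenerate footprints
    ratio = min(major / minor, float(max_aniso))
    return max(1, math.ceil(ratio))

# A wall viewed head-on needs one probe; a floor at a glancing angle
# stretches along one axis and needs several.
head_on = aniso_probes(1.0, 0.0, 0.0, 1.0)   # ratio 1
glancing = aniso_probes(8.0, 0.0, 0.0, 1.0)  # ratio 8
```

This is also why high aniso levels are nearly free on cards of this caliber: the expensive many-probe path only kicks in for the minority of pixels with strongly elongated footprints.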

Image Quality Analysis: Anisotropic Filtering
2900 vs. x1950 vs. 8800

Radeon HD 2900 XT
No Aniso


Radeon HD 2900 XT
2X Aniso


Radeon HD 2900 XT
4X Aniso


Radeon HD 2900 XT
8X Aniso


Radeon HD 2900 XT
16X Aniso

Radeon X1950 XTX
No Aniso

Radeon X1950 XTX
2X Aniso

Radeon X1950 XTX
4X Aniso

Radeon X1950 XTX
8X Aniso

Radeon X1950 XTX
16X Aniso

GeForce 8800 GTS
No Aniso

GeForce 8800 GTS
2X Aniso

GeForce 8800 GTS
4X Aniso

GeForce 8800 GTS
8X Aniso

GeForce 8800 GTS
16X Aniso

As you can see in the screenshots above, as the level of anisotropic filtering is increased, the clarity and sharpness of the ground texture is enhanced. If we compare the quality of the images produced by each card, it's difficult to pick one that is clearly superior to another, but there is definitely more subtle detail in the captures grabbed with the GeForce 8800 GTS. If you focus your attention on the cracks in the ground in the distance, about 1/3 of the way up the scene, you'll be able to pick up some of the differences.

The images captured with D3D AF Tester also show the GeForce 8800 GTS' strengths. The 8800 has almost no angular dependency and produces smooth transitions in an almost circular pattern. The Radeon X1950 XTX and HD 2900 XT also do a great job with anisotropic filtering, but if you open the 16X aniso shots taken with the D3D Tester side-by-side, you'll see the 8800 produces the superior pattern.

To see how using anisotropic filtering affected performance with the Radeon HD 2900 XT, we fired up Half Life 2: Episode 1 and ran though a series of benchmarks with different levels of anisotropic filtering being applied. Only about four frames per second separated the highest and lowest scores here, so consider anisotropic filtering a 'gimme' with a card of the Radeon HD 2900 XT's caliber.

Our Test Systems and 3DMark06


HOW WE CONFIGURED THE TEST SYSTEMS: We tested all of the graphics cards used in this article on either an EVGA nForce 680i SLI motherboard (NVIDIA GPUs) or an Intel D975XBX2 board (ATI GPUs) powered by a Core 2 Extreme X6800 dual-core processor and 2GB of low-latency Corsair RAM. The first thing we did when configuring the test system was enter the BIOS and set all values to their "optimized" default settings. Then we manually configured the memory timings and disabled any integrated peripherals that wouldn't be put to use. The hard drive was then formatted, and Windows XP Pro with SP2 and the April '07 DX9 update was installed. When the installation was complete, we then installed the latest chipset drivers available, installed all of the other drivers necessary for the rest of our components, and removed Windows Messenger from the system.  Auto-Updating and System Restore were also disabled, the hard drive was defragmented, and a 1024MB permanent page file was created on the same partition as the Windows installation. Lastly, we set Windows XP's Visual Effects to "best performance," installed all of the benchmarking software, and ran the tests.

The HotHardware Test System
Core 2 Extreme Powered

Hardware Used:

Processor - Core 2 Extreme X6800 (2.93GHz)

Motherboard - EVGA nForce 680i SLI (nForce 680i SLI chipset) / Intel D975XBX2 (975X Express)

Video Cards - GeForce 8800 Ultra (2), GeForce 8800 GTX (2), GeForce 8800 GTS (2), EVGA GeForce 8800 GTS (2), Radeon X1950 XTX (2), Radeon HD 2900 XT (2)

Memory - 2048MB Corsair PC2-6400C3 (2 x 1GB)

Audio - Integrated on board

Hard Drive - Western Digital "Raptor" 74GB - 10,000RPM - SATA

Relevant Software:

OS - Windows XP Pro SP2

Chipset Drivers - nForce Drivers v9.53

DirectX - DirectX 9.0c (April '07 Redist.)

Video Drivers - NVIDIA Forceware v158.19 / ATI Catalyst v7.4 / 8.37

Benchmarks Used:

Synthetic (DX) - 3DMark06 v1.0.2
DirectX - S.T.A.L.K.E.R. *
DirectX - F.E.A.R. v1.08
DirectX - Half Life 2: Episode 1*
OpenGL - Prey v1.2*
OpenGL - Quake 4 v1.3*

* - Custom Test (HH Exclusive demo)

Performance Comparisons with 3DMark06 v1.0.2
Details: www.futuremark.com/products/3dmark06

3DMark06 is the most recent addition to the 3DMark franchise. This version differs from 3DMark05 in a number of ways, and includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups that number to 512. 3DMark06 also employs much more lighting and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.
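For reference, the final score combines the sub-scores via a weighted harmonic mean. The sketch below uses the coefficients as we recall them from Futuremark's whitepaper, so treat the exact weights as approximate; the sub-scores plugged in are hypothetical:

```python
# 3DMark06 overall score as a weighted harmonic mean of the three
# sub-scores. Coefficients are our recollection of Futuremark's
# published formula -- treat them as approximate.

def overall_3dmark06(sm2_score, hdr_sm3_score, cpu_score):
    """The GPU-heavy tests carry most of the weight; the CPU score
    contributes a smaller share to the final tally."""
    weighted = (1.7 / sm2_score + 1.5 / hdr_sm3_score + 0.3 / cpu_score) / 3
    return 2.5 / weighted

# Hypothetical GTX-class sub-scores, for illustration only.
example_score = overall_3dmark06(4800, 4800, 2500)
```

The harmonic-mean form means a weak sub-score drags the total down disproportionately, which is why a strong SM 3.0 / HDR result matters so much to the final tally.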

We've broken up all of our graphs into two different sections in an effort to make them easier to read. The top portion of each graph is made up of all of the single-GPU configurations, while the bottom half consists of all of the dual-GPU SLI or CrossFire setups.

We would also like to point out the reason that we have two sets of GeForce 8800 GTS numbers listed here.  The EVGA GeForce 8800 GTS cards listed in our graphs are "factory overclocked" models that are available for about $370-$400 apiece. The vanilla GeForce 8800 GTS cards listed here are standard models clocked at NVIDIA's reference specifications.  We thought it best to include both sets of numbers to illustrate exactly how the Radeon HD 2900 XT compares to both types of GTS.

With that out of the way, according to 3DMark06, the new Radeon HD 2900 XT performs slightly better than the factory-overclocked 8800 GTS card in a single card configuration, and virtually on the same level as (albeit slightly below) two factory-overclocked GTS cards running in SLI mode. The HD 2900 also turned out to be significantly faster than the Radeon X1950 XTX, but it couldn't catch NVIDIA's high-end 8800 GTX or Ultra cards.


If we drill down into the 3DMark06 results and look at the individual shader model 2.0 and shader model 3.0 / HDR tests, we can see just how the Radeon HD 2900 XT was able to compete with NVIDIA's similarly priced offerings. In a single card configuration the HD 2900 XT finished just ahead of the overclocked GTS card in the SM 2.0 test, but its lead dwindled when running in CrossFire mode.  The see-saw battle in the SM 2.0 test was overshadowed by the HD 2900 XT's much stronger performance in the SM 3.0 / HDR test, however. There, in both single and dual-card configurations, ATI's new flagship put up scores that came close to matching an 8800 GTX, which put it well out in front of either GeForce 8800 GTS configuration.
Half Life 2: Episode 1


Performance Comparisons with Half-Life 2: Episode 1
Details: www.half-life2.com

Thanks to the dedication of hardcore PC gamers and a huge mod community, the original Half-Life became one of the most successful first person shooters of all time.  And thanks to an updated game engine, gorgeous visuals, and intelligent weapon and level design, Half Life 2 became almost as popular.  Armed with the latest episodic update to HL2, Episode 1, we benchmarked the game with a long, custom-recorded timedemo that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 through 2,560 x 1,600 with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently, and with color correction and HDR rendering enabled in the game engine as well.

In a single-card configuration at these standard resolutions, the new Radeon HD 2900 XT trails either flavor of GeForce 8800 GTS in our custom Half Life 2: Episode 1 benchmark. The GTS cards put up framerates between 9% and 18% higher than the Radeon HD 2900 XT.  In the dual-card configurations, however, the system was CPU bound at these resolutions and all of the graphics cards performed at similar levels.

With the resolutions cranked up to 1920x1200 or 2560x1600, the performance deltas separating the GeForce 8800 GTS cards from the Radeon HD 2900 got smaller in the single cards tests, but the NVIDIA powered dual-GPU rigs once again pulled ahead.

F.E.A.R. v1.08


Performance Comparisons with F.E.A.R
More Info: www.whatisfear.com/us

One of the most highly anticipated titles of recent years was Monolith's paranormal thriller F.E.A.R. According to the game's minimum system requirements, it needs at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card in the Radeon 9x00 or GeForce4 Ti-classes or better, to adequately run. Using the full retail release of the game patched to v1.08, we put the graphics cards in this article through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to their maximum values, but with soft shadows disabled (Soft shadows and anti-aliasing do not work together currently). Benchmark runs were then completed at resolutions of 1,280x1,024 through 2,560 x 1,600, with 4x anti-aliasing and 16x anisotropic filtering enabled.

When running in a single-card configuration at 1280x1024 or 1600x1200, the new Radeon HD 2900 XT and standard GeForce 8800 GTS performed at nearly identical levels. The factory-overclocked GTS was markedly faster, however, as were the more expensive GTX and Ultra. In a dual-card configuration though, both of the GeForce 8800 GTS rigs pulled out in front.

At the more demanding XHD resolutions, the HD 2900 XT couldn't quite catch the pre-overclocked 8800 GTS in either a single-GPU or a dual-GPU configuration at 1920x1200.  But at 2560x1600, the Radeon HD 2900 XT jumped out to a huge lead and crushed the GTS cards by as much as 41%.

Prey v1.3


Performance Comparisons with Prey
Details: www.prey.com

After many years of development, Take-Two Interactive recently released the highly anticipated game Prey. Prey is based upon an updated and modified version of the Doom 3 engine, and as such, performance characteristics between it and Doom 3 are quite similar.  Also like Doom 3, Prey is an OpenGL-based game that uses extremely high-detailed textures and a plethora of dynamic lighting and shadows.  But unlike Doom, Prey features a fair share of outdoor environments as well.  We ran these Prey benchmarks using a custom recorded timedemo with the game set to its "High-Quality" graphics mode, at resolutions of 1,280 x 1,024 through 2,560 x 1,600 with 4X AA and 16X anisotropic filtering enabled simultaneously.


At the lower resolutions, in both single and dual-card configurations, the new Radeon HD 2900 XT and the factory-overclocked GeForce 8800 GTS cards performed at very similar levels in our custom Prey benchmark. The HD 2900 XT is measurably faster than the standard GTS, but either platform is able to produce perfectly playable framerates in this game at these resolutions.

Crank things up to XHD levels, and the landscape changes a bit. As the resolution increases so too does the Radeon HD 2900 XT's lead in this game. At the highest resolution, the Radeon HD 2900 XT's performance in Prey falls somewhere in between a pre-overclocked GeForce 8800 GTS and the faster 8800 GTX.

S.T.A.L.K.E.R. v1.0001


Performance Comparisons with S.T.A.L.K.E.R.
Details: www.stalker-game.com
The highly anticipated game S.T.A.L.K.E.R. makes use of a proprietary DX9 game engine, dubbed "X-Ray" by its developers. It features an advanced DX9 renderer with Deferred Shading capabilities, which allows the engine to draw a vast number of dynamic light sources with correct materials and light ‘feedback’. Because S.T.A.L.K.E.R. does not have a built-in benchmarking tool, we tested the game using FRAPS at resolutions of 1,280x1,024 through 2,560 x 1,600 with the in-game anti-aliasing and anisotropic filtering options set to their maximum values, and with full dynamic lighting enabled. We should also note that 'grass shadows' were disabled due to the dramatic slowdowns this feature causes during gameplay.

At 1280x1024 and 1600x1200, the new Radeon HD 2900 XT simply got beat up by both types of GeForce 8800 GTS in our custom S.T.A.L.K.E.R. benchmark when running in single card mode. Even the slower of the two GTS cards was up to 30% faster than the HD 2900 XT here.

An interesting thing happens in the dual-GPU tests, however. Running the HD 2900 XT in CrossFire mode results in much better scaling than SLI, and the cards are able to blow past even the mighty GeForce 8800 Ultra SLI configuration. Looking at how the delta separating the platforms shrinks as the resolution increases, though, leads us to believe NVIDIA's got plenty of headroom left for this game once their drivers are tweaked.  Then again, so does ATI.  There's no way their software team has been able to wring all they can from the R600 at this early stage of its existence.

At the XHD resolutions, the Radeon HD 2900 XT's earlier lead while running in CrossFire mode disappears, as the pre-overclocked GTS is able to squeak past it at the highest res.  And in a single-card configuration the HD 2900 XT continued to get pummeled here. We should also note that this game was considerably more jerky on the ATI hardware. Anytime the game had to load new data, the Radeons dropped into single-digit framerate territory, which significantly hampered the experience.

Overclocking The Radeon HD 2900 XT


For our next set of performance metrics, we spent some time overclocking the Radeon HD 2900 XT using the clock frequency sliders available within ATI's Catalyst drivers, under the "Overdrive" menu.

Please note that overclocking is NOT supported on the Radeon HD 2900 XT with the Catalyst drivers unless an 8-pin PCI Express supplemental power connector is plugged into the card, along with a secondary 6-pin feed. When asked why they made the 8-pin connector a requirement for overclocking, representatives from ATI said it was due to major variations in power consumption, not only between stock and overclocked configurations, but also between different HD 2900 XT GPUs. It seems TSMC's 80nm HS process allowed ATI to clock the R600 relatively high, but once the upper limits of the GPU's clock speed are hit, power consumption can vary considerably from card to card. Having the 8-pin supplemental feed available means there should always be ample power available to the card.

With this in mind, we monitored power consumption throughout our overclocking experiments and didn't witness any major changes in consumption at all, despite running the memory and GPU at frequencies 100MHz and 85MHz higher than stock. More on power consumption on the next page...

Overclocking the Radeon HD 2900 XT
(Fast 3D Video Card) + Overclocking = Even Faster Card

To find the Radeon HD 2900 XT's peak core and memory frequencies, we slowly raised their respective sliders until we began to see visual artifacts on-screen while running a game or benchmark, or until our test system was no longer stable. Because the Overdrive tab only allowed for a peak memory frequency of 900MHz, we were able to max that one out. The GPU peaked at 828MHz - a healthy increase of 85MHz.

While we had the card overclocked, we re-ran the F.E.A.R. and S.T.A.L.K.E.R. benchmarks at 1600x1200 to see just how much performance had increased. As you can see, the HD 2900 XT's performance in the F.E.A.R. test jumped by 4 frames per second, or roughly 6.4%. And in S.T.A.L.K.E.R. its performance went up by a little over 3 frames per second, or about 8.1%.
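The percentage gains quoted above are simple relative deltas. In the sketch below, the stock framerates are hypothetical values back-calculated so the deltas line up with the gains reported in the text:

```python
# Relative performance gain from overclocking. The stock framerates
# used here are hypothetical, chosen only so the deltas match the
# percentages quoted in the text.

def pct_gain(stock_fps, oc_fps):
    """Relative performance increase, in percent."""
    return (oc_fps - stock_fps) / stock_fps * 100.0

fear_gain = pct_gain(62.5, 66.5)     # +4 fps -> ~6.4%
stalker_gain = pct_gain(38.3, 41.4)  # +3.1 fps -> ~8.1%
```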

Power Consumption and Noise


Before we bring this article to a close, we'd like to cover a few final data points. Throughout all of our benchmarking and testing, we monitored how much power our test system was consuming using a power meter and also took some notes regarding its noise output as reported by our digital sound level meter. Our goal was to give you all an idea as to how much power each configuration used and to explain how loud the configurations were under load. Please keep in mind that we were testing total system power consumption at the outlet here, not just the power being drawn by the video cards alone.

Total System Power Consumption & Acoustics
It's All About the Watts and Decibels

The Radeon HD 2900 XT has an interesting power consumption profile. ATI seems to have done a good job at keeping idle power consumption relatively low.  In both single and dual-card configurations, the Radeon HD 2900 XT consumed less power than any of NVIDIA's high-end solutions, including the 8800 GTS. With the card running under a heavy 3D workload, however, the HD 2900 XT's power consumption skyrockets.  Despite having less memory and generally fewer components on its PCB, not to mention being significantly slower, the HD 2900 XT consumes more power than a GeForce 8800 Ultra. The same holds true in the dual-card setup as well. ATI clearly won't be winning any awards in the "performance per watt" category with the 2900 XT in its current form.
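One caveat worth spelling out: wall-outlet readings include PSU conversion losses, so the idle-to-load delta overstates the extra DC power the graphics cards actually draw. Here's a rough sketch, with an assumed (not measured) 80% PSU efficiency and hypothetical readings:

```python
# Wall power includes PSU losses, so the DC-side delta is smaller than
# the outlet delta. The 80% efficiency figure is an assumption for
# illustration, not something we measured.

def estimated_dc_delta(idle_wall_watts, load_wall_watts, psu_efficiency=0.80):
    """Approximate extra DC-side draw under load, in watts."""
    return (load_wall_watts - idle_wall_watts) * psu_efficiency

# Hypothetical readings: 190W idle, 370W under load at the outlet.
extra_dc_watts = estimated_dc_delta(190, 370)
```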

We also want to talk a little bit about temperatures and noise. During our testing, we found that the Radeon HD 2900 XT - according to the data presented on the Overdrive tab - ran at temperatures between approximately 55°C and 88°C during idle / load situations. We also used an infrared thermometer to check the temperature of the PCB just behind the GPU while the card was running under load, and found that it went as high as 69°C. Cool running, the ATI Radeon HD 2900 XT is not.

In regard to its acoustic signature, we found the Radeon HD 2900 XT to be somewhat irritating, but not necessarily very loud. According to a digital sound level meter placed about 1 foot away from our test system (which had its side-panel removed) the Radeon HD 2900 XT generated about 55dB - 59dB of sound pressure when its fans were spun up in a typical gaming scenario. At idle, however, the cards were nice and quiet and weren't audible over the PSU and CPU fans, and hard drive. Conversely, a GeForce 8800 GTS SLI rig generated a more palatable 53dB under load.
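For some perspective on those readings, sound-pressure level is logarithmic, so a seemingly small dB gap is a sizable pressure difference:

```python
# The sound-pressure ratio between two SPL readings is
# 10^(delta_dB / 20), so a ~6 dB gap is roughly double the pressure.

def pressure_ratio(db_a, db_b):
    """How many times higher the sound pressure of reading A is vs. B."""
    return 10 ** ((db_a - db_b) / 20.0)

ratio = pressure_ratio(59, 53)  # 2900 XT at its loudest vs. the GTS rig
```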

Perhaps more noticeable than the actual sound level, however, was the pitch of the Radeon's fan. It definitely produced a much more audible whine when spinning at high-speed than the GTS, which some users may find distracting.

Our Summary and Conclusion


Performance Summary: Throughout our entire battery of gaming related benchmarks, the new Radeon HD 2900 XT performed at roughly the same level as a standard GeForce 8800 GTS. Depending on the test configuration, the Radeon HD 2900 XT and standard GeForce 8800 GTS traded victories, but more often than not the GTS pulled ahead. In comparison to a factory-overclocked GTS, the Radeon HD 2900 XT obviously fares a bit worse and it simply cannot hold a candle to the more powerful, and of course more expensive GeForce 8800 GTX or Ultra.

The performance landscape is largely unchanged as it relates to Radeon HD 2900 XT CrossFire versus GeForce 8800 GTS SLI as well.  Somewhat surprisingly, due to superior scaling in a few games, HD 2900 XT CrossFire outperforms 8800 GTS SLI in quite a few tests, especially at XHD resolutions.

With regard to video playback, ATI's and NVIDIA's respective video engines produced comparable results in the tests we ran.  Keep in mind, however, that despite similar performance to a standard GeForce 8800 GTS, the Radeon HD 2900 XT consumes considerably more power, produces more heat, and generates more noise as well.   Finally, despite some new anti-aliasing modes, ATI hasn't done much to enhance image quality over their previous generation.

As we bring this article to a close, we're sure many of you are left scratching your heads and wondering what happened.  Has ATI forgotten about enthusiasts now that they're part of AMD, and do they no longer care about the high end?  The answer to that question is no.  Even though you may hear different, we think ATI would have loved nothing more than to release a halo product that shines above all others.  We just think they rolled the dice with R600, building it on TSMC's 80nm node and hoping it would be able to hit higher clock speeds with much more manageable power consumption.  Unfortunately, that just didn't happen.  If the GPUs ran reliably at 900MHz - 1GHz with better power characteristics, you can bet ATI would have released a much faster Radeon HD 2900 today, with more RAM and much more fanfare. Again, that's not the way it turned out, so instead ATI is offering their latest flagship at a lower-than-usual price point and throwing in an awesome game bundle to enhance its value.  At least it's going to be awesome when all of the games are eventually released.

To summarize, the $399 Radeon HD 2900 XT is a feature-rich, DX10-class graphics card that competes relatively well with a standard GeForce 8800 GTS in many aspects, except power consumption.  We wish it were more, but a company can pull only so many Radeon 9700-type launches out of its hat.  Over time, the Radeon HD 2900 XT is likely to become a somewhat stronger product due to more finely tuned drivers and continued tweaks made to TSMC's manufacturing process, but for now it is what it is. And who knows what ATI has in store over the next few months.  If I were a betting man, I'd say a new high-end GPU manufactured at 65nm will arrive sooner rather than later.

The mainstream Radeon HD 2400 and 2600 series of cards and the new Mobility Radeons we mentioned, however, look very promising.  We look forward to seeing how they perform in the coming months when they are released.

Pros:
  • Black Box Bundle
  • DX10
  • Native CrossFire

Cons:
  • Power Consumption
  • Heat & Noise
  • Late to the Game
  • Not Competition for 8800 GTX

Discuss This Article in HotHardware's PC Hardware Forum

Content Property of HotHardware.com