ATI Radeon X1K Refresh: X1950 XTX, X1900 XT 256MB, X1650 Pro, and X1300 XT
Date: Aug 23, 2006
Section: Graphics/Sound
Author: Marco Chiappetta
Introduction and Related Information

Although there has obviously been a lot going on behind the scenes at ATI these last few months, i.e. the proposed AMD merger, the company has remained relatively quiet.  The last major graphics card launch was for the Radeon X1900 family back in January.  And since then, the company has released only a couple of mid-range graphics cards that utilized existing GPU technology.  Today, however, ATI is announcing five new products targeting virtually every segment of the graphics market.

At the high-end ATI is announcing the Radeon X1950 XTX and Radeon X1950 XTX CrossFire Edition cards. The new flagship features essentially the same GPU found on the older Radeon X1900 XTX, but the X1950 XTX is the first consumer level graphics card equipped with ultra-fast GDDR4 RAM. The RAM on the Radeon X1950 XTX is actually clocked at 1GHz, for an effective clock speed of 2GHz.

For performance enthusiasts on a budget, ATI is announcing a new 256MB Radeon X1900 XT, and for the mainstream and entry-level market segments, the company has the Radeon X1650 Pro and Radeon X1300 XT on tap.  We've got details regarding all of these cards on the pages ahead, but for now let's lay some groundwork and refer you to some previous HotHardware articles relevant to today's announcement. 


There is a myriad of information related to today's launch already available on our site that will help you get familiar with the GPU architectures and key features employed on the cards we'll be showcasing here. The Radeon X1950 XTX has a number of features in common with other cards in the Radeon X1K family of products, and we've detailed the features of the Radeon Xpress 3200 chipset and ATI's CrossFire multi-GPU technology in a few past articles as well.

At a minimum, if you haven't already done so, we recommend scanning through our CrossFire Multi-GPU technology preview, the CrossFire Xpress 3200 chipset evaluation, the X1K family review, and the Radeon X1900 CrossFire evaluation. In those four articles, we cover the vast majority of the features offered by the Radeon X1950 XTX and the other cards being announced today. There is quite a bit of background information in those articles that laid the foundation for what we're going to showcase here today.

Radeon X1950 XTX: The New Flagship

ATI's brand new flagship consumer level graphics card is the Radeon X1950 XTX. The Radeon X1950 XTX is technically a mixture of old and new technology, but the end result is one of the most powerful graphics cards to ever come out of ATI.


Two Radeon X1950 XTX Cards
In a CrossFire Configuration

ATI Radeon X1950 XTX
Features & Specifications
Features
- 384 million transistors on 90nm fabrication process
- Up to 48 pixel shader processors
- 8 vertex shader processors
- Up to 256-bit 8-channel GDDR4 memory interface
- Native PCI Express x16 bus interface

Ring Bus Memory Controller
- Up to 512-bit internal ring bus for memory reads
- Fully associative texture, color, and Z/stencil cache designs
- Hierarchical Z-buffer with Early Z test
- Lossless Z Compression (up to 48:1)
- Fast Z-Buffer Clear
- Optimized for performance at high display resolutions, including widescreen HDTV resolutions

Ultra-Threaded Shader Engine
- Support for Microsoft DirectX 9.0 Shader Model 3.0 programmable vertex and pixel shaders in hardware
- Full speed 128-bit floating point processing for all shader operations
- Up to 512 simultaneous pixel threads
- Dedicated branch execution units for high performance dynamic branching and flow control
- Dedicated texture address units for improved efficiency
- 3Dc+ texture compression
o High quality 4:1 compression for normal maps and two-channel data formats
o High quality 2:1 compression for luminance maps and single-channel data formats
- Complete feature set also supported in OpenGL 2.0

Advanced Image Quality Features
- 64-bit floating point HDR rendering supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 32-bit integer HDR (10:10:10:2) format supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 2x/4x/6x Anti-Aliasing modes
o Multi-sample algorithm with gamma correction, programmable sparse sample patterns, and centroid sampling
o New Adaptive Anti-Aliasing feature with Performance and Quality modes
o Temporal Anti-Aliasing mode
o Lossless Color Compression (up to 6:1) at all resolutions, including widescreen HDTV resolutions
- 2x/4x/8x/16x Anisotropic Filtering modes
o Up to 128-tap texture filtering
o Adaptive algorithm with Performance and Quality options
- High resolution texture support (up to 4k x 4k)

Avivo Video and Display Platform
- High performance programmable video processor
o Accelerated MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264 decoding and transcoding
o DXVA support
o De-blocking and noise reduction filtering
o Motion compensation, IDCT, DCT and color space conversion
o Vector adaptive per-pixel de-interlacing
o 3:2 pulldown (frame rate conversion)
- Seamless integration of pixel shaders with video in real time
- HDR tone mapping acceleration
o Maps any input format to 10 bit per channel output
- Flexible display support
o Dual integrated dual-link DVI transmitters
o Dual integrated 10 bit per channel 400 MHz DACs
o 16 bit per channel floating point HDR and 10 bit per channel DVI output
o Programmable piecewise linear gamma correction, color correction, and color space conversion (10 bits per color)
o Complete, independent color controls and video overlays for each display
o High quality pre- and post-scaling engines, with underscan support for all outputs
o Content-adaptive de-flicker filtering for interlaced displays
o Xilleon TV encoder for high quality analog output
o YPrPb component output for direct drive of HDTV displays
o Spatial/temporal dithering enables 10-bit color quality on 8-bit and 6-bit displays
o Fast, glitch-free mode switching
o VGA mode support on all outputs
o Drive two displays simultaneously with independent resolutions and refresh rates
- Compatible with ATI TV/Video encoder products, including Theater 550

CrossFire
- Multi-GPU technology
- Four modes of operation:
o Alternate Frame Rendering (maximum performance)
o Supertiling (optimal load-balancing)
o Scissor (compatibility)
o Super AA 8x/10x/12x/14x (maximum image quality)

Radeon X1950 XTX

The GPU at the heart of the Radeon X1950 XTX is essentially the same R580 found on the Radeon X1900 XTX. However, ATI informs us that the chip was re-spun to properly support the ultra-fast GDDR4 memory used on the card. As we've mentioned in the past, the R580 GPU comprises approximately 384 million transistors and is built using a 90nm manufacturing process. The GPU is outfitted with 48 pixel shader processors, 8 vertex shader processors, and a 256-bit 8-channel GDDR3/GDDR4 memory interface. The chip's internal Ring Bus memory controller is 512 bits wide, however.

Like the Radeon X1900 XTX, the X1950 XTX's core is clocked at 650MHz, but the 512MB of GDDR4 memory on the X1950 XTX is clocked at an impressive 1GHz (2GHz DDR). This boost to the memory clock results in a healthy 64GB/s of memory bandwidth.  To help sustain these high clock speeds, the Radeon X1950 XTX sports a brand new segmented, copper cooler that is much quieter than previous models. The new cooler draws air in through a fan at the rear of the card, blows it across the individual GPU and memory heatsinks, and then exhausts it out of the system.  The X1950's PCB is similar to the X1900 XTX's, but some obvious changes were made to implement the GDDR4 memory and to accommodate the new cooler's fan.  And unlike previous generations, the Radeon X1950 XTX CrossFire Edition Master Card is configured exactly like the standard card.
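For those curious where that 64GB/s figure comes from, peak theoretical memory bandwidth is simply the effective memory clock multiplied by the bus width in bytes. The short Python sketch below is purely illustrative (it isn't from ATI's documentation, and the X1900 XTX comparison line assumes that card's 1.55GHz effective memory clock):

    # Peak theoretical memory bandwidth = effective clock x bus width in bytes
    def peak_bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
        bytes_per_transfer = bus_width_bits / 8   # 256-bit bus -> 32 bytes per transfer
        return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

    print(peak_bandwidth_gb_s(2000, 256))   # Radeon X1950 XTX: 64.0 GB/s
    print(peak_bandwidth_gb_s(1550, 256))   # Radeon X1900 XTX: ~49.6 GB/s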

Radeon X1900 XT 256MB: High Performance

 

ATI is also announcing a new Radeon X1900 card today. Pictured below the list of features and specifications is the new 256MB Radeon X1900 XT. As its name implies, the 256MB Radeon X1900 XT is much like its 512MB counterpart. The only major difference is that the 256MB card has half the amount of on-board frame buffer memory.

ATI Radeon X1900 XT 256MB
Features & Specifications
Features
- 384 million transistors on 90nm fabrication process
- Up to 48 pixel shader processors
- 8 vertex shader processors
- Up to 256-bit 8-channel GDDR4 memory interface
- Native PCI Express x16 bus interface

Ring Bus Memory Controller
- Up to 512-bit internal ring bus for memory reads
- Fully associative texture, color, and Z/stencil cache designs
- Hierarchical Z-buffer with Early Z test
- Lossless Z Compression (up to 48:1)
- Fast Z-Buffer Clear
- Optimized for performance at high display resolutions, including widescreen HDTV resolutions

Ultra-Threaded Shader Engine
- Support for Microsoft DirectX 9.0 Shader Model 3.0 programmable vertex and pixel shaders in hardware
- Full speed 128-bit floating point processing for all shader operations
- Up to 512 simultaneous pixel threads
- Dedicated branch execution units for high performance dynamic branching and flow control
- Dedicated texture address units for improved efficiency
- 3Dc+ texture compression
o High quality 4:1 compression for normal maps and two-channel data formats
o High quality 2:1 compression for luminance maps and single-channel data formats
- Complete feature set also supported in OpenGL 2.0

Advanced Image Quality Features
- 64-bit floating point HDR rendering supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 32-bit integer HDR (10:10:10:2) format supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 2x/4x/6x Anti-Aliasing modes
o Multi-sample algorithm with gamma correction, programmable sparse sample patterns, and centroid sampling
o New Adaptive Anti-Aliasing feature with Performance and Quality modes
o Temporal Anti-Aliasing mode
o Lossless Color Compression (up to 6:1) at all resolutions, including widescreen HDTV resolutions
- 2x/4x/8x/16x Anisotropic Filtering modes
o Up to 128-tap texture filtering
o Adaptive algorithm with Performance and Quality options
- High resolution texture support (up to 4k x 4k)

Avivo Video and Display Platform
- High performance programmable video processor
o Accelerated MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264 decoding and transcoding
o DXVA support
o De-blocking and noise reduction filtering
o Motion compensation, IDCT, DCT and color space conversion
o Vector adaptive per-pixel de-interlacing
o 3:2 pulldown (frame rate conversion)
- Seamless integration of pixel shaders with video in real time
- HDR tone mapping acceleration
o Maps any input format to 10 bit per channel output
- Flexible display support
o Dual integrated dual-link DVI transmitters
o Dual integrated 10 bit per channel 400 MHz DACs
o 16 bit per channel floating point HDR and 10 bit per channel DVI output
o Programmable piecewise linear gamma correction, color correction, and color space conversion (10 bits per color)
o Complete, independent color controls and video overlays for each display
o High quality pre- and post-scaling engines, with underscan support for all outputs
o Content-adaptive de-flicker filtering for interlaced displays
o Xilleon TV encoder for high quality analog output
o YPrPb component output for direct drive of HDTV displays
o Spatial/temporal dithering enables 10-bit color quality on 8-bit and 6-bit displays
o Fast, glitch-free mode switching
o VGA mode support on all outputs
o Drive two displays simultaneously with independent resolutions and refresh rates
- Compatible with ATI TV/Video encoder products, including Theater 550

CrossFire
- Multi-GPU technology
- Four modes of operation:
o Alternate Frame Rendering (maximum performance)
o Supertiling (optimal load-balancing)
o Scissor (compatibility)
o Super AA 8x/10x/12x/14x (maximum image quality)

256MB Radeon X1900 XT

The card pictured here is a 256MB Radeon X1900 XT. At its heart is the very same R580 GPU found on the Radeon X1900 XTX, with 48 pixel shader processors, 8 vertex shader processors, and a 256-bit 8-channel GDDR3/GDDR4 memory interface.  Like the 512MB variant, the 256MB Radeon X1900 XT's core is clocked at 625MHz and its memory runs at a respectable 1.45GHz. The 256MB Radeon X1900 XT sports the same beefy dual-slot cooler used on older members of the Radeon X1800 and X1900 families, but hey, Ruby is striking a different pose on the stock cooler.

Radeon X1650 Pro: Mainstream

ATI is also announcing a new graphics card today targeted at mainstream users, the Radeon X1650 Pro.  The new Radeon X1650 Pro is basically an evolution of the Radeon X1600 XT, but with slightly higher clock speeds and a lower introductory price point.

   
Sapphire Radeon X1650 Pro

ATI Radeon X1650 Pro
Features & Specifications
Features
- 157 million transistors on 90nm fabrication process
- Dual-link DVI
- Twelve pixel shader processors
- Five vertex shader processors
- 128-bit 4-channel DDR/DDR2/GDDR3 memory interface
- Native PCI Express x16 bus interface
o AGP 8x configurations also supported with AGP-PCI-E external bridge chip
- Dynamic Voltage Control
 
Ring Bus Memory Controller
- 256-bit internal ring bus for memory reads
- Programmable intelligent arbitration logic
- Fully associative texture, color, and Z/stencil cache designs
- Hierarchical Z-buffer with Early Z test
- Lossless Z Compression (up to 48:1)
- Fast Z-Buffer Clear
- Z/stencil cache optimized for real-time shadow rendering
 
Ultra-Threaded Shader Engine
- Support for Microsoft DirectX 9.0 Shader Model 3.0 programmable vertex and pixel shaders in hardware
- Full speed 128-bit floating point processing for all shader operations
- Up to 128 simultaneous pixel threads
- Dedicated branch execution units for high performance dynamic branching and flow control
- Dedicated texture address units for improved efficiency
- 3Dc+ texture compression
o High quality 4:1 compression for normal maps and two-channel data formats
o High quality 2:1 compression for luminance maps and single-channel data formats
- Multiple Render Target (MRT) support
- Render to vertex buffer support
- Complete feature set also supported in OpenGL 2.0
 
Advanced Image Quality Features
- 64-bit floating point HDR rendering supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 32-bit integer HDR (10:10:10:2) format supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 2x/4x/6x Anti-Aliasing modes
o Multi-sample algorithm with gamma correction, programmable sparse sample patterns, and centroid sampling
o New Adaptive Anti-Aliasing feature with Performance and Quality modes
o Temporal Anti-Aliasing mode
o Lossless Color Compression (up to 6:1) at all resolutions, including widescreen HDTV resolutions
- 2x/4x/8x/16x Anisotropic Filtering modes
o Up to 128-tap texture filtering
o Adaptive algorithm with Performance and Quality options
- High resolution texture support (up to 4k x 4k)

Avivo Video and Display Platform
- High performance programmable video processor
o Accelerated MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264 decoding and transcoding
o DXVA support
o De-blocking and noise reduction filtering
o Motion compensation, IDCT, DCT and color space conversion
o Vector adaptive per-pixel de-interlacing
o 3:2 pulldown (frame rate conversion)
- Seamless integration of pixel shaders with video in real time
- HDR tone mapping acceleration
o Maps any input format to 10 bit per channel output
- Flexible display support
o DVI 1.0 compliant / HDMI interoperable
o Dual integrated 10 bit per channel 400 MHz DACs
o 16 bit per channel floating point HDR and 10 bit per channel DVI output
o Programmable piecewise linear gamma correction, color correction, and color space conversion (10 bits per color)
o Complete, independent color controls and video overlays for each display
o High quality pre- and post-scaling engines, with underscan support for all outputs
o Content-adaptive de-flicker filtering for interlaced displays
o Xilleon TV encoder for high quality analog output
o YPrPb component output for direct drive of HDTV displays*
o Spatial/temporal dithering enables 10-bit color quality on 8-bit and 6-bit displays
o Fast, glitch-free mode switching
o VGA mode support on all outputs
o Drive two displays simultaneously with independent resolutions and refresh rates
- Compatible with ATI TV/Video encoder products, including Theater 550
 
CrossFire
- Multi-GPU technology
- Four modes of operation:
o Alternate Frame Rendering (maximum performance)
o Supertiling (optimal load-balancing)
o Scissor (compatibility)
o Super AA 8x/10x/12x/14x (maximum image quality)

ATI Radeon X1650 Pro

The Radeon X1650 Pro will be the fastest card in the Radeon X1600 family. The GPU on this card is comprised of roughly 157 million transistors and features 12 pixel shader processors, 5 vertex shader processors, two dual-link DVI outputs, a single-slot cooler, and a 128-bit 4-channel memory interface.  The Radeon X1650 Pro sports 256MB of RAM clocked at 1.4GHz and a GPU core clocked at 600MHz.  The Radeon X1600 XT, formerly the fastest of the X1600-based cards, shipped with its GPU clocked at 590MHz and its RAM clocked at 1.38GHz. The X1650 Pro's modest clock speed boost gives it a slight edge in performance over the XT.
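To put that edge in perspective, the deltas are tiny. A quick back-of-the-envelope calculation (an illustrative sketch, not a benchmark result) shows the X1650 Pro's clock advantage over the X1600 XT amounts to less than two percent on both the core and the memory:

    # Percent increase of the X1650 Pro's clocks over the Radeon X1600 XT
    core_gain = (600 - 590) / 590 * 100      # ~1.7% higher core clock
    mem_gain  = (1400 - 1380) / 1380 * 100   # ~1.4% higher effective memory clock
    print(f"Core: +{core_gain:.1f}%  Memory: +{mem_gain:.1f}%")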

   

The Radeon X1650 Pro we tested came by way of Sapphire Tech.  For all intents and purposes, the Sapphire Radeon X1650 Pro will be identical to a "Built-by-ATI" card in just about every way except for its bundle. Sapphire ships their card with a complete user's manual, a driver / utility CD, an HD component output dongle, a composite video cable, an S-Video to RCA adapter, and a DVI-to-DB15 monitor adapter.  ATI-built versions of the card will likely feature a more subdued bundle if history is any indicator.

Radeon X1300 XT: Entry-Level

The last addition to ATI's Radeon X1K family of cards being announced today is the Radeon X1300 XT.  Although this card's naming convention puts it in the same league as the X1300 Pro, it's actually based on the same GPU used on the Radeon X1600 family of cards.

   
Sapphire Radeon X1300 XT

ATI Radeon X1300 XT
Features & Specifications
Features
- 157 million transistors on 90nm fabrication process
- Dual-link DVI
- Twelve pixel shader processors
- Five vertex shader processors
- 128-bit 4-channel DDR/DDR2/GDDR3 memory interface
- Native PCI Express x16 bus interface
o AGP 8x configurations also supported with AGP-PCI-E external bridge chip
- Dynamic Voltage Control
 
Ring Bus Memory Controller
- 256-bit internal ring bus for memory reads
- Programmable intelligent arbitration logic
- Fully associative texture, color, and Z/stencil cache designs
- Hierarchical Z-buffer with Early Z test
- Lossless Z Compression (up to 48:1)
- Fast Z-Buffer Clear
- Z/stencil cache optimized for real-time shadow rendering
 
Ultra-Threaded Shader Engine
- Support for Microsoft DirectX 9.0 Shader Model 3.0 programmable vertex and pixel shaders in hardware
- Full speed 128-bit floating point processing for all shader operations
- Up to 128 simultaneous pixel threads
- Dedicated branch execution units for high performance dynamic branching and flow control
- Dedicated texture address units for improved efficiency
- 3Dc+ texture compression
o High quality 4:1 compression for normal maps and two-channel data formats
o High quality 2:1 compression for luminance maps and single-channel data formats
- Multiple Render Target (MRT) support
- Render to vertex buffer support
- Complete feature set also supported in OpenGL 2.0
 
Advanced Image Quality Features
- 64-bit floating point HDR rendering supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 32-bit integer HDR (10:10:10:2) format supported throughout the pipeline
o Includes support for blending and multi-sample anti-aliasing
- 2x/4x/6x Anti-Aliasing modes
o Multi-sample algorithm with gamma correction, programmable sparse sample patterns, and centroid sampling
o New Adaptive Anti-Aliasing feature with Performance and Quality modes
o Temporal Anti-Aliasing mode
o Lossless Color Compression (up to 6:1) at all resolutions, including widescreen HDTV resolutions
- 2x/4x/8x/16x Anisotropic Filtering modes
o Up to 128-tap texture filtering
o Adaptive algorithm with Performance and Quality options
- High resolution texture support (up to 4k x 4k)

Avivo Video and Display Platform
- High performance programmable video processor
o Accelerated MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264 decoding and transcoding
o DXVA support
o De-blocking and noise reduction filtering
o Motion compensation, IDCT, DCT and color space conversion
o Vector adaptive per-pixel de-interlacing
o 3:2 pulldown (frame rate conversion)
- Seamless integration of pixel shaders with video in real time
- HDR tone mapping acceleration
o Maps any input format to 10 bit per channel output
- Flexible display support
o DVI 1.0 compliant / HDMI interoperable
o Dual integrated 10 bit per channel 400 MHz DACs
o 16 bit per channel floating point HDR and 10 bit per channel DVI output
o Programmable piecewise linear gamma correction, color correction, and color space conversion (10 bits per color)
o Complete, independent color controls and video overlays for each display
o High quality pre- and post-scaling engines, with underscan support for all outputs
o Content-adaptive de-flicker filtering for interlaced displays
o Xilleon TV encoder for high quality analog output
o YPrPb component output for direct drive of HDTV displays*
o Spatial/temporal dithering enables 10-bit color quality on 8-bit and 6-bit displays
o Fast, glitch-free mode switching
o VGA mode support on all outputs
o Drive two displays simultaneously with independent resolutions and refresh rates
- Compatible with ATI TV/Video encoder products, including Theater 550
 
CrossFire
- Multi-GPU technology
- Four modes of operation:
o Alternate Frame Rendering (maximum performance)
o Supertiling (optimal load-balancing)
o Scissor (compatibility)
o Super AA 8x/10x/12x/14x (maximum image quality)

Radeon X1300 XT

Like the cards in the X1600 family, the Radeon X1300 XT is equipped with a GPU that's outfitted with 12 pixel shader processors, 5 vertex shader processors, dual-link DVI outputs (although some boards will ship with DB15 / DVI output configurations), a single-slot cooler, and a 128-bit 4-channel memory interface. Reference Radeon X1300 XT GPUs will be clocked at 500MHz and their memory at 800MHz, which essentially makes the card equivalent to a Radeon X1600 Pro.

The Radeon X1300 XT we received for testing is built by Sapphire Tech, one of ATI's premier board partners. The Sapphire Radeon X1300 XT is very similar to the X1650 Pro pictured on the previous page, but Sapphire's 1300 XT has a different cooler, lower GPU core and memory clocks, and a bundle that doesn't include a composite video cable.  Please note, however, that the Sapphire X1300 XT "Overclock Edition" pictured here has its GPU clocked at 575MHz and is outfitted with 1.38GHz (DDR) memory, which makes it much faster than "standard" Radeon X1300 XT cards.

Super-AA IQ and Performance

In this batch of images, we want to update you with a comparison of NVIDIA's and ATI's multi-GPU anti-aliasing techniques. NVIDIA calls their technology SLIAA and ATI calls theirs Super-AA. These modes are only enabled when using a pair of graphics cards together, either in SLI or CrossFire modes, because each card renders the same frame with different sample patterns before the results are blended together.  For more details on these anti-aliasing modes, please take a look at this article on SLIAA and this one outlining the new features introduced with ATI's CrossFire.

Image Quality Analysis: SLI & CrossFire Anti-Aliasing Modes
NVIDIA vs. ATI
NVIDIA GeForce 7950 GX2 Quad-SLI AA Screenshots

GeForce 7950 GX2 Quad-SLI
1280x1024 - 8xS AA

GeForce 7950 GX2 Quad-SLI
1280x1024 - SLI8X AA

GeForce 7950 GX2 Quad-SLI
1280x1024 - SLI16X AA

GeForce 7950 GX2 Quad-SLI
1280x1024 - SLI32X AA

ATI Radeon X1950 XTX CrossFire Super AA Screenshots

Radeon X1950 CrossFire
1280x1024 - 8X Super AA

Radeon X1950 CrossFire
1280x1024 - 10X Super AA

Radeon X1950 CrossFire
1280x1024 - 12X Super AA

Radeon X1950 CrossFire
1280x1024 - 14X Super AA

There are three portions of the screen to focus on in these screenshots: the cables, the trees, and the antennas on top of the building in the distance. Our favorite modes from ATI would have to be the 10X and 14X Super-AA modes because they do an excellent job of eliminating jaggies in the cables while preserving fine detail in the trees and antennas.  NVIDIA also does a fine job with their multi-GPU SLIAA modes, though.  The SLI32X screenshot in particular looks excellent.  Focus your attention on the antennas at the upper left of the image and you'll see the increased detail. SLI32X AA is quite taxing on the hardware, however, and may cause unplayable framerates in many current titles. For older or CPU-bound games though, SLI32X AA is a reasonable option and looks great.

CrossFire AA Performance: FarCry v1.33
Upping the Number of Samples

We didn't perform a comprehensive test routine to assess the performance of all of ATI's CrossFire Super-AA modes with the X1950s, but we did run a couple of tests to get a general idea as to how the various modes perform in comparison to NVIDIA's Quad-SLI. Please note, the compositing engine used on the X1950 offers higher performance in Super-AA modes than the engine used on the early X850 XT Master cards. The X1950's compositing engine can do the final blend with each individual card running at full speed, which results in higher performance.

NVIDIA's 8xS anti-aliasing mode does not require a multi-GPU configuration to function, but because it is the highest of the standard AA modes, we've included it for comparison purposes only. As you can see, with FarCry running at 1600x1200, NVIDIA's SLI8X mode clearly offers the highest performance. Once the level of anti-aliasing is cranked above SLI8X on NVIDIA's hardware, however, ATI's modes offer the best performance. Because the compositing engine is doing a large portion of the work on the X1950, switching between the different Super-AA modes doesn't result in as large a performance hit.

Test Systems and X1300 / X1650 3DMark06

HOW WE CONFIGURED THE TEST SYSTEMS: We tested the NVIDIA-based cards used in this article on an Asus A8N32-SLI nForce4 SLIX16 chipset based motherboard. The ATI-powered cards, however, were tested on an A8R32-MVP motherboard based on the CrossFire Xpress 3200 chipset. Both systems used the same AMD Athlon 64 FX-60 dual-core processor and 2GB of low-latency Corsair XMS RAM. The first thing we did when configuring these test systems was enter each BIOS and load its "High Performance Defaults."  The hard drives were then formatted, and Windows XP Professional with SP2 was installed. When the installation was complete, we installed the latest chipset drivers available, installed all of the other drivers necessary for the rest of our components, and removed Windows Messenger from the system.  Auto-Updating and System Restore were also disabled, the hard drive was defragmented, and a 1024MB permanent page file was created on the same partition as the Windows installation. Lastly, we set Windows XP's Visual Effects to "best performance," installed all of the benchmarking software, and ran the tests.

The HotHardware Test Systems
AMD Athlon 64 FX Powered

Hardware Used:
Processor - AMD Athlon 64 FX-60 (2.6GHz x 2)

Motherboards - Asus A8N32-SLI (nForce4 SLIX16 chipset)
               Asus A8R32-MVP (ATI CrossFire Xpress 3200)

Video Cards - GeForce 7950 GX2 (x2)
              GeForce 7900 GTX (x2)
              GeForce 7900 GT
              GeForce 7600 GT
              Radeon X1950 XTX (x2)
              Radeon X1900 XTX (x2)
              Radeon X1900 XT 256MB
              Radeon X1650 Pro
              Radeon X1300 XT
              Radeon X1800 GTO

Memory - 2048MB Corsair XMS PC3200 RAM (CAS 2)

Audio - Integrated on board

Hard Drive - Western Digital "Raptor" 74GB - 10,000RPM - SATA

Relevant Software:
Operating System - Windows XP Professional SP2
Chipset Drivers - nForce Drivers v6.86
DirectX - DirectX 9.0c (August Redist.)
Video Drivers - NVIDIA Forceware v91.45
                ATI Catalyst v6.8

Benchmarks Used:
Synthetic (DX) - 3DMark06 v1.0.2
DirectX - FarCry v1.33*
DirectX - F.E.A.R. v1.07
DirectX - Half Life 2: Episode 1*
OpenGL - Prey*

* - Custom Test (HH Exclusive demo)

X1300 XT / X1650 Pro Performance Comparisons with 3DMark06 v1.0.2
Details: http://www.futuremark.com/products/3dmark06/

3DMark06
Futuremark recently launched a brand-new version of their popular benchmark, 3DMark06. The new version of the benchmark is updated in a number of ways, and now includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups the number of instructions to 512. 3DMark06 also employs much more lighting, and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.

Considering that the Sapphire Radeon X1300 XT and Radeon X1650 Pro cards share the same PCB and memory complement, and that a scant 25MHz core clock frequency is the only major difference between them that would affect performance, we were expecting the two cards to put up similar scores in all of 3DMark06's tests, and we were right.  Only a few percentage points separated the two cards in every test, thanks to the Sapphire Radeon X1300 XT's higher-than-reference clocks.  Unfortunately, we could not reliably down-clock the card to reference levels to gauge "stock" performance.  Please keep that in mind, as a reference X1300 XT card will most definitely perform below Sapphire's offering.

X1300 / X1650 HL2: Episode 1

X1300 XT/ X1650 Pro Performance Comparisons with Half-Life 2: Episode 1
Details: http://www.half-life2.com/

Half Life 2: Episode 1
Thanks to the dedication of hardcore PC gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait quite a while to get our hands on HL2.  Armed with the latest episodic update to HL2, Episode 1, we benchmarked the game with a long, custom-recorded timedemo that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 and 1,600 x 1,200 with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently, and with color correction and HDR rendering enabled in the game engine as well.

 

The Sapphire Radeon X1300 XT Overclock Edition and Radeon X1650 Pro performed similarly in our custom Half Life 2: Episode 1 benchmark.  Neither card was able to catch the GeForce 7600 GT or Radeon X1800 GTO as you can see, but they weren't meant to in the first place.  At their proposed sub-$100 price points, the X1300 XT and X1650 Pro will be about 20% - 50% less expensive than either the 7600 GT or the GTO, not to mention the 7900 GT.

X1300 / X1650 FarCry v1.33

X1300 XT / X1650 Pro Performance Comparisons with FarCry v1.33
Details: http://www.farcry.ubi.com/

FarCry
If you've been on top of the gaming scene for some time, you probably know that FarCry was one of the most visually impressive games to be released on the PC in the last few years.  Courtesy of its proprietary engine, dubbed "CryEngine" by its developers, FarCry's game-play is enhanced by Polybump mapping, advanced environment physics, destructible terrain, dynamic lighting, motion-captured animation, and surround sound. Before titles such as Half-Life 2 and Doom 3 hit the scene, FarCry gave us a taste of what was to come in next-generation 3D gaming on the PC. We benchmarked the graphics cards in this article with a fully patched version of FarCry using a custom-recorded demo run taken in the "Catacombs" area checkpoint. The tests were run at various resolutions with 4X AA and 16X aniso enabled concurrently.

Sapphire's Radeon X1300 XT OC Edition and Radeon X1650 Pro were right on top of each other in our custom FarCry benchmark.  Once again, neither card was able to approach the performance of the more expensive GTO or 7600 GT, but for under a hundred bucks, either card would be a nice upgrade from integrated graphics. 

X1300 / X1650 F.E.A.R. v1.07

X1300 XT / X1650 Pro Performance Comparisons with F.E.A.R
More Info: http://www.whatisfear.com/us/

F.E.A.R
One of the most highly anticipated titles of 2005 was Monolith's paranormal thriller F.E.A.R. Taking a look at the game's minimum system requirements, we see that you will need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card in the Radeon 9000 or GeForce4 Ti-classes or better, to adequately run the game. Using the full retail release of the game patched to v1.07, we put the graphics cards in this article through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to their maximum values, but with soft shadows disabled (Soft shadows and anti-aliasing do not work together currently). Benchmark runs were then completed at resolutions of 1,280x960 and 1,600x1,200, with anti-aliasing and anisotropic filtering enabled.

 

By now we're sure you're all starting to see the trend that's played out over the last few pages. As we've already explained, the Sapphire Radeon X1300 XT OC Edition and Radeon X1650 Pro are so similar, they perform at nearly identical levels as illustrated by the F.E.A.R. benchmark.  Only one frame per second separated the cards at either resolution.

X1300 / X1650 Prey Performance

X1300 XT / X1650 Pro Performance Comparisons with Prey
Details: http://www.prey.com/

Prey
After many years of development, Take-Two Interactive recently released the highly anticipated game Prey. Prey is based upon an updated and modified version of the Doom 3 engine, and as such performance characteristics between the two titles are very similar.  Like Doom 3, Prey is also an OpenGL game that uses extremely high-detailed textures and a plethora of dynamic lighting and shadows.  But unlike Doom 3, Prey features a fair share of outdoor environments as well.  We ran these Prey benchmarks using a custom recorded timedemo with the game set to its "High-Quality" graphics mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 with 4X AA and 16X anisotropic filtering enabled simultaneously.

 

There's not much to talk about in regard to Prey performance.  The same scenario discussed on the preceding pages plays out again here, with the Radeon X1300 XT OC Edition and Radeon X1650 Pro performing at virtually the same level at both resolutions.

X1950 / X1900 3DMark06 Performance

** Important Note About Our Benchmarks: Due to inherent limitations in DirectX 9, a dual-GeForce 7950 GX2 Quad-SLI rig won't always perform better than a standard SLI GeForce 7900 GTX setup. DirectX 9 doesn't support queuing enough back-buffers to effectively support high-performance 4-way AFR mode in D3D applications. This fact, combined with the GeForce 7950 GX2's GPUs and on-board memory being clocked lower than those of a GeForce 7900 GTX, along with Quad-SLI's increased overhead, means Quad-SLI could actually be slower than 7900 GTX SLI in some configurations. Quad-SLI will default to "AFR of SFR" mode for many Direct3D applications. And while this mode works well for some D3D apps, it is less effective with others. With OpenGL applications, however, 4-way AFR works properly and performance scales as expected. 

Performance Comparisons with 3DMark06 v1.0.2
Details: http://www.futuremark.com/products/3dmark06/

3DMark06
Futuremark recently launched a brand-new version of their popular benchmark, 3DMark06. The new version of the benchmark is updated in a number of ways, and now includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups the number of instructions to 512. 3DMark06 also employs much more lighting, and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.

In an effort to make our graphs a little easier to digest, we've highlighted the cards being introduced today in gold and separated the single-card configurations from the multi-card configurations.  As you can see, in the single-card tests, the new Radeon X1950 XTX is eclipsed only by NVIDIA's dual-GPU GeForce 7950 GX2.  And the 256MB Radeon X1900 XT is significantly faster than the GeForce 7900 GT.  Looking at the multi-card results, however, ATI's CrossFire X1950 and X1900 configurations are at the top of the heap - at least as far as 3DMark06 is concerned. 

If we focus on the Shader Model 2.0 results, the breakdown is similar, but in this test the GeForce 7900 GTX SLI rig posts the highest score.  We use the term "highest" loosely here, however, as the 7900 GTX SLI configuration "leads" the X1950 XTX CrossFire configuration by only 1 point.

The results from 3DMark06's HDR / Shader Model 3.0 tests look much like the overall results above. In the single-card tests, nothing touches the X1950 or 7950 GX2, and in the multi-GPU tests, ATI's high-end CrossFire rigs finish in first and second place.  NVIDIA's Quad-SLI doesn't scale very well in this synthetic DirectX benchmark, however.  The results from our in-game tests don't necessarily jibe with these, as you'll see on the pages ahead.

X1950 / X1900 HL2: Episode 1

Performance Comparisons with Half-Life 2: Episode 1
Details: http://www.half-life2.com/

Half Life 2: Episode 1
Thanks to the dedication of hardcore PC gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait quite a while to get our hands on HL2.  Armed with the latest episodic update to HL2, Episode 1, we benchmarked the game with a long, custom-recorded timedemo that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 and 1,600 x 1,200 with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently, and with color correction and HDR rendering enabled in the game engine as well.

 

The results from our custom Half-Life 2: Episode 1 benchmark were a mixed bag.  In the single-card tests, the new Radeon X1950 XTX and GeForce 7950 GX2 traded victories depending on the resolution.  The Radeon was best at the lower resolution and the GeForce was stronger at the higher res.  The new 256MB Radeon X1900 XT also performed well, besting the GeForce 7900 GT by a sizable margin.

We also had some interesting results in our multi-card tests.  The Quad-SLI configuration did not scale properly and actually scored lower than a single GX2.  The GeForce 7900 GTX SLI rig, however, scaled as one would expect.  We suspect this is an issue NVIDIA will be addressing in a future driver release.  As it stands today though, ATI's high-end CrossFire configurations perform much better in Episode 1.

X1950 / X1900 FarCry v1.33

Performance Comparisons with FarCry v1.33
Details: http://www.farcry.ubi.com/

FarCry
If you've been on top of the gaming scene for some time, you probably know that FarCry was one of the most visually impressive games to be released on the PC in the last few years.  Courtesy of its proprietary engine, dubbed "CryEngine" by its developers, FarCry's game-play is enhanced by Polybump mapping, advanced environment physics, destructible terrain, dynamic lighting, motion-captured animation, and surround sound. Before titles such as Half-Life 2 and Doom 3 hit the scene, FarCry gave us a taste of what was to come in next-generation 3D gaming on the PC. We benchmarked the graphics cards in this article with a fully patched version of FarCry using a custom-recorded demo run taken in the "Catacombs" area checkpoint. The tests were run at various resolutions with 4X AA and 16X aniso enabled concurrently.

Our custom FarCry benchmark had the GeForce 7950 GX2 finishing well ahead of any other single card we tested. The Radeon X1950 XTX finished close behind, but the GX2 was clearly faster at both resolutions.  The 256MB Radeon X1900 XT fared better versus its main competition, the GeForce 7900 GT, besting the GT by double-digit framerates in both tests.

The multi-card tests continued to follow the same trend. Somewhat surprisingly, the Quad-SLI rig did not scale as one would expect it to, and the GeForce 7900 GTX SLI rig was faster.  Please keep in mind that this is due to the DirectX limitation we mentioned earlier.  Had we used SLIAA for this test, as opposed to the in-game anti-aliasing option, Quad-SLI would have outpaced the GTX SLI rig.  The Radeon X1950 and X1900 CrossFire configurations once again ruled the roost, finishing in first and second, but with all of the multi-GPU configurations posting triple-digit, or near triple-digit framerates, the performance deltas here aren't earth shattering.

X1950 / X1900 F.E.A.R. v1.07

Performance Comparisons with F.E.A.R
More Info: http://www.whatisfear.com/us/

F.E.A.R
One of the most highly anticipated titles of 2005 was Monolith's paranormal thriller F.E.A.R. Taking a look at the game's minimum system requirements, we see that you will need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card in the Radeon 9000 or GeForce4 Ti-classes or better, to adequately run the game. Using the full retail release of the game patched to v1.07, we put the graphics cards in this article through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to their maximum values, but with soft shadows disabled (Soft shadows and anti-aliasing do not work together currently). Benchmark runs were then completed at resolutions of 1,280x960 and 1,600x1,200, with anti-aliasing and anisotropic filtering enabled.

 

If you look back at our F.E.A.R. performance numbers from the GeForce 7950 GX2 launch, you'll see that ATI has done an admirable job optimizing their drivers for this game. Back in June, a Radeon X1900 XTX CrossFire rig posted 66 frames per second in this test when the resolution was set to 1,600x1,200.  Now, the same rig puts up 75 FPS thanks to the improvements in the Catalyst v6.8 drivers.  Regardless of ATI's progress, the GeForce 7950 GX2 is clearly superior to the Radeon X1950 in the single-card tests.  And in the multi-card tests, Quad-SLI scaled fairly well which resulted in a one-two finish for NVIDIA here.

The 256MB Radeon X1900 XT continued to outpace the GeForce 7900 GT, and the Radeon X1950 XTX was able to overtake the GeForce 7900 GTX where the Radeon X1900 XTX could not, but the GeForce 7950 GX2, whether using one or two cards, was clearly the dominant performer with F.E.A.R.

X1950 / X1900 Prey Performance

Performance Comparisons with Prey
Details: http://www.prey.com/

Prey
After many years of development, Take-Two Interactive recently released the highly anticipated game Prey. Prey is based upon an updated and modified version of the Doom 3 engine, and as such performance characteristics between the two titles are very similar.  Like Doom 3, Prey is also an OpenGL game that uses extremely high-detailed textures and a plethora of dynamic lighting and shadows.  But unlike Doom 3, Prey features a fair share of outdoor environments as well.  We ran these Prey benchmarks using a custom recorded timedemo with the game set to its "High-Quality" graphics mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 with 4X AA and 16X anisotropic filtering enabled simultaneously.

 

NVIDIA has traditionally dominated ATI in regard to OpenGL performance, especially in games that use the Doom 3 engine.  Prey is one of the newest titles to use the Doom 3 engine, and as expected NVIDIA's flagship GeForce 7950 GX2 easily outpaced the Radeon X1950 XTX here.  What's interesting to note, however, is that the X1950 XTX, and the 512MB Radeon X1900 XTX for that matter, finished way ahead of the GeForce 7900 GTX in our custom Prey benchmark.  By working closely with Take-Two Interactive and optimizing its Catalyst v6.8 drivers, ATI has made its performance with the Doom 3 engine much stronger than it historically has been.

Things are much closer in the multi-GPU tests, but ultimately there is only one change in the rankings.  Quad-SLI is clearly the fastest configuration at both resolutions with Prey.  The 7900 GTX SLI and Radeon X1950 XTX CrossFire rigs trade second place finishes depending on the resolution, and the X1900 XTX CrossFire rig brings up the rear -- but not by much.  Only 2 - 6 frames per second separated the 7900 GTX SLI and X1900 XTX CrossFire configurations.

XHD Resolutions - HL2: Episode 1

On the next few pages, we re-tested all of the graphics cards at XHD, or Extreme High-Definition, wide-screen resolutions. As prices continue to drop on large LCD panels, these resolutions will only become more popular. And with the power of some of today's multi-GPU graphics configurations, it sometimes takes running a game at these high resolutions to significantly tax the hardware.

Performance Comparisons with Half-Life 2: Episode 1 - XHD Resolutions
Details: http://www.half-life2.com/

Half Life 2: Episode 1
Thanks to the dedication of hardcore PC gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait quite a while to get our hands on HL2.  Armed with the latest episodic update to HL2, Episode 1, we benchmarked the game with a long, custom-recorded timedemo that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,920 x 1,200 and 2,560 x 1,600 with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently. Color correction and HDR rendering were enabled for these tests as well.

Regardless of the resolution or test configuration, ATI's new Radeon X1950 XTX and 256MB Radeon X1900 XT cards compared favorably to their NVIDIA-powered counterparts. The 256MB Radeon X1900 XT significantly outpaced the GeForce 7900 GT, and hung right alongside the 7900 GTX. The X1950 XTX fell victim to the GeForce 7950 GX2 at 1,920x1,200, but at 2,560x1,600 only a fraction of a frame per second separated the two cards.

If we shift our attention to the multi-GPU tests, the DirectX issue we mentioned in our test system notes rears its ugly head again. The Quad-SLI rig's performance did not scale upwards, where all the others did. The high-end CrossFire rigs were clearly the fastest here, followed by the GeForce 7900 GTX SLI rig and then finally the dual-GeForce 7950 GX2 Quad-SLI setup.

XHD Resolutions: F.E.A.R.

Performance Comparisons with F.E.A.R - XHD Resolutions
More Info: http://www.whatisfear.com/us/

F.E.A.R
One of the most highly anticipated titles of 2005 was Monolith's paranormal thriller F.E.A.R. Taking a look at the minimum system requirements, we see that you will need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card, that is a Radeon 9000 or GeForce4 Ti-class or better, to adequately run the game. Using the full retail release of the game patched to v1.07, we put the graphics cards in this review through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to the maximum values, but with soft shadows disabled (soft shadows and anti-aliasing do not work together currently). Benchmark runs were then completed at resolutions of 1,920x1,200 and 2,560x1,600, with anti-aliasing and anisotropic filtering enabled.

The results are mixed with F.E.A.R. running at XHD resolutions. Once again, the 256MB Radeon X1900 XT outperformed the GeForce 7900 GT, but the X1950 XTX faced much stiffer competition at the high-end. At 1,920x1,200, the single GeForce 7950 GX2 and dual-GeForce 7950 GX2 Quad-SLI configurations were the fastest by far.  With the resolution cranked up to 2,560x1,600 though, the scales tipped in favor of the Radeon X1950 XTX.  In a single card configuration, the Radeon X1950 XTX posted 28 FPS, about 10% higher than the GeForce 7950 GX2. And the Radeon X1950 XTX CrossFire rig was also faster than the Quad-SLI system at the higher resolution, but this time only 1 FPS separated the two.

XHD Resolutions: Prey

Performance Comparisons with Prey - XHD Resolutions
Details: http://www.prey.com/

Prey
After many years of development, Take-Two Interactive recently released the highly anticipated game Prey. Prey is based upon an updated and modified version of the Doom 3 engine, and as such, performance characteristics between the two titles are very similar.  Like Doom 3, Prey is an OpenGL-based game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows. But unlike Doom3, Prey features some outdoor environments as well. We ran these Prey benchmarks using a custom demo with the game set to its "High-Quality" graphics mode, at resolutions of 1,920 x 1,200 and 2,560 x 1,600 with 4X AA and 16X anisotropic filtering enabled simultaneously.

Our XHD Prey benchmarks are all about the GeForce 7950 GX2.  In the single-card tests, the GeForce 7950 GX2 handily outpaced the Radeon X1950 XTX at both resolutions. The same basically holds true in the multi-GPU tests, although the Quad-SLI system's margins of victory are slightly smaller.  When compared to the GeForce 7900 GTX though, the Radeon X1950 XTX fares much better.  The Radeon X1950 XTX, whether running as a single card or linked up with a master card in a CrossFire configuration, outperforms the GeForce 7900 GTX or 7900 GTX SLI configurations regardless of resolution.

AVIVO Video Performance

For our next round of tests we took another look at Digital Video processing performance between the two competing core GPU architectures. "PureVideo" technology is at work here for NVIDIA and "AVIVO" drives the ATI card.

WMV-HD Decode Acceleration
PureVideo Performance Explored

To illustrate CPU utilization when playing back WMV HD content, we used the Performance Monitor built into Windows XP. Using the data provided by Windows Performance Monitor, we created a log file that sampled the percent of CPU utilization every second, while playing back the 1080p version of the "Coral Reef Adventure" video available for download on Microsoft's WMVHD site. The CPU utilization data was then imported into Excel to create the graph below. The graph shows the CPU utilization for a GeForce 7950 GX2 and a Radeon X1950 XTX using Windows Media Player 10, patched using the DXVA updates posted on Microsoft's web site (Updates Available Here).
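As an aside, this kind of once-per-second CPU utilization logging doesn't require Performance Monitor specifically. The sketch below is a hypothetical, minimal Python equivalent (it assumes the third-party psutil package and is not the tool we actually used); it writes a CSV that can be charted in a spreadsheet just like our Performance Monitor log:

    # Minimal CPU-utilization logger: samples total CPU usage once per second
    # and writes it to a CSV file for later charting in a spreadsheet.
    import csv
    import psutil  # assumed available; install with 'pip install psutil'

    def log_cpu_utilization(path="cpu_log.csv", duration_s=300):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["second", "cpu_percent"])
            for second in range(duration_s):
                # cpu_percent(interval=1) blocks for one second and returns the
                # average utilization across all cores over that second.
                writer.writerow([second, psutil.cpu_percent(interval=1)])

    if __name__ == "__main__":
        log_cpu_utilization()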


Average CPU Utilization (Athlon 64 FX-60 @ 2.6GHz x 2)

GeForce 7950 GX2:  21.38%
Radeon X1950 XTX:  19.25%

Both cards performed similarly in our WMV-HD playback test. With the exception of a few seconds at the beginning of the video, where the GeForce 7950 GX2 used about 10% to 15% more CPU resources than the X1950 XTX, there is not much separating the two cards throughout the graph.  While playing back this particular video, the Radeon X1950 XTX averaged approximately 2% less CPU utilization than the 7950 GX2.  In the past, using a different video, we've seen the opposite.  For all intents and purposes, AVIVO and PureVideo are on par with one another in terms of CPU utilization while playing back HD content.

DVD Video Quality: HQV Benchmark with PowerDVD 7
http://www.hqv.com/benchmark.cfm

Next up, we have the HQV DVD video benchmark from Silicon Optix. HQV is comprised of a sampling of SD video clips and test patterns that have been specifically designed to evaluate a variety of interlaced video signal processing tasks, including decoding, de-interlacing, motion correction, noise reduction, film cadence detection, and detail enhancement. As each clip is played, the viewer is required to "score" the image based on a predetermined set of criteria. The numbers listed below are the sum of the scores for each section. We played the HQV DVD using the latest version of CyberLink's excellent PowerDVD 7 Deluxe, with hardware acceleration and PureVideo extensions enabled.

We give a slight edge to ATI in the HQV benchmark, but in our opinion the results are closer than the numbers make them appear. Due to the way HQV's scoring system is designed, we gave ATI an edge in both of the noise reduction tests, which resulted in an overall benchmark victory. We like the fact that NVIDIA's drivers give users the ability to tweak the noise reduction and edge enhancement algorithms, however, so at least currently, PureVideo is more flexible than AVIVO.  The results in this benchmark have changed drastically over the last few months though, and HQV's usefulness is dwindling as both AVIVO and PureVideo approach the perfect score of 130.

Overclocking The New Radeons

For our next set of performance metrics, we spent a little time overclocking the Radeon X1950 XTX and the 256MB Radeon X1900 XT using the clock frequency slider available within ATI's drivers, under the "Overdrive" tab.

Overclocking the Radeon X1950 XTX & 256MB Radeon X1900 XT
(Fastest 3D Video Cards) + Overclocking = Even Faster Cards

To find each card's peak core and memory frequencies, we slowly raised their respective sliders until we began to see visual artifacts on-screen while running a game or benchmark, or until our test system was no longer stable.


Radeon X1950 XTX Overclocked Speeds: 695MHz Core / 1080MHz (2.160GHz DDR) Memory
256MB Radeon X1900 XT Overclocked Speeds: 640MHz Core / 725MHz (1.45GHz DDR) Memory

We had limited success overclocking the 256MB Radeon X1900 XT.  It seemed like our particular sample was right on the edge of being stable.  We were only able to increase its core clock speed by about 15MHz and its memory wouldn't budge.  If we increased the memory frequency by even a few MHz, our test system blue-screened almost immediately after applying the new settings.

Our experience overclocking the Radeon X1950 XTX was much better.  Even though the X1950 XTX already features the fastest memory to ever grace a graphics card, we were able to take its memory all the way up to 2.16GHz.  And the card's core hit a respectable 695MHz.

While overclocked, we re-ran a couple of benchmarks to see just how much each card's performance had changed. The 256MB Radeon X1900 XT's scores hardly changed because of its modest overclock, but the Radeon X1950 XTX gained a few frames per second in both tests.



Power Consumption and Noise

Total System Power Consumption, Acoustics & Temperatures
It's All About the Watts and Decibels

We have a few final data points to cover before bringing this article to a close. Throughout all of our benchmarking, we monitored how much power our test systems were consuming using a power meter, and also took some notes regarding their noise output. Our goal was to give you all an idea as to how much power each configuration used and to explain how loud the configurations were under load. Please keep in mind that we were testing total system power consumption here, not just the power being drawn by the video cards alone.

Surprisingly, the Radeon X1950 XTX, despite its super-fast 2GHz memory, consumed less power than a 512MB Radeon X1900 XTX according to our trusty power meter.  At like clock speeds, GDDR4 memory should consume less power than GDDR3, but considering the X1950 XTX's higher clocks, we were expecting the X1900 XTX and X1950 XTX to consume roughly the same amount of power.  As you can see, this did not turn out to be the case.  As expected though, the 256MB Radeon X1900 XT consumed less power than its 512MB counterpart.

We should also mention that we've included the NVIDIA-based power consumption numbers here only as a reference point. Although the graph makes it appear as though the GeForce cards consume more power than the ATI cards, that's not necessarily the case.  In previous testing, we've found that the nForce SLIX16 chipset consumes roughly 30 - 40 more watts than the CrossFire Xpress 3200 chipset. This difference accounts for the higher numbers put up by the NVIDIA platform above.

We'd also like to talk a bit about the Radeon X1950 XTX's new cooler and the noise it generates. Unfortunately (or fortunately depending on your perspective), our sound level meter did not pick up the Radeon X1950 XTX over the noise generated by the components in our test system.  We will say that the new cooler on the X1950 XTX is much quieter than the one used on the X1900 XTX, especially when running at higher RPM.  Where the Radeon X1900 XTX sounded like a hair-dryer at high-RPM, the X1950 XTX produces only a low-pitched hum.

Our Summary and Conclusion

Performance Summary: There are a number of comparisons that are important to consider in this summary. First, let's talk about the 256MB Radeon X1900 XT. The 256MB Radeon X1900 XT performed very well, besting a GeForce 7900 GT virtually across the board, and it finished right alongside a GeForce 7900 GTX in many of our tests. If this card is actually released at the price ATI is claiming it will be, we suspect it will be quite popular. The Radeon X1950 XTX was also an excellent performer. In a single-card configuration, the Radeon X1950 XTX was faster than a GeForce 7900 GTX in almost every test. The GeForce 7950 GX2 was faster than the Radeon X1950 XTX more often than not, however. The same scenario plays out in the multi-GPU/multi-card configurations as well. A Radeon X1950 XTX CrossFire configuration was faster than a GeForce 7900 GTX SLI setup in the vast majority of the tests, but Quad-SLI came out on top most of the time. The tests were much closer than expected though, considering the battle at the top was between two ATI GPUs and four NVIDIA GPUs.

ATI's new line-up of Radeons seems to put the company in a good position as we approach the all-important fourth quarter of the year. Although we were unable to test the Radeon X1650 Pro and X1300 XT in time for launch, their specifications and projected price points should allow them to compare favorably to NVIDIA's current entry-level offerings. We're told cards are on the way to the lab "as we speak," however, so we'll be able to tell for certain how the Radeon X1650 Pro and X1300 XT stack up very shortly.

** Update September 5, 2006: Radeon X1300 XT and Radeon X1650 Pro images and benchmarks are now available on pages 4-11 of this article.

** Update September 26, 2006: As of today, ATI has yet to deliver either the Radeon X1300 XT or Radeon X1650 Pro in major quantities, and unfortunately, the cards that are available are priced higher than what was quoted by ATI below.  We have not been able to find Radeon X1300 XT cards for sale, and the X1650 Pro cards that are out there are priced around $130 as of today.


Projected Pricing According To ATI

The 256MB Radeon X1900 XT put up some very good numbers considering its projected $279 MSRP as well.  At that price, the 256MB Radeon X1900 XT falls somewhere in between a GeForce 7900 GT and 7900 GTX. Its performance was clearly better than the 7900 GT's though, which makes it easy to justify the increased expense.  The Radeon X1950 XTX is in a similar position. With a proposed MSRP of $449, the Radeon X1950 XTX's price should land somewhere in between the GeForce 7900 GTX and GeForce 7950 GX2.  Looking at our benchmark results, and accounting for its 2GHz GDDR4 memory and redesigned cooler, that's a fair position to be in. The Radeon X1950 XTX out-muscled a 7900 GTX most of the time, and traded victories with the GX2. The Radeon X1950 XTX is a strong performer, and is arguably the fastest single-GPU card we've ever tested.

The $64,000 question is whether or not these new Radeons actually arrive with street prices that approach their MSRPs. Although ATI is announcing the cards today, they won't be available in any kind of quantity until September 14 at the earliest, so we won't be able to check "street" prices until then.  If they arrive on-time and can be purchased at, or below, MSRP, ATI's line-up looks great heading into the holiday buying season.  If they don't show at retail on time, or if the cards are considerably more expensive than MSRP, however, NVIDIA's offering becomes all the more attractive.  As usual, we'll know more in the coming weeks.

Pros:
  • New Quiet Cooler
  • FAST GDDR4 Memory
  • Great Performance
  • Improved CrossFire Scaling
  • Aggressive MSRPs
  • Excellent Image Quality

Cons:
  • Won't be available for almost a month
  • Unknown "Street" Prices
  • CrossFire Dongle




Content Property of HotHardware.com