ATI Radeon X1900 XTX And CrossFire: R580 Is Here
Date: Jan 24, 2006
Section:Graphics/Sound
Author: Marco Chiappetta
Intro, Specs & Related Info

In October of '05, ATI officially unveiled a new family of graphics cards based on the company's R520 GPU core and its derivatives. With the R520, ATI introduced a new "Ultra-Threaded" architecture, along with a new Ring Bus memory controller and a more powerful video engine dubbed AVIVO, among numerous other things. Unfortunately, the R520 and its derivatives, which were the first members of what eventually became known as the X1K family of products, were a long time coming. The move to a 90nm manufacturing process, in conjunction with a simple circuit bug that was replicated throughout the chip, resulted in ATI missing almost an entire product cycle. For about four months, rival NVIDIA sat alone atop the 3D graphics food chain with its GeForce 7800 GTX and 7800 GT, and rumors of ATI's inevitable demise were discussed in many a forum.

Those rumors were, of course, unwarranted and simply the result of a major force in the industry hitting a few speed bumps during the development of a new, very complex product. In meetings and in conversations with representatives from ATI during the R520's development, we never got the impression that the company was desperate. Frustrated and disappointed sometimes, yes. But certainly not desperate.

And during the X1K launch event, and in subsequent discussions since then, we got the impression ATI was supremely confident, which was surprising considering the problems the company had to contend with during the second half of last year. Today, though, we know why. This morning, less than four months after the introduction of the R520, ATI is unveiling a new GPU and four new products based upon it: the Radeon X1900 XTX, the X1900 XT, an All-In-Wonder Radeon X1900, and an X1900 CrossFire Master card.

The Radeon X1900 GPU was code-named R580 during its development. It is very similar to the R520 used on the Radeon X1800 XT, but with a couple of major changes: specifically, a threefold increase in the number of pixel shader processors, plus some updates that increase performance at ultra-high resolutions and with a certain type of soft shadow. We'll go into more detail on the pages ahead.

ATI Radeon X1900 & CrossFire
Features & Specifications
Features - ATI Radeon X1900
• 380+ million transistors on a 90nm fabrication process
• Ultra-threaded architecture with fast dynamic branching
• Forty-Eight pixel shader processors
• Eight vertex shader processors
• 256-bit 8-channel GDDR3/GDDR4 memory interface
• Native PCI Express x16 bus interface
• Dynamic Voltage Control

Ring Bus Memory Controller
• 512-bit internal ring bus for memory reads
• Programmable intelligent arbitration logic
• Fully associative texture, color, and Z/stencil cache designs
• Hierarchical Z-buffer with Early Z test
• Lossless Z Compression (up to 48:1)
• Fast Z-Buffer Clear
• Z/stencil cache optimized for real-time shadow rendering
• Optimized for performance at high display resolutions, including widescreen HDTV resolutions


Ultra-Threaded Shader Engine
• Support for Microsoft DirectX 9.0 Shader Model 3.0 programmable vertex and pixel shaders in hardware
• Full speed 128-bit floating point processing for all shader operations
• Up to 512 simultaneous pixel threads
• Dedicated branch execution units for high performance dynamic branching and flow control
• Dedicated texture address units for improved efficiency
• 3Dc+ texture compression
  - High quality 4:1 compression for normal maps and two-channel data formats
  - High quality 2:1 compression for luminance maps and single-channel data formats
• Multiple Render Target (MRT) support
• Render to vertex buffer support
• Complete feature set also supported in OpenGL 2.0

Avivo Video and Display Engine
• High performance programmable video processor
  - Accelerated MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264 decoding (including DVD/HD-DVD/Blu-ray playback), encoding & transcoding
  - DXVA support
  - De-blocking and noise reduction filtering
  - Motion compensation, IDCT, DCT and color space conversion
  - Vector adaptive per-pixel de-interlacing
  - 3:2 pulldown (frame rate conversion)
• Seamless integration of pixel shaders with video in real time
• HDR tone mapping acceleration
  - Maps any input format to 10 bit per channel output
• Flexible display support
  - Dual integrated dual-link DVI transmitters
  - DVI 1.0 / HDMI compliant and HDCP ready
  - Dual integrated 10 bit per channel 400 MHz DACs
  - 16 bit per channel floating point HDR and 10 bit per channel DVI output
  - Programmable piecewise linear gamma correction, color correction, and color space conversion (10 bits per color)
  - Complete, independent color controls and video overlays for each display
  - High quality pre- and post-scaling engines, with underscan support for all outputs
  - Content-adaptive de-flicker filtering for interlaced displays
  - Xilleon™ TV encoder for high quality analog output
  - YPrPb component output for direct drive of HDTV displays
  - Spatial/temporal dithering enables 10-bit color quality on 8-bit and 6-bit displays
  - Fast, glitch-free mode switching
  - VGA mode support on all outputs
• Compatible with ATI TV/Video encoder products, including Theater 550
Advanced Image Quality Features
• 64-bit floating point HDR rendering supported throughout the pipeline
  - Includes support for blending and multi-sample anti-aliasing
• 32-bit integer HDR (10:10:10:2) format supported throughout the pipeline
  - Includes support for blending and multi-sample anti-aliasing
• 2x/4x/6x Anti-Aliasing modes
  - Multi-sample algorithm with gamma correction, programmable sparse sample patterns, and centroid sampling
  - New Adaptive Anti-Aliasing feature with Performance and Quality modes
  - Temporal Anti-Aliasing mode
  - Lossless Color Compression (up to 6:1) at all resolutions, including widescreen HDTV resolutions
• 2x/4x/8x/16x Anisotropic Filtering modes
  - Up to 128-tap texture filtering
  - Adaptive algorithm with Performance and Quality options
• High resolution texture support (up to 4k x 4k)


CrossFire
• Multi-GPU technology
• Four modes of operation:
  - Alternate Frame Rendering (maximum performance)
  - Supertiling (optimal load-balancing)
  - Scissor (compatibility)
  - Super AA 8x/10x/12x/14x (maximum image quality)
• Program compliant




Radeon X1900 XTX


Radeon X1900 CrossFire Edition

There is a wealth of information related to the launch of the Radeon X1900 available on our site that will help you get familiar with the GPU's architecture and key features. The Radeon X1900 has a number of features in common with other cards in the Radeon X1K family of products, and we've detailed the features of the Radeon Xpress 200 chipset and CrossFire in a few past articles as well.

At a minimum, if you haven't already done so, we recommend reading our CrossFire Multi-GPU technology preview, the Radeon Xpress 200 preview, the X1K family review, and the Radeon X1800 CrossFire evaluation. In those four articles, we cover the vast majority of the features offered by the Radeon X1900. There is quite a bit of background information in those articles that laid the foundation for what we're going to showcase here today.

The Radeon X1900 Family

As we mentioned earlier, ATI is releasing four Radeon X1900 cards today. Pictured below are the new flagship Radeon X1900 XTX and a Radeon X1900 CrossFire Master card. A lower clocked Radeon X1900 XT and an All-In-Wonder Radeon X1900 256MB card are being introduced as well, but they haven't arrived in the lab just yet. The XT looks just like the XTX, though, and the All-In-Wonder X1900 is a dead-ringer, at least physically, for the AIW X1800 XL reviewed here.

The ATI Radeon X1900 XTX
      

      

The card pictured here is a 512MB Radeon X1900 XTX (MSRP $649). At its heart is an R580 GPU comprised of approximately 380 million transistors, built using a .09 micron manufacturing process. The GPU on this card is equipped with 48 pixel shader processors, 8 vertex shader processors, and a 256-bit, 8-channel GDDR3/GDDR4 memory interface. The internal Ring Bus memory controller is 512 bits wide, however.

The Radeon X1900 XTX's core is clocked at an impressive 650MHz and its memory runs at a robust 1.55GHz. To sustain these high clock speeds, the Radeon X1900 XTX sports the same beefy dual-slot cooler used on the Radeon X1800 XT. The PCB is much larger than those of X800 series Radeons, and the card is equipped with Volterra's multi-phase voltage regulator chipset (under the thin, red heatsink). The lower clocked Radeon X1900 XT (625MHz / 1.45GHz) will look similar, but will be priced initially with an MSRP of $549.

      

We were curious to see how large the Radeon X1900 GPU really was after hearing that the core is composed of over 380 million transistors, so we popped the heatsink off of the Radeon X1900 Master Card to take a closer look. Using a trusty old ruler, we found the Radeon X1900 GPU to be roughly 18.5mm x 18.5mm, or about 342mm2. By comparison, a GeForce 7800 GTX, which is built on TSMC's .11 micron line, is a bit larger. We measured a GeForce 7800 GTX (G70) at approximately 19mm x 18.5mm, or 351.5mm2. And we found an X1800 to be roughly 288mm2. If yields at TSMC are good, it could be more cost efficient for ATI to produce X1900s than it currently is for NVIDIA to make the GTX, which could push street prices down in time. Then again, packing 512MB of 1.55GHz GDDR3 RAM on a flagship card won't be cheap for the foreseeable future. NVIDIA also has plans to shrink the G70 down using a 90nm process, which could then give NVIDIA a significant edge.
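Put another way, here is the rough arithmetic behind those die-size figures (a quick sketch based on our own ruler measurements, so treat the percentages as approximate):

    #include <stdio.h>

    /* Rough die-area comparison based on our ruler measurements (approximate). */
    int main(void)
    {
        double r580 = 18.5 * 18.5;   /* Radeon X1900 (R580), 90nm: ~342 mm^2 */
        double g70  = 19.0 * 18.5;   /* GeForce 7800 GTX (G70), 110nm: ~351.5 mm^2 */
        double r520 = 288.0;         /* Radeon X1800 (R520), 90nm: ~288 mm^2 */

        printf("R580: %.1f mm^2\n", r580);
        printf("G70 : %.1f mm^2 (about %.0f%% larger than R580)\n",
               g70, (g70 / r580 - 1.0) * 100.0);
        printf("R520: %.1f mm^2 (R580 is about %.0f%% larger)\n",
               r520, (r580 / r520 - 1.0) * 100.0);
        return 0;
    }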


The Radeon X1900 Master Card looks very similar to the X1900 XTX on the surface. The GPU on our X1900 Master Edition card was clocked at 625MHz and its memory was clocked at 720MHz (1440MHz DDR), just shy of the 650MHz / 1550MHz of the Radeon X1900 XTX. The approximately 100MHz memory clock disparity should have a relatively small effect on performance, so we won't dwell on it here. And CrossFire doesn't require a matched pair of video cards to function, so the difference in memory clock speed shouldn't pose a compatibility problem either.

The ATI Radeon X1900 CrossFire Edition
      

       

To bring CrossFire to the X1900, ATI used the same compositing engine introduced with the X1800. This second generation compositing engine is similar but superior to the one used on the older Radeon X850 XT Master cards. If you remember, because X850 cards were equipped with single-link DVI outputs, X850 CrossFire was limited to a maximum resolution of 1600x1200 with a lowly refresh rate of 60Hz. Graphics cards in the X1K family of products are equipped with dual-link DVI outputs, however. Having dual-link DVI outputs means more bandwidth, which the new compositing engine capitalizes on to offer higher resolutions than the first generation CrossFire implementation.

   

The compositing engine on the Radeon X1900 XT CrossFire Edition Master card consists of a handful of chips. The biggest chip in the group, in the middle of the picture, is a Xilinx Spartan XC3S400 FPGA (Field Programmable Gate Array), a more capable FPGA than the one ATI used to enable CrossFire on the X850 XT. The XC3S400 is the chip that's programmed to do the actual compositing work. In total, it has roughly 400K logic gates, which is fairly low-end by today's standards for an FPGA, and at a cost below $7, the overall retail price point of the board isn't affected too adversely. We should note that it's upgradeable via firmware as well, so ATI could theoretically incorporate more features into X1900 CrossFire moving forward. To the left of the Xilinx FPGA is the flash ROM chip that contains the necessary programming and configuration code.

Above and to the left of the Xilinx FPGA in the picture are a pair of Silicon Image SiI 163B TMDS receivers. These are the chips that receive data from the slave card, namely the information transmitted from the slave card's dual-link DVI output through the custom dongle pictured above. Each Silicon Image SiI 163B TMDS receiver is clocked at 165MHz and is capable of processing images at resolutions of up to 1600x1200 @ 60Hz, but because the two work in tandem, the maximum supported resolution jumps to 2560x1600. The three smaller chips to the right are a pair of SiI PanelLink TMDS transmitters and an Analog Devices RAMDAC, which drive any displays connected to the outputs on the CrossFire dongle.

As you can see, although the Radeon X1900 has 48 pixel shader processors, its raw pixel fillrate is only marginally higher than an X1800 XT's, because it has the same number of ROPs (Raster Operation units) and texture units: 16. The X1900 XTX's faster memory and core should give it an advantage over the X1800 XT, though, even when the 48 pixel shader processors aren't being fully utilized.
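As a rough illustration of why that gap is so small, the peak theoretical fillrates work out like this (a simple sketch assuming one pixel per ROP per clock; real-world throughput depends on many other factors):

    #include <stdio.h>

    /* Peak theoretical pixel fillrate = ROPs x core clock (one pixel per ROP per clock). */
    int main(void)
    {
        double x1900_xtx = 16 * 650e6;   /* 16 ROPs @ 650MHz */
        double x1800_xt  = 16 * 625e6;   /* 16 ROPs @ 625MHz */

        printf("Radeon X1900 XTX: %.1f Gpixels/s\n", x1900_xtx / 1e9);
        printf("Radeon X1800 XT : %.1f Gpixels/s\n", x1800_xt  / 1e9);
        return 0;
    }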

Architectural Details

The Radeon X1900 (R580) is architecturally very similar to the Radeon X1800 (R520), so we recommend perusing this article for a more comprehensive look at the features common to both GPUs, like the 512-bit Ring Bus Memory Controller, Shader Model 3.0 support and AVIVO.


ATI Radeon X1900 Architectural Overview Diagram

The high-level architectural block diagram above highlights the main feature of the Radeon X1900: its 48 pixel shader processors. Unlike the Radeon X1800, which has 16 pixel shader processors, and the GeForce 7800 GTX, which has 24, the Radeon X1900 has a full 48 pixel shader processors in the first stage of the 3D pipeline.

With the R520, ATI de-coupled the individual stages in the 3D pipeline, which gave the company the ability to have an asymmetrical number of pixel shader processors and ROPs, or "Render Backends". So basically, what ATI did with the R580 was take the existing R520 architecture and triple the number of pixel shader processors in the first stage of the pipeline. The rest of the chip is largely unchanged. The R580 still has the same 8 vertex shaders, 16 texture units, 16 ROPs, and the same memory controller. The large number of pixel shader processors gives the X1900 a ton of resources for executing pixel shader code, but in situations where pixel shading is not a limiting factor in performance, the X1900 should perform similarly to the X1800 XT. Just about every new gaming title, and many titles released over the last couple of years, makes use of pixel shaders, however, so the additional 60 million transistors used on the X1900 to increase the number of shader processors should certainly be put to good use, especially with newer gaming titles that make heavy use of longer, more complex shaders.

The pixel shader processors are grouped together in quads. The Radeon X1900 has twelve of these quads for executing pixel shader code. The components of each quad and their respective computational capabilities are outlined above. Over and above the increased number of pixel shader processors, ATI has made a couple of other advancements with the R580 as well. We'll talk about those changes next.

More On The Architecture

You can't read about a new game these days without hearing about the quality of the shadows produced by its engine. Doom 3 and Quake 4 immediately come to mind as two games that make heavy use of shadows to produce a realistic looking game world.

A widely used method for producing shadows is shadow mapping. This technique works by first rendering the scene from the point of view of a light source. The results are not displayed, but instead stored in a special shadow map texture, where each value represents the distance of the nearest object to the light source. The scene is then rendered from the gamer's viewpoint, and each pixel is checked against the shadow map to determine if there are any objects between it and the light source. If the pixel is farther from the light than the distance stored in the shadow map, an object sits between it and the light, so the pixel is in shadow and will be darkened; otherwise it is lit normally.

Basic shadow maps typically create hard-edged shadows, which isn't very realistic. In the real world, shadows usually have much softer edges. To create soft shadows in games, the shadow map is usually filtered in some way. The filtering can be done by taking X number of samples, and then combining them in a pixel shader. Generally, the more samples used, the better the resulting soft shadows.  Doing this requires a large number of texture lookups, however, which can hurt performance.
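To make the idea concrete, here is a minimal, CPU-side sketch of that kind of filtered shadow-map lookup (a generic percentage-closer-filtering loop in plain C, purely illustrative; it is not ATI's hardware path or any particular engine's shader code):

    /* Illustrative percentage-closer filtering: average several shadow-map
       depth compares to soften a shadow edge.  Every tap costs one texture
       lookup, which is why more taps means softer shadows but lower
       performance. */

    static float sample_depth(const float *map, int size, float u, float v)
    {
        int x = (int)(u * (size - 1));            /* nearest-neighbor lookup */
        int y = (int)(v * (size - 1));
        if (x < 0) x = 0; else if (x >= size) x = size - 1;
        if (y < 0) y = 0; else if (y >= size) y = size - 1;
        return map[y * size + x];
    }

    float soft_shadow(const float *shadow_map, int size,
                      float u, float v, float pixel_depth)
    {
        static const float offs[4][2] = { {-1,-1}, {1,-1}, {-1,1}, {1,1} };
        const float radius = 1.0f / (float)size;  /* one-texel filter radius */
        float lit = 0.0f;

        for (int i = 0; i < 4; ++i) {             /* 4 taps; real engines may use many more */
            float d = sample_depth(shadow_map, size,
                                   u + offs[i][0] * radius,
                                   v + offs[i][1] * radius);
            lit += (pixel_depth <= d) ? 1.0f : 0.0f;
        }
        return lit * 0.25f;                       /* fraction of taps that are lit */
    }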

To speed up the texture lookups necessary for using this technique for soft shadows, the Radeon X1900 includes a new texture sampling feature called Fetch4. It works by exploiting the fact that most textures are composed of color values, each with four components (Red, Green, Blue, and Alpha or transparency). The texture units are designed to sample and filter all four components from one texture address simultaneously. However, when looking up different types of textures with single-component values (such as shadow maps), Fetch4 instead allows four values from adjacent addresses to be sampled simultaneously. This effectively increases the texture sampling rate by a factor of 4. To exploit the Fetch4 feature though, specific code needs to be used in the game engine.
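Conceptually, the difference Fetch4 makes looks something like the sketch below (our own illustration of the idea, not ATI's actual hardware or driver interface): a normal fetch returns the four color channels of one texel, while a Fetch4-style fetch returns one channel from each of the four adjacent texels a shadow-map filter actually needs.

    /* A texture unit normally returns the four components (R,G,B,A) of one
       texel per address.  Fetch4 instead returns one component from each of
       the four texels in a 2x2 footprint (exactly the values a shadow-map
       filter needs), so one lookup replaces four.  Illustrative sketch only. */

    typedef struct { float r, g, b, a; } texel4;

    /* Ordinary fetch: the four channels of a single RGBA texel. */
    texel4 fetch_rgba(const texel4 *color_map, int size, int x, int y)
    {
        return color_map[y * size + x];
    }

    /* Fetch4-style fetch: four adjacent single-channel (e.g. depth) values
       packed into one result, i.e. one lookup instead of four. */
    texel4 fetch4_single_channel(const float *depth_map, int size, int x, int y)
    {
        texel4 out;
        out.r = depth_map[(y    ) * size + (x    )];
        out.g = depth_map[(y    ) * size + (x + 1)];
        out.b = depth_map[(y + 1) * size + (x    )];
        out.a = depth_map[(y + 1) * size + (x + 1)];
        return out;
    }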

Another enhancement made to the R580 should help with performance at ultra high resolutions, think 1920x1200 and above. All Radeon GPUs support a Hierarchical Z feature that works by detecting pixels that will be hidden in the final rendered image and discarding them before any further pixel processing takes place. To function, though, this feature requires high speed on-chip memory, or a buffer, and this memory is of a limited size. Rendering at resolutions higher than this integrated buffer was designed to support can reduce the effectiveness of Hierarchical Z. The Radeon X1900 incorporates 50% more on-chip memory for Hierarchical Z than the Radeon X1800, which means its performance should not drop off as dramatically at very high resolutions.
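The idea is easy to sketch: the screen is divided into tiles, and for each tile the chip keeps the farthest depth value already written. If an incoming primitive's nearest depth within a tile is still farther away, every covered pixel in that tile can be rejected before any shading happens. The sketch below is our own simplified illustration (the 8x8 tile size and the exact test are assumptions for illustration only), and it also shows why the on-chip buffer requirement grows with resolution:

    #include <stddef.h>

    /* Simplified hierarchical-Z sketch (depth convention: larger = farther).
       One "farthest depth so far" entry is kept per TILE x TILE pixel block
       in fast on-chip memory. */

    #define TILE 8   /* assumed tile size, for illustration only */

    typedef struct {
        int    tiles_x, tiles_y;
        float *tile_max_depth;   /* farthest depth already written in each tile */
    } hier_z;

    /* Conservative early reject: if the primitive's nearest depth in this tile
       is still behind the farthest depth already stored, every pixel it could
       touch here is hidden, so the whole tile is skipped before pixel shading. */
    int hiz_reject_tile(const hier_z *hz, int tx, int ty, float prim_min_depth)
    {
        return prim_min_depth >= hz->tile_max_depth[ty * hz->tiles_x + tx];
    }

    /* One entry per tile: the higher the resolution, the more entries the
       on-chip buffer must hold to cover the whole screen. */
    size_t hiz_entries(int width, int height)
    {
        return (size_t)((width + TILE - 1) / TILE)
             * (size_t)((height + TILE - 1) / TILE);
    }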

AVIVO: Video Performance

We've written specifically about the AVIVO video engine incorporated into ATI's new X1K family of products in a couple of previous articles (here and here), so we won't go into the specifics again. We will, however, re-evaluate its performance using an X1900 and ATI's latest software suite. When the X1800 launched, the AVIVO video engine wasn't being fully exploited. With the X1900, however, that is not the case.

DVD Video Quality: HQV Benchmark
Details: http://www.hqv.com/benchmark.cfm

For our first test, we used the HQV DVD video benchmark from Silicon Optix. HQV is comprised of a sampling of video clips and test patterns that have been specifically designed to evaluate a variety of interlaced video signal processing tasks, including decoding, de-interlacing, motion correction, noise reduction, film cadence detection, and detail enhancement. As each clip is played, the viewer is required to "score" the image based on a predetermined set of criteria. The numbers listed below are the sum of the scores for each section. We played the HQV DVD using the latest version of NVIDIA's PureVideo Decoder on the GeForce 7800 GT, and as recommended by ATI, we played the DVD on the X1900 using Intervideo's WinDVD 7 Platinum, with hardware acceleration enabled.

When the X1K family of products first hit store shelves, their score in this benchmark was below 40 points. With the latest set of Catalyst drivers, though, video playback quality is vastly improved. The biggest boost to ATI's score comes by way of the eight individual film cadence tests and the noise reduction tests. For playing back DVDs, or similar digital video files, ATI's X1K family of cards are the products to beat.

WMV-HD Decode Acceleration
So, what does Avivo do for me, today?

Microsoft's Windows Media Video 9 (WMV9) format was accepted by SMPTE and the HD-DVD consortium as a new HD format. The Windows Movie Maker software, which comes bundled with Windows XP, makes it easy for consumers to edit and save their favorite videos, which are saved in the .WMV format. Most of today's high-end GPUs include dedicated hardware to accelerate the playback of WMV and WMV-HD content for fluid, full frame rate video, even on systems with entry- to mid-level CPUs. Previous generations of GPUs were not able to accelerate WMV9 decoding, so HD WMV9 content would often drop frames when played back on legacy hardware.

To document CPU utilization when playing back WMV HD content, we used the performance monitor built into Windows XP. Using the data provided by performance monitor, we created a log file that sampled the percentage of CPU utilization every second while playing back the 1080p version of the "MP10 Digital Life" video available on Microsoft's WMVHD site. The data was then imported into Excel to create the graphs below. The graphs show the CPU utilization for a GeForce 7800 GTX and a Radeon X1900 XTX using Windows Media Player 10, patched using the DXVA update posted on Microsoft's web site (Update Available Here).
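If you'd rather not use Excel, averaging the samples from such a log is trivial to script. A minimal sketch (assuming the log has been stripped down to one utilization percentage per line; Performance Monitor's actual CSV export has a header row and quoted fields that would need to be removed first):

    #include <stdio.h>

    /* Average a log of CPU-utilization samples, one percentage per line on stdin.
       Sketch only; a real perfmon CSV export needs its header and quoting stripped. */
    int main(void)
    {
        double sample, sum = 0.0;
        long   count = 0;

        while (scanf("%lf", &sample) == 1) {
            sum += sample;
            ++count;
        }
        if (count > 0)
            printf("Average CPU utilization over %ld samples: %.2f%%\n",
                   count, sum / count);
        return 0;
    }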

Average CPU Utilization - MP10 Digital Life
ATI Radeon X1900 XTX: 38.23%
NVIDIA GeForce 7800 GTX: 40.03%

With this particular video, ATI had a slight advantage in CPU utilization. The new Radeon X1900 XTX used an average of just over 38% of our CPU's resources, while the GTX averaged just over 40%. Neither card had any trouble playing this video, and we didn't witness any dropped frames. We should note that CPU utilization will vary depending on the video being played back. Had we chosen a different video, NVIDIA could have come out on top here.

The Radeon X1900 XTX and the rest of the X1K family are also capable of accelerating H.264 and VC-1 video. This will be very important once Blu-ray and HD DVD discs hit sometime this year. NVIDIA currently doesn't have this ability, but a future driver update should expose this feature in the PureVideo engine.

AA Image Quality Analysis

Prior to benchmarking the new Radeon X1900, we spent some time analyzing its in-game image quality versus a GeForce 7800 GTX. First, we used the "background 1" map in Half-Life 2 to get a feel for how each card's anti-aliasing algorithm affected the scene.

Image Quality Analysis: Standard Anti-Aliasing Modes
NVIDIA vs. ATI
NVIDIA GeForce 7800 GTX Screenshots

GeForce 7800 GTX
1280x1024 - No AA

GeForce 7800 GTX
1280x1024 - 2x AA

GeForce 7800 GTX
1280x1024 - 4X AA

GeForce 7800 GTX
1280x1024 - 8xS AA

ATI Radeon X1900 XTX Screenshots

Radeon X1900 XTX
1280x1024 - No AA

Radeon X1900 XTX
1280x1024 - 2x AA

Radeon X1900 XTX
1280x1024 - 4x AA

Radeon X1900 XTX
1280x1024 - 6x AA

In this first batch of screenshots, 16X anisotropic filtering was enabled in conjunction with the various levels of anti-aliasing offered by each card. As you can see, the "No AA" screen shots look quite similar on both cards, as do the 2X AA screen shots. In the 4X AA screen shots though, you can pick out some subtle differences. The cables at the top of the screen are softer and more realistic looking on the X1900, but the tree loses some detail. In contrast, NVIDIA seems to do a better job with the antennas on the top of the buildings. The ATI 6XAA vs. NVIDIA 8xS AA shots reveal similar differences, with NVIDIA having a clear edge in detail, especially in the trees where ATI's multi-sample only algorithm has minimal impact.

Image Quality Analysis: Adaptive/Transparency Anti-Aliasing Modes
Still NVIDIA vs. ATI

In this next batch of screen shots, our goal is to compare NVIDIA's and ATI's various single-card anti-aliasing modes when used in conjunction with each company's transparency or adaptive AA techniques. Please note that we used NVIDIA's super-sample transparency AA here, as we've been unable to find a clear in-game example where MSTAA has a measurable impact on image quality.

NVIDIA GeForce 7800 GTX Screenshots with Transparency AA

GeForce 7800 GTX
1280x1024 - No AA

GeForce 7800 GTX
1280x1024 - 2X Trans. AA

GeForce 7800 GTX
1280x1024 - 4X Trans. AA

GeForce 7800 GTX
1280x1024 - 8xS Trans. AA

ATI Radeon X1900 XTX Adaptive AA Screenshots

Radeon X1900 XTX
1280x1024 - No AA

Radeon X1900 XTX
1280x1024 - 2X AAA

Radeon X1900 XTX
1280x1024 - 4X AAA

Radeon X1900 XTX
1280x1024 - 6X AAA

As you browse through each progressing level of AA, you'll see a similar trend to the images above. As the AA level increases, visible jagged edges are decreased. In these specific tests, NVIDIA clearly has an advantage, because fine detail just seems to disappear on the Radeon X1900. Hopefully a future driver update will resolve this issue on the Radeon X1900.

CrossFire AA Performance

Here we have yet another set of screen shots for your inspection. In this batch of images, we want to compare NVIDIA's and ATI's dual-GPU anti-aliasing techniques; NVIDIA calls its implementation SLIAA and ATI calls its SuperAA. These modes are only enabled when using a pair of cards together, either in SLI or CrossFire mode, because each card renders the same frame before the two are blended together. For more details on these anti-aliasing modes, take a look at this article on SLIAA and this one outlining the new features introduced with ATI's CrossFire.
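The underlying idea is straightforward: both GPUs render the same frame, each with a different sub-pixel sample pattern, and the two resolved images are then averaged, roughly doubling the effective sample count per pixel. A conceptual sketch (our own illustration; the real blend is performed by dedicated hardware on the master card, or by NVIDIA's SLI logic):

    /* Conceptual dual-GPU AA blend: each card renders the same frame with a
       different (offset) multi-sample pattern, and the resolved frames are
       averaged, roughly doubling the effective samples per pixel. */
    void blend_super_aa(const float *frame_gpu0, const float *frame_gpu1,
                        float *blended, int value_count)
    {
        for (int i = 0; i < value_count; ++i)
            blended[i] = 0.5f * (frame_gpu0[i] + frame_gpu1[i]);
    }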

Image Quality Analysis: SLI & CrossFire Anti-Aliasing Modes
Still More NVIDIA vs. ATI
NVIDIA GeForce 7800 GTX SLI AA Screenshots

GeForce 7800 GTX SLI
1280x1024 - No AA

GeForce 7800 GTX SLI
1280x1024 - SLI8X AA

GeForce 7800 GTX SLI
1280x1024 - SLI16X AA

ATI Radeon X1900 XT CrossFire Super AA Screenshots

Radeon X1900 CrossFire
1280x1024 - 8X Super AA

Radeon X1900 CrossFire
1280x1024 - 10X SuperAA

Radeon X1900 CrossFire
1280x1024 - 12X Super AA

Radeon X1900 CrossFire
1280x1024 - 14X Super AA

There are three portions of the screen to focus on in these screen shots: the cables, the tree, and the scaffolding under the bridge in the distance. Our favorite modes are unquestionably ATI's 10X and 14X SuperAA modes, because they do an excellent job of eliminating jaggies in the cables while preserving fine detail under the bridge, although the trees lose more and more detail as the level of AA is increased.


We didn't perform a comprehensive test routine to assess the performance of all of ATI's CrossFire SuperAA modes with the X1900, but we did run a couple of tests to get a general idea as to how the various modes perform. The new revision of the compositing engine introduced on the X1800 XT Master Card, and used on the X1900, offers higher performance in SuperAA modes than the engine used on the X850 XT, because the compositing engine can do the final blend with each individual card running at full speed.

CrossFire AA Performance: Half Life 2 & FarCry
Upping the Number of Samples

 

 

As you can clearly see, as the level of anti-aliasing is increased, NVIDIA's SLIAA has a much more dramatic effect on performance. SLI16X AA is playable in both games at 1280x1024, but ATI's 14X AA offers much better frame rates. Basically, if a single Radeon X1900 XT is capable of playable frame rates in a game with a certain level of anti-aliasing, adding a master card and running in CrossFire mode will offer the same frame rates with virtually double the amount of anti-aliasing applied to the scene.



GeForce 7800 GTX SLI
1280x1024 - SLI16X AA
w/ 16X Aniso & Trans. AA

Radeon X1900 CrossFire
1280x1024 - 14X Super AA
w/ 16X HQ Aniso & Adaptive AA

As a final treat for the image quality fanatics among you, we snapped off two final screen shots with each platform configured for its best image quality. On our GeForce 7800 GTX SLI rig, we enabled 16X SLIAA, transparency anti-aliasing, and maxed out the anisotropic filtering. On the X1900 CrossFire rig, we enabled 14X SuperAA with high quality adaptive anti-aliasing and high quality 16X anisotropic filtering. The SLI rig has the best fine detail under the bridge and in the trees, but ATI does a better job with the aniso. Look at the sloping hill to the right and you'll see what we mean. Believe it or not, both platforms produced very good frame rates in Half Life 2 at these settings and resolution, too. The CrossFire rig could run the game at 1280x1024 with these "ultra high quality" settings at over 125 FPS, whereas the SLI rig's performance hovered around 85 FPS.

Anisotropic Filtering Quality

With this next set of screen shots, we followed a procedure similar to the one outlined on the two previous pages to evaluate the effect of ATI's new anisotropic filtering techniques on a given scene. The screen shots below are from Half-Life 2's "background 4" map. We've again compared similar settings using the GeForce 7800 GTX and a Radeon X1900 XTX. For this set of screen shots, anti-aliasing was disabled to isolate the effect each card's respective anisotropic filtering algorithm had on the images.

Image Quality Analysis: Anisotropic Filtering
Standard & High Quality Aniso
NVIDIA GeForce 7800 GTX Screenshots

GeForce 7800 GTX
1280x1024 - No Aniso

GeForce 7800 GTX
1280x1024 - 4x Aniso

GeForce 7800 GTX
1280x1024 - 8x Aniso

GeForce 7800 GTX
1280x1024 - 16x Aniso

ATI Radeon X1900 XTX Standard Aniso Screenshots

Radeon X1900 XTX
1280x1024 -
No Aniso

Radeon X1900 XTX
1280x1024 -
4x Aniso

Radeon X1900 XTX
1280x1024 -
8x Aniso

Radeon X1900 XTX
1280x1024 -
16x Aniso

 

ATI Radeon X1900 XTX High Quality Aniso Screenshots

Radeon X1900 XTX
1280x1024 - No Aniso

Radeon X1900 XTX
1280x1024 -
4x HQ Aniso

Radeon X1900 XTX
1280x1024 -
8x HQ Aniso

Radeon X1900 XTX
1280x1024 -
16x HQ Aniso

When perusing the images above, pay special attention to the road and the hill to the lower right, as these areas are where anisotropic filtering has the most impact. In the "No Aniso" shots at the top, which have only trilinear filtering enabled, the blurring in the road and on the hill is clearly evident.

However, with 8X anisotropic filtering enabled, the detail in the road is dramatically enhanced. If you open each of the standard shots individually and skip through them quickly, you're likely to notice a bit more detail in the shots taken with the GeForce 7800 GTX versus those taken with the Radeon using its standard angle-dependent anisotropic filtering mode, disregarding artifacts produced by the JPG compression.

The same seemed to be true when inspecting the 16x aniso images. Of course, image quality analysis is subjective by its nature, but based on these images, we think the GeForce 7800 GTX has slightly better image quality as it relates to anisotropic filtering when standard "optimized" aniso is used. The new Radeon X1K family of graphics cards offers another "high quality" anisotropic mode that doesn't have the same angular dependency as ATI's previous generation of cards. The new high-quality aniso mode offered by the X1K family applies nearly the same level of filtering regardless of the angle. Overall, the effect of enabling ATI's high-quality aniso mode is positive, as it does an even better job of sharpening textures and increasing the detail level. To fully appreciate ATI's high-quality aniso mode, though, you've got to see it in action; still screen shots don't convey the full effect. If you focus on the furthest part of the road, and on the hill, you can see some areas where HQ aniso does a better job than NVIDIA's filtering.

Anisotropic Filtering Performance

When testing the performance of ATI's different SuperAA modes a couple of pages back, we stepped through each successive level of AA while benchmarking FarCry and Half Life 2 at a resolution of 1280x1024. The results on this page were attained using a similar methodology, but we altered the level of anisotropic filtering being applied to the images instead, and used FarCry and F.E.A.R. running at a much higher resolution. Anti-aliasing was disabled throughout this batch of tests to isolate the effect anisotropic filtering alone was having on performance.

ATI Anisotropic Filtering Performance: FarCry & F.E.A.R.
Sharpening Up Those Textures

 

 

As we demonstrated on the previous page, ATI's high-quality anisotropic filtering modes offer arguably the best anisotropic filtering available in a consumer level graphics card. And the performance data on this page shows that there is virtually no reason to have it disabled and use the lower quality setting. ATI's high-quality aniso modes perform just barely below the comparable "standard" modes in both games, and have a minimal impact on performance versus just using trilinear filtering.

Test System & ShaderMark v2.1

HOW WE CONFIGURED THE TEST SYSTEM: We used two different test systems for this article. We tested our NVIDIA based cards on an Asus A8N32-SLI, an nForce4 SLIX16 chipset based motherboard, and tested the ATI based cards on an ECS KA1 MVP Radeon Xpress 200 motherboard. Both systems were powered by an AMD Athlon 64 FX-55 processor and 1GB of low-latency Corsair XMS RAM. The first thing we did when configuring these systems was enter each BIOS and load the "High Performance Defaults." The hard drives were then formatted, and Windows XP Professional with SP2 was installed. When the installation was complete, we installed the chipset drivers, installed all of the other necessary drivers for the rest of our components, and removed Windows Messenger from the system. Auto-Updating and System Restore were also disabled, the hard drive was defragmented, and a 768MB permanent page file was created on the same partition as the Windows installation. Lastly, we set Windows XP's Visual Effects to "best performance," installed all of the benchmarking software, and ran the tests.

The HotHardware Test System
AMD Athlon 64 FX Powered

Hardware Used:
Processor - AMD Athlon 64 FX-55 (2.6GHz)
Motherboards - Asus A8N32-SLI (nForce4 SLIX16 chipset) / ATI Reference CrossFire MB (ATI Radeon Xpress 200 CF Edition)
Video Cards - Radeon X1900 XTX (+ CF Edition), Radeon X1900 XT, Radeon X1800 XT (+ CF Edition), GeForce 7800 GTX 512MB (x2), GeForce 7800 GTX (x2)
Memory - 1024MB Corsair XMS PC3200 RAM (CAS 2)
Audio - Integrated on board
Hard Drive - Western Digital "Raptor" 36GB - 10,000RPM - SATA

Relevant Software:
Operating System - Windows XP Pro SP2 (Patched)
Chipset Drivers - nForce Drivers v6.82
DirectX - DirectX 9.0c
Video Drivers - NVIDIA Forceware v81.98 / ATI Catalyst v5.13/v6.1

Benchmarks Used:
Synthetic (DX) - ShaderMark v2.1 (Build 1.30a)
Synthetic (DX) - 3DMark06 v1.0.2
DirectX - Splinter Cell: Chaos Theory v1.05
DirectX - F.E.A.R. v1.02
DirectX - FarCry v1.33*
DirectX - Half Life 2*
OpenGL - Doom 3 v1.3 (Single Player)*
OpenGL - Quake 4 v1.0.5.2*

* - Custom Test (HH Exclusive demo)

Performance Comparisons with ShaderMark v2.1 (Build 1.30a)
Details: http://www.shadermark.de

Shadermark v2.1
For most of our recent video card-related articles, we've stuck to using games, or benchmarks based on actual game engines, to gauge overall performance. One problem with using this approach exclusively is that some advanced 3D features may not be fully tested, because the game engines currently in use tend not to use the absolute latest features available within cutting-edge graphics hardware. In an effort to reveal raw shader performance, which is nearly impossible to do using only the games on the market today, we've incorporated ToMMTi-System's ShaderMark v2.1 into our benchmarking suite for this article. ShaderMark is a DirectX 9.0 pixel shader benchmark that exclusively uses code written in Microsoft's High Level Shading Language (HLSL) to produce its imagery.


ShaderMark Test Conducted @ 1600x1200x85Hz

If you look back at our Radeon X1K family launch article, you'll see that the GeForce 7800 GTX skunked the Radeon X1800 XT in all but six of ShaderMark's performance tests. This time around, though, we used the much higher clocked 512MB GeForce 7800 GTX and pitted it against the new Radeon X1900 XTX. As you can see, the X1900's 48 pixel shader processors propel ATI's latest well ahead of NVIDIA's best in all but three tests. Something interesting to note is that in the Dual-Layer tests with Flow Control, ATI actually loses one test by about 17% and wins the other by only 3%, even though ATI has been touting the Radeon X1K family's ability to handle dynamic flow control as one of the GPU's strong suits.

3DMark06 v1.0.2

Performance Comparisons with 3DMark06 v1.0.2
Details: http://www.futuremark.com/products/3dmark06/

3DMark06
Futuremark recently launched a brand-new version of their popular benchmark, 3DMark06. The new version of the benchmark is updated in a number of ways, and now includes not only Shader Model 2.0 tests, but Shader Model 3.0 and HDR tests as well. Some of the assets from 3DMark05 have been re-used, but the scenes are now rendered with much more geometric detail and the shader complexity is vastly increased as well. Max shader length in 3DMark05 was 96 instructions, while 3DMark06 ups the number of instructions to 512. 3DMark06 also employs much more lighting, and there is extensive use of soft shadows. With 3DMark06, Futuremark has also updated how the final score is tabulated. In this latest version of the benchmark, SM 2.0 and HDR / SM3.0 tests are weighted and the CPU score is factored into the final tally as well.

3DMark06's overall score has the X1800 XT falling prey to the 7800 GTX, but the Radeon X1900 XT, XTX, and CrossFire configuration best NVIDIA's competitive offerings by a few hundred points. We've also got the individual graphics scores, however, which tell a much more interesting story.

In 3DMark06's Shader Model 2.0 tests, which are basically updated versions of the "game" tests that were part of 3DMark05, a pair of 512MB 7800 GTXs running in SLI mode finishes on top, followed by X1900 CrossFire. A single 512MB GTX trails the X1900, however, which hints at SLI's better scaling in this benchmark.

The new HDR/Shader Model 3.0 tests in 3DMark06 tell yet another story. Here, nothing touches the Radeon X1900, and the only edge NVIDIA has is a pair of GTXs over X1800 XT CrossFire, and even then the margin of victory is only 10 points. It seems, at least according to 3DMark06, that Shader Model 2.0 performance should be competitive between NVIDIA and ATI, but that ATI should have an edge in Shader Model 3.0 performance. Let's see how things pan out in our actual "real-world" game tests.

Splinter Cell: Chaos Theory v1.05

Performance Comparisons with Splinter Cell: Chaos Theory v1.05
Details: http://www.splintercell3.com/us/

SC: Chaos Theory
Based on a heavily modified version of the Unreal Engine, enhanced with a slew of DX9 shaders, lighting and mapping effects, Splinter Cell: Chaos Theory is gorgeous with its very immersive, albeit dark, environment. The game engine has a Shader Model 3.0 code path that allows the GeForce 6 & 7 Series of cards, and the new X1000 family of cards, to really shine, and a recent patch has implemented a Shader Model 2.0 path for ATI's X8x0 generation of graphics hardware. For these tests we enabled the SM 3.0 path on all of the cards we tested. However, High Dynamic Range rendering was disabled so that we could test the game with anti-aliasing enabled (a future patch should enable AA with HDR on the X1K family). We benchmarked the game at resolutions of 1,280 x 1,024 and 1,600 x 1,200, both with and without anti-aliasing and anisotropic filtering.

 

NVIDIA's 512MB GeForce 7800 GTX was dominant in this benchmark before today, but ATI's new Radeon X1900 XTX reclaims the overall performance lead in Splinter Cell: Chaos Theory. Both the X1900 XT and X1900 XTX were able to outpace the GeForce 7800 GTX by margins ranging from about 4% to 10%, depending on the test configuration, and X1900 CrossFire was faster than a pair of 7800 GTX 512s running in SLI mode as well. Framerates were competitive when no anti-aliasing or anisotropic filtering was used, but with AA and aniso, ATI had a more pronounced advantage.

F.E.A.R. v1.02

Performance Comparisons with F.E.A.R
More Info: http://www.whatisfear.com/us/

F.E.A.R
One of the most highly anticipated titles of 2005 was Monolith's paranormal thriller F.E.A.R. Taking a look at the minimum system requirements, we see that you will need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card, Radeon 9000 or GeForce4 Ti-class or better, to adequately run the game. Using the full retail release of the game patched to v1.02, we put the graphics cards in this review through their paces to see how they fared with a popular title. Here, all graphics settings within the game were set to their maximum values, but with soft shadows disabled (soft shadows and anti-aliasing currently do not work together). Benchmark runs were then completed at resolutions of 1280x960 and 1600x1200, with and without anti-aliasing and anisotropic filtering enabled.

 

We had some interesting results with the F.E.A.R. benchmark. In a single-card configuration, the new Radeon X1900 XTX was the best performer, outpacing NVIDIA's 512MB GeForce 7800 GTX in all but one test (1280x960 No AA / No Aniso). Generally speaking, without any anti-aliasing or anisotropic filtering NVIDIA's performance is strong, while ATI's performance is more formidable when additional pixel processing is used. In a dual-card configuration, however, a pair of 512MB GTXs running in SLI mode offers the best performance. CrossFire doesn't scale very well in this game; as you can see, even a single X1900 is faster than a pair of X1800 XTs. We suspect things could change with future driver optimizations, but for now NVIDIA's still got an edge here as far as multi-GPU performance goes.

FarCry v1.33

Performance Comparisons with FarCry v1.33
Details: http://www.farcry.ubi.com/

FarCry
If you've been on top of the gaming scene for some time, you probably know that FarCry was one of the most visually impressive games to be released on the PC last year. Courtesy of its proprietary engine, dubbed "CryEngine" by its developers, FarCry's game-play is enhanced by Polybump mapping, advanced environment physics, destructible terrain, dynamic lighting, motion-captured animation, and surround sound. Before titles such as Half-Life 2 and Doom 3 hit the scene, FarCry gave us a taste of what was to come in next-generation 3D gaming on the PC. We benchmarked the graphics cards in this article with a custom-recorded demo run taken in the "Catacombs" area checkpoint, at various resolutions without anti-aliasing or anisotropic filtering enabled, and then again with 4X AA and 16X aniso enabled concurrently.

 

FarCry was essentially CPU bound in the lower resolution test, with all cards and configurations performing within a few frames per second of one another. Technically, NVIDIA had the edge in the default graphics configuration, but when anti-aliasing and aniso were used, ATI pulled ahead. Every card performed well though, hovering around the 100 FPS mark. The same basically holds true at the higher resolution, but the performance deltas between the default tests and the tests with additional pixel processing were much larger. At the higher resolution, the Radeon X1900 XTX and XT are clearly the highest performers with anti-aliasing and anisotropic filtering enabled, besting NVIDIA's 512MB GeForce 7800 GTX by about 13%. A pair of 512MB GTXs running in SLI mode had a slight edge over an X1900 CrossFire configuration, but the difference is minimal and more a result of a CPU limitation than a bottleneck in the graphics sub-system.

Half Life 2

Performance Comparisons with Half-Life 2
Details: http://www.half-life2.com/

Half Life 2
Thanks to the dedication of hardcore PC gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait until November '04 to get our hands on this classic. We benchmarked Half-Life 2 with a long, custom-recorded timedemo in the "Canals" map, that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without any anti-aliasing or anisotropic filtering and with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently.

 

There isn't very much to talk about in regard to Half Life 2 performance. All of the cards we tested, whether running in a single-card configuration or partnered with a similar card for dual-GPU operation, performed very well at over 120 frames per second. Even though it was powered by the second fastest single-core CPU available, our test system was CPU bound when running Half Life 2 with all but the 256MB GeForce 7800 GTX, which fell just behind all of the other cards. Any mid to high-end graphics card available today is able to run this game at high resolutions with all of the eye candy turned up, so let's just call this one a virtual tie and move on.

Doom 3 v1.3

Performance Comparisons with Doom 3
Details: http://www.doom3.com/

Doom 3
id Software's games have long been pushing the limits of 3D graphics. Quake, Quake 2, and Quake 3 were all instrumental in the success of 3D accelerators on the PC. Now, many years later, with virtually every new desktop computer shipping with some sort of 3D accelerator, id is at it again with the visually stunning Doom 3. Like most of id's previous titles, Doom 3 is an OpenGL game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows. We ran this batch of Doom 3 single player benchmarks using a custom demo with the game set to its "High-Quality" mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without anti-aliasing enabled and then again with 4X AA and 8X aniso enabled simultaneously.

 

By now you all know that Doom 3 performance has been a strong point for NVIDIA ever since the game was originally released. Today, NVIDIA still holds onto the overall lead in performance for this game, but only because we tested 512MB GeForce 7800 GTXs alongside their 256MB counterparts. The 512MB GeForce 7800 GTXs were about 10% and 25% faster than the Radeon X1900 XTX or X1900 CrossFire. But the 512MB GeForce 7800 GTX is also nearly impossible for end-users to get ahold of at the moment.  Remove it from the equation, and the X1900 pulls ahead of the 256MB GeForce 7800 GTX by a few percentage points when anti-aliasing and anisotropic filtering are used.

Quake 4 v1.0.5.2

Performance Comparisons with Quake 4
Details: http://www.quake4game.com/

Quake 4
id Software, in conjunction with developer Raven, recently released the latest addition to the wildly popular Quake franchise, Quake 4. Quake 4 is based upon an updated and slightly modified version of the Doom 3 engine, and as such performance characteristics between the two titles are very similar. Like Doom 3, Quake 4 is also an OpenGL game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows, but unlike Doom 3, Quake 4 features some outdoor environments as well. We ran these Quake 4 benchmarks using a custom demo with the game set to its "High-Quality" mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without anti-aliasing enabled and then again with 4X AA and 8X aniso enabled simultaneously.

 

Our custom Quake 4 benchmarks tell basically the same story as the Doom 3 tests. The 512MB GeForce 7800 GTX was the top performer, in both single-card and dual-card SLI configurations, followed by the new Radeon X1900 XTX / XT. ATI's new flagship cards are able to outpace the 256MB GeForce 7800 GTXs when anti-aliasing and anisotropic filtering are used, but the GeForces come out on top without the additional pixel processing.

Overclocking the Radeon X1900

Overclocking the Radeon X1900 XTX & X1900 CrossFire
(Fastest 3D Video Cards) + Overclocking = Even Faster Cards

For our next set of performance metrics, we spent a little time overclocking the Radeon X1900 XTX and the XTX / X1900 CrossFire Combo using the clock frequency slider available within ATI's drivers, under the "Overdrive" tab.

We'd like to note that overclocking video cards in a dual-GPU configuration is somewhat more difficult than overclocking a single card. When overclocking a pair of cards, your peak overclock will be limited by whichever card overclocks the lowest.  If card A's core can hit 700MHz, and card B's core can hit 690MHz, both cards end up being clocked at 690MHz, even though card B still has some clock speed headroom to spare. Each card is not currently overclocked individually, although this may change in future driver releases.


CrossFire Overclocked Speeds: 631MHz Core / 797MHz (1.59GHz DDR) Memory
X1900 XTX Overclocked Speeds:
689MHz Core / 797MHz (1.59GHz DDR) Memory

 




We had limited success while overclocking in CrossFire mode, but had much better luck with the single Radeon X1900 XTX. In CrossFire mode, we were only able to increase the Master card's core clock speed to 631MHz, up from 625MHz. The memory, however, was much more cooperative and peaked at 797MHz (1.59GHz DDR). By itself, the Radeon X1900 XTX hit a peak core clock speed of 689MHz, with the same 797MHz memory speed. It seems there is a bit of clock speed headroom left in the X1900, considering we squeezed an additional 39MHz out of the core, so we suspect that with more updated overclocking tools, cranking up core clock speeds in CrossFire mode will be possible as well.

While overclocked, we re-ran a couple of benchmarks to see what kind of additional performance we gained by raising the cards' core and memory clock speeds. In a single-card configuration, the Radeon X1900 XTX's 3DMark06 score jumped by 175 points, and its frame rate in our high-resolution Doom 3 benchmark went up by 3.2 FPS. In CrossFire mode, we saw somewhat smaller gains though. With the pair of X1900s overclocked, the 3DMark06 score increased by less than 100 points, and Doom 3's frame rate went up by only 2.8 frames per second.

Power Consumption, Noise & Temps

Total System Power Consumption, Acoustics & Temperatures
It's All About the Watts and Decibels

We have a few final data points to cover before bringing this article to a close. Throughout all of our benchmarking, we monitored how much power our ATI based CrossFire test system was consuming using a power meter, and also took some notes regarding its noise output and temperatures. Our goal was to give you all an idea as to how much power each configuration used and to explain how loud the configurations were under load. Please keep in mind that we were testing total system power consumption here, not just the power being drawn by the video cards alone.

We set aside our Radeon Xpress 200 reference board in favor of a brand new ECS motherboard, so the numbers presented here should not be compared with previous articles. As you can see, while idling, all of the single card configurations consumed similar amounts of power, which should be expected considering all of the cards run at similar clock speeds, have the same memory complement, and are equipped with the same 2D engine. The dual-card CrossFire configurations consumed about 30 more watts than any single card while idling. Under load, however, the CrossFire rigs consumed much more power than any single card, and the X1900s used much more power than the X1800 XT. Clearly, anyone considering an X1900 or X1900 CrossFire should also make sure they've got a capable power supply. We used an Enermax 565 Watt model throughout all of our testing and didn't have any trouble.

We'd also like to talk a bit about the noise associated with running a pair of X1900s in a single system. As was the case when we first evaluated X1800 CrossFire, when we initially set up our test machine and powered it up for the first time, it was clearly the loudest system that had ever graced the lab. Upon initial startup, the fans on both X1900s rotated at their maximum speeds, which resulted in a significant amount of noise. Once the drivers were installed, however, the fans on both cards spun down dramatically and the test system became relatively quiet. In fact, the system was quiet enough to work with daily, without distraction, and throughout our entire testing process, the fans never spun up to their maximum speeds again. To put it simply, except for the initial shock of hearing two X1900s running at full-bore when we first turned on the machine, our X1900 CrossFire test system was relatively quiet, and we would not consider excessive noise an issue at all during normal use.

Lastly, we took some temperature readings using a Mastercool Laser Thermometer at various points around the X1900s to see how much heat the cards were throwing off. In a single card configuration, we found the hottest part of the card to be the area on the back of the card directly behind the GPU. That spot hit temperatures around 58.5°C, and the external plate vents hit 27.5°C. In a dual-card CrossFire configuration, the card at the top of the chassis maintained similar temperatures, but the card at the bottom hit 62.5°C behind its GPU, and the cards' fan shrouds hit 41°C (top) and 46°C (bottom). These are not cool running cards by any means, but the large heatsinks, and coolers that exhaust warm air from the system, mean heat shouldn't be an issue in any well ventilated case.

Our Summary & Conclusion

 

Performance Summary: ATI's new Radeon X1900 XTX and XT performed very well throughout all of our testing. In the video related tests, ATI's flagship clearly outscored NVIDIA's GeForce 7800 GTX in HQV, and while playing back high-definition video it had slightly lower CPU utilization as well (please note that CPU utilization will vary depending on the video being played). During the game tests, the new Radeon X1900 XTX and XT outperformed NVIDIA's best in about 80% of the benchmarks, especially in the tests where anisotropic filtering and anti-aliasing were used concurrently. The same essentially holds true for the X1900 CrossFire configuration, although NVIDIA's SLI scaled better in a couple of tests, like F.E.A.R. for example. ATI's strongest performance continues to be in DirectX applications, whereas NVIDIA continues to be strong in OpenGL applications.

It's amazing what can happen in a little over three months in this industry. In our initial look at ATI's X1K graphics family back in October '05, we were hard on ATI for the company's past problems with availability and were somewhat underwhelmed by the R520's (X1800 XT) performance in a couple of key areas compared to NVIDIA's GeForce 7800 GTX. At the time, the capabilities of the AVIVO video pipeline weren't being fully utilized, DXVA video acceleration wasn't even working properly, and OpenGL performance was lacking when compared to NVIDIA's products. In ATI's defense, though, DirectX performance was relatively good at launch.

Today, however, the landscape has completely changed. Even with the introduction of the higher-clocked 512MB GeForce 7800 GTX, ATI has been able to overtake NVIDIA in terms of overall performance and features. AVIVO is also working properly, OpenGL performance is much better thanks to some driver tweaks, and we encountered no major issues with stability or compatibility. The only problems we encountered had to do with some missing detail in HL2 and CrossFire not scaling as well as SLI in some areas, but software updates could remedy these issues. Now that ATI's software engineers have seemingly caught up with their hardware team, the brand new Radeon X1900 launches with more optimized drivers that expose more of the hardware's capabilities. We couldn't say that when the X1800 XT launched.


Chart Courtesy of ATI

These new Radeon X1900 cards should also enjoy much wider availability at launch than ATI's previously released products. According to ATI, thousands of X1900s in various flavors have already shipped and should be on sale almost immediately, at price points ranging from $499 for the new All-In-Wonder to $649 for the Radeon X1900 XTX. In fact, they've already shown up at a few on-line retailers, at prices very close to MSRP. In contrast, NVIDIA's flagship 512MB GeForce 7800 GTXs are nearly impossible to find in retail at the moment, and if you do find one, odds are the price will be well above MSRP.

   
X1900s In-Stock at NewEgg...

If you inspect the chart above, you'll notice a big gap in ATI's current product line-up where the X1800 XL and XT should be. We asked ATI what would happen to the X1800 family now that the X1900 is already here, and were told the two would co-exist for some time. But representatives were not specific as to how long that time would be. We suspect X1800s, especially X1800 XLs, will be available for at least a couple of months though, and prices on the X1800 XT and XL will drop in the short term to fill the gap in the chart above.

Overall, we have to give kudos to ATI for starting off 2006 with such a bang. The company's problems in 2005 were well documented, so we won't rehash them here. Instead, we'll congratulate ATI for launching the Radeon X1900 so quickly, and thank them for keeping the rivalry between them and NVIDIA alive. ATI fought hard, and seems to have wrested the performance crown from NVIDIA this time around. NVIDIA surely isn't sitting idle though, and will certainly have an answer for the X1900 sometime soon, but for now ATI is riding high. It wasn't a blow-out by any means, but the X1900 is definitely a winner.

Pros:
• 48 Shader Processors!
• High Performance
• AVIVO
• Dual-Link DVI Outputs
• CrossFire Ready

Cons:
• Expensive
• Power Hungry
• Image Quality Bug
• Clunky CrossFire Dongle




Content Property of HotHardware.com