ATI X1000 Graphics Family
Date: Oct 05, 2005
Section:Graphics/Sound
Author: Marco Chiappetta and Dave Altavilla
Introduction, Specs. & Related Information

This past year has been a tumultuous one for ATI. The Canadian graphics giant has been plagued by well-documented supply problems and has had to deal with multiple delays to their next-gen product, developed under the code name "R520". The situation is reminiscent of NVIDIA's in the run-up to the launch of the NV30, which later became known as the GeForce FX 5800. Back then, NVIDIA had been commissioned to develop a GPU and chipset for the original Xbox, and planned to have their new GPUs produced on an advanced .13 micron manufacturing process. The combination of the move to .13 micron and the resource drain of developing core components for the Xbox caused NVIDIA to falter, which allowed ATI to step in with the R300. For roughly two years following that scenario, ATI was perceived as the leader in PC graphics, while NVIDIA worked feverishly to regain a leadership role. Today, however, it seems as though the tables have completely turned. ATI is now building the GPU for the upcoming Xbox 360, and they are moving to a .09 micron manufacturing process to produce their latest line of GPUs. In kind, ATI has also fallen behind NVIDIA by nearly a full product cycle. Coincidence? Perhaps. Ironic? Definitely.

Today though, ATI is ready to unveil an entire family of products based on their delayed R500 series architecture. The new X1000 graphics family consists of numerous cards, ranging from the entry level 4-pixel shader pipeline Radeon X1300 all the way on up to the 16-pipe Radeon X1800 XT. We were recently presented with four cards from ATI's new line-up, and plan to give you all of the salient details in today's showcase of the X1000 Graphics Family. Read on to see if ATI is on the road to redemption, or if their latest products are too little, too late.

ATI Radeon X1000 Graphics Family
Features & Specifications

Features - ATI Radeon X1800
• 321 million transistors on 90nm fabrication process
• Ultra-threaded architecture with fast dynamic branching
• Sixteen pixel shader processors
• Eight vertex shader processors
• 256-bit 8-channel GDDR3/GDDR4 memory interface
• Native PCI Express x16 bus interface
• Dynamic Voltage Control

Ring Bus Memory Controller
• 512-bit internal ring bus for memory reads
• Programmable intelligent arbitration logic
• Fully associative texture, color, and Z/stencil cache designs
• Hierarchical Z-buffer with Early Z test
• Lossless Z Compression (up to 48:1)
• Fast Z-Buffer Clear
• Z/stencil cache optimized for real-time shadow rendering
• Optimized for performance at high display resolutions, including widescreen HDTV resolutions

Ultra-Threaded Shader Engine
• Support for Microsoft DirectX 9.0 Shader Model 3.0 programmable vertex and pixel shaders in hardware
• Full speed 128-bit floating point processing for all shader operations
• Up to 512 simultaneous pixel threads
• Dedicated branch execution units for high performance dynamic branching and flow control
• Dedicated texture address units for improved efficiency
• 3Dc+ texture compression
  o High quality 4:1 compression for normal maps and two-channel data formats
  o High quality 2:1 compression for luminance maps and single-channel data formats
• Multiple Render Target (MRT) support
• Render to vertex buffer support
• Complete feature set also supported in OpenGL 2.0

Advanced Image Quality Features
• 64-bit floating point HDR rendering supported throughout the pipeline
  o Includes support for blending and multi-sample anti-aliasing
• 32-bit integer HDR (10:10:10:2) format supported throughout the pipeline
  o Includes support for blending and multi-sample anti-aliasing
• 2x/4x/6x Anti-Aliasing modes
  o Multi-sample algorithm with gamma correction, programmable sparse sample patterns, and centroid sampling
  o New Adaptive Anti-Aliasing feature with Performance and Quality modes
  o Temporal Anti-Aliasing mode
  o Lossless Color Compression (up to 6:1) at all resolutions, including widescreen HDTV resolutions
• 2x/4x/8x/16x Anisotropic Filtering modes
  o Up to 128-tap texture filtering
  o Adaptive algorithm with Performance and Quality options
• High resolution texture support (up to 4k x 4k)

Avivo Video and Display Engine
• High performance programmable video processor
  o Accelerated MPEG-2, MPEG-4, DivX, WMV9, VC-1, and H.264 decoding (including DVD/HD-DVD/Blu-ray playback), encoding & transcoding
  o DXVA support
  o De-blocking and noise reduction filtering
  o Motion compensation, IDCT, DCT and color space conversion
  o Vector adaptive per-pixel de-interlacing
  o 3:2 pulldown (frame rate conversion)
• Seamless integration of pixel shaders with video in real time
• HDR tone mapping acceleration
  o Maps any input format to 10 bit per channel output
• Flexible display support
  o Dual integrated dual-link DVI transmitters
  o DVI 1.0 / HDMI compliant and HDCP ready
  o Dual integrated 10 bit per channel 400 MHz DACs
  o 16 bit per channel floating point HDR and 10 bit per channel DVI output
  o Programmable piecewise linear gamma correction, color correction, and color space conversion (10 bits per color)
  o Complete, independent color controls and video overlays for each display
  o High quality pre- and post-scaling engines, with underscan support for all outputs
  o Content-adaptive de-flicker filtering for interlaced displays
  o Xilleon™ TV encoder for high quality analog output
  o YPrPb component output for direct drive of HDTV displays
  o Spatial/temporal dithering enables 10-bit color quality on 8-bit and 6-bit displays
  o Fast, glitch-free mode switching
  o VGA mode support on all outputs
• Compatible with ATI TV/Video encoder products, including Theater 550

CrossFire
• Multi-GPU technology
• Four modes of operation:
  o Alternate Frame Rendering (maximum performance)
  o Supertiling (optimal load-balancing)
  o Scissor (compatibility)
  o Super AA 8x/10x/12x/14x (maximum image quality)
• Program compliant




Features - ATI Radeon X1600
• 157 million transistors on 90nm fabrication process
• Twelve pixel shader processors
• Five vertex shader processors
• 128-bit 4-channel DDR/DDR2/GDDR3/GDDR4 memory interface
• Native PCI Express x16 bus interface
  o AGP 8x configurations also supported with external bridge chip
• Dynamic Voltage Control

Ring Bus Memory Controller
• 256-bit internal ring bus for memory reads
• Programmable intelligent arbitration logic
• Fully associative texture, color, and Z/stencil cache designs
• Hierarchical Z-buffer with Early Z test
• Lossless Z Compression (up to 48:1)
• Fast Z-Buffer Clear
• Z/stencil cache optimized for real-time shadow rendering





Features - ATI Radeon X1300
• 105 million transistors on 90nm fabrication process
• Dual-link DVI
• Four pixel shader processors
• Two vertex shader processors
• 128-bit 4-channel DDR/DDR2/GDDR3 memory interface
  o 32-bit/1-channel, 64-bit/2-channel, and 128-bit/4-channel configurations
• Native PCI Express x16 bus interface
  o AGP 8x configurations also supported with external bridge chip
• Dynamic Voltage Control

High Performance Memory Controller
• Fully associative texture, color, and Z/stencil cache designs
• Hierarchical Z-buffer with Early Z test
• Lossless Z Compression (up to 48:1)
• Fast Z-Buffer Clear
• Z/stencil cache optimized for real-time shadow rendering



[Spec images: Radeon X1300, Radeon X1600, Radeon X1800]


The X1000 family of graphics cards we'll be looking at in this article varies greatly in terms of price and performance, but its members share essentially the same feature set. Like NVIDIA has done for the past few years, ATI is releasing an entire line of graphics cards based on the same core GPU architecture this time around. Every card in the X1000 family, as the line is now known, has full support for Shader Model 3.0, is built on a .09 micron manufacturing process, supports ATI CrossFire multi-GPU rendering, has dual-link DVI outputs, and is equipped with ATI's recently announced Avivo video engine. Looking at the specifications and features listed above, the X1000 family seemingly surpasses NVIDIA in terms of overall feature set, although the higher-end products have 4 to 8 fewer pixel pipelines than their NVIDIA counterparts. Their performance and value are yet to be determined, though, so let's move on and dig a little deeper.

 

The Product Line-Up

ATI is announcing an entire line of products based on their new core GPU architecture, each clocked at different speeds and targeting different price points and performance levels. We were given the opportunity to test four cards in the new family: the Radeon X1300 Pro, the Radeon X1600 XT, the Radeon X1800 XL, and the new flagship Radeon X1800 XT.

ATI Radeon X1300 Pro: 4-Pipes / 600MHz Core / 800MHz Memory (256MB) - MSRP $149

The Radeon X1300 Pro pictured here is the fastest of ATI's new entry-level graphics cards. Like all of their new GPUs, the X1300 is built using TSMC's .09 micron manufacturing process. The X1300 core is comprised of approximately 105 million transistors, and cards based on this GPU will feature a single-slot cooler, dual-link DVI outputs, Shader Model 3.0 support, the new Avivo video engine, 4 pixel shader processors, 2 vertex shader processors, and a variety of memory interface configurations ranging from a 128-bit 4-channel configuration down to a 32-bit 1-channel configuration. The card pictured above is the Radeon X1300 Pro. Its core is clocked at 600MHz and its 256MB of on-board RAM is clocked at 800MHz. An X1300 Pro with this configuration carries an MSRP of $149. There are two other X1300 cards on the way as well: the Radeon X1300 with a 450MHz core and 500MHz memory ($129 256MB / $99 128MB), and the Radeon X1300 HyperMemory with a 450MHz core and 32MB of memory clocked at 1GHz ($79).

 

ATI Radeon X1600 XT: 12-pipes / 590MHz core / 1.38GHz Memory (256MB) - MSRP $249

Taking a step up from the Radeon X1300, we have the new Radeon X1600 XT. The board pictured above will initially be the fastest of the X1600 family. As we mentioned earlier, all of ATI's new cards have full support for Shader Model 3.0 and share essentially the same feature set. The X1600 GPU is comprised of roughly 157 million transistors, and features 12 pixel shader processors, 5 vertex shader processors, two dual-link DVI outputs, a single-slot cooler, and a 128-bit 4-channel memory interface. The card pictured above is the Radeon X1600 XT, which sports 256MB of fast 1.38GHz RAM and a GPU core clocked at 590MHz. The Radeon X1600 XT's MSRP is $249. Along with the 256MB X1600 XT, ATI is announcing a 128MB version of the card ($199), and a lower-clocked "Pro" version. The Radeon X1600 Pro will eventually be available in 128MB and 256MB flavors as well, but the Pro features a GPU clocked at 500MHz and memory clocked at 780MHz. The 128MB and 256MB Radeon X1600 Pros are expected to sell for $149 and $199, respectively.

 

ATI Radeon X1800 XL: 16-Pipes / 500MHz core / 1.0GHz Memory (256MB) - MSRP $449

Further fleshing out the Radeon X1000 family of graphics cards is the impressive-looking Radeon X1800 XL. The Radeon X1800 XL's GPU is composed of roughly 321 million transistors, and features ATI's self-proclaimed "Ultra-threaded architecture" with fast dynamic branching. The Radeon X1800 XL GPU has 16 pixel shader processors, 8 vertex shader processors, and is equipped with a 256-bit 8-channel memory interface. The X1800 XL will initially be available in only one flavor, a 256MB card with its core clocked at 500MHz and its memory clocked at 1GHz (MSRP $449). At these clock speeds, a large single-slot cooler is sufficient to keep core and memory temperatures in check. On the surface, the Radeon X1800 XL's specifications seem much like an X800 XT's, but the X1800 XL sports SM 3.0 support, a pair of dual-link DVI outputs, the new Avivo engine, and a more advanced memory controller.

 

The Product Line-Up (Cont.)

Finally, we have the new flagship Radeon X1800 XT. This is the card that ATI hopes will find a home in many of your high-end gaming rigs when it finally ships, reportedly sometime next month. Its specifications certainly warrant consideration...

ATI Radeon X1800 XT / 16-pipes / 625MHz core / 1.5GHz memory (512MB) - MSRP $549

The card you see here is a 512MB Radeon X1800 XT (MSRP $549). At its heart is essentially the same GPU found on the Radeon X1800 XL, but on the XT it is clocked much higher. To reiterate, the Radeon X1800 XT's GPU is comprised of approximately 321 million transistors and is built using a .09 micron manufacturing process. The GPU on this card is equipped with 16 pixel shader processors, 8 vertex shader processors, and a 256-bit 8-channel GDDR3/GDDR4 memory interface. The Radeon X1800 XT's core is clocked at an impressive 625MHz and its memory is running at a whopping 1.5GHz. To sustain these high clock speeds, the Radeon X1800 XT sports a beefy dual-slot cooler that is very similar to the one found on the older Radeon X850 XT. The PCB is much larger than previous Radeons' and sports Volterra's multi-phase voltage regulator chipset (under the thin, red heatsink). The Radeon X1800 XT is also expected to be available in a 256MB version, which will be priced initially with an MSRP of $499.

 

We were curious to see how large the Radeon X1800 GPU really was after hearing that the core is composed of over 320 million transistors, so we popped the heatsink off of the Radeon X1800 XL to take a closer look. And for good measure, we did the same to a GeForce 7800 GTX. Using a trusty old ruler, we found the Radeon X1800 GPU to be roughly 18mm x 16mm, or 288mm². As you can see, only its corners are visible under a dime. Conversely, the GeForce 7800 GTX, which is built on TSMC's .11 micron line, is a bit larger. We measured the GeForce 7800 GTX at approximately 19mm x 18.5mm, or 351.5mm². If yields at TSMC are good, it could be more cost efficient for ATI to produce X1800s than it is for NVIDIA to make the GTX, which could push street prices down in time. Then again, packing 512MB of 1.5GHz GDDR3 RAM on a flagship card won't be cheap for the foreseeable future either, so watch for those 256MB X1800 variants.
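
For those who want to check our math, the die-size figures above come straight from multiplying the ruler measurements. Here's a quick Python sketch (using our approximate measurements as inputs) that reproduces the numbers:

```python
# Rough die-size math based on our ruler measurements. A ruler is
# hardly a precision instrument, so treat these as approximations.
def die_area_mm2(width_mm, height_mm):
    """Return die area in square millimeters."""
    return width_mm * height_mm

r520 = die_area_mm2(18.0, 16.0)   # Radeon X1800, 90nm      -> 288.0 mm^2
g70 = die_area_mm2(19.0, 18.5)    # GeForce 7800 GTX, 110nm -> 351.5 mm^2

print(f"Radeon X1800:     {r520:.1f} mm^2")
print(f"GeForce 7800 GTX: {g70:.1f} mm^2")
print(f"The GTX die is roughly {(g70 / r520 - 1) * 100:.0f}% larger")
```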

Features & Architecture

We outlined the specifications for all of the cards in ATI's new X1000 family line-up on the last couple of pages, but didn't contrast them against NVIDIA's current offerings, or ATI's previous generation of cards. The chart below will give you a general idea of where ATI's new cards stand in terms of bandwidth and fillrate.

As you can see, even though the X1800s have fewer pixel shader pipelines than NVIDIA's GeForce 7800 GT and GTX, their higher clock speeds help keep fillrate competitive, and memory bandwidth for the X1800 XT is well above the rest of the pack. The X1600s and X1300s don't fare quite as well, however. The 12-pipe/4 ROP and 4-pipe/4 ROP configurations and 128-bit memory interfaces prevent them from putting up the same kind of numbers as the higher-end cards listed here.
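
If you'd like to sanity-check the chart yourself, both numbers fall out of simple multiplication: peak pixel fillrate is ROPs times core clock, and peak memory bandwidth is bus width times the effective memory data rate. Below is a minimal Python sketch using the clocks and widths quoted earlier in this article (the 4-ROP counts for the X1600 and X1300 follow the configurations noted above):

```python
# Back-of-the-envelope peak fillrate and memory bandwidth math.
# Effective memory clocks are the DDR data rates quoted in this article.
def pixel_fillrate_gpix(rops, core_mhz):
    """Peak pixel fillrate in Gpixels/s (ROPs x core clock)."""
    return rops * core_mhz / 1000.0

def mem_bandwidth_gbps(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s (bus width x data rate)."""
    return (bus_bits / 8) * effective_mhz / 1000.0

#                   (ROPs, core MHz, bus bits, effective mem MHz)
cards = {
    "Radeon X1800 XT":  (16, 625, 256, 1500),
    "Radeon X1800 XL":  (16, 500, 256, 1000),
    "Radeon X1600 XT":  ( 4, 590, 128, 1380),  # 12 shaders, but 4 ROPs
    "Radeon X1300 Pro": ( 4, 600, 128,  800),
}

for name, (rops, core, bus, mem) in cards.items():
    print(f"{name}: {pixel_fillrate_gpix(rops, core):.1f} Gpix/s, "
          f"{mem_bandwidth_gbps(bus, mem):.1f} GB/s")
```

The X1800 XT works out to 10.0 Gpixels/s and 48.0 GB/s, which is why its bandwidth bar towers over the rest of the chart.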

The ATI Radeon X1000 Graphics Family
An Architectural Overview

ATI focused on efficiency and scalability with their new GPU architecture. Their goals were to reduce idle time and latency, while decoupling processing units from their previously rigidly defined pipelines. ATI also wanted to expand their feature set, and they've done so by finally introducing full Shader Model 3.0 support in the entire X1000 graphics family, from top to bottom.


An Overview of the Family...

Decoupling the GPU's processing units allows ATI to design an entire line of products based on the same core GPU architecture, but with varying levels of performance, while affording better overall design efficiency. NVIDIA took a similar approach with the design of the GeForce 6 and 7 series.


GPU Core Block Diagram

Workloads enter the pipeline via the Vertex Engine and are then passed on to the geometry setup engine, which forwards work to the dispatch processor for allocation amongst the pixel shaders. The "Ultra-Threaded Dispatch Processor" is supposedly where a lot of pre-processing efficiencies come into play, with significantly improved flow control and thread management keeping the pixel pipelines fully utilized and avoiding stalls. In this architecture there are 16 pixel shader processors organized in four independent quad-shader cores that are managed by the Ultra-Threaded Dispatch Processor.


The Pixel Shader

The Vertex Shader

All told, there are 8 vertex shaders, 16 texture address units, 16 texture units, and 16 render back-end units in a top-of-the-line X1800 series GPU. Of course, both pixel and vertex shaders have been upgraded to support the Shader Model 3.0 specification, with dynamic flow control and virtually unlimited instruction length.

Another efficiency ATi is supposedly bringing to the table with the entire X1000 series of cards is the ability to process a larger number of smaller threads for better granularity and parallelism. As the diagram above shows, the GPU processes a given workload in 4x4-pixel (16 pixels total) thread sizes, which in scenarios like shadow rendering can provide much better coverage of an area that needs to be processed and rendered, while avoiding areas that do not need to be processed for the operation.
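
To make the granularity argument concrete, here's a toy sketch (our own illustration in Python, not ATI code) of carving a render target into 4x4-pixel threads and dispatching only the tiles that overlap a region needing work:

```python
# Toy model of fine-grained thread dispatch: the screen is carved into
# 4x4-pixel tiles (16 pixels per thread), and only tiles overlapping
# the region that needs work are dispatched. Small tiles waste fewer
# pixels around the region's edges than large tiles would.
TILE = 4

def tiles_dispatched(region, width, height):
    """Count TILE x TILE tiles overlapping a rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    count = 0
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            if tx < x1 and tx + TILE > x0 and ty < y1 and ty + TILE > y0:
                count += 1
    return count

# A small shadowed region on a 1280x1024 render target:
region = (101, 101, 181, 141)          # an 80x40-pixel area needs work
n = tiles_dispatched(region, 1280, 1024)
print(f"{n} threads x 16 pixels = {n * 16} pixels shaded "
      f"for a {80 * 40}-pixel region")  # only modest waste at the edges
```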

The GPUs at the heart of the X1000 family of graphics cards are enhanced with other capabilities and features as well. For example, ATI now has full support for HDR with anti-aliasing, although most games will need to be patched to take advantage of this capability. The X1000 family also features a new adaptive AA algorithm to reduce the appearance of jagged edges in scenes where transparent textures are used, along with a new type of memory controller, and a beefed up video pipeline, dubbed Avivo.

Transparent
Memory Controller and Tech Demos

ATi took an entirely new approach to the memory controller architecture in the Radeon X1000 series GPUs, a hybrid approach of sorts. Traditional crossbar switch memory controller architectures have inherent latencies associated with them, due to multiple simultaneous service requests to different areas of memory space. Access to a specific chunk of memory at any given time may be delayed by the switching dependencies of a crossbar architecture.

Essentially, what we've learned here is that there are now two switching resources for reads and writes on board X1800 and X1600 memory controllers. The X1800 has a 512-bit internal "ring bus" architecture which then maps out to a 256-bit 8-channel memory interface. The X1600 has a 256-bit "ring bus" architecture which then maps out to a 128-bit 4-channel memory interface. The ring bus, however, is only utilized for latency-sensitive memory read requests, while memory writes must travel through the internal crossbar switch. Regardless, read latency is significantly reduced with the bi-directional ring bus, which has direct access to the memory interface. A final side benefit of the ring bus architecture is that it significantly simplifies trace routing and layout in board designs, and theoretically this should translate to a cost benefit in the end product.
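
As a thought experiment, the latency argument is easy to visualize with a toy model (ours, not a description of ATI's actual arbitration logic): on a bidirectional ring, a transaction never needs to travel more than halfway around.

```python
# Toy model of a bidirectional ring bus: with N stops on the ring, a
# transaction travels min(d, N - d) hops, so never more than N // 2.
# Purely illustrative; it says nothing about ATI's real routing logic.
N_STOPS = 4

def ring_hops(src, dst, n=N_STOPS):
    d = (dst - src) % n
    return min(d, n - d)   # take the shorter direction around the ring

worst = max(ring_hops(s, d) for s in range(N_STOPS) for d in range(N_STOPS))
print(f"Worst-case hop count on a {N_STOPS}-stop bidirectional ring: {worst}")
```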

In addition, ATi claims to have beefed up the cache resources inside the X1000 GPU architecture, such that caches are now fully associative for texture, color, and Z/stencil operations. A fully associative cache has the best likelihood of a cache hit, because any line in the cache can hold any address that needs to be cached. However, significantly more complex control logic must be employed for this type of design, since it inherently suffers from much more strenuous search requirements than a direct-mapped cache; a given address can be stored in any one of tens of thousands of cache lines, and you have to know where to look for it. Typically it takes much more exotic search algorithms to manage fully associative caches, and more control logic, which also translates to die real estate. The net result, however, is that caching efficiency with this architecture is significantly better at a small sacrifice of search speed. In fact, ATi claims that performance-expensive cache misses are reduced by as much as 30+% versus the Radeon X850's architecture in games like Battlefield 2, Far Cry, and Half Life 2.
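
If the cache jargon reads as abstract, this minimal Python model of our own making shows the essential trade-off: a fully associative cache can place any address in any line, maximizing hits, at the cost of effectively having to search every line on each lookup.

```python
from collections import OrderedDict

# Minimal fully associative cache with LRU replacement. Any address can
# occupy any line (great for hit rate), but a real lookup conceptually
# compares against every line at once -- hence the costly control logic.
class FullyAssociativeCache:
    def __init__(self, n_lines):
        self.n_lines = n_lines
        self.lines = OrderedDict()        # address -> data, LRU ordered

    def access(self, addr):
        if addr in self.lines:            # any line may hold this address
            self.lines.move_to_end(addr)  # mark as most recently used
            return "hit"
        if len(self.lines) >= self.n_lines:
            self.lines.popitem(last=False)  # evict least recently used
        self.lines[addr] = None
        return "miss"

cache = FullyAssociativeCache(n_lines=4)
for addr in [0, 8, 16, 24, 0, 8]:
    print(addr, cache.access(addr))
# In a direct-mapped cache, addresses 0, 8, 16, and 24 could all collide
# in one line; here they coexist, so the re-reads of 0 and 8 both hit.
```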

Another approach ATi took to improve memory access efficiency was to divide the Radeon X1800's 256-bit memory controller interface into eight 32-bit channels for better granularity of memory access. Since GDDR3 DRAM typically has a 2M x 32-bit x 8 or 4 bank organization, this translates to a one-to-one mapping of memory controller channels to DRAM chips. In kind, the 128-bit 4-channel interfaces on the Radeon X1600 and X1300 map one-to-one as well.
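
A hypothetical way to picture the channel split (our own illustrative address decode, not ATI's actual scheme): interleave on low-order address bits so that consecutive accesses spread evenly across all eight 32-bit channels, and therefore across all the DRAM chips.

```python
# Hypothetical address-to-channel decode for a 256-bit interface split
# into eight 32-bit channels. Interleaving on low-order address bits
# spreads sequential traffic evenly across all eight DRAM chips.
N_CHANNELS = 8
CHANNEL_WIDTH_BYTES = 4                  # 32 bits per channel

def channel_for(addr):
    """Pick the channel that services a 4-byte-aligned access."""
    return (addr // CHANNEL_WIDTH_BYTES) % N_CHANNELS

# A sequential 64-byte burst touches every channel exactly twice:
print([channel_for(a) for a in range(0, 64, 4)])
# -> [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7]
```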

ATI's New Demos: Parthenon, Ruby: The Assassin & The Toyshop
Random Imagery

So now that your brain is feeling a bit spongy from all that techno-chatter, we'll let you relax a bit before we start twistin' your melon again discussing the X1000 series video pipeline, which is up next.  For now, feast your eyes on the candy that ATi bestowed upon us at their Editor's Day in September.

ATi Parthenon Demo:


The base artwork for ATi's Parthenon demo was shot on location in Greece, and the demo is in fact a 3D-rendered re-creation of the real deal. The geometry of this demo comes from laser scans of the actual Parthenon and consists of over 90 million polygons. This demo shows off a progressive level-of-detail algorithm that ATi developed, which allows surface texture details to blend into view naturally as the camera perspective changes on a given scene. The result is that there was absolutely no popping visible when the camera pans around this model and all the different surfaces are exposed. ATi's pals at Crytek should take note of this technique and figure out a way to keep those palm trees in Far Cry from popping in and out of view, regardless of the draw distance detail that is selected. As with many things in life, we're sure this is easier said than done, of course.

Ruby is back:


If there's one development effort that ATi definitely has NVIDIA beat on, hands down, it would have to be the art of the tech demo. This year Ruby was back, looking sexier and even more bad-ass than ever. We're not just talking about dancing pixies or friendly biker dudes; Ruby is actually a short-take movie with a story line and a rendering engine to die for. Too bad you can't play it, but then again, tech demos can be over-the-top like this because they don't have to perform like a game.

The Gloomy but oh-so pretty "Toy Shop":


Finally, what probably impressed us most was ATi's Toy Shop demo, which makes heavy use of parallax occlusion mapping to render surfaces like the brick-work and cobblestone streets. It was amazing to note that the highly detailed and realistic cobblestones in the street area were actually made up of only 2 polygons. The rest of the impressive 3D surface area of the stones was all done with a parallax occlusion mapping effect. On a side note, there are over 700 unique shaders used in this demo, as well as dynamic soft shadows, volumetric lighting in the rain and fog, misty halos, and glow effects.

ATI Avivo Video Pipeline

Another major addition to ATI's GPU core architecture is their recently announced Avivo video and display pipeline. ATI thinks so much of Avivo that they actually made a separate announcement outlining the technology a few weeks ago. Basically, Avivo is the name ATI has given to a number of enhancements made to various stages of the video and display pipeline in the X1000 family and the accompanying "Theater" chips. Avivo was designed to enhance overall video quality and better prepare the PC for playback of high-definition content like H.264 and VC-1.

Each stage of the video pipeline, which ATI breaks down into five separate groups, has been enhanced in some way by Avivo. The Capture stage of the pipeline has capabilities that include automatic gain control, a 3D comb filter, 12-bit analog-to-digital converters, and noise reduction (Rage Theater). The Encode stage of the pipeline is enhanced by hardware-based encoding and GPU-assisted transcoding of H.264, VC-1, WMV 9, WMV 9-PMC, MPEG-2, MPEG-4, and DivX media (this feature likely won't be available until exposed in ATI's Catalyst drivers sometime late in the year). The Decode stage of the pipeline also features hardware-assisted decoding of H.264, VC-1, WMV 9, MPEG-2, and MPEG-4.

In addition to the features previously found in older ATI GPU architectures, the Post Processing stage of the Avivo video pipeline is further enhanced by a new vector adaptive de-interlacing algorithm and an advanced video scaler that helps preserve fine detail when scaling video up or down. That finally brings us to the Display stage of the pipeline, which is enhanced with a number of new capabilities, including a Xilleon TV encoder leveraged from ATI's CE division, dual-link DVI support with 10-bit/16-bit output, and HDCP and HDMI support. The 10-bit Avivo display engine also performs gamma and color correction, scaling/compression, and dithering where necessary. The net effect is an increase in quality at every stage of the pipeline, which in turn should generate a better final image.
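
The dithering mentioned above is one of the easier tricks in the pipeline to demonstrate. Here's a minimal sketch of our own (not Avivo's actual algorithm) showing how stochastic rounding lets an 8-bit output preserve the average intensity of a 10-bit value:

```python
import random

# Minimal dithering sketch: quantize 10-bit values (0-1023) down to
# 8 bits (0-255) while preserving average intensity. Plain truncation
# throws away the bottom two bits; adding noise before truncating
# spreads that error across pixels so the eye averages it back out.
def dither_10_to_8(value10, rng=random.random):
    scaled = value10 / 4.0                # 10-bit -> 8-bit scale
    return min(255, int(scaled + rng()))  # stochastic rounding

v10 = 513                                 # a 10-bit gray: 128.25 in 8-bit
samples = [dither_10_to_8(v10) for _ in range(100_000)]
print(f"truncated: {v10 // 4}")                               # always 128
print(f"dithered average: {sum(samples) / len(samples):.2f}") # ~128.25
```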

We should mention that the enhancements made to the display stage of the pipeline affect all aspects of video, including 2D and 3D output, and offer more flexibility in terms of connectivity as well. ATI's goal was to create a video engine that not only increased overall image quality, but was able to connect to virtually any type of display. And best of all, because multiple stages in the pipeline support 10-bit precision, not just the DACs like some older architectures, data integrity is preserved and the final output should be closer to the original input signal.

ATI Avivo Video Pipeline (Cont.)

Windows Media Video 9 Acceleration: Microsoft's Windows Media Video 9 (WMV9) HD format was accepted by the SMPTE HD-DVD consortium as a new HD format. The Windows Movie Maker software, which comes bundled with Windows XP, makes it easy for consumers to edit and save their favorite videos, which are saved in the .WMV format. Most of today's high-end GPUs include dedicated hardware to accelerate the playback of WMV and WMV-HD content for fluid, full-frame-rate video, even on systems with entry- to mid-level CPUs. Previous generations of GPUs were not able to accelerate WMV9 decoding, so HD WMV9 content would oftentimes drop frames when played back on legacy hardware.

WMV-HD Decode Acceleration
So, what does Avivo do for me, today?

To characterize CPU utilization when playing back WMV HD content, we used the performance monitor built into Windows XP. Using the data provided by performance monitor, we created a log file that sampled the percent of CPU utilization every second, while playing back the 1080p version of the "Magic of Flight" video available on Microsoft's WMVHD site. The data was then imported into Excel to create the graph below. The graph shows the CPU utilization for a GeForce 7800 GTX, a Radeon X850 XT PE, and a Radeon X1800 XT using Windows Media Player 10, patched using the DXVA update posted on Microsoft's web site (Update Available Here).
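
If you'd like to reproduce this sort of logging yourself without perfmon and Excel, a short script can do the sampling. Here's a hedged sketch using the third-party psutil module (our own illustration of the methodology, not the exact tooling we used) that samples CPU utilization once per second and writes a CSV:

```python
import csv
import psutil   # third-party package: pip install psutil

# Sample overall CPU utilization once per second while a video plays,
# and log the readings to a CSV file for charting -- the same basic
# methodology we followed with Windows' built-in performance monitor.
DURATION_SECONDS = 120   # length of the playback window to sample

with open("cpu_utilization.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["second", "cpu_percent"])
    for second in range(DURATION_SECONDS):
        # cpu_percent(interval=1) blocks for one second and returns the
        # average utilization across all logical CPUs over that second.
        writer.writerow([second, psutil.cpu_percent(interval=1)])

print("Wrote cpu_utilization.csv")
```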

Average CPU Utilization (Athlon 64 FX-55 @ 2.6GHz)

GeForce 7800 GTX: 33.04%
Radeon X850 XT PE: 37.85%
Radeon X1800 XT: 35.26%

Before we talk about the numbers posted above, we have to report on a problem we experienced with ATI's X1000 cards playing HD content with Microsoft's DXVA patch applied to Windows Media Player 10. With the first set of drivers supplied to us by ATI, X1000 cards would play back WMV HD videos with garbled blocks over a majority of the video. We reported the problem to ATI and were sent a second set of drivers that helped with DVD playback image quality, but didn't resolve the WMV HD playback issue. Instead, the second set of drivers caused WMP 10 to display a black window when running a WMV HD video. So, at this point in time, consider hardware-accelerated WMV HD playback "broken" on the X1000 family. ATI claims they'll have this issue resolved in a future driver release.

With that said, we recorded CPU utilization numbers using the first set of drivers that we received. The movie didn't look right, but the audio was there, and we could see some of the video. These numbers could very well change considerably with a future driver release, however. As it stands now, the Radeon X1800 XT fared only marginally better than the older Radeon X850 XT PE, and the X1800 XT utilized slightly more CPU cycles than a GeForce 7800 GTX.

DVD Video Quality: HQV Benchmark
http://www.hqv.com/benchmark.cfm

Next up, we have a new addition to the HotHardware testing arsenal, the HQV DVD video benchmark from Silicon Optix. HQV is comprised of a sampling of video clips and test patterns that have been specifically designed to evaluate a variety of interlaced video signal processing tasks, including decoding, de-interlacing, motion correction, noise reduction, film cadence detection, and detail enhancement. As each clip is played, the viewer is required to "score" the image based on a predetermined set of criteria. The numbers listed below are the sum of the scores for each section. We played the HQV DVD using the latest version of NVIDIA's PureVideo Decoder on the GeForce 7800 GTX and, as recommended by ATI, we played the DVD on the ATI hardware using InterVideo's WinDVD 7 Platinum, with hardware acceleration enabled.

Somewhat surprisingly, the GeForce 7800 GTX did a better job playing the HQV DVD benchmark than either of ATI's cards. The GeForce 7800 GTX excelled in the film detail and "jaggies" tests, where ATI seemed to falter a bit. Overall, ATI's hardware did well, but NVIDIA had a marked advantage in a majority of HQV's tests. We were also surprised to see the X1800 XT score so close to the X850. This leads us to believe the advanced features of the new Avivo video pipeline are not being fully exploited by ATI's current drivers. Hopefully, this situation will change for the better with future driver releases.

New Anti-Aliasing Modes

ATI has claimed that the new X1000 graphics family offers unsurpassed image quality, so prior to benchmarking the new cards, we spent some time analyzing an X1800 XT's in-game image quality versus a Radeon X850 XT Platinum Edition's and NVIDIA's flagship GeForce 7800 GTX's. First, we used Half-Life 2's "background 2" map to get a feel for how each card's anti-aliasing algorithms affected the jaggies in the scene...

Image Quality Analysis: Anti-Aliasing
7800 GTX vs. X850 XT vs. X1800 XT

[Screenshot galleries, full frame and 200% zoom, all at 1280x1024: GeForce 7800 GTX with No AA, 4X AA, and 8xS AA; Radeon X850 XT PE with No AA, 4X AA, and 6X AA; Radeon X1800 XT with No AA, 4X AA, and 6X AA; Radeon X1800 XT with No AA, 4X Adaptive AA, and 6X Adaptive AA]

Yes, we know there are a ton of images on this page; they are grouped in batches of six or twelve depending on the graphics card used.  When opening the pop-up, full-sized images, note the file name as it will explain which card and AA mode was used.

If you direct your attention to the water tower and crane in the background of these images, the impact anti-aliasing has on image quality is readily apparent. In the "No AA" shots, it seemed to us that the Radeon X850 XT Platinum Edition and Radeon X1800 XT had the lowest detail and the most prominent "jaggies." Look closely at the ladder on the water tower and you'll notice parts missing in the Radeon shots that are there on the GeForce 7800 GTX. With standard multi-sample 4X anti-aliasing enabled, though, it becomes much harder to discern any differences between the cards. The ladder in the background gets cleaned up considerably, as do the cables on the crane. The same holds true when ATI's 6X MSAA and NVIDIA's 8xS AA are enabled, although in this comparison, we'd give an edge in image quality to NVIDIA, because the additional super-sampling applied by 8xS AA does a decent job of cleaning up the edges of transparent textures.

However, at the very bottom of the page, we've got some screen shots using the Radeon X1000 family's new adaptive anti-aliasing algorithm. Adaptive AA is basically a combination of multi-sampling and super-sampling AA, similar to NVIDIA's 8xS mode, or a combination of NVIDIA's MSAA and the GeForce 7's transparency AA. ATI's adaptive AA mode super-samples any textures that have transparency, to reduce jaggies that don't land on the edge of a polygon. There are multiple Adaptive AA modes available with the new X1000 family of cards. When in quality mode, for example, 4X Adaptive AA is a combination of 4X MSAA and 4X SSAA; 6X Adaptive AA is 6X MSAA and 6X SSAA. In performance mode, though, the number of samples applied in the super-sample stage is halved (performance mode was not available in the drivers we used for testing). As you can see, ATI's adaptive AA does a great job of reducing jaggies in the scene. Open up a standard 4X or 6X AA shot, and compare the trees and grass in the scene to either of the adaptive AA screens. You'll see a significant reduction in the prominence of jaggies. Overall, we were impressed with the images produced by ATI's Adaptive AA. The X1800 XT produced some of the best images we have seen on the PC to date.
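
Conceptually, the per-primitive decision adaptive AA makes is simple. The sketch below is our own illustrative Python, not ATI's driver logic, but it captures the sample-count pairings described above:

```python
# Conceptual sketch of adaptive anti-aliasing's per-primitive decision:
# alpha-tested (transparent) textures get expensive super-sampling on
# top of multi-sampling; everything else keeps cheap multi-sampling.
# Sample counts follow the quality-mode pairings described above.
def aa_samples(has_alpha_test, aa_level, performance_mode=False):
    """Return (msaa_samples, ssaa_samples) for one primitive."""
    if not has_alpha_test:
        return aa_level, 1               # ordinary geometry: MSAA only
    ssaa = aa_level // 2 if performance_mode else aa_level
    return aa_level, ssaa                # transparent texture: add SSAA

print(aa_samples(False, 4))                        # (4, 1) solid wall
print(aa_samples(True, 4))                         # (4, 4) chain-link fence
print(aa_samples(True, 6, performance_mode=True))  # (6, 3) halved SSAA
```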

Anti-Aliasing Performance Scale

Due to the sheer number of anti-aliasing modes now offered by ATI with the X1000 graphics family, it would be a monumental undertaking to do anything close to an apples-to-apples comparison of all of the AA modes available on both NVIDIA's and ATI's platforms. We know you'll be interested in how each successive level of standard anti-aliasing and adaptive AA performs, however, so we ran one of our custom benchmarks in every mode offered by ATI's current drivers to give you an idea as to how these new modes affect performance.

ATI Adaptive AA Performance: FarCry
Say Good Bye To Jaggies

 

Using our custom FarCry benchmark, with the game patched to v1.33, we ran the game at two resolutions with an increasing level of anti-aliasing applied. As expected, performance dropped as more and more pixel processing was performed on the scene. With FarCry in particular, we'd say playability begins to suffer once you enable 4X Adaptive AA. 6X Adaptive AA was a bit too much for the X1800 XT to handle with a game like FarCry, though. With older titles, or CPU-bound games, however, the higher levels of Adaptive AA should be perfectly playable at relatively high resolutions, and they'll offer image quality superior to what was available with ATI's previous generation of cards.

New Anisotropic Filtering Modes

With this next set of screen shots, we followed a procedure similar to the one outlined on the two previous pages to evaluate the effect of ATI's new anisotropic filtering techniques on a given scene. The screen shots below are from Half-Life 2's "background 4" map. We've again compared similar settings using the GeForce 7800 GTX, Radeon X850 XT Platinum Edition, and a Radeon X1800 XT. For this set of screen shots, anti-aliasing was disabled to isolate how each card's respective anisotropic filtering algorithm altered the images.

Image Quality Analysis: Anisotropic Filtering
NVIDIA vs. ATI - The Perpetual Battle Continues

[Screenshot galleries, full frame and close-up: GeForce 7800 GTX with No Aniso, 8X Aniso, and 16X Aniso; Radeon X850 XT PE with No Aniso, 8X Aniso, and 16X Aniso; Radeon X1800 XT with No Aniso, 8X Aniso, and 16X Aniso; Radeon X1800 XT with No Aniso, 8X High-Quality Aniso, and 16X High-Quality Aniso]

As we mentioned a couple of pages ago, take note of the file names when browsing through the enlarged versions of these images. It'll help you keep track of which card was used to snap the screen shot. When perusing the images above, pay special attention to the lower left portion of the scene, as this is where anisotropic filtering has the most impact. In the "No Aniso" shots at the top, which have only trilinear filtering enabled, the blurring in the road is clearly evident.

However, with 8X anisotropic filtering enabled, the detail in the road is dramatically enhanced. If you open each of the standard shots individually and skip through them quickly, you're likely to notice a bit more detail in the shots taken with the GeForce 7800 GTX, disregarding artifacts produced by the JPG compression.

The same seemed to be true when inspecting the 16X aniso images. Of course, image quality analysis is subjective by nature, but based on these images, we think the GeForce 7800 GTX has the best image quality as it relates to anisotropic filtering when standard "optimized" aniso is used. The new Radeon X1000 family of graphics cards offers another "high quality" anisotropic mode that doesn't have the same angular dependency as ATI's previous generation of cards. The new high-quality aniso mode offered by the X1000 applies nearly the same level of filtering regardless of the angle. Overall, the effect of enabling ATI's high-quality aniso mode is positive, as it does an even better job of sharpening textures and increasing the detail level. To fully appreciate ATI's high-quality aniso mode, though, you've got to see it in action. Still screen shots don't convey the full effect.
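
To picture the angular dependency in question, consider this toy model (entirely our own construction, not ATI's hardware algorithm): the older "optimized" approach dials back the filtering level at off-axis surface angles, while the new high-quality mode holds it constant.

```python
# Toy model contrasting angle-dependent "optimized" anisotropic
# filtering with an angle-invariant "high quality" mode. The falloff
# curve here is a deliberate simplification for illustration only.
def optimized_aniso(max_level, angle_deg):
    """Reduce the filtering level as the angle moves off 0/45/90 deg."""
    off_axis = min(angle_deg % 45, 45 - angle_deg % 45) / 22.5  # 0..1
    return max(2, int(max_level * (1 - 0.5 * off_axis)))

def high_quality_aniso(max_level, angle_deg):
    """Apply the full filtering level at every angle."""
    return max_level

for angle in (0, 10, 22.5, 30, 45):
    print(f"{angle:>4} deg: optimized {optimized_aniso(16, angle):>2}x, "
          f"high quality {high_quality_aniso(16, angle)}x")
```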

Anisotropic Filtering Performance Scale

When testing the performance of ATI's different anti-aliasing modes a couple of pages back, we stepped through each successive level of AA while benchmarking FarCry at resolutions of 1280x1024 and 1600x1200. The results on this page were attained using a similar methodology, but we altered the level of anisotropic filtering being applied to the images instead. Anti-aliasing was disabled throughout this batch of tests to isolate the effect anisotropic filtering alone was having on performance.

ATI Anisotropic Filtering Performance: FarCry
Sharpening Up Those Textures

 

 

As expected, as the level of anisotropic filtering applied to the scene was increased, performance decreased. But unlike the anti-aliasing results, where adaptive AA dragged performance down significantly, ATI's new high-quality anisotropic filtering modes had a negligible impact on performance, at least in FarCry. In fact, the difference in performance between the no-aniso tests and the tests where 16X high-quality anisotropic filtering was applied was less than 10%. We'd have to do more extensive testing with a multitude of games to make any broad, sweeping statements, but initially, it seems like there is no reason not to enable high-quality anisotropic filtering on the X1000 family of cards. There is a minimal effect on performance, but a noticeable increase in image quality while gaming.

Test System & X1300 w/ 3DMark05 & Halo

HOW WE CONFIGURED THE TEST SYSTEM: We put together two different test systems for this article.  We tested our NVIDIA based cards on a Gigabyte K8NXP-SLI nForce 4 SLI chipset based motherboard, powered by an AMD Athlon 64 FX-55 processor and 1GB of low-latency Corsair XMS RAM. However, the ATI based cards were tested on an ATI reference Radeon Xpress 200 motherboard, with the same processor and RAM. The first thing we did when configuring these test systems was enter each BIOS and load the "High Performance Defaults."  The hard drive was then formatted, and Windows XP Professional with SP2 was installed. When the installation was complete, we installed the latest chipset drivers available, installed all of the other necessary drivers for the rest of our components, and removed Windows Messenger from the system. Auto-Updating and System Restore were also disabled, the hard drive was defragmented, and a 768MB permanent page file was created on the same partition as the Windows installation. Lastly, we set Windows XP's Visual Effects to "best performance," installed all of the benchmarking software, and ran the tests.

The HotHardware Test System
AMD Athlon 64 FX Powered

Hardware Used:
Processor - AMD Athlon 64 FX-55 (2.6GHz)
Motherboards - Gigabyte GA-K8NXP-SLI (nForce4 SLI chipset); ATI Reference CrossFire MB (ATI Radeon Xpress 200 CF Edition)
Video Cards - ATI Radeon X1800 XT; ATI Radeon X1800 XL; ATI Radeon X1600 XT; ATI Radeon X1300 Pro; ATI Radeon X850XT (x2); ATI Radeon X800 XL (512MB); GeForce 7800 GTX (x2); GeForce 7800 GT; GeForce 6800 Ultra (x2); GeForce 6800 GT; GeForce 6600 GT; GeForce 6200 TC 32/128
Memory - 1024MB Corsair XMS PC3200 RAM (CAS 2)
Audio - Integrated on board
Hard Drive - Western Digital "Raptor" 36GB - 10,000RPM - SATA

Relevant Software:
Operating System - Windows XP Professional SP2 (Patched)
Chipset Drivers - nForce Drivers v6.82
DirectX - DirectX 9.0c
Video Drivers - NVIDIA Forceware v81.82; ATI Catalyst v5.9/v5.10 beta

Benchmarks Used:
ShaderMark v2.1
3DMark05 v1.2.0
Halo v1.06
Splinter Cell: Chaos Theory v1.04
FarCry v1.33*
Half Life 2*
Doom 3 v1.3 (Single & Multi-Player)*
Chronicles of Riddick v1.1*

* - Custom Test (HH Exclusive demo)

Please note, we have broken up the testing into two sections for this article. On this page and the next, we're going to focus on the performance of the new Radeon X1300 Pro and Radeon X1600 XT, versus a few GeForce 6 cards. The entry-level 4-pipe Radeon X1300 Pro is simply not testable in some of the more taxing configurations that we use to compare high-end graphics cards, making a top-to-bottom, apples-to-apples comparison between various cards nearly impossible. So, to keep our data relatively uncluttered, we dedicated a separate section to the X1300 Pro...

Performance Comparisons with 3DMark05 v1.2.0
Details: http://www.futuremark.com/products/3dmark05/

3DMark05
3DMark05 is the latest installment in a long line of synthetic 3D graphics benchmarks, dating back to late 1998. 3DMark05 is a synthetic benchmark that requires a DirectX 9.0 compliant video card, with support for Pixel Shaders 2.0 or higher, to render all of the various modules that comprise the suite. To generate its final "score", 3DMark05 runs three different simulated game tests and uses each test's framerate in the final tabulation. Fillrate, memory bandwidth, and compute performance all have a measurable impact on performance in this benchmark. We ran 3DMark05's default test (1,024 x 768) on all of the cards and configurations we tested, and have the overall results posted for you below.

Futuremark's synthetic DirectX 9.0 benchmark puts the Radeon X1300 Pro's performance somewhere in between a GeForce 6600 GT and a GeForce 6200 32/128 TurboCache card. The GeForce 6600 GT scored a full 775 points higher than the X1300 Pro, even though it has a smaller frame buffer and is clocked significantly lower. The X1600 XT pulls out a marginal victory here, just barely edging out a GeForce 6800 GT.

Performance Comparisons with Halo v1.06
Details: http://www.bungie.net/Games/HaloPC/

Halo
No additional patches or tweaks are necessary to benchmark with Halo, as Gearbox has included all of the necessary information to test with this game within its Readme file. This benchmark works by running through four of the long cut-scenes within the game, after which the average frame rate is recorded. Halo was one of the first games to have a PS 2.0 code path, and even though its graphics are no longer considered cutting edge, compute performance and fillrate still affect overall performance in this test. We updated Halo using the most recent v1.06 patch and ran this benchmark at a resolution of 1,280 x 1,024. Anti-aliasing doesn't work properly with Halo, so all of the tests below were run with AA disabled.

The new Radeon X1300 Pro's performance in Halo was adequate, but significantly lower than a GeForce 6600 GT's. The X1300 Pro was roughly twice as fast as a GeForce 6200 32/128 TurboCache card, but the GeForce 6600 GT was about 80% faster than the X1300 Pro. And in stark contrast to what 3DMark05 says, the GeForce 6800 GT pulled way ahead of the Radeon X1600 XT, by a margin of almost 20 frames per second.

X1300 Performance: HL2 & Doom 3

Performance Comparisons with Half-Life 2
Details: http://www.half-life2.com/

Half Life 2
Thanks to the dedication of hardcore PC gamers and a huge mod community, the original Half-Life became one of the most successful first-person shooters of all time. So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait until November 2004 to get our hands on this classic. We benchmarked Half-Life 2 with a long, custom-recorded timedemo in the "Canals" map that takes us through both outdoor and indoor environments. These tests were run at a resolution of 1,280 x 1,024, without any anti-aliasing or anisotropic filtering, and then with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently.

Before we dissect these Half-Life 2 scores, we'd like to mention one change in our test configuration. When testing HL2 with the GeForce 6600 GT and Forceware v81.82 drivers, the game would cause a BSOD upon launch. We reported the issue to NVIDIA, and representatives from the company claim the problem will be fixed in the next driver release. So, for these tests the GeForce 6600 GT was tested using the Forceware v78.03 driver package. None of the other GeForce cards exhibited this problem.

The X1300 Pro put up a good fight in our custom Half-Life 2 benchmark, besting the GeForce 6200 32/128 TC and GeForce 6600 GT when anti-aliasing and anisotropic filtering were enabled. Without any additional pixel processing applied, though, the GeForce 6600 GT smoked the X1300 Pro by about 25%. The X1600 XT and GeForce 6800 GT were more evenly matched, but the GeForce was much faster when anti-aliasing and anisotropic filtering were enabled.

Performance Comparisons with Doom 3
Details: http://www.doom3.com/

Doom 3
id Software's games have long been pushing the limits of 3D graphics. Quake, Quake 2, and Quake 3 were all instrumental in the success of 3D accelerators on the PC. Now, many years later, with virtually every new desktop computer shipping with some sort of 3D accelerator, id is at it again with the visually stunning Doom 3. Like most of id's previous titles, Doom 3 is an OpenGL game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows. We ran this batch of Doom 3 single player benchmarks using a custom demo with the game set to its "High-Quality" mode, at a resolution of 1,280 x 1,024 without anti-aliasing enabled and then again with 4X AA and 8X aniso enabled simultaneously.

NVIDIA's traditional dominance with regard to Doom 3 performance continues to hold true, as even the GeForce 6600 GT is able to outperform a Radeon X1600 XT in this game when anti-aliasing is turned on. In this context, the GeForce 6800 GT is in a league of its own, besting both of the ATI cards tested by a wide margin in both of the test configurations.

ShaderMark v2.1

From this point forward, we'll be focusing on the performance of ATI's new Radeon X1800 XT, X1800 XL, and X1600 XT versus their previous generation of cards and virtually all of NVIDIA's current high-end offerings. There's lots of data on the following pages, so bear down for the long haul...

Performance Comparisons with ShaderMark v2.1 (Build 129)
Strict High-Level Shading Language

Shadermark v2.1
For most of our recent video card-related articles, we've stuck to using games, or benchmarks based on actual game engines, to gauge overall performance. The problem with using this approach exclusively is that some advanced 3D features may not be fully tested, because the game engines currently in use tend not to use the absolute latest features available within cutting-edge graphics hardware. In an effort to reveal raw shader performance, which is nearly impossible to do using only the games on the market today, we've incorporated ToMMTi-Systems' ShaderMark v2.1 into our benchmarking suite for this article. ShaderMark is a DirectX 9.0 pixel shader benchmark that exclusively uses code written in Microsoft's High Level Shading Language (HLSL) to produce its imagery.

 

 

ATI's new Radeon X1800 XT didn't fare very well against NVIDIA's GeForce 7800 GTX in most of the ShaderMark v2.1 tests. In the first 19 tests, the Radeon X1800 XT is between 3% and 37% slower than NVIDIA's current flagship GPU. Although ATI claims the X1000 family is equipped with more efficient shader engines, the fact that the 7800 GTX has 50% more shader pipelines than the X1800 XT is too much for the Radeon to overcome, even though its core is clocked much higher. In the last six tests, where flow control and multiple passes through the pipeline are necessary, the Radeon X1800 XT does much better, however, finishing between 1.3% and 38% faster than the GeForce 7800 GTX.

3DMark05 & Halo v1.06

Performance Comparisons with 3DMark05 v1.2.0
Details: http://www.futuremark.com/products/3dmark05/

3DMark05
3DMark05 is the latest installment in a long line of synthetic 3D graphics benchmarks, dating back to late 1998. 3DMark05 is a synthetic benchmark that requires a DirectX 9.0 compliant video card, with support for Pixel Shaders 2.0 or higher, to render all of the various modules that comprise the suite. To generate its final "score", 3DMark05 runs three different simulated game tests and uses each test's framerate in the final tabulation. Fillrate, memory bandwidth, and compute performance all have a measurable impact on performance in this benchmark. We ran 3DMark05's default test (1,024 x 768) on all of the cards and configurations we tested, and have the overall results posted for you below.

There is a lot to digest within the graphs on the next few pages, but we'll do our best to simplify what you're seeing moving forward. We'll start with the X1600 XT's performance, then move up to the X1800 XL and the X1800 XT...

3DMark05's default benchmark had the new Radeon X1600 XT finishing behind all of the competition except for the 512MB Radeon X800 XL. All of NVIDIA's offerings, and the X850 XT, were faster than ATI's latest mid-range card. The X1800 XL fares a little better, besting ATI's previous generation of cards and the GeForce 6800 Ultra, but the GeForce 7 series of cards, and obviously the SLI configurations, were faster. Strictly looking at single-card performance, the new Radeon X1800 XT was the top dog, besting the GeForce 7800 GTX by almost 1400 points; however, both SLI configurations were alone at the top.

Performance Comparisons with Halo v1.06
Details: http://www.bungie.net/Games/HaloPC/

Halo
No additional patches or tweaks are necessary to benchmark with Halo, as Gearbox has included all of the necessary information to test with this game within its Readme file. This benchmark works by running through four of the long cut-scenes within the game, after which the average frame rate is recorded. Halo was one of the first games to have a PS 2.0 code path, and even though its graphics are no longer considered cutting edge, compute performance and fillrate still affect overall performance in this test. We updated Halo using the most recent v1.06 patch and ran this benchmark twice, once at a resolution of 1,280 x 1,024 and then again at 1,600 x 1,200. Anti-aliasing doesn't work properly with Halo, so all of the tests below were run with anti-aliasing disabled.

 

The Radeon X1600 XT seemed to struggle a bit with Halo. The 12-pipeline Radeon X1600 XT fell behind every other card, including the X800 XL, by significant margins at both resolutions. ATI's Radeon X1800 XL performed much better, almost doubling the X1600 XT's performance at the higher resolution, but all of the NVIDIA-powered cards except the GeForce 6800 Ultra outperformed the X1800 XL. The Radeon X1800 XT overtook the lower-priced GeForce 7800 GT at both resolutions, but the GTX finished well ahead of the X1800 XT, and both SLI configurations and the CrossFire rig were the fastest overall.

Splinter Cell: Chaos Theory

Performance Comparisons with Splinter Cell: Chaos Theory v1.04
Details: http://www.splintercell3.com/us/

SC: Chaos Theory
We've recently added Ubisoft's great new game, Splinter Cell: Chaos Theory, to our suite of game benchmarks. Based on a heavily modified version of the Unreal Engine, enhanced with a slew of DX9 shaders, lighting, and mapping effects, Splinter Cell: Chaos Theory is gorgeous with its very immersive, albeit dark, environment. The game engine has a Shader Model 3.0 code path that allows the GeForce 6 & 7 series of cards, and the new X1000 family of cards, to really shine, and a recent patch has implemented a Shader Model 2.0 path for ATI's X8x0 generation of graphics hardware. For these tests we enabled the SM 3.0 path on the GeForce and X1000 cards, while the SM 2.0 path was enabled for the older Radeons. High Dynamic Range rendering and parallax mapping were disabled, and we benchmarked the game at resolutions of 1,280 x 1,024 and 1,600 x 1,200, both with and without anti-aliasing and anisotropic filtering.

 

Splinter Cell: Chaos Theory proved to be somewhat of a strong point for ATI's new high-end cards. The Radeon X1600 XT took a bit of a beating, however; it consistently trailed the pack and was clearly outpaced by every other card we tested. The Radeon X1800 XL performed fairly well, but was outpaced by the GeForce 7800 GT, GTX, and SLI configurations in all but one test configuration. The X1800 XT, however, was strong here. When AA and aniso were enabled, the Radeon X1800 XT was the fastest single-card configuration we tested. The multi-GPU configurations had no problem pulling ahead of the XT, though.

The main batch of Splinter Cell benchmarks above were run with some of the advanced Shader Model 3.0 features disabled, to keep the playing field level between the SM 2.0 and SM 3.0 capable cards. We thought that most of you would be interested to see what happens when "everything is turned on," however. To get the scores listed here, we turned on the SM 3.0 path and enabled all of the image quality enhancing features associated with it, including HDR rendering, tone mapping, and parallax mapping. Unfortunately, although the Radeon X1800 XT is technically capable of applying AA in HDR mode, the option to enable AA was disabled when HDR was switched on. We did enable 16X anisotropic filtering, though. As you can see, with everything "turned on," the Radeon X1800 XT performed very well in Splinter Cell: Chaos Theory, besting the GeForce 7800 GTX by about 10% in both test configurations.

FarCry v1.33


Performance Comparisons with FarCry v1.33
Details: http://www.farcry.ubi.com/

If you've been on top of the gaming scene for some time, you probably know that FarCry is one of the most visually impressive games to be released for the PC. Courtesy of its proprietary engine, dubbed "CryEngine" by its developers, FarCry's game-play is enhanced by Polybump mapping, advanced environment physics, destructible terrain, dynamic lighting, motion-captured animation, and surround sound. Before titles such as Half-Life 2 and Doom 3 hit the scene, FarCry gave us a taste of what was to come in next-generation 3D Gaming on the PC. We benchmarked the graphics cards in this review with a custom-recorded demo run taken in the "Catacombs" area checkpoint, at various resolutions without anti-aliasing or anisotropic filtering enabled, and then with 4X AA and 16X aniso enabled concurrently.
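
If you'd like to tally demo results yourself, the sketch below scrapes average framerates out of a FarCry log after one or more \demo playbacks. The log path and the "TimeDemo Play Ended" summary format are assumptions on our part, so inspect a log from your own install and adjust the regex to match:

```python
# A rough sketch of pulling FarCry demo results out of the game's log file.
# Log path and summary-line format are assumptions; verify locally.
import re
from pathlib import Path

LOG = Path(r"C:\Program Files\Ubisoft\Crytek\Far Cry\Log.txt")  # hypothetical path

def demo_results(log_path: Path) -> list[float]:
    pattern = re.compile(r"TimeDemo Play Ended.*?([\d.]+)\s*fps", re.IGNORECASE)
    return [float(m.group(1)) for m in pattern.finditer(log_path.read_text(errors="ignore"))]

runs = demo_results(LOG)
if runs:
    # The first playback often runs slower while assets stream in, so it's
    # common practice to discard run one and average the remaining passes.
    usable = runs[1:] if len(runs) > 1 else runs
    print(f"{len(runs)} runs found, average of usable runs: {sum(usable) / len(usable):.1f} fps")
else:
    print("No TimeDemo summary lines found; check the log path and regex.")
```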


FarCry proved to run very well on ATI's new X1800 cards. As we've been saying for the past few pages, the X1600 XT lagged behind the pack, but it will also be the least expensive of the cards here, so we won't dwell on its performance much. The X1800s were the top dogs, though. With AA and aniso enabled, which is how anyone thinking about buying one of these cards would likely play the game, the X1800 XL and the X1800 XT were the fastest of the single-card configurations, besting the GeForce 7800 GTX by approximately 1% to 30%. And the Radeon X1800 XT was actually faster than a couple of the multi-GPU configurations. Only the proof-of-concept CrossFire rig and the 7800 GTX SLI rig were able to overtake the Radeon X1800 XT here, when AA and aniso were enabled at the higher resolution.

Half Life 2

Performance Comparisons with Half-Life 2
Details: http://www.half-life2.com/

Thanks to the dedication of hardcore PC gamers and a huge mod community, the original Half-Life became one of the most successful first person shooters of all time. So, when Valve announced Half-Life 2 was close to completion in mid-2003, gamers the world over sat in eager anticipation. Unfortunately, thanks to a compromised internal network, the theft of a portion of the game's source code, and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait until November 2004 to get our hands on this classic. We benchmarked Half-Life 2 with a long, custom-recorded timedemo in the "Canals" map that takes us through both outdoor and indoor environments. These tests were run at resolutions of 1,280 x 1,024 and 1,600 x 1,200, first without any anti-aliasing or anisotropic filtering, and then with 4X anti-aliasing and 16X anisotropic filtering enabled concurrently.
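
For those who want to replicate this kind of timedemo testing, here's a rough harness built on Source's standard launch options (-novid, -condebug, -width/-height, and the timedemo console command). The install path, the demo name "canals_demo", and the console.log location are assumptions to verify against your own setup:

```python
# A rough harness for Source timedemo passes. Demo name, install path, and
# console.log location are assumptions; the timedemo summary format is the
# engine's standard "N frames S seconds F fps" line.
import re
import subprocess
from pathlib import Path

HL2_DIR = Path(r"C:\Program Files\Valve\Half-Life 2")  # hypothetical path

def run_timedemo(demo: str, width: int, height: int) -> float:
    # Exit the game manually once playback finishes; we avoid +quit here
    # because queued startup commands can fire before the demo completes.
    subprocess.run(
        [str(HL2_DIR / "hl2.exe"), "-novid", "-condebug",
         "-width", str(width), "-height", str(height),
         "+timedemo", demo],
        cwd=HL2_DIR, check=True,
    )
    log = (HL2_DIR / "hl2" / "console.log").read_text(errors="ignore")
    # Source prints a summary like: "4500 frames 30.42 seconds 147.93 fps ..."
    matches = re.findall(r"[\d.]+ frames [\d.]+ seconds ([\d.]+) fps", log)
    if not matches:
        raise RuntimeError("No timedemo summary found in console.log")
    return float(matches[-1])   # last summary = most recent run

print(f"Canals demo at 1,600 x 1,200: {run_timedemo('canals_demo', 1600, 1200):.1f} fps")
```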


We're not going to dwell on these Half Life 2 scores for long because, quite frankly, this game runs well on virtually any mid-to-high-end graphics card with all of the eye candy enhancing features turned on. As good as Half Life 2 looks, it simply doesn't tax the graphics subsystem hard enough to draw any meaningful conclusions. Our custom benchmark reported that every card was capable of playable framerates, even with AA and aniso enabled at 1600x1200. The Radeon X1800 XT technically stood alone at the head of the pack, followed by the GeForce 7 series cards and the X1800 XL. The Radeon X1800 XT was even faster than all three of the multi-GPU configurations, but its victory wasn't decisive. The CPU overhead associated with multi-GPU rendering is a large reason those configurations fall behind the single-GPU cards in this test.

Doom 3

Performance Comparisons with Doom 3
Details: http://www.doom3.com/

id Software's games have long been pushing the limits of 3D graphics. Quake, Quake 2, and Quake 3 were all instrumental in the success of 3D accelerators on the PC. Now, many years later, with virtually every new desktop computer shipping with some sort of 3D accelerator, id is at it again with the visually stunning Doom 3. Like most of id's previous titles, Doom 3 is an OpenGL game that uses extremely high-detailed textures and a ton of dynamic lighting and shadows. We ran this batch of Doom 3 single player benchmarks using a custom demo with the game set to its "High-Quality" mode, at resolutions of 1,280 x 1,024 and 1,600 x 1,200 without anti-aliasing enabled and then again with 4X AA and 8X aniso enabled simultaneously.
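
Doom 3 can be scripted in much the same way as the Half-Life 2 harness shown earlier. The sketch below leans on standard idTech 4 cvars (r_mode -1 with r_customwidth/r_customheight to force an arbitrary resolution), but the install path, the logFile cvar behavior, the qconsole.log location, and the summary-line format are all assumptions to verify against your own install:

```python
# A compact sketch of scripting Doom 3 timedemo runs; all paths and the
# log/summary details are assumptions to verify locally.
import re
import subprocess
from pathlib import Path

DOOM3_DIR = Path(r"C:\Program Files\Doom 3")  # hypothetical path

def run_timedemo(demo: str, width: int, height: int) -> float:
    subprocess.run(
        [str(DOOM3_DIR / "doom3.exe"),
         "+set", "r_mode", "-1",                 # -1 enables custom resolutions
         "+set", "r_customwidth", str(width),
         "+set", "r_customheight", str(height),
         "+set", "logFile", "1",                 # mirror console output to qconsole.log
         "+timedemo", demo],
        cwd=DOOM3_DIR, check=True,
    )
    log = (DOOM3_DIR / "base" / "qconsole.log").read_text(errors="ignore")
    # Expected summary: "NNNN frames rendered in SS.S seconds = FF.F fps"
    matches = re.findall(r"frames rendered in [\d.]+ seconds = ([\d.]+) fps", log)
    if not matches:
        raise RuntimeError("No timedemo summary found in qconsole.log")
    return float(matches[-1])
```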


Things took a major turn for the worse for ATI when we started benchmarking with Doom 3. To put it bluntly, NVIDIA's cards continue to thrash ATI's in this OpenGL based game. The only test where ATI outpaced even the GeForce 6800 Ultra was at 1280x1024 with anti-aliasing enabled, where the Radeon X1800 XT just barely nudged past the Ultra by a couple of frames per second. In every other test configuration, NVIDIA's cards were dominant. Even without the numbers in the graphs above, the bars alone tell the story. A couple of years ago, when NVIDIA stated their future architectures were "designed for Doom 3", they certainly weren't lying.

Chronicles of Riddick

Performance Comparisons with Chronicles of Riddick: Escape From Butcher Bay
Details: http://www.riddickgame.com/

Starbreeze Studios is responsible for creating the surprisingly good game The Chronicles of Riddick: Escape From Butcher Bay. Those familiar with the movie will recall that Butcher Bay was one of the prison options on tap for the main character. While the movie never actually made it to Butcher Bay, we find the main character right at home in this first person shooter, which is powered by the proprietary Starbreeze Engine. Not only does The Chronicles of Riddick: Escape From Butcher Bay boast excellent gameplay, impressive visuals, and a mature story line, it also proves to be a tough challenge and a game actually worth buying, all of which makes it an excellent addition to our suite of custom benchmarks.


Our second OpenGL based benchmark, a custom demo run using The Chronicles of Riddick: Escape From Butcher Bay, proved to be another struggle for ATI's current and next-gen hardware. Once again, the Radeon X1800 XT scored only a single victory, and it came over the GeForce 6800 Ultra at 1280x1024 with anti-aliasing and anisotropic filtering disabled. In every other configuration, the NVIDIA powered cards handily outpaced ATI's best, and the SLI configurations were simply in a different league altogether.

Total System Power Consumption

Total System Power Consumption & Acoustics
It's All About the Watts and Decibels

We have two final data points we'd like to cover before we bring this article to a close. Throughout all of our benchmarking, we monitored how much power our ATI-powered test system was consuming using a power meter, and we also set up a sound level meter about six inches away from the graphics cards. Our goal was to give you an idea of how much power each configuration consumed, and how loud the high-end configurations got under load. Please keep in mind that we were testing total system power consumption, not just the power being drawn by the video card alone.
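
To make that caveat concrete, here's a bit of illustrative arithmetic showing how wall-socket readings can still yield useful card-to-card deltas when the rest of the system is held constant. Every wattage below is a made-up placeholder, not a measured result from our testing:

```python
# Illustrative arithmetic only: a wall-socket meter reads total system draw,
# but swapping cards in an otherwise identical rig yields useful deltas.
# All wattages are hypothetical placeholder values.
readings = {  # total system draw in watts (hypothetical)
    "Radeon X850 XT":  {"idle": 160, "load": 265},
    "Radeon X1800 XT": {"idle": 185, "load": 280},
}

baseline = readings["Radeon X850 XT"]
for card, watts in readings.items():
    idle_delta = watts["idle"] - baseline["idle"]
    load_delta = watts["load"] - baseline["load"]
    print(f"{card}: idle {watts['idle']}W ({idle_delta:+d}W vs. baseline), "
          f"load {watts['load']}W ({load_delta:+d}W vs. baseline)")
```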

If you saw our power consumption numbers in our CrossFire article from early last week, you'll notice that these are much lower than what was reported there. We found that our Radeon Xpress 200 motherboard was severely over-volting virtually every component, and it was having a huge impact on total power consumption. The board had our CPU running at almost 1.8v by default, which was much higher than it should have been. Once we set all of the voltages manually, power consumption shot way down.

As the graph above shows, ATI's X1000 graphics family consumes only slightly more power under full load than the older X8x0 series of cards. What's more interesting to note in this case is idle power consumption. It seems the higher core clock speeds of ATI's new GPUs, coupled with the increased current leakage caused by the move to a .09 micron manufacturing process, had a major impact on idle power consumption. Even the lowly 4-pipe Radeon X1300 Pro consumed more power than a Radeon X850 XT while sitting idle.

There isn't much to report with regard to each card's acoustic signature, because the sum total of our test rig's PSU and CPU cooling fans was louder than the graphics cards. The test system's acoustic signature, from about 6 inches away with the side panel removed, hovered between 65dB and 72dB depending on the card installed in the system at the time. To be more specific, we found the X1300 and X1600 to be relatively quiet, but there was a consistent, noticeable whine emitted from their cooling fans at all times. The Radeon X1800 XL was in a similar category, but the noise coming from its fan had a much lower pitch and was easily tolerable. The Radeon X1800 XT was an altogether different animal, though. When the X1800 XT's fan spins up and runs at its maximum speed, it sounds very much like a hair-dryer on a low setting, similar to an X850. Throughout all of our testing, however, our particular Radeon X1800 XT's fan never spun at its maximum speed, and only stepped up a notch or two above its slowest setting. And when running at a lower speed, the Radeon X1800 XT is very quiet, in our opinion. We've spoken with some other analysts who have also been testing ATI's new cards, however, and they did not have the same experience we did. So, until we get to test a larger sampling of X1800s, we won't comment definitively on the card's acoustic output.
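
As a quick sanity check on what a 65dB-to-72dB spread actually means, here's the standard decibel arithmetic: sound power scales as 10^(ΔdB/10), while perceived loudness is commonly approximated as doubling every 10dB.

```python
# Quick decibel arithmetic to put the 65dB-72dB spread in perspective.
delta_db = 72 - 65
power_ratio = 10 ** (delta_db / 10)     # ~5.0x acoustic power
loudness_ratio = 2 ** (delta_db / 10)   # ~1.6x perceived loudness (rule of thumb)
print(f"+{delta_db}dB -> {power_ratio:.1f}x sound power, "
      f"~{loudness_ratio:.1f}x perceived loudness")
```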

Our Summary & Conclusion

Performance Summary: There is a ton of performance data on the preceding pages, so we're going to break our summary down into a couple of sections, one covering each new tier of Radeons.

Radeon X1300 Pro and X1600 XT
Throughout our entire battery of tests, the Radeon X1300 Pro performed somewhere in between a GeForce 6200 TC card and a GeForce 6600 GT. It should definitely be considered a low-end, budget solution. The Radeon X1600 XT was much faster than the X1300, as expected, but its performance fell somewhere in between a GeForce 6600 GT and a GeForce 6800 GT. In a head-to-head comparison though, the GeForce 6800 GT would significantly outperform the X1600 XT in the vast majority of today's benchmarks.

Radeon X1800 XT and X1800 XL
The X1800 XL performed at a similar, albeit somewhat lower, level than the GeForce 7800 GT, and even a GeForce 6800 Ultra was faster in many circumstances. The Radeon X1800 XT, however, disregarding the multi-GPU configurations, traded the top spot with the GeForce 7800 GTX in some tests (3DMark, Splinter Cell, FarCry, HL2), with about a 60/40 split in favor of NVIDIA. In typical fashion, the Radeons shine with AA and aniso filtering enabled, though the X1800 XT's larger 512MB frame buffer certainly helped it in this area. In general, ATI's new cards performed better in Direct3D applications than they did in OpenGL applications, as has historically been the case for ATI's products. Overall though, we'd consider the GeForce 7800 GTX the "faster" all-around card in terms of general gaming performance.

ATI has given us a lot to talk about today, not only with regard to their new family of products, but also their execution and reputation within the market and the community as a whole. Looking back over the previous pages, it seems to us that ATI initially designed the X1000 with NVIDIA's GeForce 6 Series in mind. Had the R520 architecture not suffered delay after delay, and launched before the GeForce 7 Series, ATI would have been in a much better position. The X1000 graphics family is feature-rich and performs very well with the GeForce 7 out of the picture. But that's not what happened, and at each product's projected price point, ATI's new cards are outperformed by NVIDIA's in many scenarios.

For the X1000 series to gain any traction in the market, cards are going to have to hit store shelves quickly, and in quantity, to drive street prices down fast. Considering that a GeForce 7800 GTX can already be bought for under $480, we suspect a >$500 price tag for a Radeon X1800 XT, when/if it arrives with the same specifications we tested here, is not going to be well received by the majority of potential buyers. Luckily for ATI, the X1800's die is significantly smaller than the GeForce 7800 GTX's, so if they're getting adequate yields from TSMC, they should be able to drop prices relatively quickly and be competitive this holiday buying season. Unfortunately, given the obvious execution issues in ATI's recent product launches, we can't be certain when the cards we've tested here will actually be available for purchase.

Last week, we tested a Radeon X850 XT CrossFire system and were told that motherboards and master cards would be made available almost immediately. Yet here we are, eight days later, and CrossFire is nowhere to be found. We're sure you've probably read numerous unbiased editorials posted on this subject as of late, so we won't beat a dead horse here. Essentially, X850 CrossFire turned out to be a technology demonstration rather than a product launch, even though ATI representatives looked us straight in the eye and said they waited to re-launch CrossFire until products were available in quantity. That is simply not the case as of today, and it's looking more and more like X800 series CrossFire boards are pretty much "stillborn", with at least one of ATI's major AIBs telling us they're still deciding whether or not to bring them to market. In this case, we guess "available" is a relative term. The problem with doing this - again - is that we can no longer take ATI at their word when talking availability. What we will do is show you exactly when ATI says the X1000 family of graphics cards will be made available...

No need to speculate on PR spin here.
These milestones are either hit or they're not...

The image above was taken directly from one of ATI's recent product briefings. No speculation here. So, if Radeon X1800 XLs, X1300 Pros, and X1300s aren't available in the next few days, everyone will know whether or not ATI's word is their bond. To put it bluntly, ATI has to stick to this schedule if they want to start rebuilding the reputation they earned during the Radeon 9700 - 9800 days. ATI is in much the same position NVIDIA was in when the much-maligned NV30 / GeForce FX 5800 Ultra finally launched after repeated delays. ATI also needs to stick to this schedule if they want to make any real money this holiday season; it's not just about the rep. As you can see, the Radeon X1800 XT and both X1600s are at least a month to two months away. If ATI falters even slightly, cards will not be on store shelves this holiday season, and you can't sell what's not available. And that's assuming cards ship at the specifications listed in this article. Why did you do this to yourself, ATI? A conclusion should be filled with definitive statements, not more questions.

Noticeably missing from the schedule above are the X1x00 CrossFire master cards, and not knowing when they'll be released raises some questions. However, we have confirmation from the likes of Asus and Abit that CrossFire motherboards are being readied now, with Asus already offering a BIOS upgrade for their i955X-based P5WD2 board to support CrossFire. What about CrossFire X1x00 series master cards, though? We'll see. We'll take a page from our friends in Missouri, the "Show-Me" state, on this one.

We also feel ATI has got to get Avivo working flawlessly, transparently, and quickly. The video-related features of the X1000 graphics family are second to none, at least on paper, but the current state of ATI's drivers and third-party software support doesn't expose all of Avivo's capabilities. If Avivo did everything it is supposed to do, the X1000 family of cards would have scored much better in the HQV video benchmark, and playback of DXVA accelerated WMV HD content would be working properly. We've seen Avivo functioning with our own eyes on multiple occasions, though, so we're confident ATI can remedy this particular situation; it's just a matter of getting the necessary code written and tweaked. Again, this looks like another hint of the execution issues ATI has been stumbling over as of late.

There you have it. The details surrounding ATI's new X1000 Graphics Family are no longer a secret. The GPU family is equipped with an advanced Ring-Bus memory controller and a feature-rich video engine, and in-game image quality is top notch. But delays have certainly hurt ATI. If the company executes from this point forward, however, and delivers real product, they will be in a much better position to compete with NVIDIA in the coming weeks and months. Let's all hope they pull it all together. It's never good to have just one competitor running the show.

Pros:
• Good Direct3D performance at each price point
• Dual-Link DVI Outputs
• Excellent in-game image quality
• Promising Avivo Technology
• CrossFire Ready

Cons:
• Don't know when they'll REALLY ship
• Outperformed by similarly or lower priced NVIDIA products
• No tangible real-world benefits with Avivo, yet
• Sub-Par OpenGL performance




Content Property of HotHardware.com