ATi Radeon X800 XT & X800 Pro - Heartburn For The NV40
We'll stay on the DirectX track for another look at a DX9-enhanced title and then step back a bit to a wildly successful game engine built on OpenGL technology.
*Tomb Raider AOD*
Although Tomb Raider: Angel of Darkness won't be winning any "Game of the Year" awards, it is one of the more advanced DirectX games currently available. We've recorded a custom demo of Lara jogging through an indoor garden area in the "Prague3" map. When using the Pixel Shader 2.0 code path, this area of the game utilizes a DOF (depth of field) blurring effect to enhance the sense of distance and range. We ran our custom demo at a resolution of 1024x768 and then again at 1600x1200, using both the Pixel Shader 1.4 and 2.0 code paths, and took additional readings with and without 4X anti-aliasing in the PS 2.0 tests.
As we've seen so many times before, at 1024x768 the graphics subsystem is being held back; overall performance is CPU limited, allowing even the legacy Radeon 9800 and GeForce FX cards to make a showing.
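To picture the bottleneck, consider a toy model (all of the numbers here are invented for illustration, not taken from our charts): the delivered frame rate is set by whichever of the CPU or the GPU takes longer to finish a frame.

```cpp
#include <algorithm>
#include <cstdio>

// Toy model of why low-resolution scores bunch up: the frame rate is set
// by whichever of the CPU or the GPU takes longer per frame. All numbers
// below are invented for illustration, not measured values.
int main() {
    const double cpu_ms = 10.0;          // fixed CPU cost per frame (~100 fps cap)
    const double gpu_ms_per_mpix = 7.0;  // assumed GPU cost per megapixel drawn

    const int widths[]  = {1024, 1600};
    const int heights[] = { 768, 1200};

    for (int i = 0; i < 2; ++i) {
        double mpix     = widths[i] * (double)heights[i] / 1e6;
        double gpu_ms   = gpu_ms_per_mpix * mpix;
        double frame_ms = std::max(cpu_ms, gpu_ms);  // slower unit sets the pace
        std::printf("%dx%d: %.0f fps (%s limited)\n",
                    widths[i], heights[i], 1000.0 / frame_ms,
                    gpu_ms > cpu_ms ? "GPU" : "CPU");
    }
    return 0;
}
```

In this model, at 1024x768 the GPU finishes its work well before the CPU does, so every reasonably fast card posts essentially the same score; the GPU only becomes the limiter once the pixel load climbs.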
The remedy is to simply turn the resolution up a notch or two and watch the field spread out. Here the Radeon X800 XT puts the hurt on all competitors, besting the GeForce 6800 Ultra by more than 10 frames per second in some of the tests, as well as NVIDIA's hand-selected 6800 Ultra Extreme (again represented as the 6800 Ultra+ in our graphs).
The 12-pipe Radeon X800 Pro overtakes its 16-pipe GeForce 6800GT counterpart in two of the tests, losing ground only when fill rate is taxed a bit more with 4X AA enabled. This is a clear indication that the GeForce 6800's weakness in this benchmark is its pixel shader performance, specifically with this game engine's implementation of the depth of field blur and other shader effects.
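To see why a depth of field pass leans so heavily on the pixel shaders, here is a rough sketch of the usual technique, written in plain C++ for clarity. This is our own illustration of the general approach, not the game's actual shader code; every name and parameter below is invented.

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

// Per-pixel depth of field blend: mix a sharp render of the scene with a
// blurred copy, based on how far the pixel sits from the focal plane.
// A PS 2.0 shader performs this same arithmetic on the GPU for every pixel.
Color dof_blend(Color sharp, Color blurred, float depth,
                float focal_depth, float focal_range) {
    // Circle-of-confusion factor: 0 when in focus, 1 when fully blurred.
    float coc = std::min(std::fabs(depth - focal_depth) / focal_range, 1.0f);

    // Linearly interpolate between the sharp and blurred images.
    return { sharp.r + (blurred.r - sharp.r) * coc,
             sharp.g + (blurred.g - sharp.g) * coc,
             sharp.b + (blurred.b - sharp.b) * coc };
}
```

Because a blend like this runs for every pixel on screen, every frame, on top of the extra texture fetches needed for the blurred copy, cards with stronger per-pipe shader throughput can pull ahead here even without a raw fill rate advantage.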
*Wolfenstein ET*
We also ran through a batch of timedemos with the OpenGL-based game Wolfenstein: Enemy Territory. Wolf ET is a free, standalone multiplayer game based on id Software's impressive Return to Castle Wolfenstein, which was released a couple of years back. It uses a heavily modified version of the Quake 3 engine, which makes it a very easy-to-use benchmarking tool, as outlined below. We created our own custom demo and used the built-in timedemo feature to check each card's frame rate. The tests were run at 1024x768 and again at 1600x1200: without anti-aliasing, with 4X AA, with 4X AA plus 8X anisotropic filtering, and lastly with 4X AA plus 16X anisotropic filtering.
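For readers who want to reproduce this kind of test, recording and replaying a benchmark demo in a Quake 3 engine title generally boils down to a handful of console commands. The demo name below is a placeholder, and exact demo-file handling can vary a bit from title to title:

```
/record mydemo       // start recording a demo during play
/stoprecord          // stop recording
/timedemo 1          // switch demo playback into benchmark mode
/demo mydemo         // replay the demo; the average fps is reported at the end
```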
Can we get a collective yawn here? Once again, things are pretty much CPU limited, with scores lining up right on top of each other at each setting for each respective card. We even have a couple of testing anomalies, with the GeForce 6800GT slipping past the 6800 Ultra in one of the tests, though that result is well within the benchmark's margin of error. The only thing to note here is ATi's deficit in OpenGL, where all of ATi's cards lose to their NVIDIA-based counterparts.
As we turned the resolution up to 1600x1200, the numbers began to skew a bit more, and it was painfully obvious that ATi still has some serious catch-up work to do with Quake engine based OpenGL titles. The scores here left us wondering how things will shape up for ATi when id finally ships Doom 3. Hopefully, the folks in ATi's Toronto performance lab have their eye on the ball for that title.
Article Update - 5/6/2004
We've recently become aware that NVIDIA's new driver revision (61.11), which we used for the Wolfenstein tests in this section, has an issue with the "Trilinear Optimization" on/off switch in its advanced settings control panel. With the 61.11 Forceware driver, Trilinear Optimization is set to "on" all the time. As a result, the GeForce 6800 Ultra numbers you see in the Wolfenstein ET performance charts above were taken with a combination of bilinear and trilinear sampling techniques while anisotropic filtering was applied. The ATi cards, on the other hand, were running in pure trilinear mode, which is obviously not a completely fair methodology for measuring competitive performance between architectures.
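To make the distinction concrete, here is a rough sketch, in C++, of the difference between pure trilinear filtering and an "optimized" trilinear that falls back to bilinear across most of each mip band. This is our own illustration of the general idea, not NVIDIA's actual hardware or driver logic, and the 0.2 threshold is an invented value.

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Stand-in for a bilinear fetch from one mip level; returns a dummy value
// here just so the sketch compiles. A real sampler reads and blends four texels.
Color bilinear_sample(int mip, float u, float v) {
    (void)u; (void)v;
    float g = 1.0f / (float)(1 << mip);  // dummy color derived from the mip level
    return { g, g, g };
}

// Pure trilinear: always blend bilinear samples from the two nearest mip levels.
Color trilinear(float u, float v, float lod) {
    int mip = (int)std::floor(lod);
    float f = lod - (float)mip;          // blend factor between the two mips
    Color a = bilinear_sample(mip, u, v);
    Color b = bilinear_sample(mip + 1, u, v);
    return { a.r + (b.r - a.r) * f,
             a.g + (b.g - a.g) * f,
             a.b + (b.b - a.b) * f };
}

// "Optimized" trilinear: when the blend factor sits near 0 or 1, skip the
// second mip fetch and the blend entirely, doing cheap bilinear instead.
Color trilinear_optimized(float u, float v, float lod) {
    int mip = (int)std::floor(lod);
    float f = lod - (float)mip;
    if (f < 0.2f) return bilinear_sample(mip, u, v);      // one fetch only
    if (f > 0.8f) return bilinear_sample(mip + 1, u, v);  // one fetch only
    return trilinear(u, v, lod);         // full two-mip blend near the boundary
}
```

The fetches and blends skipped in that middle branch are where the performance gain in the chart below comes from; the trade-off is a band of plain bilinear filtering within each mip level.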
The graph below shows the effects of turning the trilinear optimization on and off in the Wolfenstein benchmark at high resolution. These scores were taken with NVIDIA's 60.72 Forceware driver.
As you can see, in the 4X AA plus 8X and 16X aniso tests, setting the trilinear optimization to "on" nets NVIDIA a 5% to 10% gain, from 8X aniso to 16X aniso respectively. We felt it was important to inform our readers of the impact this setting in NVIDIA's drivers has on overall performance comparisons with ATi's top card, the Radeon X800 XT. Please feel free to email the author if you have any questions.