Next Generation Gaming Performance Analysis - A Doom 3 and Counter Strike Source Drag Race
Introduction and The Competitors
As the old saying goes, "hindsight is 20/20." That is to say, if you were in the market for a new high-end graphics card back in early August, as a direct result of the launch of id Software's much-buzzed-about title Doom 3, you may have opted to go the way of an NVIDIA product. With NVIDIA's obvious strength in the Doom 3 game engine, borne out by the benchmarks that quickly surfaced around the web, placing a bet on NVIDIA's new GeForce 6 series line-up was definitely a solid wager in that gaming scenario. Additionally, the Doom 3 game engine will probably power many a title in the coming year, so NVIDIA's scope of influence on performance could expand well into other upcoming leading-edge titles as well.
However, as we've noted time after time here at HotHardware, one benchmark data point is not indicative of the total performance picture. Regardless, as the weeks rolled on, we witnessed all the canned demo runs of Doom 3, and even a few custom demos, that showed NVIDIA dominating the benchmarks. Almost overnight, a paradigm shift seemed to take place, launching NVIDIA into the pole position of what had historically been a nip-and-tuck benchmark race across the various other game engines. So we waited for the dust to settle, and once the smoke cleared a bit, Valve stepped up to the plate as well and launched their "Counter-Strike: Source" beta, with its built-in "Graphics Stress Test," giving enthusiasts a bit more data to collectively chew on.
And so today, we're dropping down from our 10,000-foot view of it all here at HotHardware, armed with a fresh set of drivers from ATi and NVIDIA and one of the fastest gaming test rigs money can buy. The following pages detail a full Doom 3 performance progression across various maps, all using custom timedemos, as well as a multiplayer deathmatch demo recording. In addition, we've loaded up Counter-Strike: Source and its Graphics Stress Test and taken readings at medium and high resolutions with various image quality settings. Finally, as a basis for comparison, we've pitted the two top 3D graphics cards on the market against each other in a virtual drag race of sorts. The NVIDIA GeForce 6800 Ultra and the ATi Radeon X800 XT PE square off and do battle here, each on its own respective turf: ATi with its seemingly built-to-order edge in Valve's Source engine, and NVIDIA with its Doom 3 advantage.
Here's a quick refresher on the hardware-level specifics of the contenders.
(Reference specification tables: ATi Radeon X800 XT PE and NVIDIA GeForce 6800 Ultra)
At first glance, one might conclude that there is a bit of a technological mismatch between the two competing products we'll be analyzing in this showcase. On the surface, that may in fact be true. The Radeon X800 XT PE has a peak fill rate almost a full 2 Gigapixels per second higher than the GeForce 6800 Ultra's: 8.32 Gpixels/sec for the X800 XT PE versus 6.4 Gpixels/sec for the 6800 Ultra. This is due to the higher clock speed of the R420 chip that drives the Radeon X800 XT PE, 520MHz, versus NVIDIA's 400MHz NV40 GPU on the 6800 Ultra.
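For reference, the arithmetic behind those peak fill rate figures is straightforward: core clock multiplied by the number of pixel pipelines, and both of these cards are 16-pipeline designs. A quick sketch:

```python
# Peak pixel fill rate = core clock x number of pixel pipelines.
# Both the X800 XT PE and the 6800 Ultra are 16-pipeline parts.
PIPELINES = 16

def fill_rate_gpixels(core_mhz: int, pipes: int = PIPELINES) -> float:
    """Theoretical peak fill rate in Gigapixels per second."""
    return core_mhz * pipes / 1000.0

print(f"Radeon X800 XT PE:  {fill_rate_gpixels(520):.2f} Gpixels/sec")  # 8.32
print(f"GeForce 6800 Ultra: {fill_rate_gpixels(400):.2f} Gpixels/sec")  # 6.40
```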
Conversely, since the two boards are roughly on par performance-wise in many current gaming scenarios, it could be said that NVIDIA's NV40 architecture is the more efficient chip from an operations-per-clock perspective. From a power consumption perspective, however, "efficient" is hardly a term one would associate with the NV40/45 architecture. An NV40 consumes significantly more power than an ATi R420 core, and the chip is comprised of about 60 million more transistors than ATi's top-end graphics processor. If NVIDIA could actually scale the NV40 to 500MHz, the chip would be a lot faster than ATi's R420 overall. Of course, power consumption and heat dissipation issues make this an impossibility, short of a die shrink to 90-nanometer technology.
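To put that hypothetical in rough numbers: if a 400MHz NV40 is on par with a 520MHz R420 today, the NV40 must be doing about 30% more work per clock, so a 500MHz NV40 would land roughly 25% ahead, assuming performance scales linearly with clock speed. A back-of-the-envelope sketch:

```python
# If two chips deliver roughly equal performance at different clocks,
# the lower-clocked chip must do proportionally more work per clock.
NV40_MHZ, R420_MHZ = 400, 520

nv40_per_clock = R420_MHZ / NV40_MHZ  # ~1.30x R420's per-clock work

# Hypothetical: NV40 scaled to 500MHz, assuming linear clock scaling.
relative_perf = nv40_per_clock * 500 / R420_MHZ
print(f"NV40 @ 500MHz vs. R420 @ 520MHz: {relative_perf:.2f}x")  # ~1.25x
```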
And so, when we try to analyze the two competing architectures in the classical sense, it's hard to say which is the more efficient and powerful design. One thing is certain: they take very different approaches to solving the same computational problem. In many respects, given the complexity of the average modern-day graphics processor, it's an even more puzzling equation to decipher than the differences between main system processor architectures, the Pentium 4 and the Athlon 64, for example.
Which brings us back to what some would call the "layman's perspective." In other words, what do the performance numbers look like? What is the system impact in terms of power, space, and heat, and what does the bang-for-your-buck ratio add up to? Image quality is a concern as well, of course, but these days great image quality is an expectation; great frame rates are nothing without image quality that hasn't been compromised to achieve them. Now, since most folks realize that a GeForce 6800 eats up a bit more space and power than a Radeon X800, and the two top-end cards are at rough price parity with each other, the only truly meaningful metric left to analyze is performance, and more importantly, performance in next-generation game engines and the wealth of titles that will be created based on those engines.
That's the objective of the article we have for you today, and those are the questions we'll try to answer in the pages ahead. To achieve this objective, we'll be looking at benchmark readings for the top two 3D graphics cards on the market, in the top two leading-edge game engines now available (or almost available): id Software's "Doom 3" and Valve Software's "Counter-Strike: Source" beta.