XFX GeForce 6200TC
Date: Sep 12, 2005
Author: Sean Pelletier


Without question, it is typically far more interesting to read about the latest flagship $600 graphics card than it is to hear about a budget card's performance. Unfortunately, the vast majority of us don't have the luxury of affording those high-end cards and must instead focus our attention on more realistic alternatives. In the past, a mainstream-level card meant lackluster performance at best in most games, with few advantages over even the despised integrated graphics solutions.

With the launch of the GeForce 6200 back in December 2004, NVIDIA brought key new features such as Shader Model 3.0 and TurboCache to the entry-level gamer. By moving to the TurboCache architecture and efficiently utilizing system memory, less memory needs to be installed on the card itself. This dramatically lowers the cost of producing the card, keeping prices down and leaving more headroom for the higher-end features mentioned above. As seen here in our initial review of the GeForce 6200, the card turned in impressive performance considering its sub-$100 price. Months later, we're taking a look at XFX's implementation of the GeForce 6200 GPU to see how the TurboCache architecture fares in the latest crop of games.

Specifications of the XFX GeForce 6200TC 128MB Video Card
Performance on a Budget
GeForce 6200

Graphics Bus: PCI Express
Memory Size: supporting 128MB*
GPU/Memory Clock: 350 / 700 MHz
Memory Interface:
Effective Memory Bandwidth (GB/sec):
Fill Rate (billion texels/sec):
Vertices/sec (million):
Fabrication Process: 0.11 micron
Connectors: S-Video, DVI, VGA

* System memory must be 512MB or higher.
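
A couple of the fields above are left blank on the spec sheet, but rough figures can be derived from numbers given elsewhere in this review: the fill rate follows from the 350MHz core clock and the GPU's four pixel pipelines. The memory bandwidth figure additionally assumes a 64-bit local interface, which is our assumption rather than a quoted spec, so treat the sketch below as an illustration only.

```python
# Back-of-the-envelope figures for the spec table's blank fields.
# Core clock and pipeline count come from this review; the 64-bit
# local memory interface is an ASSUMPTION, not a quoted spec.

core_clock_mhz = 350          # GPU clock (from the table)
pixel_pipelines = 4           # per the architecture page
mem_clock_mhz = 700           # effective (DDR) memory clock
assumed_bus_bits = 64         # assumption -- not stated in the table

fill_rate_gtexels = core_clock_mhz * 1e6 * pixel_pipelines / 1e9
local_bw_gb = mem_clock_mhz * 1e6 * (assumed_bus_bits / 8) / 1e9

print(f"Fill rate: {fill_rate_gtexels:.1f} billion texels/sec")
print(f"Local bandwidth (assumed 64-bit bus): {local_bw_gb:.1f} GB/sec")
```

Note that on a TurboCache card the local figure is only part of the story, since the GPU also draws on system memory bandwidth over PCI Express.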

NVIDIA TurboCache technology
Shares the capacity and bandwidth of dedicated video memory and dynamically available system memory for turbocharged performance and a larger total graphics memory.

PCI Express support
Designed to run perfectly with the next-generation PCI Express bus architecture. This new bus doubles the bandwidth of AGP 8x delivering over 4GB/s in both upstream and downstream data transfers.
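
The "over 4GB/s in each direction" figure follows directly from the x16 link: each first-generation PCI Express lane signals at 2.5GT/s, and 8b/10b line coding leaves 250MB/s of usable payload per lane per direction. A quick sketch of that arithmetic:

```python
# First-generation PCI Express x16 bandwidth, per direction.
lanes = 16
signaling_gt_s = 2.5            # gigatransfers/sec per lane
encoding_efficiency = 8 / 10    # 8b/10b line coding overhead

# 2.5 GT/s * 0.8 = 2.0 Gbit/s = 250 MB/s usable per lane, per direction
per_lane_mb = signaling_gt_s * encoding_efficiency * 1000 / 8
total_gb = lanes * per_lane_mb / 1000

print(f"{per_lane_mb:.0f} MB/s per lane -> {total_gb:.0f} GB/s per direction")
```

Because the link is full duplex, the same 4GB/s is available upstream and downstream simultaneously, which is what makes TurboCache practical in the first place.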

Microsoft DirectX 9.0 Shader Model 3.0 support
Ensures top-notch compatibility and performance for all DirectX 9 applications, including Shader Model 3.0 titles.

NVIDIA CineFX 3.0 engine
Powers the next generation of cinematic realism. Full support for Microsoft DirectX 9.0 Shader Model 3.0 enables stunning and complex special effects. Next-generation shader architecture delivers faster and smoother game play.

NVIDIA UltraShadow II technology
Enhances the performance of bleeding-edge games, like id Software's Doom III, that feature complex scenes with multiple light sources and objects. Second-generation technology delivers more than 4x the shadow processing power over the previous generation.

NVIDIA Intellisample 3.0 technology
The industry's fastest antialiasing delivers ultra-realistic visuals, with no jagged edges, at lightning-fast speeds. Visual quality is taken to new heights through a new rotated grid sampling pattern.

NVIDIA PureVideo technology
The combination of the GeForce 6 Series GPU's high-definition video processor and NVIDIA video decode software delivers unprecedented picture clarity, smooth video, accurate color, and precise image scaling for all video content to turn your PC into a high-end home theater.

NVIDIA ForceWare Unified Driver Architecture (UDA)
Delivers rock-solid forward and backward compatibility with software drivers. Simplifies upgrading to a new NVIDIA product or driver because all NVIDIA products work with the same driver software.

NVIDIA nView multi-display technology
Advanced technology provides the ultimate in viewing flexibility and control for multiple monitors.

NVIDIA Digital Vibrance Control 3.0 technology
Allows the user to adjust color controls digitally to compensate for the lighting conditions of their workspace, in order to achieve accurate, bright colors in all conditions.

OpenGL 1.5 optimizations and support
Ensures top-notch compatibility and performance for all OpenGL applications.

Given the GeForce 6200's sub-$100 price tag, it's no surprise to see that the card comes with rather minimalist packaging. Opening the box, one finds the bare essentials ranging from a driver CD and user manual to an S-Video cable and the card itself.



The Card


XFX's GeForce 6200TC 128MB Up Close
Closer Inspection

At the heart of the XFX card is a GeForce 6200TC GPU that is designed around the PCI-Express interface. Although there are AGP variants of the GeForce 6200 GPU, those featuring TurboCache are strictly PCI-Express only for reasons we'll illustrate on the following page. Taking a look at the front and back of the card, we find that XFX has managed to keep the board layout extremely clean and efficient. Here, we find ample room for the board's passive heatsink and more than enough room for the two discrete memory modules.


Removing the passive heatsink, we find the surprisingly small die of the GeForce 6200. Based on a 0.11-micron process, the entire die measures roughly 13mm square. This small GPU runs at a default clock speed of 350MHz and, as seen above, can be cooled efficiently by a passive heatsink. The heatsink itself is a simple finned block of aluminum with rubber-like feet to ensure flat, proper placement on the card. Retention duties are handled by two simple plastic push-pins which lock the heatsink in place. One item of note is the large amount of thermal paste applied at the factory. Those looking to overclock this card and find its limits would do well to clean the core and reapply a thinner film of high-quality thermal paste such as Arctic Silver.


On each side of the board, we find a 2.86ns Samsung GDDR memory module rated for an operating frequency of 700MHz. Surprisingly enough, these 128Mbit modules managed to clock up to 783MHz without any additional cooling whatsoever. A quick look at the specifications Samsung lists for this family of memory modules shows versions rated as high as 900MHz, so overclocking success will vary with the bin quality of the modules on a given card.
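
The 700MHz rating follows from the module's 2.86ns speed grade: the reciprocal of the access time gives the command clock, and DDR signaling doubles the effective data rate, landing just under the marketed 700MHz figure. A quick sketch of that conversion (the 2.86ns value is from the chip markings; the round-up to 700MHz is the vendor's marketing convention):

```python
# Convert a DRAM speed grade in nanoseconds to its rated clocks.
access_time_ns = 2.86                        # Samsung module marking

command_clock_mhz = 1000 / access_time_ns    # ~349.65 MHz command clock
effective_mhz = 2 * command_clock_mhz        # DDR: two transfers/cycle,
                                             # marketed as "700MHz"

print(f"Command clock: {command_clock_mhz:.1f} MHz")
print(f"Effective (DDR) data rate: {effective_mhz:.1f} MHz")
```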


Looking at the card as a whole, we see three main video outputs. The combination of DVI and standard VGA outputs allows dual monitors to be used, and the presence of an S-Video output gives users the option of connecting to a TV or an additional display. Moving our attention to the opposite end of the card, we find that the board lacks any power header. The relatively low power requirements of the GeForce 6200 GPU are well within the capabilities of the PCI Express bus, so no external connector is required.


The GF 6200 TC Architecture


From a technical standpoint, the GeForce 6200 is a direct derivative of the NV44 architecture, albeit with only four pipelines. In short, this means the GeForce 6200 is a fully DX9-compliant card, complete with support for Shader Model 3.0, PureVideo, and 32-bit floating-point data types. Going through the feature chart below, we find that the GeForce 6200 series shares many of the same features as the higher-end cards. Unfortunately, a few limitations were introduced to the core, with both color and Z-compression left out of the architecture. Although this has no direct negative impact on standard gaming, it has some very adverse effects once any FSAA is introduced.

Without question, the largest advancement in this GPU is the implementation of NVIDIA's new TurboCache technology. Essentially, this technology takes advantage of the enormous increase in bi-directional bandwidth afforded by PCI Express to let the GPU use system memory in addition to its own local memory. This allows vendors to save significant amounts of money by putting less local memory on the graphics card, with little to no performance degradation. Comparing the standard architecture diagram below to the TurboCache model on the right, we see the architectural changes that were necessary to accomplish this. In each case, however, the GPU's four pixel pipelines and three vertex shaders remain unchanged. For a complete analysis of the features and functionality of the GeForce 6200 GPU, please reference Marco's in-depth review here.
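
The "supporting 128MB" wording on the box can be read as the card's small local frame buffer topped up with a system-memory allocation claimed over PCI Express. A toy sketch of that accounting follows; the 32MB local figure matches the two 128Mbit modules on this card, but the fixed split and the function itself are purely illustrative, not NVIDIA's actual (dynamic) allocator:

```python
# Toy accounting for a TurboCache card: the advertised "128MB" is a
# small local frame buffer plus system memory borrowed over PCI Express.
# The fixed split below is illustrative -- NVIDIA's real allocator is
# dynamic and driver-managed.

def turbocache_split(local_mb, advertised_mb=128, min_system_mb=512,
                     installed_system_mb=1024):
    """Return (local, borrowed) MB making up the advertised pool."""
    if installed_system_mb < min_system_mb:
        # Mirrors the spec-sheet footnote: 512MB+ of system RAM required.
        raise ValueError("TurboCache requires at least 512MB of system RAM")
    borrowed_mb = advertised_mb - local_mb
    return local_mb, borrowed_mb

# Two 128Mbit (16MB) GDDR modules give this card 32MB of local memory.
local, borrowed = turbocache_split(local_mb=32)
print(f"{local}MB local + {borrowed}MB system = {local + borrowed}MB pool")
```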



Test System and 3DMark05


HotHardware's Test System
Not all are created equal...

Socket T - Pentium 4 (3.4GHz)

Intel D915GUX motherboard
I915G Chipset
Kingston HyperX PC-5400 DDR2 – 1GB(2x512MB)
XFX GeForce 6200 TurboCache supporting 128MB

Intel Graphics Media Accelerator 900
Sapphire Radeon X300 128MB
On-Board 10/100/1000 Ethernet
On-Board Audio
WD 120GB 7200rpm PATA Hard Drive
NVIDIA ForceWare 77.77
ATI Catalyst 5.8
Windows XP Pro SP2

Performance Comparisons With 3DMark05
Futuremark's Latest - The Jury is Still Out...

3DMark05 is the latest installment in a long line of synthetic 3D graphics benchmarks, dating back to late 1998.  3DMark99 came out in October of 1998 and was followed by the very popular DirectX 7 benchmark, 3DMark2000, roughly two years later.  The DirectX 8.1-compliant 3DMark2001 was released shortly thereafter, and it too was a very popular tool used by many hardcore gamers.  3DMark05 is a fairly advanced DirectX 9 benchmarking tool.  We ran 3DMark05's default test (1,024 x 768) on all of the cards we tested and have the overall results for you posted below...

After looking at the scores from 3DMark05, we realize that nobody armed with any of these cards is going to be setting records any time soon. However, we should note that these sub-$100 cards all handle a full array of DX9 tests without issue, and compared with the lowly Intel IGP, they are head and shoulders above it in terms of performance. It's interesting to see that although the Radeon X300, with 128MB of memory soldered on the board, has a distinct advantage over the GeForce 6200TC model, the XFX card keeps surprisingly close despite relying on a significant amount of system memory. When overclocked, the gap narrows further, to a point that would be tough to discern in gameplay. That notion is put to the test on the following pages with a few real-world benchmarks of actual gameplay.

F.E.A.R Singleplayer Demo


Performance Comparisons With F.E.A.R SP Demo
Candy for the Mind and Eyes...

One of the most highly anticipated titles of 2005, Monolith's new psychological thriller F.E.A.R. promises to be as thrilling to the mind as it is to the eyes. Taking a look at the minimum system requirements, we see that you'll need at least a 1.7GHz Pentium 4 with 512MB of system memory and a 64MB graphics card of Radeon 9000 or GeForce4 Ti class or better. With a TurboCache graphics card and an Intel integrated solution in the mix, it will be interesting to see how the benchmark results turn out. Using the newly released singleplayer demo, we put each card through its paces to see how it fared with this promising new title. All graphics settings within the game were set to "Medium" with Trilinear Filtering enabled. Benchmark runs were then completed at resolutions of 640x480 and 800x600.

Taking a look at the results, it is somewhat surprising to see the XFX GeForce 6200TC turn in the highest performance at each resolution we tested. As one would expect, the Intel IGP did a horrendous job of keeping up, though it was surprising to see the game run at all on hardware well below the minimum system requirements. Even at these low resolutions, the game is taxing enough to keep each card's performance well below the 30fps generally considered necessary for adequate gameplay. It should be noted, however, that the in-game settings were deliberately kept at a higher level to ensure the game was presented as the developer intended. If you intend to spend a great deal of time with this game, you're better off investing more in your graphics budget, as these cards all have their hands full with this taxing new title.

Starship Troopers Demo


Performance Comparisons With Starship Troopers Demo
The Bugs in this Game are Intentional!

Starship Troopers
Yet another taxing new title to arrive this year, the Starship Troopers game is based upon the successful TriStar movie. Built on the new Swarm Engine, which can take advantage of all sorts of DX9 bells and whistles, it is a good candidate for benchmarking graphics cards. As was the case with the F.E.A.R. demo, the Starship Troopers demo has some pretty steep minimum system requirements: a 2GHz Pentium 4, 512MB of RAM, and a GeForce FX 5900 or Radeon 9800. Without question, the cards being tested here are borderline at best against these requirements. Regardless, this benchmark should make it easy to see where the limitations of these architectures lie. All graphics settings were set to "Medium" with benchmark runs taken at resolutions of 800x600 and 1024x768.

Going through the results, we see that every card had a tough time with this benchmark. Despite running at lower resolutions, no card was able to reach a 30fps average framerate. As was the case with the F.E.A.R. demo, the image quality settings were purposely kept as high as possible to preserve the intended look and feel of the game. If you've fallen in love with this game, you'd do well to step up to the next level of card from either NVIDIA or ATI. Fortunately, XFX has a wide array of cards based on NVIDIA's entire family of GPUs.

Battlefield 2


Benchmarks & Comparisons With Battlefield 2
Locked and Loaded...

Battlefield 2
With the original Battlefield 1942 easily one of the most popular multiplayer games of all time, Battlefield 2 is the latest addition to the franchise and comes equipped with a host of new features and eye candy. With massive maps and an impressive level of detail, it's no surprise to see some pretty steep hardware requirements: the minimum acceptable hardware includes a 1.7GHz Pentium 4, 512MB of system memory, and a 128MB DX9 graphics card equal to or better than a GeForce FX 5700 or Radeon 8500. Again, we're faced with a situation where the cards being tested are borderline at best against these requirements, so it should be interesting to see how (and if) the game runs on this crop of cards. All graphics settings were set to "Medium" with the exception of Texture Filtering, which was set to "High". Benchmark runs were then taken at resolutions of 800x600 and 1024x768, with and without 4x FSAA and 8x AF enabled.

With the resolution set at 800x600, we find the XFX GeForce 6200TC turning in some very playable framerates, the highest of the group. However, the roughly 10fps advantage is almost completely reversed once 4x FSAA and 8x AF are enabled. Here, the Radeon X300 takes the lead, exploiting the fact that the GeForce 6200TC lacks color and Z-compression. Unfortunately, the Intel IGP refused to load the game correctly and could not be benchmarked.

Increasing the resolution to 1024x768, we find that the GeForce 6200TC retains the performance crown as long as AA and AF are not in the picture. At this resolution, the XFX card averages 42fps, which is very smooth and playable, while the Radeon X300 trails with an average of 35fps. AA and AF were attempted at this resolution as well, though performance was poor enough that we cancelled the tests and declared the settings unusable.


Half-Life 2


Benchmarks & Comparisons With Half-Life 2
It Shipped!  And it's GOOD!

Half Life 2
Thanks to the dedication of millions of gamers and a huge mod-community, the original Half-Life became one of the most successful first person shooters of all time.  So when Valve announced Half-Life 2 was close to completion in mid 2003, gamers the world over began chomping at the bit.  Unfortunately, thanks to a compromised internal network; the theft of a portion of the game's source code; a couple of missed deadlines; and a tumultuous relationship with the game's distributor, Vivendi Universal, we all had to wait until November 2004 to get our hands on this gem.  We benchmarked Half-Life 2 with a new custom timedemo that teams us up with "Dog" through an explosive clash with the enemy. These tests were run at resolutions of 800x600, 1024x768, and 1280x1024 without any AA or aniso and with 4X anti-aliasing and 8X anisotropic filtering enabled concurrently.


At a resolution of 800x600, the Radeon X300 produced the highest average framerate at roughly 65fps. Not far behind, the XFX GeForce 6200TC returned an average of 55fps. Surprisingly enough, the Intel IGP struggled through the test but actually managed to complete the benchmark, albeit at a lackluster 5fps. As we would expect, enabling 4x FSAA and 8x AF produced a large performance hit for the XFX card, since it lacks color and Z-compression. The Intel IGP turned the benchmark into a slideshow and could not complete the test. Once again, the ATI card reigned supreme with a very playable average framerate of 40fps.

Raising the resolution to 1024x768, we find a few surprises among the expected results. As we would imagine, the Radeon X300 walks away with the fastest average framerate both at default settings and with FSAA and AF enabled. At default settings, the gap between the cards was again roughly 10fps. Surprisingly, though, the gap with FSAA and AF enabled was less than 2fps, in stark contrast to the previous resolution, where it was almost 13fps. This shows that the GeForce 6200TC has enough horsepower to handle the increase in resolution and that the key missing architectural components (namely color and Z-compression) are to blame for the bottleneck.

Lastly, we punished the cards by raising the resolution to 1280x1024. Here, the Radeon X300 barely broke the 30fps playability threshold, with the XFX card coming close at 25fps. Turning on FSAA and AF rendered each card nearly unplayable, with results well under 20fps. Before complaining, we must remind ourselves that we're dealing with sub-$100 graphics cards and have little room to ask for more in such an intensive game.



From an architectural standpoint, the GeForce 6200 with TurboCache can be declared a complete success. NVIDIA's TurboCache technology looked incredible on paper, and its real-world performance and actual cost back up every claim. Granted, you'll never confuse the gaming experience on this card with that of a GeForce 7800GTX or Radeon X850XT. However, at a street price we've seen as low as $58, this XFX GeForce 6200TC is an absolute steal. Throughout our suite of benchmarks, the card overclocked well and met or exceeded the performance of the more expensive Radeon X300 with 128MB of native memory on the card. For those on a limited budget, the XFX GeForce 6200TC is a very viable option.


When reading about the latest $600 flagship graphics cards, it becomes easy to turn our noses up at these budget cards and their performance. However, like most things in life, these cards have their niche and do an excellent job of filling it. Take, for example, a second PC in your house. Wouldn't it be convenient to have a decent graphics card in that system when a friend comes over and wants to trade bullets with you in Battlefield 2? Perhaps you've got a Shuttle SFF system packed with hard drives and need a nice cool-running card to go frag and browse for "artistic" movies and pics. In either case, the XFX GeForce 6200TC does an exceptional job for the price. XFX may not have provided the fanciest box or the most robust bundle, but it has brought the budget consumer a card that impresses in both cost and performance. When looking for a cheap card that will still let you play your favorite titles, you'd do well to put the XFX GeForce 6200TC on your short list.

We Give the XFX GeForce 6200TC 128MB Video Card a Hot Hardware Heat Meter Rating of 9...


Content Property of HotHardware.com