In November 2002, during COMDEX in Las Vegas, NVIDIA introduced their eagerly anticipated GeForce FX GPU, formerly codenamed NV30. The high-end GeForce FX part on display at COMDEX was NVIDIA's answer to ATi's DirectX 9-compliant R300, the core that powers the very popular Radeon 9700 and 9500 product lines. At the time, NVIDIA did not disclose final clock speeds or specifications for the inevitable GeForce FX derivatives, but we all knew they would eventually come. Then, in late January, we learned that the high-end part in the GeForce FX lineup would be branded the 5800 Ultra. The GeForce FX 5800 Ultra ships with a core clock speed of 500MHz and DDR-II memory clocked at an effective 1GHz data rate. The "non-Ultra" GeForce FX 5800 ships with lower core and memory speeds, but it is essentially the same part.
Today, NVIDIA is unveiling three new products in the GeForce FX line: the 5200, the 5200 Ultra and the 5600 Ultra. These three new products are based on GPUs derived from the "original" NV30 core, but with some architectural differences that reduce costs, and ultimately performance. The GeForce FX 5200 parts are based on the NV34 core, while the GeForce FX 5600 Ultra is based on the NV31. With these new parts, NVIDIA is trying to entice the mainstream consumers who comprise the vast majority of the buying public. Top-of-the-line products like the GeForce FX 5800 Ultra generate plenty of excitement and buzz, but the bottom line is that sales of these high-end products account for only a small percentage of overall revenue potential. The big money is made by selling millions of "mainstream" parts to OEMs and PC users who don't care much about having the latest and greatest. The new GeForce FX 5200, 5200 Ultra and 5600 Ultra give NVIDIA the ability to offer products with full DirectX 9 capabilities to all market segments. Unlike last year's GeForce4 MX release, which garnered some negative press because those parts lacked any hardware pixel or vertex shaders, NVIDIA's new mainstream products don't skimp on features. The NV31 and NV34 cores have full GeForce FX feature support, including DirectX 9 pixel and vertex shaders 2.0+. What do you say we take a closer look?
Specifications & Features of the NVIDIA GeForce
FX 5200 GPUs |
DX9
For All! |
NV34 - First Mainstream DX9 GPU

Cinematic Shading for the mainstream:
- Full GeForce FX feature support, including DX9 vertex & pixel shader 2.0+
- High-precision rendering, up to 128-bit floating-point color
Performance:
- 2X GeForce4 MX performance
- DX9 optimizations and support
- AGP 8X enables up to 2.1GB/sec of bandwidth (see the quick calculation below)
Best-in-class features and reliability:
- Industry-renowned unified driver architecture (UDA)
- Integrated TV encoder and TMDS transmitters
- Dual integrated 350MHz RAMDACs
- Integrated full hardware MPEG-2 decoder
- HDTV support
Engineered for compatibility with the latest gaming and multimedia software:
- First and only DX9 part in its price category
THE GEFORCE FX CORE
A WAFER OF GEFORCE FX CORES
CHART TAKEN FROM NVIDIA TECHNICAL BRIEF
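Curious where that 2.1GB/sec AGP 8X figure comes from? Here's a quick back-of-the-envelope check. The only number not on the spec sheet above is AGP's ~66.67MHz base clock, which is standard across all AGP modes.

```python
# Back-of-the-envelope AGP 8X peak bandwidth check.
base_clock_hz = 66.67e6   # AGP base clock (~66MHz, common to all AGP modes)
bus_width_bytes = 4       # AGP is a 32-bit (4-byte) bus
transfers_per_clock = 8   # "8X" mode strobes data eight times per clock

bandwidth = base_clock_hz * bus_width_bytes * transfers_per_clock
print(f"AGP 8X peak: {bandwidth / 1e9:.2f} GB/sec")  # ~2.13 GB/sec
```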
The first cards we'll talk about are the GeForce FX 5200 and 5200 Ultra, the products based on the GPU formerly codenamed NV34. As you can see from the feature list and comparison chart above, the NV34 employs NVIDIA's "CineFX" engine, supports DirectX 9 and AGP 8X, and has dual integrated 350MHz RAMDACs, which means it is capable of powering dual independent displays. (For a more complete look at the full capabilities of the GeForce FX GPU, click here.) Unlike the other GeForce FX products, though, the 5200s will be built using TSMC's .15μ manufacturing process. Even though the GeForce FX 5200s aren't being built on the more advanced .13μ process, they should still hit fairly high clock speeds without generating excessive heat, because far fewer transistors are needed to build this GPU. In fact, the 5200 comprises "only" 45 million transistors, making it roughly 64% smaller than the 125-million-transistor GeForce FX 5800! This relatively low transistor count, coupled with the use of the more mature .15μ manufacturing process, should mean NVIDIA will be able to quickly ramp up production of the GeForce FX 5200s. The low transistor count also results in lower overall power consumption and heat dissipation.
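That 64% figure checks out; here's the quick arithmetic based on the transistor counts quoted above:

```python
# NV34 (GeForce FX 5200) vs. NV30 (GeForce FX 5800) transistor budgets.
nv34 = 45_000_000
nv30 = 125_000_000
print(f"NV34 uses {1 - nv34 / nv30:.0%} fewer transistors than NV30")  # 64%
```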
Lower heat output means less sophisticated cooling is needed to keep the GPU operating within its temperature limits. According to the latest information given to us by NVIDIA, the GeForce FX 5200 Ultra should ship with a core clock speed of 325MHz and BGA DDR-I memory also clocked at 325MHz (650MHz DDR). The 5200 Ultra GPU is cooled using a single-slot, active solution similar to what is found on the GeForce4 Ti line of products. Final clock speeds for the GeForce FX 5200 "non-Ultra" have not been disclosed just yet, but considering it is passively cooled and does not require a secondary power connection, we'd guess the core clock speed will fall within the 225-275MHz range. The GeForce FX 5200 also uses standard TSOP memory packaging rather than BGAs, so the memory will probably be clocked much lower as well. Expect the non-Ultra 5200s to ship with memory speeds hovering around 400MHz DDR.
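Those memory clocks translate into very different bandwidth budgets. As a rough sketch, assuming a 128-bit memory interface (NVIDIA had not confirmed bus widths for these boards), peak memory bandwidth works out like this:

```python
# Rough peak memory bandwidth. The 128-bit (16-byte) bus width is an
# assumption; NVIDIA had not confirmed bus widths for these boards.
BUS_WIDTH_BYTES = 16

def peak_bandwidth_gb(effective_mhz: float) -> float:
    """Peak bandwidth in GB/sec for a given effective DDR data rate in MHz."""
    return effective_mhz * 1e6 * BUS_WIDTH_BYTES / 1e9

print(f"5200 Ultra (650MHz DDR): {peak_bandwidth_gb(650):.1f} GB/sec")  # ~10.4
print(f"5200 (est. 400MHz DDR):  {peak_bandwidth_gb(400):.1f} GB/sec")  # ~6.4
```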
Clock speeds are not the only things that differentiate the GeForce FX 5200s from the other products in the FX line-up. Obviously, with a transistor count so much lower than that of the GeForce FX 5800, something had to be removed from the die. In the case of the GeForce FX 5200s, the color and z-compression logic used in the Intellisample antialiasing engine has been removed, along with four of the pixel pipelines. These measures mean less efficient use of memory bandwidth and lower fillrates, but the performance of these cards should still be quite good, especially considering their projected price points. As you'll see later, NVIDIA is claiming performance twice that of the GeForce4 MX and Radeon 9000.
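To put the pipeline cut in perspective, here's a rough theoretical fillrate comparison. It takes the pipeline counts at face value (eight on the NV30, four remaining on the NV34), which glosses over the NV30's much-debated internal configuration, so treat these as ballpark figures only:

```python
# Rough theoretical pixel fillrate: pipelines x core clock.
# Pipeline counts taken at face value; treat results as ballpark only.
def fillrate_mpixels(pipelines: int, core_mhz: int) -> int:
    return pipelines * core_mhz

print(f"GeForce FX 5800 Ultra: {fillrate_mpixels(8, 500)} Mpixels/sec")  # 4000
print(f"GeForce FX 5200 Ultra: {fillrate_mpixels(4, 325)} Mpixels/sec")  # 1300
```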
A Peek At The 5600
Ultra...