NVIDIA GeForce GTS 250 Mainstream GPU - HotHardware



NVIDIA has gotten a lot of mileage out of their G92 GPU architecture. It started with the GeForce 8800 GT, which featured a 65nm variant of the G92 GPU, and continued up through the GeForce 9800 GTX+, which used an updated version manufactured on a more advanced 55nm process.  The G92 has been featured on no fewer than seven different GeForce-branded desktop graphics cards, not to mention the slew of mobile GeForces based on the G92 that are also in production.

Although it has been around for quite some time now, NVIDIA is launching yet another graphics card based on the G92 today: the GeForce GTS 250. The GeForce GTS 250's name suggests it is something new, but it is not much different from previous GeForce 9800 series graphics cards. Instead, what NVIDIA is doing today is using the GeForce GTS 250 launch to align more of their product stack with the new nomenclature introduced with the GeForce GTX series, and to introduce a value-priced G92-based graphics card with a 1GB frame buffer. Take a look...

EVGA GeForce GTS 250

NVIDIA GeForce GTS 250
Specifications and Features

Looking closely at the spec list above, you'll note that the GeForce GTS 250 is a lot like a GeForce 9800 GTX+, just with a larger frame buffer and a new name.  For the underlying technology behind NVIDIA's G92 family of GPUs, as well as detailed explanations of many of its features, we'd suggest perusing the following HotHardware articles regarding NVIDIA's previous and current GPU architectures:

Our GeForce 8800 GTX launch article goes in depth on NVIDIA's previous generation G80 GPU architecture, which is fundamentally very similar to the G92, save for its wider memory interface, and explains NVIDIA's CUDA GPGPU technology. Also, our GeForce 8800 GT and 8800 GTS 512MB articles cover much of the technology employed in NVIDIA's G92 GPU architecture, and the GPU PhysX Pack Preview details some of the features and capabilities offered by NVIDIA's PhysX physics acceleration technology.


I have a hard time feeling like this is anything other than Nvidia trying to sell more old cards. Ever try to explain to someone that an 8800GT and a 9800GT are the same thing? I have. It's annoying.


Fo sheezy... marketing bs ftl! lol


It seems like the only place where the 250 was better was at high resolutions with AA enabled, bringing the frame buffer into the equation... though not at playable fps.

I wonder if disabling AA at those resolutions would be playable, and if so, by removing the frame buffer from the equation, would there be a difference in fps?

Either way, I understand why Nvidia would re-package a G92 ($$$), but still, it's rather shady to completely rename it.


Don't forget, we test at the highest image quality settings to better show each card's strengths and weaknesses--simply lowering the AA and aniso levels, and dropping a few in-game image quality settings will boost performance considerably.


Exactly. My point is that in, say, Far Cry 2 at 2560x1600, all that AA (not so much the AF) is killing the 9800GTX+ due to the smaller frame buffer. However, the game isn't playable at those settings on either it or the GTX 260 (13-25fps). So while I understand why you tested at those settings, and I agree it should be done, the point is that it's unplayable.

I agree that dropping the AA will boost fps significantly, but because doing so removes the GTS 250's hardware advantage from the equation (the extra frame buffer), I wonder if there would be much of a difference between the three cards. My hunch is no.

This is supported at the 1920x1200 resolution, where the GTX 260 and 9800GTX+ are very close. That resolution with AA enabled does not fill the 512MB frame buffer, and so the equivalent GPUs perform roughly equally.

To me this says: these cards are not made for resolutions over 1920x1200 with any eye candy on, and at that resolution and below, they are nearly the same card in terms of performance.
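For a rough sense of why AA pushes the 512MB cards over the edge at high resolutions, here's a back-of-envelope sketch. It assumes a deliberately simple model (32-bit color, a multisampled color buffer plus a matching depth/stencil buffer) and ignores textures, shadow maps, and post-processing buffers, which share the same memory pool and usually dominate — it's illustrative arithmetic, not how any actual driver allocates memory:

```python
def render_target_mb(width, height, msaa_samples, bytes_per_pixel=4):
    """Rough render-target footprint in MiB: a multisampled color buffer
    plus a matching multisampled depth/stencil buffer."""
    color = width * height * bytes_per_pixel * msaa_samples
    depth = width * height * bytes_per_pixel * msaa_samples
    return (color + depth) / (1024 * 1024)

for width, height in [(1920, 1200), (2560, 1600)]:
    for samples in (1, 4):
        mb = render_target_mb(width, height, samples)
        print(f"{width}x{height} at {samples}xAA: ~{mb:.0f} MB of render targets")
```

The render targets alone quadruple with 4xAA (roughly 31 MB vs 125 MB at 2560x1600 under this model), and everything else still has to fit in the same 512MB — which lines up with the 1GB card only pulling ahead once AA is enabled at the highest resolution.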

And I assume that the GTS 250 will be replacing the 9800GTX+ at retail, not selling side by side?


Great review, though. I was thinking of getting a 6800 for a PhysX card, and was debating waiting for the 250, due to its smaller size, but this leads me to think the 9800 is the way to go.


Sure, it is pretty cheap, but it's still in the same price range as the card it is a 'clone' of.  CUDA and PhysX are pretty much worthless.  The 1GB of memory is pretty much worthless too, since the settings where it helps the card are still unplayable.

Nvidia is going to sell a bunch of these, and I mean a bunch, to little Joe Average who thinks he is getting a version of the latest and greatest instead of just renamed and repackaged two-year-old tech.  I feel really sorry for the poor guy with an 8800GTS who buys one of these thinking, "Well, it just came out and is in the 200 family, so it has to be faster than my current card."  And all of you *KNOW* this will happen.  Hell, some of the shops (out of greed or clueless salespeople) will even push that exact upgrade on the public.

Rebranding old products as if they were members of the current generation is wrong, since it really only serves to take advantage of the less informed public.

The reason I said CUDA and PhysX are pretty much worthless is that they do not provide anything to Joe Average or even Joe Enthusiast.  As far as GPU computing goes, CUDA is a more robust and widely adopted solution than ATI Stream.  CUDA has made more inroads in the consumer space, and especially in the HPC space, than Stream.  There aren't that many differences in the programming model, but CUDA for C does have some advantages over Brook+.  However, ATI opens up its ISA down to the metal (alongside a virtual ISA), while NVIDIA only offers a virtual ISA.

The key is adoption: technology is only of value if the end user has a use for it.  Neither CUDA nor Stream really provides anything of note to end users.  OpenCL will most likely close the gap between NVIDIA and ATI and should put them both on the same playing field.  This is actually pretty important for Nvidia, since they need some means to keep sales up with Intel entering the high-end GPU market soon, and working with a common standard is easier than trying to force your own standard on one or two other companies.  A common standard will also hopefully have close to the same effect as DirectX; that is, it will increase the use of the technology by end users, since it will run (though with varying performance) no matter whose hardware they have.


This has GeForce 4 MX written all over it.  Sell old tech looking like new tech, profit.  The underpants gnomes would be proud.  It is interesting to see this compared to the 4850 everywhere, as clearly everyone started their reviews before the 4870 price cuts, which now put it at the GTS 250's level.  So most of these reviews were outdated the moment they were published.  HotHardware was nice enough to include it in their review, so it is painfully obvious that the 4870 1GB is the better deal after the price drops.
