NVIDIA GeForce GTX 480: GF100 Has Landed


Many of the new features of GF100 are designed to increase geometric realism while improving image quality and, of course, delivering high performance. One of GF100's new engine features, shared by other DirectX 11-class GPUs, is hardware-accelerated tessellation.


Tessellation Example

The GF100 has built-in hardware support for tessellation. As we've mentioned in the past, tessellation works by taking a basic polygon mesh and recursively applying a subdivision rule to create a more complex mesh on the fly. It's best used for amplifying animation data, morph targets, or deformation models, and it gives developers the ability to send data to the GPU at a coarser resolution. This saves artists the time it would normally take to create more complex polygonal meshes and reduces the data's memory footprint. Unlike previous tessellator implementations, the one in the GF100 adheres to the DX11 spec and does not require proprietary code.
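To make the idea concrete, here is a minimal sketch (not NVIDIA's hardware algorithm) of the recursive-subdivision rule described above: each pass splits every triangle of a coarse mesh into four by inserting edge midpoints, so detail is amplified on the fly from compact input data.

```python
# Illustrative sketch only: one level of midpoint subdivision, the kind
# of rule a tessellator applies recursively to amplify a coarse mesh.
def subdivide(vertices, triangles):
    """Split every triangle into four by inserting edge midpoints."""
    verts = list(vertices)
    midpoint_cache = {}  # shared edges get one midpoint, not two

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            ax, ay, az = verts[i]
            bx, by, bz = verts[j]
            verts.append(((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    new_tris = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_tris

# One coarse triangle becomes four; after n passes, 4**n triangles.
v, t = subdivide([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Because the coarse mesh is all that ships with the game, the memory-footprint savings the article mentions fall out of this scheme directly: the fine geometry exists only transiently on the GPU.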

  
Hair Demo

To show off the capabilities of GF100, NVIDIA has created a number of interesting demos. As many of you know, properly rendering and animating realistic hair is a difficult task. As such, many games slap helmets or caps on characters, if the characters have hair at all. NVIDIA's Hair demo, however, combines tessellation with geometry shading and leverages the compute performance of the GF100 to generate flowing hair. The images were realistically lit and smoothly animated, a far cry from what is seen in most of today's games.

 
Water Demo

Another demo NVIDIA created to illustrate tessellation on the GF100 is aptly dubbed the Water Demo. As you can see in the screenshots above, the demo takes a scene with relatively basic geometry and, through increased tessellation and displacement mapping, dramatically increases the detail in the rocks and water. The demo does not use realistic fluid dynamics, but the effect was nonetheless very good. The difference in performance between the two modes was roughly 2x: with coarse geometry the demo ran at about 300 FPS, and with high-detail geometry it ran at about 150 FPS.


  

  
New GF100 Anti-Aliasing Modes

In addition to offering much more compute performance and geometry processing than previous generations, the GF100 also features new anti-aliasing modes. The GF100 will offer higher AA performance than GT200 not only because it has more ROPs, but because enhancements have been made to each ROP as well. With GF100, data compression in the ROPs is more effective, more samples can be used, and transparency AA quality is better thanks to hardware-accelerated jittered sampling.

Jittered sampling changes the sampling pattern randomly on a per-pixel basis, which helps remove banding and noise and produces edges that are more pleasing and natural to the eye. The GF100 also offers a new 32x CSAA mode (8 color samples plus 24 coverage samples), in addition to support for 33 levels of alpha-blended transparency. The effect of the new AA mode is much smoother edges, as seen in the screenshots above. The new mode also preserves more detail in transparent textures, such as chain-link fences or railings, which are sometimes rendered incorrectly when viewed at an angle.
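The jittered-sampling idea can be sketched in a few lines (this is an illustration of the general technique, not GF100's hardware pattern): each cell of a regular grid gets one sample at a random offset, and seeding the generator per pixel varies the pattern from pixel to pixel, trading visible banding for less objectionable noise.

```python
# Illustrative sketch of jittered (stratified) sampling: one random
# sample per cell of a regular grid, with a per-pixel seed so the
# pattern differs between neighboring pixels.
import random

def jittered_samples(grid, seed):
    """Return grid*grid sample positions inside the unit pixel [0,1)^2."""
    rng = random.Random(seed)      # per-pixel seed -> per-pixel pattern
    cell = 1.0 / grid
    return [((x + rng.random()) * cell, (y + rng.random()) * cell)
            for y in range(grid) for x in range(grid)]

samples = jittered_samples(4, seed=42)   # 16 samples for one pixel
```

Compared with a fixed ordered grid, the stratification keeps samples well distributed across the pixel while the randomness breaks up the regular structure that causes banding on near-horizontal or near-vertical edges.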


And here I was just about to complain that it's Friday and no news on the GTX 480 :-P


Shouldn't "High Power Consumption" be labeled under "Hot"? ;)

 

*ducks*


I upgraded my gateway m685e workstation from vista to windows 7. Later, I downgraded the video card drivers downloaded from windows update to the vista compatible drivers on the nvidia website. Rumor is the vista compatible drivers were not powering the fan after a reboot resulting in many dead gpus. I am blind and have already lost an IT job for lack of a working laptop.


canyon1985:

I upgraded my gateway m685e workstation from vista to windows 7. Later, I downgraded the video card drivers downloaded from windows update to the vista compatible drivers on the nvidia website. Rumor is the vista compatible drivers were not powering the fan after a reboot resulting in many dead gpus. I am blind and have already lost an IT job for lack of a working laptop.

I'm not sure if there's an off-topic question in there or what, but if you updated/downgraded your drivers via Windows Update, then you never got the bad (196.75) drivers.

If what I read is true, Microsoft never had them available for download via Windows Update; you would have had to manually download them from nVidia to get them.


Meh, I can't believe that after all the hype. It's only marginally faster than an ATI card, and it should have been $50 less for that kind of performance. I'm thinking this is part of a marketing scheme by NVIDIA to get people to buy 2 cards and run them in SLI so they can get world-class performance and NVIDIA gets more money, then again I might be crazy.

I'm also surprised by the fact that it makes more heat and consumes more power. I wonder what they were thinking when they made the card; at least they could have done an Intel thing and made it more efficient so that it'd leapfrog over ATI just like the Core 2 Duo did with AMD's processors back in 2006.

And a card with all 512 cores unlocked... I guess it's going to be 10-20 fps faster than the ATI cards released right now, but damn are they going to need a cooler that can cool this card effectively.

I'm going to wait for the GeForce GTX 470 review before I make up my mind, but I think NVIDIA has a GeForce FX on its hands.


I'm pretty surprised with the results... IMHO GTX 480 is meh and I'm gonna buy Ati this time.


I think it is lacking in some areas due to drivers. It's a brand new piece of hardware.

Comparing it to the previous gen of Nvidia cards, it's hugely impressive. A single-chip GTX 480 is faster than a dual-chip GTX 295. On top of that, in some benchmarks it can nearly hang with the 5970. When paired in SLI it really shined.

In single-card mode in the HAWX benchmark it was 20% faster than the 5870. Then in dual-card mode it was 40-60% faster than a pair of 5870s. On top of that, a pair of 5970s is only 20% faster while having twice as many chips to work with.

More evidence of potential for driver optimization becomes apparent here as well, since scaling from single card to dual picked up 90% at one resolution but only 70% at the other. With some optimization Nvidia could squeeze out a few more frames.

When a dual chip Nvidia card comes out.... it'll be a beast.

Also, pay close attention to the Unigine Heaven benches, as they can be a bit deceiving. While it looks like the 480 is only slightly faster than the 5870s, it is in actuality nearly twice as fast. Add to that that it was faster than the 5970.

I think for this card DX11 is where it's at. This thing will probably, kindly, hand the ATI card its a$$ in DX11 games with tessellation. And as we all know, DX11 is the future. So I don't mind if it's not the best at DX10 as long as it's great at playing the games of the future :-)


Acarzt,

I'll hit the rest of this in the morning, but I'm going to respond to one thing you wrote now. It's physically impossible for NVIDIA to build a dual-GPU GTX 480 on a 40nm process. Board power for the single-GPU flavor of the card is 250W. The maximum amount of power you can feed a PCI-Express card is 300W. NVIDIA isn't going to break the PCI-Express specification (they'd get no support from motherboard vendors, all of whom would have to specifically vet their motherboards against a non-standard component).
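The arithmetic behind that 300W ceiling can be sketched as follows (the 75/75/150 W connector figures are the commonly cited PCI-Express limits, assumed here rather than taken from the article):

```python
# Rough power-budget check for a hypothetical dual-GPU GTX 480.
SLOT = 75         # watts deliverable through the x16 slot itself
SIX_PIN = 75      # watts per 6-pin PCI-E connector
EIGHT_PIN = 150   # watts per 8-pin PCI-E connector

pcie_ceiling = SLOT + SIX_PIN + EIGHT_PIN   # 300 W max for one card
gtx480_board_power = 250                    # single-GPU GTX 480 TDP

# Two GTX 480 GPUs on one board would need roughly twice the budget,
# well past what the spec allows a single card to draw.
dual_gpu_estimate = 2 * gtx480_board_power  # 500 W
fits_spec = dual_gpu_estimate <= pcie_ceiling
```

This is why the comment argues NVIDIA would first need to cut per-GPU power by 50-60W (as the GTX 285 to GTX 295 transition did) before a dual-GPU board becomes feasible within the spec.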

It's possible that these first parts are terrible examples of 40nm, so yes, NVIDIA could re-spin the silicon. Statistically, that won't happen for at least six months; NVIDIA's already respun it (and TSMC's yields aren't exactly fabulous even now).  Farther out, Fermi probably will go through a die shrink at some point in 2011 once 28nm tech is ramping, but that's a year away.

Sure, NV could opt to build a svelte mid-range dual-GPU, but that wouldn't be a dual GTX 480. It's very likely that we'll see NVIDIA roll a full 512-core Fermi and I think it's a good bet that they'll build a Fermi 2.0 on an improved 40nm die, but you can only rearchitect so much. If we look at the GTX 285 / GTX 295 as indicative, NV would need to pull the power consumption of a GTX 480 down by 50-60W to have enough room in the PCI-E spec to build a dualie. The only time you see leaps that massive on the same process, maybe, is if you compare A0 hardware to the mature, high-yield silicon you're building 1-2 years later.


Joel, that's a good point I did not even think of. Beyond the difficulty you describe, if you ran such a card in SLI it would be pulling 500 watts, so forget Tri-SLI. It would also totally flood the PCI-E bus on anything but a socket-1366 board, or maybe an AM3; I don't know if the AM3 even has enough bandwidth either, not to mention $1500 for the video cards alone with Tri-SLI. Then you get to the PSU: in standard SLI the cards would be pulling 500 watts, and in Tri they'd be pulling 750 watts, which you could get with nothing less than a server power block that would not fit in a standard case anyway (and would probably cost at least $500 more). On top of that, I doubt the bus would even have the bandwidth available to actually run it, whereas two 5970s are still fine (largely because of the reduced power draw and, of course, lower heat).


Well I never expected a dual GPU card based on the GTX480. That would be pretty awesome, but ridiculous to expect.

Even the GTX 295 isn't based on the 280. It's essentially two 260s on one board. And with that, it's a good bit faster than any other single-card offering of its generation from Nvidia.

I was expecting a Fermi equivalent of the 260 to be part of the dual-GPU card. I would expect a 40-50% increase over the current single-GPU 480. I don't think that would be unreasonable to expect :-)

I also don't expect this any time soon, lol. The GTX 295 didn't come out until some time after the GTX 280 was launched. If I'm remembering the timeline correctly, I think it actually came out after the refresh of those cards.
