NVIDIA GF100 Architecture and Feature Preview

Back in late September of last year, NVIDIA disclosed some information regarding its next-generation GPU architecture, codenamed "Fermi". At the time, actual product names and detailed specifications were not revealed, nor was performance in 3D games, but high-level information about the architecture, its strong focus on compute performance, and its broader compatibility with computational applications was discussed.

We covered much of the early information regarding Fermi in this article. To recap some of the more pertinent details found there: the GPU codenamed Fermi will feature over 3 billion transistors and be produced using TSMC's 40nm process. For reference, AMD's RV870, which is used in the ATI Radeon HD 5870, is comprised of roughly 2.15 billion transistors and is also manufactured at 40nm. Fermi will be outfitted with more than double the number of cores of the current GT200, 512 in total. It will also offer 8x the peak double-precision compute performance of its predecessor, and Fermi will be the first GPU architecture to support ECC. ECC support will allow Fermi to compensate for soft error rate (SER) issues and also potentially allow it to scale to higher densities, mitigating the issue in larger designs. The GPU will also be able to execute C++ code.
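To make that last point concrete, here is a minimal, hypothetical CUDA sketch (ours, not NVIDIA's sample code; the class and kernel names are invented) of the sort of C++ feature, device-side virtual dispatch, that compute capability 2.0 hardware is expected to support:

```
#include <cstdio>
#include <cuda_runtime.h>

// Invented example: device-side virtual dispatch, one of the C++
// features enabled on Fermi-class (compute capability 2.0) hardware.
struct Shape {
    __device__ virtual float area() const { return 0.0f; }
};

struct Square : public Shape {
    float side;
    __device__ Square(float s) : side(s) {}
    __device__ virtual float area() const { return side * side; }
};

__global__ void area_kernel(float* out) {
    // Objects with virtual functions must be constructed in device
    // code; a vtable built on the host is not valid on the GPU.
    Square sq(3.0f);
    Shape* s = &sq;              // virtual call through a base pointer
    *out = s->area();
}

int main() {
    float h_out = 0.0f;
    float* d_out = 0;
    cudaMalloc((void**)&d_out, sizeof(float));
    area_kernel<<<1, 1>>>(d_out);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("area = %.1f\n", h_out);   // expect 9.0
    cudaFree(d_out);
    return 0;
}
```

Built with nvcc -arch=sm_20, the virtual call resolves on the GPU itself rather than being flattened away at compile time.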

During the GPU Technology Conference that took place in San Jose, NVIDIA's CEO Jen-Hsun Huang showed off the first Fermi-based, Tesla-branded prototype boards and spoke at length about the compute performance of the architecture. Game performance wasn't a focus of Huang's speech, however, which led some to speculate that NVIDIA was forgetting about gamers with this generation of GPUs. That is obviously not the case; Fermi is going to be a powerful gaming GPU as well. The simple fact of the matter is, NVIDIA is late with its next-gen GPU architecture, and the company chose a different venue--the Consumer Electronics Show--to discuss Fermi's gaming-oriented features...


Via: HotHardware
Comments
Anakhoresis 4 years ago

From listening to people theoretically more knowledgeable about hardware than I am (which really would not be that difficult, to be honest; I'm more into practical information than technical detail, e.g., this card goes in that slot), I've heard that the boost in double-precision floating-point performance is something games pretty much don't utilize (if they even can), so it won't help frame rates or gaming performance. Yet it's built into the architecture, so it can't just be cut from, say, the GeForce line of Fermi cards (if they continue that line) to make them cheaper.

Basically, it sounded like the cards will carry a large piece of silicon that the consumer pays for but that games won't actually use at all; something that just raises costs with no benefit for the average gamer who buys one. Could anyone shed light on this?

gibbersome 4 years ago

A lot of technical information, but it's also showcasing some of the things Nvidia's DirectX 11-enabled cards will be able to do. The free-flowing hair and water look incredible.

With the higher anti-aliasing modes, ray tracing, and tessellation, Nvidia is showing how much more powerful Fermi is than the GT200 series. And I think we're talking multiples, at least 2-3 times the performance in certain areas.

Hard numbers will have to bear that out, but it's safe to say Nvidia has something very powerful up its sleeve.

rapid1 4 years ago

When you look through the full article and the slides they show, check out the one of the hair. I studied that one pretty closely, and it looks remarkably similar to the real thing. With something like hair, the strands are minuscule to the point of blending together, yet in the Nvidia demo you could see thousands of separate hairs in the image. So the detail and construction level of this card looks to be awesome. However, we will have to see how that affects rendering speed and the like before a final verdict.

gibbersome 4 years ago

The hair and the water pics both look amazing.

Though I thought it was showing off DirectX 11 tessellation, rather than a feature of the Nvidia cards.

The Supersonic Sled demo would be unique to Nvidia because it employs PhysX.

rapid1 4 years ago

That's true, but when I first looked at that picture I wondered why they were showing a blond wig. Then I looked at it closer, scrolled down, read the details, and thought, wow, that almost looks like real hair. And I'm seeing it through my current GPU as a webpage image, which has to lose a huge share of the original detail; the real thing must look even better.

gibbersome 4 years ago

I see what you mean. Man, I would have loved to see a video demonstration of the free-flowing hair and the water. With the speed of news coming out about Fermi, I think we'll have a demo pretty soon!

rapid1 4 years ago

Well, what I am saying is that the picture you can see is awesome, and the real rendering on your PC would be far better still. So this thing should blow away everything on the market, I would imagine, but it also changes the general functionality of a GPU as well. The focus and delivery mechanisms, as well as the software platform, are in many ways totally different, or at least the focus is. I am pretty confident the reason the 5970 is two tweaked 5850 GPUs is that ATI is working on something new as well. I also think that with realistic imagery we are on a cusp. Look at Avatar: it is animation done almost completely by computers, with real actors at the same time. It is a meshing of technologies that I expect to see on your PC in a relatively short amount of time. The 5870 started it, this Nvidia hardware expands it, and ATI will expand it again, as usual. The impact on digital imagery and its availability to the normal person will change, though.

gibbersome 4 years ago

I was able to find some video demos on YouTube and posted them a couple of posts above. They're worth checking out! My current graphics card would melt if I tried running any of the demos on it, lol.

The Streaming Multiprocessors on the GF100 have taken a giant leap forward:

  • 32 CUDA Cores (4x compared to GT200)
  • 16 or 48KB of Shared Memory (3x compared to GT200)
  • 16 or 48KB of L1 Cache (There was no L1 on GT200)
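If you're curious how that configurable 16/48KB split would actually get used, here's a rough sketch assuming the CUDA runtime's cudaFuncSetCacheConfig call, which is how the per-kernel preference is exposed (the kernel itself is just a placeholder I made up):

```
#include <cuda_runtime.h>

// Placeholder kernel that stages data through shared memory, so it
// would want the larger 48KB shared-memory allocation.
__global__ void copy_kernel(const float* in, float* out, int n) {
    extern __shared__ float tile[];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        tile[threadIdx.x] = in[i];
        __syncthreads();
        out[i] = tile[threadIdx.x];
    }
}

int main() {
    // Request the 48KB shared / 16KB L1 split for this kernel; a
    // cache-heavy kernel would pass cudaFuncCachePreferL1 instead.
    cudaFuncSetCacheConfig(copy_kernel, cudaFuncCachePreferShared);

    const int n = 256;
    float *d_in = 0, *d_out = 0;
    cudaMalloc((void**)&d_in, n * sizeof(float));
    cudaMalloc((void**)&d_out, n * sizeof(float));
    copy_kernel<<<1, n, n * sizeof(float)>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```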

We're seeing some major increases in hardware power and we're also seeing real improvements in geometric processing (tessellation and displacement mapping). Rob over at Techgage mentions it in his review:

"While pixel shaders have had an increasing focus from GPU generation to the next, there's been almost no love to the triangle generator. Compared to the GeForce FX (2003), the shading horsepower has increased by 150x, while the geometric processing has increased by only 3x."

 

You're right that the new Nvidia cards will surpass the offerings by ATI, and Nvidia has not tried to hide that fact. Look at this graph they released of tessellation performance (red is ATI):

The 5870's max FPS barely touches the min of the GF100.

rapid1 4 years ago

Yeah, those video demos are awesome, especially the hair one and the water. I still think the detail in the hair is amazing; when the wind blows it looks real.

 

3vi1 4 years ago

[quote user="gibbersome"]Though I thought it was showing off DirectX 11 tessellation, rather than a feature of the Nvidia cards.

[/quote]

Tessellation is a feature of the card, not DirectX. The DirectX 11 standard adds APIs to control it and specifies that only hardware supporting it can be called "DirectX 11 compatible". OpenGL 3.2 supports the same tessellation with the same cards, even on Windows XP (and Linux).

 

Tessellation controlled by OpenGL:

[View:http://www.youtube.com/watch?v=C8TKUlMzcbw&feature=channel]

 

I agree about the hair and water vids - really nice looking stuff, though the frame rates on the hair seem a bit troubling considering there's nothing else being rendered in the demo.

gibbersome 4 years ago

Ahh, thanks. But that hardware is not unique to Nvidia, unlike PhysX, which runs on CUDA-enabled GeForce GPUs. The hair and water tessellation is possible with any DirectX 11 compatible card. I guess that's the point I was making.

About the hair vids, I was thinking the 25 FPS was very low as well, but this kind of detail won't be present in games, not yet anyway. Even something 10% as detailed would be a huge improvement in realism in games.

rapid1 4 years ago

Nice post, 3vi1, I really like the OpenGL video. And on your gaming comment, gibbersome, yeah, that kind of detail would be flat out sick in a game. That's also another reason I was pointing out the hair thing: can you imagine playing or making a character as detailed as that in a multiplayer game? I am sure we'll see it within the next year or two. The next one I am waiting for is SWTOR (Star Wars: The Old Republic), which has finally been given a release date of spring 2011. If anyone likes MMO-type games, check it out: http://www.swtor.com/

Soupstyle 4 years ago

I am waiting for SWtOR too; I am guessing it gets pushed to summer or early fall of 2011. I think the end product will be worth the wait, though.

rapid1 4 years ago

As for the video card thing, Nvidia has put this off too long, I think. As far as I remember, it has actually been over a year since they released anything that was not a re-brand (shrunken maybe, but the same architecture except for memory changes) rather than something 100% new. Before this it was about every 8-9 months for both companies, and each release would be at least a major update. They did release the 295, but it's just a dual-GPU card; the hardware is not really new (tweaked, maybe). Either way, ATI has probably known considerably more than any of us about what Nvidia intended to do, most likely for over a year, so they should be prepared. So I would think we will see something major from them, not that the 5800-and-up line isn't already a concern for current hardware. However, if you had been in a major back and forth with a competitor, and for some reason they were slowed down, and you knew the basic timeline for the slowdown as well as what they intended to put out next, what would you do?

 

rapid1 4 years ago

As for SWtOR, I imagine it will be late spring or early summer (I say June or the first two weeks of July). As I posted on the forum, I appreciate their strategy with the release, as anything else released in the MMO world, even WOW, usually takes some time to get straightened out with all the promised features active and running well. I personally am not really a WOW player; I tried it for my free month and had like three level-20 characters. I started in the second week of EQ1, have been on every beta from the original release until Dragons (about 10 straight), and was a GM on three servers. The graphics are just too low quality for me, not to mention I played for a very, very long time.

For some reason they refer to players like me as hardcore, at least in MMOs. I also played the Vanguard beta, and for quite some time afterward; graphically it is better than anything, including Conan (which I also beta'd; I have beta tested almost every major successful MMO release since EQ1, even WOW, though it would never work at first, and neither would AoC for that matter; DND Online was just a joke; the Warhammer beta was kind of cool, but a couple weeks after release I was bored; Heroes was cool, though I did not really like Villains much; but enough listing of betas, trust me, I have probably beta'd at least 30 MMOs all together, some not major releases). Either way, VG has pretty much had its death warrant signed for now, as they have not gone past the initial release other than some added quests for higher levels (but not enough), the stability work that was done after three months (believe it or not, VG on release was very demanding graphically and basically would not play right with anything under a 1900-series ATI or a high-end Nvidia card, kind of like Crysis; you also needed as much RAM as you could get, a high-class CPU, and a fast HD just for general play at low settings), basically no new abilities, and a +5 level cap. It's a joke.

Either way, enough of my wisdom on MMOs, except to say that I believe, and hope it is true, that SWtOR will be complete when it ships, which is actually very rare, and stable on release. Then they can worry about expansions on the normal one-to-two-year time span, and knowing BioWare I think it will be close to yearly, which is more in line with something like WOW, where the company (Blizzard; oh yeah, I beta'd Diablo and Diablo 2 as well, my first betas) actually wants their game to be a success and pushes it.

The thing with Sony Online makes no sense to me, really, as a successful game with regularly updated content and technology is nothing but a bank in itself. Your customer base buys the original, usually within a week to a month of release, pays you monthly access fees, and buys every upgrade you put out for it. Money-wise, these games are the most successful because of the fat loot they make after release, which is nonstop and recurring.

gibbersome 4 years ago

I know many people are eagerly waiting for SWtOR. Also, if you're an Elder Scrolls fan, it's been leaked that they're developing an MMO as well. After the success of WOW, MMORPGs really seem to be all the rage with large game developers. Personally, I'm more of a single-player RPG guy. I like my games to have a definite beginning and end to them. Baldur's Gate was my first and I've been hooked ever since.

Back to Nvidia: yeah, I agree they may have waited too long. If ATI's supply problems resolve later this month, many more 5xxx series cards will be sold in the upcoming weeks, especially with a moderate price drop accompanying the increased supply. Nvidia recognizes this, and that's why you saw the push to release the lower-to-mid-range GT300 series mobile cards. They stood to lose out on the mobile market and may have even tried to undercut ATI.

Again, if Fermi is the game changer Nvidia has been hyping it up to be, then it won't matter in the end. Right now, though, I get the feeling that Nvidia bit off more than it could chew.

Soupstyle 4 years ago

You are right, that hair demo is pretty impressive too.

gibbersome 4 years ago

I wonder how much of that is going to translate into actual gaming. Remember, that hair demo close-up was pulling 25 FPS. While we may not see hair that detailed in games anytime soon, anything one-tenth as good would look awesome.

Quinid 4 years ago

@Anakhoresis

I agree with everything you said about gaming.

As for double-precision floating point, this feature could be a big deal for GPU computation projects that use CUDA, like GPUID. It could save projects the wasted time of double-checking calculations, as the GPU may catch a lot of the miscalculations itself.
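To see why the extra precision matters for long-running number crunching, here's a toy CUDA sketch (illustrative only, not code from any real project): the float accumulator stalls at 2^24 while the double keeps counting:

```
#include <cstdio>
#include <cuda_runtime.h>

// Toy illustration (not from any real project): a float running sum
// stops growing at 2^24 = 16,777,216 because adding 1.0f no longer
// changes the nearest representable value; the double keeps counting.
__global__ void sum_kernel(float* f_out, double* d_out, int n) {
    float  fs = 0.0f;
    double ds = 0.0;
    for (int i = 0; i < n; ++i) {
        fs += 1.0f;
        ds += 1.0;
    }
    *f_out = fs;
    *d_out = ds;
}

int main() {
    const int n = 20000000;              // 20 million additions
    float* d_f = 0;
    double* d_d = 0;
    cudaMalloc((void**)&d_f, sizeof(float));
    cudaMalloc((void**)&d_d, sizeof(double));
    sum_kernel<<<1, 1>>>(d_f, d_d, n);
    float h_f; double h_d;
    cudaMemcpy(&h_f, d_f, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&h_d, d_d, sizeof(double), cudaMemcpyDeviceToHost);
    // float prints 16777216 (stuck at 2^24); double prints 20000000.
    std::printf("float: %.0f   double: %.0f   exact: %d\n", h_f, h_d, n);
    cudaFree(d_f);
    cudaFree(d_d);
    return 0;
}
```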

gibbersome 4 years ago

ATI also has double-precision floating-point support in their 5800 series lineup.

Interestingly, the 5700 series doesn't, but if you go back two years, the 4770 had it. I'm guessing this was a cost-cutting measure for ATI.

 

@Quinid Thanks for answering.

rapid1 4 years ago

Yeah, it makes no sense otherwise to leave it out, really. Look at the 5970: it's two OC'd 5850 cores, not two 5870 cores. So there's little stuff they do to save money. The funny part is that you could get two 5850's for the same price as a 5970 or less, and they outperform it in CrossFire.

gibbersome 4 years ago

Actually, it's two underclocked 5870's slapped together. Here are the specs; look at the number of stream processors and texture units, they're double a single 5870's numbers.

Being underclocked, ATI leaves it up to the user to overclock, as long as they're willing to deal with the extra wattage and heat. Two 5850's in CrossFire just manage to beat this card in most in-game tests, but at a cost of 46 extra watts.

But you're right that in terms of price it's a toss-up. Two 5850's will cost around the same or slightly less than one 5970.

rapid1 4 years ago

Yeah, when they first announced the 5850 and 5870 I was having flashbacks to the 9000 series (I think it was the 9550 models, but I can't remember). They did that with a couple of high-end and mid-range cards back then: you could download the right BIOS for the card, flash it, and it would load as the highest-end card and perform the same as well. If I remember right, you had to have the ones with a red PCB.

gibbersome 4 years ago

Finally, some delicious news about Fermi!

Thanks Marco, been waiting for this!

realneil 4 years ago

It's good news. Now I wait for the new cards to be introduced, get old, and get less expensive (they will be frightfully expensive, I think). And then, using the tried and true 'trickle-down' effect, I'll get one.

Time is on my side. Yes, It is,..............

Soupstyle 4 years ago

The biggest price drop will occur if/when ATI releases a response to the GF100.

That being said, the three-card Supersonic Sled demo is awesome; wish I could run it on my computer! (No PhysX card here ;_;)

:P

Bighorse 4 years ago

I really wish Nvidia was giving us more juicy info, but hey, this stuff sounds powerful. Fermi is gonna be amazing solo; then when you hit SLI and tri-SLI it's gonna be heart-stopping. Developers had better make some sweet stuff to push these new cards to their limits so we can all drool at the beautiful graphics these things can push.

 

Also, on a side note, I bet it will play Crysis, LOL.

Der Meister 4 years ago

Very cool... if it comes out to be what it claims, then I might trade in my two 275's for one of them...

gibbersome 4 years ago

Yep, this seems like the prelude to a launch.

The good news is that at least we know they have a Fermi-based video card on hand.

 

Here are some demo vids:

[View:http://www.youtube.com/watch?v=lTY6vPxccs0&feature=player_embedded]

 

[View:http://www.youtube.com/watch?v=YEddrY0fA4Q&feature=related]

 

[View:http://www.youtube.com/watch?v=K3m9rPltA_s&feature=related]

rapid1 4 years ago

We will see when it gets here. From what I was reading, this one is going to be different, focus-wise, all the way around. I hope DX11 gets picked up, unlike DX10 or 10.1, because it looks like it has a lot of advantages visually, especially in the realism department.

 

InfinityzeN 4 years ago

All this talk is worthless until the thing actually comes out. Plus, it is going to be really expensive, since there is such a large chunk of its hardware that is worthless for gaming and cannot be cut from the core.

The biggest thing, though, is that ATI could possibly release their next-gen card a few months after this thing hits the market. Nvidia is way behind in the production/research cycle, and we might see releases with the performance leader swinging back and forth every three or four months.

gibbersome 4 years ago

ATI releasing their next-gen tech so soon? Their current high-end cards are already in such high demand that they're selling above MSRP.

I do agree with you, though: all this is mostly conjecture until we see it backed up by in-game performance.

rapid1 4 years ago

Oh, and gibbersome: the reason they were running over MSRP was a delay while the plant upgraded its chip-making equipment. It is supposed to be over now, and the cards are supposed to be widely available at normal prices by the end of the month.

InfinityzeN 4 years ago

The point was that Nvidia is so far behind in releasing their next product that the next-gen ATI part could be released within a few months of it. If this Nvidia part bombs compared to current ATI stuff, or even just barely beats it, then ATI can hold off for a little while, since their parts are cheaper to build and can undercut Nvidia.

If it does what this article says and beats ATI by a huge chunk (50%+), then you can be sure the next-gen ATI part will be coming out ASAP. With this really long delay, I expect that ASAP to be ~3 months.

Marco C 4 years ago

Keep in mind, that slide was provided by NVIDIA, and it's only a specific 60-second snapshot of a certain part of that benchmark.

bob_on_the_cob 4 years ago

Have you guys seen hardware yet?

Soupstyle 4 years ago

They had them on display at CES, but I doubt they let anyone open the cards up or test them yet.

Marco C 4 years ago

Yes, we saw hardware, but they were pre-production, prototype parts. No word yet on what final specs will be.

rapid1 4 years ago

That is what I was thinking as well, Marco. We keep seeing things about these cards, even some supposed benchmarks versus a 285, but I am betting those were run internally at Nvidia, and I believe they're about as real as a unicorn. I want to see something that's actually going to be released (a price range would be nice too, but not mandatory), with someone like you or Dave testing it, before I believe anything.

gibbersome 4 years ago

Nvidia is getting sued by Rambus over its graphics chip technology. Nvidia is just one of many companies Rambus is pursuing with the goal of gaining lucrative settlements and royalties; they already got $900 million from Samsung over memory chip technology.

3vi1 4 years ago

[quote user="gibbersome"]

Nvidia is getting sued by Rambus over its graphics chip technology. Nvidia is just one of many companies Rambus is pursuing with the goal of gaining lucrative settlements and royalties; they already got $900 million from Samsung over memory chip technology.

[/quote]

And unfortunately Rambus won their nVidia suit, and are going after Asustek Computer, Asus Computer International, BFG Technologies, Biostar Microtech and Biostar Microtech International, Diablotek, EVGA Corp., G.B.T., Giga-Byte Technology, Hewlett-Packard, MSI Computer, Micro-Star International, Palit Multimedia, Palit Microsystems, Pine Technology Holdings and Sparkle Computer.

For anyone who doesn't know Rambus: these were the guys who participated in JEDEC (the Joint Electron Device Engineering Council) to develop new memory standards while concurrently working behind the scenes to patent everything (without letting the other JEDEC members know that the new standard included their patented IP), then withdrew from the group so as not to have to follow JEDEC's 'reasonable and non-discriminatory' patent policies.

When companies began implementing the new standards, Rambus surfaced and started suing everyone and their grandmother. I think it says something when the "lawsuits" section of your Wikipedia page is four times longer than the section on your technology.

Soupstyle 4 years ago

Yup, Rambus are IP-holding-company jerks.

Hynix and other companies are also in lawsuits with them.

gibbersome 4 years ago

[quote user="3vi1"]

And unfortunately Rambus won their nVidia suit, and are going after Asustek Computer, Asus Computer International, BFG Technologies, Biostar Microtech and Biostar Microtech International, Diablotek, EVGA Corp., G.B.T., Giga-Byte Technology, Hewlett-Packard, MSI Computer, Micro-Star International, Palit Multimedia, Palit Microsystems, Pine Technology Holdings and Sparkle Computer.

For anyone who doesn't know Rambus: these were the guys who participated in JEDEC (the Joint Electron Device Engineering Council) to develop new memory standards while concurrently working behind the scenes to patent everything (without letting the other JEDEC members know that the new standard included their patented IP), then withdrew from the group so as not to have to follow JEDEC's 'reasonable and non-discriminatory' patent policies.

When companies began implementing the new standards, Rambus surfaced and started suing everyone and their grandmother. I think it says something when the "lawsuits" section of your Wikipedia page is four times longer than the section on your technology.

[/quote]

Man, that really makes my blood boil. With them going after the big, popular companies, you know the consumer is going to end up paying the eventual cost.

Hmmm, but I did come across another interesting point that someone made on another forum. To summarize:

1. Rambus patents on high efficiency DRAM were used freely for 10 years

2. Hynix, Micron , Samsung, and Infineon attempted to kill Rambus by price fixing, for which they pled guilty.

3. Rambus sued 3 times for payment, and in turn was sued 3 times for delay.

4. Lawsuits were meant to bankrupt Rambus. So Rambus is in court for revenge, or some sort of corporate justice (oxymoron?).

 

I don't know what's going on exactly. Maybe Rambus was wronged initially and has discovered that there's much more money to be made by squeezing other companies than by producing worthwhile products. I say it's time we kick out all these Harvard/Stanford/Wharton MBA types and let the engineers do what they do best.

rapid1 4 years ago

Yeah, Rambus is as much a committee of scammers as it is a group of professionals. I remember when you had to use their RAM on Intel systems; it always annoyed me.

rapid1 4 years ago

I totally agree with InfinityzeN: Nvidia has gotten considerably behind in a market that moves as fast as GPUs. I am almost positive that before this product is available in quantity, ATI will, if not release a next-gen card beyond their stock line, at least have widespread media coverage about one. I think Nvidia has some catch-up work to do; even if this part outperforms ATI's current lineup, which sits at the top of the market today, ATI's next gen may be more impressive than this is, component- and capability-wise.
