Back in late September of last year, NVIDIA disclosed some information regarding its next-generation GPU architecture, codenamed "Fermi". At the time, actual product names and detailed specifications were not disclosed, nor was performance in 3D games, but high-level information about the architecture, its strong focus on compute performance, and its broader compatibility with computational applications was discussed. We covered much of the early information regarding Fermi in this article.

To recap some of the more pertinent details found there: the GPU codenamed Fermi will feature over 3 billion transistors and be produced using TSMC's 40nm process. For comparison, AMD's RV870, which is used in the ATI Radeon HD 5870, is comprised of roughly 2.15 billion transistors and is also manufactured at 40nm. Fermi will be outfitted with more than double the number of cores of the current GT200, 512 in total. It will also offer 8x the peak double-precision compute performance of its predecessor, and Fermi will be the first GPU architecture to support ECC. ECC support will allow Fermi to compensate for soft error rate (SER) issues and potentially allow it to scale to higher densities, mitigating the issue in larger designs. The GPU will also be able to execute C++ code.

During the GPU Technology Conference that took place in San Jose, NVIDIA's CEO Jen-Hsun Huang showed off the first Fermi-based, Tesla-branded prototype boards and talked at length about the compute performance of the architecture. Game performance wasn't a focus of Huang's speech, however, which led some to speculate that NVIDIA was forgetting about gamers with this generation of GPUs. That is obviously not the case; Fermi is going to be a powerful GPU after all.
The simple fact of the matter is, NVIDIA is late with its next-gen GPU architecture, and the company chose a different venue, the Consumer Electronics Show, to discuss Fermi's gaming-oriented features...

NVIDIA GF100 Architecture and Feature Preview
From listening to people theoretically more knowledgeable about hardware than I am (which really wouldn't be that difficult, to be honest; I'm more into practical information than technical information, e.g. "this card goes in that slot"), I've heard that the boost in double-precision floating point is something that is pretty much never used in games (if it's even possible to use it there), so it's nothing that will help frame rates or gaming performance. Yet it's built into the architecture, meaning it can't just be cut from, say, the GeForce series of Fermi (if they continue that line) to make those cards cheaper.
Basically, it sounded like the cards will have a large piece of silicon on them, paid for by the consumer, that won't actually be used by games at all. Something that just raises costs with no benefit for the average gamer who buys one. Could anyone shed light on this?
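For what it's worth, the reason games stick to single precision isn't mysterious: a 32-bit float carries only about 7 decimal digits, which is plenty for on-screen positions and colors, so the extra double-precision hardware mostly benefits scientific compute. Here's a quick stdlib-only Python sketch (the `to_float32` helper is purely illustrative, not anything from a game or NVIDIA's tools) showing the precision gap between the two formats:

```python
import struct

# Games typically store positions, normals, and colors as 32-bit floats;
# scientific code wants 64-bit. Round-tripping a value through float32
# shows the precision you give up -- fine for rendering a pixel, not fine
# for many long-running simulations.
def to_float32(x: float) -> float:
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 / 3.0
print(f"float64: {x:.17f}")
print(f"float32: {to_float32(x):.17f}")
```

The two printed values diverge after about the seventh significant digit, which is exactly the precision budget games are designed around.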
Finally, some delicious news about Fermi!
Thanks Marco, been waiting for this!
A lot of technical information, but it's also showcasing some of the things the Nvidia DirectX 11 enabled cards will be able to do. The free-flowing hair and water look incredible.
With the higher anti-aliasing modes, ray tracing, and tessellation, Nvidia is showing how much more powerful Fermi is than the GT200 series. And I think we're talking multiples: at least 2-3 times the performance in certain areas.
Hard numbers will bear that out, but it's safe to say Nvidia has something very powerful up their sleeve.
It's good news. Now I wait for the new cards to be introduced, get old, and get less expensive. (they will be frightfully expensive I think) And then, using the tried and true 'Trickle-Down' effect, I'll get one.
Time is on my side. Yes, it is...
Don't part with your illusions. When they are gone you may still exist, but you have ceased to live.
The biggest price drop will occur if/when ATI releases a response to the GF100
That being said, the three-card Supersonic Sled demo is awesome; wish I could run it on my computer! (no PhysX card here ;_; )
I really wish Nvidia was giving us more juicy info, but hey, this stuff sounds powerful. Fermi is gonna be so amazing solo, and when you hit SLI and tri-SLI it's gonna be heart-stopping. Developers had better make some sweet stuff to push these new cards to their limits so we can all drool at the beautiful graphics these things can push.
Also, on a side note, I bet it will play Crysis LOL.
Very cool... if it comes out to be what it claims, then I might trade in my two 275s for one of them...
Yep, this seems like the prelude to a launch.
The good news is that at least we know they have a Fermi based video card on hand.
Here are some demo vids:
We will see when it gets here. From what I was reading, this one is going to have a different focus all the way around. I hope DX11 gets picked up, unlike DX10 or 10.1, because it looks like it has a lot of advantages visually, especially in the realism department.
When you look through the full article and the slides they show, check out the one of the hair. I studied that one pretty closely, and it looks remarkably similar to the real thing. Individual hairs are minuscule to the point of blending together, yet in the Nvidia demo you could see thousands of separate hairs in the image. So the detail and construction level of this card looks to be awesome. However, we will have to see how that affects rendering speed before a final verdict.
The hair and the water pics, both look amazing.
Though I thought it was showing off DirectX 11 tessellation, rather than a feature of the Nvidia cards.
The Supersonic Sled demo would be unique to Nvidia because it employs PhysX.
That's true, but when I first looked at that picture I was like, why are they showing a blond wig on here? Then I looked closer, scrolled down, and read the details, and was like, wow, that almost looks like real hair. And keep in mind I'm seeing it as a compressed webpage image on my current GPU, not on the hardware itself, so the actual output must lose far less detail than what I'm looking at.
I see what you mean. Man, I would love to see a video demonstration of the free-flowing hair and the water. With the speed of news coming out about Fermi, I think we'll have a demo pretty soon!
Well, what I am saying is that the picture you can see is awesome, and the real thing rendered on your PC would be many times better. So this thing should blow away everything on the market, I would imagine, but it also changes the general functionality of a GPU as well; the focus, delivery mechanisms, and software platform are in many ways totally different, or at least the emphasis is. I am pretty confident the reason the 5970 is two tweaked 5850 GPUs is that ATI is working on something new as well. I also think that in realistic imagery we are on a cusp. Look at Avatar: it is animation done almost completely by computers, with real actors at the same time. It is a meshing of technologies which I expect to see on your PC in a relatively short amount of time. The 5870 started it, this Nvidia hardware expands it, and ATI will expand it again, just like normal. The impact on digital imagery, and its availability to the average person, is going to change.
I was able to find some video demos on youtube and I posted them a couple of posts above. They're worth checking out! My current graphics card would melt if I tried running any of the demos on it, lol.
The Streaming Multiprocessors on the GF100 have taken a giant leap forward:
We're seeing some major increases in hardware power and we're also seeing real improvements in geometric processing (tessellation and displacement mapping). Rob over at Techgage mentions it in his review:
"While pixel shaders have had an increasing focus from GPU generation to the next, there's been almost no love to the triangle generator. Compared to the GeForce FX (2003), the shading horsepower has increased by 150x, while the geometric processing has increased by only 3x."
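To put that quote in perspective, here's a quick back-of-the-envelope calculation (assuming roughly 2003 to 2010, i.e. seven years; the time span is my assumption, not Techgage's) showing how lopsided those two growth rates really are:

```python
# Rough sanity check on the Techgage quote: shading throughput up ~150x
# since GeForce FX (2003), geometry up only ~3x. Assuming GF100 arrives
# in 2010, that's about seven years of compounding.
years = 2010 - 2003  # assumed span, not stated in the quote

shader_rate = 150 ** (1 / years)    # implied yearly multiplier for shading
geometry_rate = 3 ** (1 / years)    # implied yearly multiplier for geometry

print(f"shading:  ~{shader_rate:.2f}x per year")
print(f"geometry: ~{geometry_rate:.2f}x per year")
```

In other words, shading power has been roughly doubling every year while geometry throughput has crept up by well under 20% a year, which is exactly the imbalance GF100's parallel geometry hardware is meant to address.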
You're right that the new Nvidia cards will surpass ATI's offerings, and Nvidia has not tried to hide that fact. Look at this graph they released of tessellation performance (red is ATI):
The 5870's max FPS barely touches the min of the GF100.
Yeah, those video demos are awesome, especially the hair one and the water. I still think the detail in the hair is amazing; when the wind blows it looks real.
You are right, that hair demo is pretty impressive too.
I wonder how much of that is going to translate into actual gaming. Remember, that close-up hair demo was only pulling 25 FPS. While we may not see hair that detailed in games anytime soon, anything one-tenth as good would look awesome.
All this talk is worthless until the thing actually comes out. Plus, it is going to be really expensive, since there is such a large chunk of its hardware that is useless for gaming and cannot be cut from the core.
The biggest thing, though, is that ATI could release their next-gen card a few months after this thing hits the market. Nvidia is way behind in the production/research cycle, and we might see releases leapfrogging each other, with the leader swinging back and forth every three or four months.
Keep in mind, that slide is provided by NVIDIA, and it's only a specific 60-second snapshot of a certain part of that benchmark.
Marco Chiappetta, Managing Editor @ HotHardware.com
Have you guys seen hardware yet?
They had them on display at CES, but I doubt they let anyone open the cards up or test them yet.
ATI releasing their next-gen tech so soon? Their current high-end cards are already in such high demand that they're selling above MSRP.
I do agree with you, all this is mostly conjecture until we see it backed up by in-game performance.
I totally agree with Infinity. Nvidia has fallen considerably behind in a market that moves as fast as GPUs do. I am almost positive that before this product is available in quantity, ATI will, if not release a next-gen card from their stock line, at least have widespread media coverage about one. And even if this outperforms ATI's current lineup, their next gen may be more impressive than this is, component- and capability-wise. Nvidia has some catching up to do.
This site is intended for informational and entertainment purposes only. The contents are the views and opinion of the author and/or his associates. All products and trademarks are the property of their respective owners. All content and graphical elements are Copyright © 1999 - 2013 David Altavilla and HotHardware.com, LLC. All rights reserved. Privacy and Terms