NVIDIA Unleashes Quadro 6000 and 5000 Series GPUs


Not long ago, we reviewed the entire FirePro workstation graphics card lineup from ATI. With the V8800, our testing revealed considerable performance gains over the previous-generation V8750, coupled with a lower price point. Surely, that's a combination consumers can appreciate, especially those looking to upgrade sooner rather than later. But at the time, the market was not yet settled, as we anxiously awaited NVIDIA's response to ATI's FirePro products. Thankfully, the wait is over: NVIDIA has just launched a new series of professional graphics cards based on the company's Fermi architecture.

Three new models arrive today to bolster the Quadro lineup and affirm NVIDIA's commitment to the professional workstation crowd. As we've mentioned, the new cards are based on NVIDIA's Fermi architecture, which brings a new set of cutting-edge features to the table. First, take note of the change in naming convention: these cards no longer use the FX designation and are simply branded Quadro, followed by the model number.


NVIDIA Quadro 6000 and 5000 Workstation Graphics Cards

Today, we'll be looking at the two ultra-high-end models from NVIDIA, the Quadro 6000 and Quadro 5000 graphics cards. Both models offer an impressive list of specifications and features, but flaunt large price tags as well. Before we go into those details, we'd like to mention the card we aren't reviewing in this article, but which is launching today as well. The Quadro 4000 features 256 CUDA cores, 2GB of GDDR5, and 89.6GB/s of memory bandwidth through a 256-bit interface. At $1,199, the 4000 replaces the FX 3800 in NVIDIA's lineup while offering considerably more features. Of these three cards, the Quadro 4000 is the most affordable Fermi-based option. As for the Quadro 6000 and 5000 models, check out the chart below to find out what they have to offer...
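As a quick sanity check on that bandwidth figure (a back-of-the-envelope calculation, assuming GDDR5's usual quad-pumped signaling):

    bandwidth = bus width x effective data rate
              = (256 bits / 8) bytes x 2.8 GT/s
              = 32 bytes x 2.8 GT/s
              = 89.6 GB/s

In other words, the quoted 89.6GB/s works out to an effective 2.8Gbps per pin, which corresponds to a 700MHz base memory clock.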

[Chart: NVIDIA Quadro Professional Graphics - Professional Workstation Models]

One thing is certain looking at their specs--these cards mean business. They were specifically designed to meet the demands of professional designers, engineers, and scientists, with a strong emphasis on visual computing in certified applications such as Maya, 3D Studio Max, and SolidWorks. Compared to mainstream GF100-based gaming video cards, the core and memory clock speeds here are much lower. We suspect this was done to ensure stability, extend the longevity of the hardware components, and provide quieter operation. But we also find that NVIDIA has significantly beefed up memory capacity, to extreme levels. So how does this combination of lower clocks and increased memory translate to the world of professional graphics? Before we get to the performance results, let's take a closer look at the cards themselves and find out what makes them different from the previous generation of Quadro graphics cards.


I never thought another Quadro card would see the light of day. I'm amazed that it failed some tests, but I'm sure future drivers will help improve that score.


Not bad as video cards go, but they still draw more power and heat up much more than the ATI cards. The two seem to trade blows in different programs, yet the conclusion states they are substantially faster than ATI's cards. I wouldn't go as far as to say that, though. They are a big improvement over previous-generation cards, but ATI is already what, 6 months ahead of Nvidia?

If there were a power (as in electricity) to performance ratio, the ATI cards would most likely win that.


I sorta disagree. While ATI is ahead of the curve, NVIDIA seems to have the best cards performance-wise. If I were Pixar, then I would want to use the most powerful rendering solutions known to man.

But you are right that they draw a lot of power; that seems to be the biggest flaw of the GF100 chip.


I was going to say they finally have an answer to the FirePros? Yet they still can't drive three monitors? Most of us would like to have at least two, plus our Wacoms. I guess the next question would be: does it support more if you have a second card, or do the ports still limit you to two?

I understand the Cinebench marks look better for the ATI. Yet when it comes to frame rate testing in the viewport, anything around 40 is good enough for a smooth workflow, unless you really need to see scientific fluid dynamics live on screen. The CATIA and EnSight results prove this point very well! Now I know why NVidia has a reputation for being better integrated with many DCC apps!

"ATI's FirePro cards have been out for months so we aren't surprised to see more mature drivers from them."

I am not so sure about that! I have had the V8800 for a while, and they have not released a new driver since the last and only update on 4/25/10?! They even told me how lucky I am to have good driver support, yet didn't answer any concerns about driver updates. I would now say that when it comes to CGI, Nvidia has better performance for rendering. An animated scene with high-resolution textures, motion blur, and photometric lighting is better handled by a company that has been at the forefront of that industry for the past twenty years.

I agree that yes... the productivity really offsets the cost! Nvidia has a better reputation for workstation support and knowledgeable integration for 3D graphics. Now I am finding that out the hard way! If you have an awesome high-performance system to begin with, then you won't notice many troubles when switching to a higher-end workstation card, except within the viewports. Yet, as they do in these reviews, you will have to start off with a fresh build when switching to something like the Quadros. I have always supported ATI, and now I know why I am not as productive as I want to be!! In studio, I usually have an Nvidia solution, and ATI at home. Since I am usually using scenes already modeled, lit, and textured, I have never really noticed much of a difference until render time. I always chalked the difference up to not having a render farm. Now I know better!!

The PhysX and CUDA features are something that should drive professionals to these cards. It would also be interesting to see how these integrate with a Tesla C1060. I guess the best way to sum the two companies up?! ATI develops for the end entertainment, with the development of content only as a side note! Nvidia develops for the developer, knowing the end result is to make the best product for entertainment!

So if y'all are going to just toss this test system into the test-bin toybox, maybe pitch it my way :) Or I could trade you a FirePro V8800 for one of these Quadros? I would gladly trade the four-monitor support for smoother integrated performance :)


The new Quadros support two monitors per card (just like the GTX 400 series).


You can download the new FirePro driver 8.762 from ATI right now. Choose 'FirePro Beta' from the drop-down menu.

 

What problems are you having with the V8800?

 


Thanks Raid, I'll give it a try.

The V8800 has stability problems in Max when rendering. It also doesn't like it when you load the DirectX driver; it only seems stable if I use OpenGL. At times the wireframe gets about four pixels thicker, like it is being drawn in crayon. When I load the DirectX driver, two of the viewports go blank. The global marker for any selected object still shows, yet nothing else does. I can still switch any of the working viewports to any view and they still work; just two are MIA.

In Maya, the blend shapes sometimes work and most other times don't. It gets frustrating when you move the slider and think the blend doesn't actually work! Also, any sub-object selection usually does not update while you are moving it or making any kind of adjustment. In both programs, the material mapping tends not to update when changing UVs; that only seems to work half of the time. I end up having to zoom in and out until it decides to refresh the view, then it catches up and adjusts the map.

 


To animatortom

At SIGGRAPH I saw many demos by Autodesk of both Maya and 3ds Max 2011 in the AMD booth, using Eyefinity with 3 displays. They demoed every day with no issues. Also, since Max really only supports DX (they quit adding features to the OpenGL code 4 or 5 years ago), it was all DX, and it worked flawlessly for several days of demos.

So I wonder if you were using an old driver?


"I saw many demos ....several days of demos."

That is the key word,....DEMO!

It is one thing to set up a demo system with demo files. When run with properly set-up files, it can be made to look like it works well. In practical application, it is a whole other matter. If they had a completely tricked-out system with twin Xeons and 128GB of RAM and dual V8800s, then yes, it would look like it rocks. They haven't had to do much with OpenGL because its development has been essentially complete, and the basics of 3D work well after years of development.

DX is a new direction for DCC. The thing with Max is that it requires a ton of RAM and a powerful CPU. If you have a six-core with around 64GB of RAM, then even just the FirePro V4800 would rock; the others are overkill unless you have a strong system to begin with. Maya relies more on the GPU, so it benefits as long as the GPU has more than 1GB. With a crappy system, the V8800 is fairly useless in Max. In Maya it is almost three times as fast.

So if they are demoing it, then I can guarantee they would have it running on a completely tricked-out system inside a simple-looking case, just for show.


@ animatortom

I would suggest you try Linux - Fedora or Debian, for example. There aren't any problems with ATI's OpenGL drivers for Maya on Linux. Cheers, g.


@Animatortom
"I would now say that when it comes to CGI, Nvidia has better performance when it comes to rendering...is better handled from a company that has been at the forefront of that industry for the past twenty years."

nV hasn't been at the forefront that long; they filled the gaps Matrox and 3DLabs left behind 10 years ago. If anything, ATi's been in it longer; nV just did a better job of capitalizing on the failure of others (including 3Dfx).

"The PhysX and Cuda features are something that should drive professionals to these cards."

What exactly is the benefit of PhysX here... or anywhere?

With OpenCL and DirectCompute, both of those features will start to become marginalized. Proprietary is fine if you can develop a compelling reason, but even Adobe has said they are going to OpenCL, not CUDA, for the next build to get the more global CPU & GPU benefits of OpenCL.

CUDA, like Brook, was a nice bridge solution, but their time has come and gone, just like Cg, on which CUDA was based.
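For anyone wondering what that "global CPU & GPU" claim means in practice, here is a minimal sketch in plain C against the standard OpenCL host API (error handling omitted for brevity; array sizes are just illustrative). One code path enumerates every platform and device on the machine, CPU or GPU, whatever the vendor:

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;

        /* One call sees every vendor's OpenCL runtime installed on the box. */
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint i = 0; i < num_platforms; i++) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);

            /* CL_DEVICE_TYPE_ALL picks up CPUs and GPUs alike --
               the vendor-agnostic point being argued above. */
            cl_device_id devices[8];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                           8, devices, &num_devices);

            printf("Platform '%s': %u device(s)\n", name, num_devices);
        }
        return 0;
    }

The same binary will list an ATI GPU, an nV GPU, and the host CPU side by side, which is exactly the portability argument against single-vendor CUDA/PhysX.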

"I guess the best way to sum the two companies up?! ATI develops for the end entertainment, down towards the development of content... only as a side note! Nvidia develops for the developer, knowing the end result is to make the best product for entertainment!"

That would be a way to sum it up, but not the 'best way'; it's a little myopic. If anything, this review shows that ATi makes better hardware, but they are held back by their much, MUCH weaker software/driver/dev-relations teams. AMD was supposed to improve that, but they have done little other than a tiny improvement in Linux, and at best they only equal nV.

* Dang, I hate the quoting tool here. Even after editing, remnants remain. *
