NVIDIA Unleashes Quadro 6000 and 5000 Series GPUs - HotHardware



Performance Summary:  Before we go over the numbers, we'd like to note the critical impact of drivers in all of this. A day before this review launched, ATI released a new workstation driver that promised to significantly improve performance in several key applications, specifically those used in the new SPECviewperf 11 testing suite. Although we had already finished all of our testing, created the graphs, and come to our conclusions based on that data, we felt it was worth the effort to re-test the FirePro cards to see how much of a difference the new driver would make, and to provide you with the most up-to-date information possible. In short, it made a significant improvement for the FirePro cards in both Lightwave 3D and SolidWorks. It still wasn't enough to overtake the Quadros, but it definitely closed the gap.

The point is to take these comparison numbers with a grain of salt, especially when comparing scores from different camps. As time goes on, expect to see increased performance in certain applications due to driver optimization. ATI's FirePro cards have been out for months, so we aren't surprised to see more mature drivers from them. And since Fermi-based hardware is new to the professional market, we fully expect a boost in performance as new drivers are released.



Note: After posting our review on launch day, we looked over the results and determined our Quadro scores were lower than expected in several tests. After some troubleshooting, we tracked down the issue. Although SPECviewperf 11 doesn't offer the option to toggle Vsync on or off, it was enabled in the NVIDIA control panel and we needed to disable it for the new benchmark. We re-tested the cards and have updated the article to reflect the new Quadro scores. 

With that said, let's analyze the data in front of us. Looking at the SPECviewperf 11 results and the real-world applications it draws from, the Quadro 6000 demonstrated the highest performance in all eight benchmarks that make up SPECviewperf 11. On average, we found it to be 103% faster than ATI's flagship model, the FirePro V8800. At the same time, the Quadro 6000 came in 26% faster than the Quadro 5000, a much closer comparison. Without a doubt, the Quadro 5000 is also a very strong high-end card. Throughout SPEC testing, it averaged a remarkable 74% performance increase over the Quadro FX 4800, and a 61% boost over the V8800. Here's a scary thought: both of these cards are extremely fast and will only improve with each new driver release.
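For readers who want to reproduce the comparison math, the "X% faster" figures above are simple ratios of benchmark scores. A minimal sketch; the scores below are made-up illustrative numbers, not the review's actual data:

```python
def percent_faster(score_a, score_b):
    """How much faster card A is than card B, expressed as a percentage.

    A 2.03x ratio works out to "103% faster", matching the way the
    Quadro 6000 vs. FirePro V8800 average is stated above.
    """
    return (score_a / score_b - 1) * 100

# Hypothetical composite scores, for illustration only:
print(round(percent_faster(40.6, 20.0)))  # 103 -> "103% faster"
print(round(percent_faster(25.2, 20.0)))  # 26  -> "26% faster"
```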

Let's talk about pricing. We've seen our share of $2,000 workstation graphics cards across these pages through the years, but this is definitely an eye-opener. Although we mentioned it on the first and second pages of this article, it's worth noting again here: the Quadro 6000 retails for $4,999. That's a whole lot of cheddar, even for a workstation card. Amazingly, the 5000 model can be had for less than half that, at $2,249. From our performance results, we can't say the 6000 model is worth twice as much as the 5000, but there is a considerable performance upgrade to be had; we saw an average difference of 26% separating the two cards throughout SPEC testing. Granted, we only used a single 30" monitor and could not have come close to maxing out the card's 6GB of on-board memory. And of course, those who desire a product with no compromises can purchase the 6000 model and rely on future driver updates for performance enhancements. But if top-end performance is the goal, it might be worth getting two Quadro 5000s and running them in SLI, at roughly the same cost as one Quadro 6000.
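That SLI suggestion comes down to simple arithmetic on the review's list prices. Whether two 5000s in SLI actually outrun one 6000 is not benchmarked here, so treat this as a cost sketch only:

```python
# List prices quoted in the review.
QUADRO_6000_PRICE = 4999
QUADRO_5000_PRICE = 2249

sli_pair_cost = 2 * QUADRO_5000_PRICE          # cost of two 5000s in SLI
savings = QUADRO_6000_PRICE - sli_pair_cost    # vs. one Quadro 6000

print(f"Two Quadro 5000s: ${sli_pair_cost}")   # $4498
print(f"Savings vs. one Quadro 6000: ${savings}")  # $501
```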

In conclusion, the Quadro 6000 and 5000 graphics cards represent two of the most powerful workstation products currently on the market. Both are substantially faster than ATI's top card in most of the real-world applications we tested, but they have their drawbacks. The Quadros support a maximum of two displays per card, while most of the FirePro models offer ATI's triple-monitor Eyefinity feature; the V8800 actually provides four display outputs. Also, both Quadro cards produce significant heat and consume a considerable amount of power, unavoidable characteristics of the GF100 GPU. And of course, you must be willing to pay to take advantage of their features and performance. But in the world of workstation graphics, the cost of hardware can be quickly offset by the benefits and production throughput these products provide. If you can get past these limitations, what you're left with is a powerful pair of videocards capable of increasing the productivity of users across many different applications. That's the bottom line, folks, and in the real world, the bottom line is usually the deciding factor. If you're looking for the highest level of performance possible, check out the new Quadro 6000 and 5000 workstation graphics cards from NVIDIA.

  • Untouchable performance
  • Quiet operation
  • PhysX and CUDA support
  • Very expensive
  • Only two monitor support per card
  • Power hungry
  • Runs hot under full load

 


I never thought another Quadro card would see the light of day. I'm amazed that it failed some tests, but I'm sure future drivers will help improve those scores.


Not too bad as video cards go, but they still draw more power and run much hotter than the ATI cards. They seem to trade blows in different programs, yet the conclusion states they are substantially faster than ATI's cards. I wouldn't go that far. They are a big improvement over previous-generation cards, but ATI is already what, 6 months ahead of Nvidia?

If there were a power (as in electricity) to performance ratio, the ATI cards would most likely win that.


I sorta disagree. While ATI is ahead of the curve, NVIDIA seems to have the best cards performance-wise. If I were Pixar, I would want to use the most powerful rendering solutions known to man.

But you are right that they draw a lot of power; that seems to be the biggest flaw of the GF100 chip.


I was going to say they finally have an answer to the FirePros? Yet not being able to use three monitors? Most of us would like to have at least two, plus our Wacoms. I guess the next question would be: does it support more if you have a second card, or do the ports still limit you to two?

I understand the Cinebench marks look better for the ATI. Yet when it comes to frame-rate testing in the viewport, anything around 40 is good enough for a smooth workflow, unless you really need to see scientific fluid dynamics live on screen. The CATIA and EnSight results prove this point very well! Now I know why NVidia has a reputation for being better integrated with many DCC apps!

"ATI's FirePro cards have been out for months so we aren't surprised to see more mature drivers from them."

I am not so sure about that! I have had the V8800 for a while, and they have not released a new driver since the first and only update on 4/25/10?! They even told me how lucky I am to have good driver support, yet didn't answer any of my concerns about driver updates. I would now say that when it comes to CGI, Nvidia has better rendering performance. An animated scene with high-resolution textures, motion blur and photometric lighting is better handled by a company that has been at the forefront of that industry for the past twenty years.

I agree that yes... the cost really offsets productivity! Nvidia has a better reputation for workstation support and knowledgeable integration for 3D graphics. Now I am finding that out the hard way! If you have an awesome high-performance system to begin with, then you won't notice many troubles when switching to a higher workstation card, except within the viewports. Yet like they do in these reviews, you will have to start off with a fresh build when switching to something like the Quadros. I have always supported ATI, and now I know why I am not as productive as I want to be!! In the studio I usually have an Nvidia solution, and ATI at home. Since I am usually using scenes already modeled, lit and textured, I never really noticed much of a difference until render time. I always chalked that up to not having a render farm. Now I know better!!

The PhysX and CUDA features are something that should drive professionals to these cards. It would also be interesting to see how these integrate with a Tesla C1060. I guess the best way to sum the two companies up?! ATI develops for the end entertainment, down towards the development of content... only as a side note! Nvidia develops for the developer, knowing the end result is to make the best product for entertainment!

So if y'all are going to just toss this test system into the test-bin toybox, maybe pitch it my way :) Or I could trade you a FirePro V8800 for one of these Quadros? I would gladly swap the four-monitor support for smoother integrated performance :)


The new Quadros support two monitors per card (just like the GeForce GTX 400 series).


You can download the new FirePro driver 8.762 from ATI right now.  Choose 'FirePro Beta' from the drop down menu.

 

What problems are you having with the V8800?

 


Thanks Raid, I'll give it a try.

The 8800 has stability problems in Max when rendering. It also doesn't like it when you load the DirectX driver; it only seems stable if I use OpenGL. At times the wireframe gets about four pixels thicker, like it is being drawn in crayon. When I load the DX driver, two of the viewports go blank. The global marker for any selected object is still there, yet nothing else. I can still switch any of the working viewports to any view and they still work; just two are MIA.

In Maya, the blend shapes sometimes work and most other times don't. It gets frustrating when you move the slider and think the blend doesn't actually work! Also, any sub-object selection usually does not update while you are moving it or making any kind of adjustment. In both programs, the material mapping tends not to update when changing UVs; that only seems to work half the time. I end up having to zoom in and out until it decides to refresh the view, then it catches up and adjusts the map.

 


To animatortom

At SIGGRAPH I saw many demos by Autodesk of both Maya and 3ds Max 2011 in the AMD booth, using Eyefinity w/ 3 displays. They demo'ed every day with no issues. Also, since Max really only supports DX (they quit adding features to the OpenGL code 4 or 5 years ago), it was all DX, and it worked flawlessly for several days of demos.

So I wonder if you were using an old driver?


"I saw many demos ....several days of demos."

That is the key word,....DEMO!

It is one thing to set up a demo system with demo files. When run through properly set-up files, it can be made to look like it works well. In practical application, it is a whole other matter. If they had a completely tricked-out system with twin Xeons and 128GB of RAM and dual 8800s, then yes, it would look like it rocks. They haven't had to do much to OpenGL because its development has been essentially complete for years, and for the basics of 3D it works well.

DX is a new direction for DCC. The thing with Max is that it requires a ton of RAM and a powerful CPU. If you have a six-core with around 64GB of RAM, then even just the FirePro 4800 would rock! The others are overkill unless you have a strong system to begin with. Maya relies more on the GPU, so it benefits as long as the card has more than 1GB of memory. With a crappy system, the 8800 is fairly useless in Max; in Maya it is almost three times as fast.

So if they are demoing it, then I can guarantee they have it running on a completely tricked-out system inside a simple-looking case, just for show.


@ animatortom

I would suggest you try Linux - Fedora or Debian, for example. There aren't any problems with ATI's OpenGL drivers for Maya on Linux. Cheers, g.


@Animatortom
"I would now say that when it comes to CGI, Nvidia has better performance when it comes to rendering...is better handled from a company that has been at the forefront of that industry for the past twenty years."

nV hasn't been at the forefront that long; they filled the gaps Matrox and 3DLabs left behind 10 years ago. If anything, ATi's been in it longer; nV just did a better job of capitalizing on the failure of others (including 3dfx).

"The PhysX and Cuda features are something that should drive professionals to these cards."

What exactly is the benefit of PhysX here... or anywhere? Confused

With OpenCL and DirectCompute, both of those features will start to become marginalized. Proprietary is fine if you can develop a compelling reason, but even Adobe has said they are going with OpenCL, not CUDA, for the next build to get the broader CPU & GPU benefits of OpenCL.

CUDA, like Brook, was a nice bridge solution, but their time has come and gone, just like Cg, on which CUDA was based.

"I guess the best way to sum the two companies up?! ATI develops for the end entertainment, down towards the development of content... only as a side note! Nvidia develops for the developer, knowing the end result is to make the best product for entertainment!"

That would be a way to sum it up, but not the 'best way'; it's a little myopic. If anything, this review shows that ATi makes better hardware but is held back by their much, MUCH weaker software/driver/dev-relations teams. AMD was supposed to improve that, but they have done little other than a tiny improvement in Linux, and at best they only equal nV.

* Dang, hate the quoting tool here. Even after editing out remnants remain.*
