NVIDIA Sheds Light On Lack Of PhysX CPU Optimizations

We spoke to NVIDIA regarding the state of their PhysX SDK and why Kanter's evaluation shows so little vectorization. If you don't want to dig through all the details, the screenshot below from The Incredibles summarizes NVIDIA's response quite well.


We're not happy, Dave. Not. Happy.

For those of you who want a more detailed explanation, keep reading:

PhysX Evolution
In 2004, Ageia acquired a physics middleware company named NovodeX. Back then, what we now call PhysX was a software-only solution, similar to Havok. Ageia's next step was to build a PPU (Physics Processing Unit) that could accelerate PhysX in hardware. This hardware-accelerated version of the SDK was labeled Version 2, but while it added PPU acceleration, the underlying engine was still using NovodeX code. According to the former Ageia employees still on staff at NVIDIA, NovodeX had begun building the original SDK as far back as 2002-2003.

By the time NVIDIA bought Ageia in 2008, the company had already ported PhysX to platforms like the XBox 360 and the PS3. NVIDIA's first goal was to port PhysX over to the GPU and it logically focused its development in that area. According to NVIDIA, it's done some work to improve the SDK's multithreading capabilities and general performance, but there's a limit to how much it can do to optimize an eight-year-old engine without breaking backwards compatibility.

Why The Timeline Matters:
If we accept NVIDIA's version of events, the limitations Kanter noted make more sense. Back in 2002-2003, Intel was still talking about 10GHz Pentium 4s, multi-core processors were a dim shadow on the horizon, and a significant chunk of gamers and developers owned processors that didn't support SSE and/or SSE2.
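To put that in concrete terms, the difference Kanter was measuring is between scalar x87-style math and SSE code that processes four floats per instruction. The snippet below is purely illustrative (it is not PhysX source, and the function names are made up); it simply shows the kind of loop a physics engine runs millions of times per frame, written both ways:

    #include <xmmintrin.h>  // SSE intrinsics, available on x86 CPUs since roughly 1999

    // Scalar version: one float at a time, roughly what an engine written
    // for pre-SSE CPUs falls back to. (Illustrative only, not PhysX code.)
    void integrate_scalar(float* pos, const float* vel, float dt, int count)
    {
        for (int i = 0; i < count; ++i)
            pos[i] += vel[i] * dt;
    }

    // SSE version: four floats per instruction. Assumes count is a multiple
    // of 4 to keep the sketch short; this is the sort of vectorization
    // Kanter found largely absent in CPU PhysX.
    void integrate_sse(float* pos, const float* vel, float dt, int count)
    {
        const __m128 vdt = _mm_set1_ps(dt);
        for (int i = 0; i < count; i += 4)
        {
            __m128 p = _mm_loadu_ps(pos + i);
            __m128 v = _mm_loadu_ps(vel + i);
            p = _mm_add_ps(p, _mm_mul_ps(v, vdt));
            _mm_storeu_ps(pos + i, p);
        }
    }

The second loop moves roughly four times the data per iteration on the same hardware, which is why the absence of SSE in an engine still shipping today raised eyebrows.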

One thing NVIDIA admitted when we talked to the company's PhysX team is that it has spent significantly more time optimizing PhysX to run on the XBox 360's Xenon and the PS3's Cell processor than on the x86 platform. As far as Cell is concerned, there are good technological reasons to do so. Hand the Cell code that's been properly tuned and tweaked and it can blow past the fastest x86 processors by an order of magnitude. If those optimizations aren't performed, however, the Broadband Engine's throughput might make you wish for a 486.


In theory, properly optimized PhysX could make the image on the left look much more like the GPU PhysX image on the right.

Other factors include the fact that the majority of game development is done with consoles in mind, and the simple reality that NVIDIA wants PC gamers to buy its GPUs because of PhysX, which makes the company less interested in optimizing PhysX for the CPU.

Modernized SDK Under Development:
It'll be a while, but we'll eventually find out whether NVIDIA is purposefully maintaining deprecated standards, or if the problem has more to do with the age of the company's development API. NV isn't giving out any release dates, but the company is hard at work on a new version of the PhysX SDK. Rather than trying to continually patch new capabilities into an old code base, the PhysX team is "rearchitecting" the entire development platform.

In theory, this revamp will address all of the issues that have been raised regarding x86 performance, though it may still be the developer's responsibility to use and optimize certain capabilities. Even after version 3.xx is available, we'll have to wait for games that make full use of it, but if NVIDIA's been sincere, we'll see a difference in how modern CPUs perform. 
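As a rough illustration of what "the developer's responsibility" could look like, here is a minimal sketch of a game spreading its per-frame physics work across every hardware thread the CPU exposes. The names and structure are hypothetical, standard C++ rather than anything from the PhysX SDK; the point is simply that multi-core and Hyper-Threaded CPUs only help if somebody actually feeds them parallel work:

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Hypothetical rigid body; stands in for whatever per-object state
    // the physics middleware hands back to the game each frame.
    struct Body { float pos[3]; float vel[3]; };

    void step_body(Body& b, float dt)
    {
        for (int axis = 0; axis < 3; ++axis)
            b.pos[axis] += b.vel[axis] * dt;
    }

    // Split the body list into one contiguous chunk per hardware thread,
    // so a quad-core (or Hyper-Threaded dual-core) CPU actually gets used.
    void step_scene(std::vector<Body>& bodies, float dt)
    {
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        const std::size_t chunk = (bodies.size() + workers - 1) / workers;
        std::vector<std::thread> pool;

        for (unsigned w = 0; w < workers; ++w)
        {
            const std::size_t begin = w * chunk;
            const std::size_t end = std::min(bodies.size(), begin + chunk);
            if (begin >= end) break;
            pool.emplace_back([&bodies, begin, end, dt] {
                for (std::size_t i = begin; i < end; ++i)
                    step_body(bodies[i], dt);
            });
        }
        for (std::thread& t : pool) t.join();
    }

Whether the rearchitected SDK takes that burden off the developer or just makes it easier to shoulder is one of the things we'll be watching for once 3.xx ships.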


Hmm, not sure what to make of this. I know any company's goal is to do anything within the boundaries of the law (and outside w/o getting caught) to make money, but at some point public sentiment towards their practices has to factor in or they're going to bleed customers. I think Nvidia desperately needs a ***-storm committee for their ideas.

"Will disabling physX when both an ATI card and Nvidia card are present cause a ***-storm?" "Yes" "Ok nix that"

"Will developing physX in an archaic manner just to avoid poeple using it w/o our hardware cause a ***-storm when they find out about it?" "Yes" "Damn, back to the drawing board!"


Sackyhack: Keep in mind that while we can't guarantee NVIDIA didn't purposely avoid some optimizations, the basic claim holds up: there's only so much optimization and updating that can be done with a given code base. There comes a point when your programmers are spending more time figuring out how to kludge new features into old software than they are actually building the new features themselves.

NVIDIA's statements do make a certain amount of sense, but we'll have to wait and see how developers use the upcoming 3.0 SDK in order to make a better guess at whether or not the company is avoiding x86 optimizations deliberately.


I just don't see why they couldn't just make it work on x86 CPUs. If they're promoting CUDA and PhysX as standards, then why don't they promote them as such? Why do they tie them to NVIDIA and then force everybody to buy their products? I swear corporations will do anything to gain a quick buck, even if it means delaying CPU PhysX for years. CPUs are advanced enough to take advantage of multiple threads, and therefore even CPUs with hyper-threading should be able to use those threads to offer performance similar to GPU PhysX.

I also don't see why they have to rewrite the entire architecture. Games like UT3 and other popular titles that rely on PhysX might be broken by the new architecture, so how are they going to do it without breaking compatibility? Many questions remain unanswered due to NVIDIA's corporate greed.


TaylorKarras:

I just don't see why they couldn't just make it work on x86 CPUs. If they're promoting CUDA and PhysX as standards, then why don't they promote them as such? Why do they tie them to NVIDIA and then force everybody to buy their products? I swear corporations will do anything to gain a quick buck, even if it means delaying CPU PhysX for years.

They're promoting CUDA and PhysX only insofar as they will drive sales of NVIDIA GPUs. Corporations exist to make a profit; all other considerations are secondary. And in this case with PhysX, it has been a lengthy road of development and marketing; there was nothing quick about this buck.


Taylor,

You don't understand the situation properly. PhysX is a physics middleware engine. It runs on CPUs. It runs on GPUs. Most games in development are console games, and PhysX is executed on the CPU on both the XBox 360 and the PS3. This is a point we keep coming back to again and again because it seems so poorly understood. PhysX is a hardware AND a software solution. When we talk about hardware PhysX, we're talking about GPU-executed PhysX. That's a very small chunk of the total PhysX base.


PhysX actually works very well in at least one PC game I know of, regardless of platform.

Metro 2033. Although the game is an incredible resource hog otherwise, PhysX seems to have no additional impact on one platform over another. And there are plenty of examples of PhysX in the game as well.

So really, I think it comes down to how well it is coded into the game.

Other games like Batman AA will slow a system to a crawl if you do not have hardware PhysX running on an Nvidia card.


What the poop! Are we all talking about my new 480 card I have in my PC?! I was quite ignorant, thinking that Nvidia's main focus was improving PC graphics; I mean, it's what they do, right? Well, there's money in optimizing for consoles, seeing that it's the majority of gaming. I'm hopeful for a revamp of their drivers using x86. Hopefully it won't take them too much time. It boggles my mind that this card could perform magnitudes better and Nvidia hasn't pursued it yet. Talk about not caring. I have a case badge of you guys!!! graahh


I think you missed the point, MrBrownSound.

This has nothing to do with the performance of your video card.

It has to do with PhysX support across multiple platforms.

The big debate is that PhysX is only optimized for Nvidia hardware.


Acartz,

I've actually never been able to catch a difference in PhysX performance between having it on and off on any CPU in Metro 2033. Visually it does make a difference (although a subtle one). So I do agree: there appears to be some solid optimization there.

We could test it, I suppose, by turning PhysX on and off on a dual-core or even single-core CPU in M2033. That might be interesting.


That would be pretty awesome Joel! I'd be very interested in seeing those results!
