
NVIDIA Sheds Light On Lack Of PhysX CPU Optimizations

One point everyone agrees on is that NVIDIA has no obligation to AMD or Intel to optimize PhysX for CPUs and/or competing GPUs. It wouldn't surprise us if NV feels quite strongly on this point; the company has spent the last four years pushing GPU-accelerated physics in software and consumer applications. Whether you like the concept or not, NVIDIA was unquestionably out in front when it came to offering tools for GPU programming; we can't blame them for feeling a bit like the Little Red Green Hen with her just-cooked loaf of bread.


I bought the company, I built the hype, and I wrote all the new code!

Unfortunately, NVIDIA is caught in a form of the Prisoner's Dilemma. The PD is a game theory model describing how individuals, each acting in what they perceive to be their own best interest, can arrive at the worst collective outcome. In this case, NVIDIA wants to monetize PhysX/CUDA, and the best way to do that is to encourage developers to use them. The snag is that software developers have a very long history of only using languages and features that they know will be supported by as much hardware as possible.
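For anyone who hasn't bumped into the model before, here's a minimal sketch of the textbook two-player payoff matrix. The numbers are the standard classroom example, not anything specific to NVIDIA or AMD:

    #include <cstdio>

    // Classic Prisoner's Dilemma payoffs (years in prison; lower is better).
    // Index: payoff[my_choice][their_choice], 0 = cooperate, 1 = defect.
    // These values are the usual textbook example, chosen only for illustration.
    static const int payoff[2][2] = {
        {1, 3},  // I cooperate: 1 year if they cooperate, 3 if they defect
        {0, 2}   // I defect:    0 years if they cooperate, 2 if they defect
    };

    int main() {
        // Whichever choice the other player makes, defecting costs me less...
        printf("Opponent cooperates: cooperate=%d years, defect=%d years\n",
               payoff[0][0], payoff[1][0]);
        printf("Opponent defects:    cooperate=%d years, defect=%d years\n",
               payoff[0][1], payoff[1][1]);
        // ...so both players defect and serve 2 years each, even though mutual
        // cooperation (1 year each) would have left both better off.
        return 0;
    }

Defecting is each player's dominant strategy, yet mutual defection leaves both worse off than mutual cooperation would have; that is the trap being mapped onto NVIDIA's choice between keeping PhysX exclusive and making it run well everywhere.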

The best way to encourage people to buy NVIDIA GPUs is to ensure that the special effects are amazing and only available to NVIDIA customers. Optimizing PhysX to run on an x86 CPU potentially dilutes the attractiveness of an NVIDIA GPU, and increases the chance that customers will keep their existing cards or use a competitor's product. It could also have an impact on the company's nascent Tegra platform; NVIDIA has good reason not to optimize PhysX for Atom.

Except it's not that simple. We've already said that developers tend to support standards that a wide range of hardware can utilize, which means it is in NVIDIA's best interest to optimize PhysX for all sorts of hardware. The more platforms that run PhysX well, the more developers will use it. But the better PhysX runs on the CPU, the smaller the chance that developers will go to the extra effort of utilizing the hardware-accelerated flavor, and the fewer consumers will opt to buy a GPU for whiz-bang special effects.

NVIDIA's claims about improvements notwithstanding, benchmarks and Kanter's investigation have confirmed that the vast majority of games that use hardware PhysX today aren't optimized for CPU execution and drop to a stuttering crawl when forced to run that way. Whose fault that is, NVIDIA's or the developers', is still an open question. The larger point is that NVIDIA may soon have to choose between establishing CUDA and PhysX as a ruling standard and using them as a selling point for GPUs. Thus far, the company has tried to do both simultaneously, but we wonder how much longer it can.
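For a bit of context on what "optimized for CPU execution" would actually look like: Kanter's analysis found the CPU PhysX path relying heavily on old scalar x87 code and, for the most part, a single thread. The snippet below is a generic illustration of the gap between a scalar loop and an SSE-vectorized one; it is not PhysX source, and the particle-integration step is purely hypothetical:

    #include <immintrin.h>  // SSE intrinsics
    #include <cstddef>

    // Hypothetical integration step for n particle positions: pos += vel * dt.
    // Scalar version: one float per iteration, roughly the style of the old
    // x87-era code paths Kanter described (greatly simplified here).
    void integrate_scalar(float* pos, const float* vel, float dt, size_t n) {
        for (size_t i = 0; i < n; ++i)
            pos[i] += vel[i] * dt;
    }

    // SSE version: four floats per iteration using 128-bit registers.
    void integrate_sse(float* pos, const float* vel, float dt, size_t n) {
        const __m128 vdt = _mm_set1_ps(dt);
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 p = _mm_loadu_ps(pos + i);
            __m128 v = _mm_loadu_ps(vel + i);
            _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
        }
        for (; i < n; ++i)  // handle the leftover elements
            pos[i] += vel[i] * dt;
    }

A genuinely CPU-optimized physics path would pair this sort of vectorization with multithreading across however many cores are available.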

Comments:

Hmm, not sure what to make of this. I know any company's goal is to do anything within the boundaries of the law (and outside w/o getting caught) to make money, but at some point public sentiment towards their practices has to factor in or they're going to bleed customers. I think Nvidia desperately needs a ***-storm committee for their ideas.

"Will disabling physX when both an ATI card and Nvidia card are present cause a ***-storm?" "Yes" "Ok nix that"

"Will developing physX in an archaic manner just to avoid poeple using it w/o our hardware cause a ***-storm when they find out about it?" "Yes" "Damn, back to the drawing board!"


Sackyhack: Keep in mind that while we can't guarantee NVIDIA didn't purposely avoid some optimizations, the basic claim holds: there's only so much optimization and updating that can be done with a given code base. There comes a point when your programmers are spending more time figuring out how to kludge new features into old software than they are actually building the new features themselves.

NVIDIA's statements do make a certain amount of sense, but we'll have to wait and see how developers use the upcoming 3.0 SDK in order to make a better guess at whether or not the company is avoiding x86 optimizations deliberately.


I just don't see why they couldn't just make it work on x86 CPUs. If they're promoting CUDA and PhysX as standards, then why don't they promote them as such? Why do they tie them to NVIDIA and then force everybody to buy their products? I swear corporations will do anything to gain a quick buck, even if it means delaying CPU PhysX for years. CPUs are advanced enough to take advantage of multiple threads, so even CPUs with hyper-threading should be able to use those threads to offer performance similar to GPU PhysX.

I also don't see why they have to rewrite the entire architecture; games like UT3 and other popular games that rely on PhysX might be broken by a new architecture, so how are they going to do it without breaking compatibility? Many questions remain unanswered due to NVIDIA's corporate greed.


TaylorKarras:

I just don't see why they couldn't just make it work on x86 CPUs. If they're promoting CUDA and PhysX as standards, then why don't they promote them as such? Why do they tie them to NVIDIA and then force everybody to buy their products? I swear corporations will do anything to gain a quick buck, even if it means delaying CPU PhysX for years.

They're promoting CUDA and PhysX only insofar as they will drive sales of NVIDIA GPUs. Corporations exist to make a profit; all other considerations are secondary. And in this case with PhysX, it has been a lengthy road of development and marketing; there was nothing quick about this buck.


Taylor,

You don't understand the situation properly. PhysX is a physics middleware engine. It runs on CPUs. It runs on GPUs. Most games in development are console games, and PhysX is executed on the CPU on both the Xbox 360 and the PS3. This is a point we keep coming back to again and again because it seems so poorly understood. PhysX is a hardware AND a software solution. When we talk about hardware PhysX, we're talking about GPU-executed PhysX. That's a very small chunk of the total PhysX base.


PhysX actually works very well in at least one PC game I know of, regardless of platform.

Metro 2033. Although the game is an incredible resource hog otherwise, PhysX seems to have no additional impact on one platform over another. And there are plenty of examples of PhysX in the game as well.

So really, I think it comes down to how well it is coded into the game.

Other games like Batman AA will slow a system to a crawl if you do not have hardware PhysX running on an Nvidia card.


What the poop! Are we all talking about my new 480 card I have in my PC?! I was quite ignorant, thinking that Nvidia's main focus was improving PC graphics; I mean, it's what they do, right? Well, there's money in optimizing for consoles, seeing that that's the majority of gaming. I'm hopeful for a revamp of their drivers using x86. Hopefully it won't take them too much time. It boggles my mind that this card could perform magnitudes (eh, well, still noticeably) better, and Nvidia hasn't pursued it yet. Talk about not caring. I have a case badge of you guys!!! graahh


I think you missed the point, MrBrownSound.

This has nothing to do with the performance of your video card.

It has to do with PhysX support across multiple platforms.

The big debate is that PhysX is only optimized for Nvidia hardware.


Acartz,

I've actually never been able to catch a difference in PhysX performance between having it on and off with any CPU in Metro 2033. Visually it does make a difference (although a subtle one). So I do agree: there appears to be some solid optimization there.

We could test it, I suppose, by turning PhysX on and off on a dual-core or even single-core CPU in M2033. That might be interesting.


That would be pretty awesome, Joel! I'd be very interested in seeing those results!
