
NVIDIA Sheds Light On Lack Of PhysX CPU Optimizations

About four months ago, we covered the latest round of shin-kicking between ATI and NVIDIA, with ATI claiming that NVIDIA purposefully crippled CPU performance when running PhysX code and coerced developers to make use of it. NVIDIA denied all such claims, particularly those that implied it used its "The Way It's Meant To Be Played" program as a bludgeon to force hardware PhysX on developers or gamers.

A new report from David Kanter at Real World Technologies has dug into how PhysX is executed on a standard x86 CPU; his analysis confirms some of AMD's earlier statements. In many cases, the PhysX code that runs in a given title is both single-threaded and decidedly non-optimized. And instead of taking advantage of the SSE/SSE2 vectorization capabilities at the heart of every x86 processor sold since ~2005, PhysX calculations are done using ancient x87 instructions.

When in doubt, blame the PPU.

Before the introduction of SIMD instruction sets like SSE and SSE2, floating-point calculations on an x86 processor were done with the x87 series of instructions. Over the past 11 years, however, Intel, AMD, and VIA have all adopted SSE and SSE2, both of which allow for much higher throughput than the classic x87 instruction set. Given the ubiquity of support across the PC market, it's hard to say why NVIDIA hasn't specifically mandated their use.
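To give a rough sense of the difference, here's a minimal sketch in C; it is not NVIDIA's actual code, and the function names and the simple scale-and-add kernel are our own invention. The scalar loop handles one float per iteration, and on a 32-bit target a compiler will typically lower it to x87 unless SSE code generation is enabled, while the intrinsics version handles four packed floats per instruction.

/* Illustrative only; not actual PhysX source. A scalar loop vs. the
 * same kernel written with SSE intrinsics, which operate on four
 * packed floats at once. Builds on any SSE-capable x86 compiler,
 * e.g. gcc -msse -O2. */
#include <xmmintrin.h>   /* SSE intrinsics */

/* Scalar version: one float per iteration; a 32-bit build without
 * SSE code generation will typically emit x87 instructions here. */
void scale_add_scalar(float *out, const float *a, const float *b,
                      float k, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] * k + b[i];
}

/* SSE version: four floats per iteration (n is assumed to be a
 * multiple of 4 to keep the sketch short). */
void scale_add_sse(float *out, const float *a, const float *b,
                   float k, int n)
{
    __m128 vk = _mm_set1_ps(k);
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vk), vb));
    }
}

Even without hand-written intrinsics, simply compiling with MSVC's /arch:SSE2 switch or GCC's -msse2 -mfpmath=sse moves scalar float math off the x87 stack, which is part of why the continued reliance on x87 raises eyebrows.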

As RWT's analysis shows, however, virtually all of the applicable uops in both Cryostasis and Soft Body Physics use x87; SSE accounts for just a tiny percentage of the whole. Toss in the fact that CPU PhysX is typically single-threaded while GPU PhysX absolutely isn't, and Kanter's data suggests that NVIDIA has consciously chosen to avoid CPU optimizations and, in so doing, has artificially widened the gap between CPU and GPU performance. If that allegation sounds familiar, it's because we covered similar ground just a few weeks back, after Intel presented a whitepaper arguing that many of the test cases NVIDIA used to claim huge GPU performance advantages were unfairly optimized.
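As for threading, the bulk of a physics step is data-parallel work that maps naturally onto multiple cores. The toy integrator below is purely our own illustration, not PhysX SDK code (the Particle struct and integrate_step function are invented for the example); it just shows how an independent per-particle update can be spread across threads with OpenMP.

/* Hedged illustration only; this is not how the PhysX SDK actually
 * partitions its work. Each loop iteration touches only its own
 * particle, so the work splits trivially across cores.
 * Build with OpenMP enabled, e.g. gcc -fopenmp -O2. */
#include <omp.h>

typedef struct {
    float x, y, z;     /* position */
    float vx, vy, vz;  /* velocity */
} Particle;

/* Advance every particle by one timestep. */
void integrate_step(Particle *p, int count, float dt)
{
    #pragma omp parallel for
    for (int i = 0; i < count; i++) {
        p[i].vy -= 9.81f * dt;   /* gravity */
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

Real constraint solvers have far more coupling between objects than this toy loop, so perfect scaling isn't a given, but multi-core CPU execution is clearly possible in principle.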
 


Hmm, not sure what to make of this. I know any company's goal is to do anything within the boundaries of the law (and outside w/o getting caught) to make money, but at some point public sentiment towards their practices has to factor in or they're going to bleed customers. I think Nvidia desperately needs a ***-storm committee for their ideas.

"Will disabling physX when both an ATI card and Nvidia card are present cause a ***-storm?" "Yes" "Ok nix that"

"Will developing physX in an archaic manner just to avoid poeple using it w/o our hardware cause a ***-storm when they find out about it?" "Yes" "Damn, back to the drawing board!"


Sackyhack: Keep in mind that while we can't guarantee NVIDIA didn't purposely avoid some optimizations, the basic point stands: there's only so much optimization and updating that can be done with a given code base. There comes a point when your programmers are spending more time figuring out how to kludge new features into old software than they are actually building the new features themselves.

NVIDIA's statements do make a certain amount of sense, but we'll have to wait and see how developers use the upcoming 3.0 SDK in order to make a better guess at whether or not the company is avoiding x86 optimizations deliberately.


I just don't see why they couldn't just make it work on x86 CPUs. If they're promoting CUDA and PhysX as standards, then why don't they promote them as such? Why do they tie it to NVIDIA and then force everybody to buy their products? I swear corporations will do anything to gain a quick buck, even if it means delaying CPU PhysX for years. CPUs are advanced enough to take advantage of multiple threads, so even CPUs with hyper-threading should be able to utilize those threads to offer performance similar to GPU PhysX.

I also don't see why they'd have to rewrite the entire architecture. Games like UT3 and other popular titles that rely on PhysX might be broken by a new architecture, so how are they going to do it without breaking compatibility? Many questions remain unanswered due to NVIDIA's corporate greed.


TaylorKarras:

I just don't see why they couldn't just make it work on x86 CPUs. If they're promoting CUDA and PhysX as standards, then why don't they promote them as such? Why do they tie it to NVIDIA and then force everybody to buy their products? I swear corporations will do anything to gain a quick buck, even if it means delaying CPU PhysX for years.

They're promoting CUDA and PhysX only insofar as they drive sales of NVIDIA GPUs. Corporations exist to make a profit; all other considerations are secondary. And in the case of PhysX, it has been a lengthy road of development and marketing; there was nothing quick about this buck.


Taylor,

You don't understand the situation properly. PhysX is a physics middleware engine. It runs on CPUs. It runs on GPUs. Most games in development are console games, and PhysX is executed on the CPU on both the Xbox 360 and the PS3. This is a point we keep coming back to again and again because it seems so poorly understood. PhysX is a hardware AND a software solution. When we talk about hardware PhysX, we're talking about GPU-executed PhysX. That's a very small chunk of the total PhysX base.


PhysX actually works very well in at least one PC game I know of, regardless of platform.

Metro 2033. Although the game is an incredible resource hog otherwise, PhysX seems to have no additional impact on one platform over another. And there are plenty of examples of PhysX in the game as well.

So really, I think it comes down to how well it is coded into the game.

Other games, like Batman: AA, will slow a system to a crawl if you do not have hardware PhysX running on an Nvidia card.


What the poop! Are we all talking about my new 480 card I have in my PC?! I was quite ignorant, thinking that Nvidia's main focus was improving PC graphics; I mean, it's what they do, right? Well, there's money in optimizing for consoles, seeing that it's the majority of gaming. I'm hopeful for a revamp of their drivers using proper x86 optimizations. Hopefully it won't take them too much time. It boggles my mind that this card could perform magnitudes better (well, maybe not magnitudes, but still better) and Nvidia hasn't pursued it yet. Talk about not caring. I have a case badge of you guys!!! graahh


I think you missed the point, MrBrownSound.

This has nothing to do with the performance of your video card.

It has to do with PhysX support across multiple platforms.

The big debate is that PhysX is only optimized for Nvidia hardware.


Acartz,

I've actually never been able to catch a difference in PhysX performance between having it on and off on any CPU in Metro 2033. Visually it does make a difference (although a subtle one). So I do agree; there appears to be some solid optimization there.

We could test it, I suppose, by turning PhysX on and off on a dual-core or even single-core CPU in M2033. That might be interesting.


That would be pretty awesome, Joel! I'd be very interested in seeing those results!
