NVIDIA Sheds Light On Lack Of PhysX CPU Optimizations
I bought the company, I built the hype, and I wrote all the new code!
Unfortunately, NVIDIA is caught within a form of the Prisoner's Dilemma. The PD is a game theory model describing how individuals acting in what they perceive to be their own best interest can arrive at the worst outcome. In this case, NVIDIA wants to monetize PhysX/CUDA, and the best way to do that is to encourage developers to use them. The snag is that software developers have a very long history of only using languages and features that they know will be supported by as much hardware as possible.
The best way to encourage people to buy NVIDIA GPUs is to ensure that the special effects are amazing and only available to NVIDIA customers. Optimizing PhysX to run on an x86 CPU potentially dilutes the attractiveness of an NVIDIA GPU, and increases the chance that customers will keep their existing cards or use a competitor's product. It could also have an impact on the company's nascent Tegra platform; NVIDIA has good reason not to optimize PhysX for Atom.
Except it's not that simple. We've already said that developers tend to support standards that a wide range of hardware can utilize, which means it is in NVIDIA's best interest to optimize PhysX for all sorts of hardware. The more platforms that run PhysX well, the more developers will use it. But the better PhysX runs on the CPU, the smaller the chance that developers will go to the extra effort of utilizing the hardware-accelerated flavor, and the fewer consumers will opt to buy a GPU for whiz-bang special effects.
NVIDIA's claims about improvements notwithstanding, benchmarks and Kanter's investigation have confirmed that the vast majority of games using hardware PhysX today aren't optimized for CPU execution and drop to a stuttering crawl when asked to run there. Whose fault that is, NVIDIA's or the developers', remains an open question. The larger point is that NVIDIA may soon have to choose between establishing CUDA and PhysX as an industry-wide standard or continuing to use them as a selling point for its GPUs. Thus far, the company has tried to do both simultaneously, but we're wondering how much longer it can.