Nvidia Offers Peek Into Advanced Design Evaluation
There's a significant difference between software simulation and hardware emulation. In simulation, software tools model the logic of a new design or product respin. The advantage of simulation is that most such tools are mature, flexible, and inexpensive. The disadvantage is that software simulation is slow; only very simple microprocessors can be simulated at a speed sufficient to test application software.
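To illustrate why, here's a minimal, hypothetical sketch of gate-level software simulation; it is not Nvidia's actual tooling, just a toy ripple-carry adder showing that every logic gate becomes a host-CPU operation, which is why simulating a design with billions of transistors is impractically slow:

    # Hypothetical sketch: gate-level simulation of a ripple-carry adder.
    # Each gate is evaluated in software -- flexible and easy to modify,
    # but far too slow to scale to a full modern GPU.

    def full_adder(a, b, cin):
        # One full adder expressed as individual gate operations.
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def ripple_adder(x, y, width=4):
        """Simulate a ripple-carry adder one full adder at a time."""
        carry, result = 0, 0
        for i in range(width):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= bit << i
        return result, carry

    assert ripple_adder(0b0101, 0b0011) == (0b1000, 0)  # 5 + 3 = 8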
FPGAs (Field Programmable Gate Arrays) can be used to build hardware prototypes, but they offer limited debugging visibility, and modifying an FPGA prototype takes far longer than tweaking a simulation. There's an inherent validation gap between initial software simulation and final hardware prototyping that neither technique can bridge efficiently. Nvidia's hardware emulation lab addresses the limitations of both simulation and FPGA prototyping.
The Best of Both Worlds
Hardware emulators are specialized systems that can be programmed to emulate any specific architecture. In Nvidia's case, a standard x86 system is connected to a hardware emulator that's been pre-programmed to emulate a GeForce GPU that's still under design. The testbed generates the code in question and sends it to the emulator, which executes it and returns the output. Nvidia's setup uses a specialized cable to handle the transmission between the two systems, shown above.
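A rough sketch of that round trip might look like the following. This is purely illustrative: the frame format, the emulator_stub stand-in, and the function names are our own assumptions, since Nvidia's actual link runs over a proprietary cable and protocol.

    # Hypothetical sketch of the host/emulator round trip described above.
    # A local stub stands in for the emulator so the flow is runnable.
    import struct

    def emulator_stub(frame: bytes) -> bytes:
        # Stand-in for the emulated GPU: return one fake result word per command.
        count = struct.unpack_from("<I", frame, 0)[0]
        return struct.pack(f"<{count}I", *range(count))

    def run_test_vectors(commands: list[int]) -> list[int]:
        """Pack GPU commands, 'transmit' them, and unpack the emulator's output."""
        frame = struct.pack(f"<I{len(commands)}I", len(commands), *commands)
        reply = emulator_stub(frame)  # real setup: sent over the specialized cable
        return list(struct.unpack(f"<{len(commands)}I", reply))

    print(run_test_vectors([0xDEADBEEF, 0xCAFEF00D]))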
"Today’s GPUs, which are some of the world’s most complex devices, have billions of transistors," said Narendra Konda, NVIDIA engineering and emulation lab director. "There’s no way around the fact that cutting-edge design tools like hardware emulators are essential for designing, verifying, developing software drivers and integrating software and hardware components of GPUs and mobile processors."
"Deploying and managing these complex tools requires a very skilled and committed engineering team,” Konda said. “The great work that the emulation team does keeps this state-of-the-art lab humming along."
The emulators can be connected for SLI-style scalability, though we doubt Nvidia regularly taps the full power of the lab for a single project. The nearly ten-year gap in emulation performance and capacity between the oldest and newest systems suggests it makes more sense to keep older emulators like Nile and Rhine (not pictured) handling low-end or mobile parts, while reserving the higher-end equipment for cutting-edge designs.
According to Nvidia, Indus was designed to handle Kepler, Fermi's successor. Nvidia and Cadence spent 3.5 years just designing the emulator. Given that the Indus array isn't nearly as large as the Tigris that preceded it, we suspect the Palladium XP is designed to easily accommodate additional systems deployed in parallel.
A Concrete Example of an Oft-Discussed Abstraction
We've noted on many occasions that Nvidia has invested more in GPGPU computing than any other company, but we've rarely discussed what that means in terms of physical purchases or production equipment. Nvidia's emulator strategy benefits the company's entire range of products, but we suspect it's primarily focused on boosting the performance and attractiveness of Tesla solutions. Kepler and Intel's Knights Corner may come face to face in the HPC market; Team Green's investment in Indus suggests it takes that scenario quite seriously.