NVIDIA's Road Ahead: Ion, Tegra and The Future of The Company
Date: Aug 28, 2009
Author: Joel Hruska

NVIDIA has built its brand and reputation as a GPU designer since the company was founded in 1993, but recent comments by the company have implied that it believes platforms like Tegra and ION will be key revenue generators in the future. We've previously discussed NVIDIA's ongoing emphasis on the GPU as a massively parallel processor capable of handling workloads and programs far outside the realm of video games, but to date, reviewers and analysts alike have treated Tegra as more of a side project than a future core competency.

The two core components of NVIDIA's mobile strategy: ION and Tegra

Given how difficult the last twelve months have been for NVIDIA, it's easy to wonder if the company's decision to focus on Tegra is correct. To date, the GPU designer has spent some $600 million on Tegra development, with nary a cent in revenue to show for it just yet. Viewed strictly in the short term, it might seem that NVIDIA has mistakenly pumped a good deal of cash into a niche product at a time when it could ill afford to do so. Longer-term, however, there's good reason to think that Tegra really could grow to become a revenue pillar. To understand why, we'll first need to examine the GPU market as a whole.

NVIDIA has spent the better part of a decade establishing itself as a major GPU player in everything from notebooks to workstations, but the imminent introduction of new products and technologies from competitors like Intel could hurt the company's bottom line, particularly as those competing products transition to smaller process nodes and more advanced designs. Up until now, every consumer CPU has had to be paired with a GPU that was either integrated into the motherboard chipset or sold separately as a discrete solution; NVIDIA competes in both of these markets with its various integrated chipsets and discrete cards.

Business As Usual Is Not An Option
Westmere, Intel's budget/mainstream processor that pairs a 32nm dual-core CPU with a 45nm integrated GPU, will challenge that model. Westmere isn't expected to wow the world with excellent graphics performance—even the weakest discrete solutions will likely outperform it—but an integrated CPU+GPU combination will appeal to Taiwanese motherboard manufacturers and OEMs alike. Moving the GPU onto the processor package will simplify motherboard design, reduce motherboard costs, and shift a potential failure point (and warranty cost) off the motherboard. For the various mobo companies, that's a win-win-win scenario. OEMs like Dell or HP may not see a cost-of-warranty benefit, but they should still be able to take advantage of shorter design cycles and cheaper hardware.

Actual Westmere on the left, NVIDIA's 9400G on the right. Images are scaled to the same size, but this is *not* a side-by-side photo. The larger of the two dies on the Westmere core is presumably the GPU.

Westmere's integrated GPU isn't the only Intel-branded headache NVIDIA will have to deal with in the next few years; Intel has given guidance that it expects to ship Larrabee silicon in the first half of 2010. It is, of course, possible that Larrabee will arrive with all the attractiveness of a week-dead walrus: overpriced, underperforming, hot, noisy, and unable to deliver on its lofty promises of real-time ray tracing (RTRT). Good companies, however, don't make future plans on the assumption that their competitors will screw up, which means NVIDIA has to plan for a future in which Larrabee is actively competing for the desktop / workstation market. Intel, after all, won't just throw down its toys and go home, even if first-generation Larrabee parts end up sold as software-development models rather than retail hardware. And by the time Microsoft, Sony, and Nintendo start taking bids on their next-generation consoles, we could be looking at a three-way race for their respective video processors.

The physical layout of Larrabee's die. General consensus is that the above is a 32-core part with 8 TMUs (Texture Mapping Units) and 4 ROP units. The small square attached to each core is the L2 cache; each core is equipped with 256KB of exclusive cache, for a total of 8MB of L2 on-die.

If Intel successfully establishes itself as a major player in the discrete GPU market, both NVIDIA and AMD will face an unwelcome third opponent with financial resources that dwarf the two of them combined. As the dominant company in both desktop and workstation graphics, NVIDIA has the most to lose from such a confrontation, and it's the only one of the three that does not possess an x86 license or an established CPU brand. That leaves the corporation at a distinct disadvantage compared to AMD, which can combine a CPU and GPU into a single package and/or design itself a graphics core based on the x86 architecture. With no simple way to address these issues, NVIDIA is exploring a separate market altogether, and that's where Tegra comes in.

Why Tegra Matters, Conclusion

NVIDIA's ION platform will help siphon Atom-driven revenue into the company's coffers, but it leaves the GPU manufacturer entirely dependent on Intel's release calendar, pricing, and product delivery. ION is important, as it provides NVIDIA with an ultra-low-power platform that's x86-compatible, but good feelings between the two Santa Clara-based companies are most likely at or near a historic low.

Tegra's design combines a proven, well-documented, low-power CPU architecture (ARM11 MPCore) with NVIDIA's own graphics technology. At present, Tegra is being floated as a solution for GPS units, automotive computing, smartbooks, and MIDs—and it'll power the upcoming Zune HD—but we can expect Team Green to continue to evolve Tegra's capabilities the same way Intel is evolving Atom's.

If NVIDIA can persuade more OEMs to design Tegra-based products and successfully overcome the chip's inability to run x86 code, the long-term profit potential could be enormous. According to Intel's financial reports, Atom was responsible for $581 million in company revenue in the first two quarters of 2009. Even if we assume Atom merely repeats that performance in the second half of the year (i.e., no seasonal uptick), Intel's Atom-derived revenue for all of 2009 would be nearly $1.2 billion (2 × $581 million = $1.162 billion). That's small potatoes compared to Intel's net revenue of $37.6 billion in 2008, but it would be a great performance for a product family that won't even be two years old by then. Keep in mind, actual Atom revenue for all of 2009 should be significantly higher, since holiday-quarter sales typically bring a seasonal uptick.

Microsoft's Zune HD

Even a small slice of Atom's current revenue would make a meaningful difference on NVIDIA's balance sheet, and Intel doesn't exactly plan to limit the small processor's growth potential. Intel's oft-repeated goal is to push Atom from netbooks to MIDs to smartphones, pausing along the way to launch devices at every commercially viable size and speed. If NVIDIA can tap that growth with a mixture of x86 platforms (ION) and its own solutions (Tegra), the company could establish itself as a major player in a variety of new markets.

CUDA (Compute Unified Device Architecture) doesn't appear to be supported on first-generation Tegra devices, but it's hard to imagine NVIDIA not including it (possibly in a limited form) on subsequent iterations of the platform if the first proves successful. NVIDIA has made a great deal of noise about CUDA as the future of GPU software development, and the ability to offload programs to the graphics processor could give NVIDIA its own secret sauce when competing against future iterations of Atom.

A quick word on CUDA. While it's often discussed as if it were a programming language, that's incorrect; NVIDIA defines the term as "a revolutionary parallel computing architecture that delivers the performance of NVIDIA's...graphics processor technology to general purpose GPU Computing." CUDA is best understood as a platform capable of running C code (with optional CUDA extensions), OpenCL, and Microsoft's DirectCompute. NVIDIA recently released a DirectCompute-compatible driver in anticipation of Windows 7's launch, while Apple's upcoming Snow Leopard will use OpenCL to pass code to the GPU for execution.
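To make "C code with optional CUDA extensions" a little more concrete, here's a minimal, purely illustrative kernel sketch — not code from any NVIDIA product, and the function names and sizes are our own. Building and running it requires NVIDIA's nvcc compiler and a CUDA-capable GPU:

```
// Minimal CUDA C sketch: add two vectors on the GPU.
// __global__ marks a function that executes on the device; the
// <<<blocks, threads>>> launch syntax is a CUDA extension to C.
#include <stdio.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float a[1024], b[1024], c[1024];
    for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads each (1024 threads total).
    vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);

    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", c[10]);  // 10 + 20 = 30
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The same element-wise operation could be expressed as an OpenCL or DirectCompute kernel; the data-parallel "one thread per element" pattern is what all three standards have in common.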

There's no inherent reason why all three standards can't coexist, just as various CPU programming languages have shared space for decades. CUDA may be at a disadvantage, insofar as it's currently limited to NVIDIA video cards, but that fact does not, in and of itself, doom CUDA. If NVIDIA can demonstrate benefits to using CUDA as opposed to DirectCompute or OpenCL, especially in the HPC market, it will remain a viable option. Even if CUDA is deprecated in the long run, deploying its own standard gave NVIDIA and its developer partners a significant head start in developing programs that effectively utilize the GPU.


The last twenty-five years are littered with examples of companies that claimed Intel (and, by extension, the x86 architecture) couldn't possibly challenge the performance or scalability of their various processors or products. Faced with a future in which integrated CPU/GPU hybrids chip away at its budget products and Larrabee challenges the midrange (at least), NVIDIA is pursuing the barely tapped market for smartbooks, UMPCs, MIDs, and next-generation smartphones. The company's lack of an x86 license could prove to be a disadvantage, but the market space Tegra is targeting is the only one where a non-x86 architecture actually has a chance of succeeding.

Wars aren't won by sitting at home and waiting for the enemy to come to you, especially when your foe has ten times your revenue and far-reaching connections. Graphics and GPU design will remain a critical part of the company's future—you don't pump two years into creating the concept of "visual computing" only to quit—but NVIDIA's decision to capitalize on the same market opportunities Intel is working to create in Atom's target market is, at the very least, strategically sound.

Content Property of HotHardware.com