GTC '14: NVIDIA Outs Pascal, Titan Z, Tegra Updates
Date: Mar 25, 2014
Section: Graphics/Sound
Author: Dave Altavilla
NVIDIA NVLink, 3D Memory, Pascal GPU

NVIDIA's 2014 GTC (GPU Technology Conference) kicked off with a bang this morning in San Jose, California, with NVIDIA CEO Jen-Hsun Huang offering up a healthy dose of new information on next generation NVIDIA GPU technologies for the professional workstation, big data analytics, cloud computing, gaming and mobile markets.

 

As members of the press took their seats front and center, we were treated to a spectacle of sight and sound, NVIDIA-style, and Jen-Hsun wasted no time diving into the latest cutting-edge GPU technology advancements.

First, we've pulled together sort of a highlight reel of NVIDIA CEO Jen-Hsun Huang's presentation and some of the more impactful overviews and tech demos.  Have a gander and then we'll dive into the specifics a bit more...


Two important NVIDIA technology innovations will be employed in NVIDIA's next-gen GPU technology, now known by its code name "Pascal."  First, there's a new serial interconnect known as NVLink for GPU-to-CPU and GPU-to-GPU communication.

Though details were sparse, NVLink is apparently a serial interconnect that employs differential signaling with an embedded clock, and it allows for unified memory architectures and, eventually, cache coherency.  It's similar to PCI Express in terms of command set and programming model, but NVLink will offer a massive 5X - 12X boost in bandwidth, up to 80GB/sec (versus roughly 16GB/sec for a PCI Express 3.0 x16 link).
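
To put those figures in rough perspective, here's a quick back-of-the-envelope comparison of our own (not NVIDIA's math), assuming a PCI Express 3.0 x16 link tops out at roughly 16GB/sec and using a Titan-class 12GB frame buffer as the payload:

// Back-of-the-envelope bandwidth comparison (our own numbers, not NVIDIA's).
// Assumes PCI Express 3.0 x16 delivers roughly 16GB/sec per direction.
#include <cstdio>

int main()
{
    const double pcie3_x16_gbps = 16.0;   // approximate peak, GB/sec
    const double nvlink_gbps    = 80.0;   // NVIDIA's quoted top figure
    const double framebuffer_gb = 12.0;   // e.g., a Titan-class 12GB card

    printf("NVLink vs. PCIe 3.0 x16: %.1fX\n", nvlink_gbps / pcie3_x16_gbps);
    printf("Time to move %.0fGB over PCIe 3.0 x16: %.2f sec\n",
           framebuffer_gb, framebuffer_gb / pcie3_x16_gbps);
    printf("Time to move %.0fGB over NVLink:       %.2f sec\n",
           framebuffer_gb, framebuffer_gb / nvlink_gbps);
    return 0;
}

At the low end of NVIDIA's quoted range, that works out to about a 5X advantage over PCI Express 3.0, which lines up with the figures shown on stage.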

NVLink will allow for better, more efficient multi-GPU scaling and alleviate some of the intrinsic system bottlenecks that exist today with legacy memory and CPU interfaces.

 

Enter - NVIDIA's Pascal

The second important technology to power NVIDIA's forthcoming Pascal GPU is 3D stacked DRAM.  NVIDIA has shown this memory manufacturing technique in the past, but this is the first real implementation we've seen fleshed out by NVIDIA. The technique employs through-silicon vias that allow DRAM die to be stacked on top of one another, providing much more density in the same PCB footprint for the DRAM package.  If you look closely at the Pascal GPU module here, you can see that stacked DRAM (likely GDDR5 or a next-gen memory type) on the side edges of the board.  There also appears to be memory embedded on the GPU substrate itself, which could in fact be a more localized DRAM cache, similar to what Intel has done with Iris Pro Graphics but likely at much higher density.

Pascal is an exciting product in that it should address the memory and IO bandwidth limitations that exist in current implementations, which employ PCI Express and traditional DRAM components and interfaces.  With a 5X increase in available bandwidth for memory and IO, what NVIDIA has built for Pascal should carry 3D graphics performance and capabilities well into the next generation of systems and applications that demand this kind of horsepower.  It should also enable new usage models for GPU computing and even more impressive visual effects for gamers, content creation professionals and consumers alike.  Unfortunately, it looks like we'll have to wait until at least early 2016 before Pascal takes flight, though NVIDIA is currently prototyping the design.

GTX Titan Z, Iray VCA, Tegra Erista

GeForce GTX Titan Z:
Jen-Hsun also used his opening keynote to show off NVIDIA’s most powerful graphics card to date, the absolutely monstrous GeForce GTX Titan Z.

NVIDIA claims the Titan Z is designed for “next-generation 5K and multi-monitor gaming.” We haven’t seen any hard performance data just yet, but if the preliminary specifications are anything to go by, the Titan Z is going to have no trouble powering through the latest games, and resolutions beyond 4K should be no problem.

The upcoming GeForce GTX Titan Z is powered by a pair of GK110 GPUs, the same chips that power the GeForce GTX Titan Black and GTX 780 Ti. All told, the card features 5,760 CUDA cores (2,880 per GPU) and 12GB of frame buffer memory (6GB per GPU). NVIDIA also said that the Titan Z’s GPUs are tuned to run at the same clock speed and feature dynamic power balancing, so neither GPU creates a performance bottleneck. NVIDIA also claims the card runs cool and quiet, thanks in part to low-profile components and ducted baseplate channels that minimize turbulence and improve the acoustic qualities of the cooler.

Jen-Hsun said this of the GeForce GTX Titan Z: “If you’re in desperate need of a supercomputer that you need to fit under your desk, we have just the card for you!” And NVIDIA will only charge you $2,999 for one of these monsters. We’ll take two, thank you...


NVIDIA Iray VCA:
Also unveiled at GTC was the NVIDIA Iray VCA. According to NVIDIA, the Iray Visual Computing Appliance (VCA) “combines hardware and software to greatly accelerate the work of NVIDIA Iray -- a photorealistic renderer integrated into leading design tools like Dassault Systèmes' CATIA and Autodesk's 3ds Max.”

Because the appliance is scalable, multiple units can be linked together, speeding up the simulation of light bouncing off surfaces in the real world, i.e. ray tracing. Each Iray VCA features 8 GPUs (totaling 23,000 cores), each paired with 12GB of memory, and the appliances themselves are linked via 10GbE and InfiniBand.

To demonstrate the capabilities of Iray VCA, Jen-Hsun brought out an engineer from Honda who showed off a highly-detailed Honda Accord being rendered in real time, using 19 VCAs. "For our styling design requirements, we developed specialized tools that run alongside our RTT global standard platform," said Daisuke Ide, system engineer at Honda Research and Development. "Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real. This allows us to explore more designs so we can create better designs faster and more affordably."

Iray VCA systems will be available sometime this summer through certified system integrators, which include CADnetwork, Fluidyna, IGI and migenius. The appliance will be priced at $50,000 in North America and will include an Iray license and the first year of maintenance and updates.
 

Machine Learning:
Another topic discussed at GTC was machine learning. Jen-Hsun talked about a number of companies, including Adobe, Baidu, Netflix, and Yandex, that use NVIDIA CUDA-based GPU accelerators to search and analyze huge datasets, providing things like intelligent image analysis and personalized movie recommendations.



  

Machine learning algorithms are used to train computers to essentially teach themselves by sifting through mountains of data and making intelligent comparisons. For example, a machine learning system can learn to identify a fox by analyzing lots of images of dogs, ferrets, jackals, raccoons and other animals, including foxes, in much the same way that humans learn. A demo, in which photos of random dogs were tweeted to the presenter, showed how a machine learning algorithm running on an array of NVIDIA Tesla GPUs could very quickly identify the actual breed, not simply identify the animal as a dog.
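
For the curious, here's a minimal CUDA sketch of the kind of massively parallel comparison work these GPU accelerators excel at. This is our own illustration, not NVIDIA's demo code; the squaredDistances kernel, the feature vectors and the sizes are all hypothetical stand-ins. Each thread scores one labeled reference image's feature vector against the query image, the sort of brute-force matching that underpins simple nearest-neighbor classification:

// Illustrative sketch (not NVIDIA's demo code): score a query image's feature
// vector against a bank of labeled reference vectors in parallel, one thread
// per reference. The nearest references then "vote" for the predicted label.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void squaredDistances(const float* refs,   // [numRefs * dim]
                                 const float* query,  // [dim]
                                 float* dists,        // [numRefs]
                                 int numRefs, int dim)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRefs) return;

    float acc = 0.0f;
    for (int d = 0; d < dim; ++d) {
        float diff = refs[i * dim + d] - query[d];
        acc += diff * diff;
    }
    dists[i] = acc;
}

int main()
{
    const int numRefs = 4096, dim = 256;   // toy sizes for illustration

    // Fill host buffers with placeholder feature data.
    float *hRefs = new float[numRefs * dim], *hQuery = new float[dim];
    for (int i = 0; i < numRefs * dim; ++i) hRefs[i] = (i % 97) * 0.01f;
    for (int d = 0; d < dim; ++d)           hQuery[d] = (d % 13) * 0.02f;

    float *dRefs, *dQuery, *dDists;
    cudaMalloc(&dRefs,  numRefs * dim * sizeof(float));
    cudaMalloc(&dQuery, dim * sizeof(float));
    cudaMalloc(&dDists, numRefs * sizeof(float));
    cudaMemcpy(dRefs,  hRefs,  numRefs * dim * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dQuery, hQuery, dim * sizeof(float),           cudaMemcpyHostToDevice);

    // One thread per reference vector, 256 threads per block.
    squaredDistances<<<(numRefs + 255) / 256, 256>>>(dRefs, dQuery, dDists, numRefs, dim);
    cudaDeviceSynchronize();

    float *hDists = new float[numRefs];
    cudaMemcpy(hDists, dDists, numRefs * sizeof(float), cudaMemcpyDeviceToHost);
    printf("distance to reference 0: %f\n", hDists[0]);

    cudaFree(dRefs); cudaFree(dQuery); cudaFree(dDists);
    delete[] hRefs; delete[] hQuery; delete[] hDists;
    return 0;
}

Real systems like the one demoed on stage use far more sophisticated models, but the basic pattern of running thousands of independent comparisons in parallel is exactly why this workload maps so well to GPUs.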

Jetson TK1 and Erista:
No GTC would be complete without discussing Tegra and its roadmap. NVIDIA’s CEO also used the opening keynote address to show off the Tegra K1-based Jetson TK1 devkit and announce Erista, the codename for a future Tegra-branded SoC. (Perhaps the Tegra M1?)

The $192 Jetson TK1 devkit features a Tegra K1 SoC and includes 2GB of memory and I/O connectors for USB 3.0, HDMI 1.4, Gigabit Ethernet, audio, SATA, miniPCIe and an SD card slot. The Jetson TK1 Developer Kit also includes a full C/C++ toolkit that leverages NVIDIA CUDA technology, and it supports NVIDIA’s VisionWorks toolkit as well, which provides a rich set of computer vision and image processing algorithms. The Jetson TK1 is designed to bring CUDA’s capabilities to areas such as robotics, augmented reality, computational photography, human-computer interfaces and advanced driver assistance systems (ADAS).
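
As a taste of what the kit's CUDA toolchain is for, here's a minimal sketch of a simple image-processing kernel that converts an RGB frame to grayscale with one thread per pixel. It's our own illustration (the rgbToGray kernel and convertToGray helper are hypothetical names), not sample code from the devkit or VisionWorks:

// Illustrative computer vision building block: RGB-to-grayscale conversion,
// one CUDA thread per pixel. Not devkit or VisionWorks sample code.
#include <cuda_runtime.h>

__global__ void rgbToGray(const unsigned char* rgb, unsigned char* gray,
                          int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    float r = rgb[idx * 3 + 0];
    float g = rgb[idx * 3 + 1];
    float b = rgb[idx * 3 + 2];

    // Standard luminance weighting for the grayscale value.
    gray[idx] = static_cast<unsigned char>(0.299f * r + 0.587f * g + 0.114f * b);
}

// Host-side launch helper: one 16x16 thread block per tile of the image.
// Both pointers are assumed to already reside in device memory.
void convertToGray(const unsigned char* dRgb, unsigned char* dGray,
                   int width, int height)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    rgbToGray<<<grid, block>>>(dRgb, dGray, width, height);
    cudaDeviceSynchronize();
}

Kernels like this are typically the first stage of a vision pipeline; a library such as VisionWorks packages up this sort of low-level image processing behind higher-level algorithms.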

There weren’t many details given on Erista, but Jen-Hsun did say that it was “right around the corner” and that it would feature a Maxwell-based GPU. Through enhancements to the CPU and GPU architectures, and presumably to its manufacturing process, Erista will offer higher performance and better energy efficiency than any previous Tegra SoC. Availability wasn’t specifically discussed, but the roadmap slide shows Erista arriving sometime before the end of 2014 or in early 2015.

We'll be out here covering GTC for the next few days, so stay tuned to HotHardware for more news from the conference as it breaks.
 


