NVIDIA Tesla T4 GPU Accelerator Rocks Turing, 2560 CUDA Cores And 64 TFLOPS FP16
NVIDIA knows how to make an entrance, that's for sure. The GPU maker stole the spotlight at Gamescom last month by unveiling its hotly anticipated GeForce RTX graphics cards with real-time ray tracing support, which will ship later this month. In the meantime, NVIDIA just announced another product at its GPU Technology Conference (GTC) in Japan, the Tesla T4.
The Tesla T4 is a burly accelerator built for data centers that will enable the next wave of AI-powered services. It's based on NVIDIA's Turing GPU architecture and is being billed as the most advanced inference accelerator ever built, with NVIDIA claiming up to 40X better low-latency throughput compared to Intel's Xeon Gold 6140 CPU.
Indeed, NVIDIA has been promoting GPUs over CPUs as better fits for professional workloads, particularly machine learning and AI chores. On paper, the Tesla T4 brings the goods. It has 320 Turing Tensor Cores and 2,560 CUDA cores capable of delivering up to 64 teraflops of peak FP16 performance, 130 TOPS for INT8, and 260 TOPS for INT4.
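Those peak figures hang together arithmetically. As a rough sanity check, here is a short sketch that derives them from the Tensor Core count; the boost clock (~1,590MHz) and per-core throughput (one 4x4x4 FP16 fused multiply-add per clock, i.e. 128 floating-point operations) are assumptions based on typical Turing Tensor Core behavior, not figures stated in the article:

```python
# Rough derivation of the T4's quoted peak throughput numbers.
# Assumptions: ~1590 MHz boost clock; each of the 320 Tensor Cores
# performs 64 FP16 FMAs per clock (128 FLOPs); INT8 doubles that
# rate and INT4 doubles it again.

TENSOR_CORES = 320
BOOST_CLOCK_HZ = 1590e6          # assumed boost clock
FLOPS_PER_CORE_PER_CLOCK = 128   # 64 FMAs x 2 ops each (assumed)

fp16_tflops = TENSOR_CORES * BOOST_CLOCK_HZ * FLOPS_PER_CORE_PER_CLOCK / 1e12
int8_tops = fp16_tflops * 2      # INT8 runs at twice the FP16 rate
int4_tops = int8_tops * 2        # INT4 doubles throughput again

print(f"FP16: {fp16_tflops:.0f} TFLOPS")  # ~65
print(f"INT8: {int8_tops:.0f} TOPS")      # ~130
print(f"INT4: {int4_tops:.0f} TOPS")      # ~260
```

The result lands within rounding distance of NVIDIA's quoted 64/130/260 figures.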
The accelerator also comes packed with 16GB of GDDR6 memory, offering up to 320GB/s of memory bandwidth. It all comes packaged in an energy-efficient 75-watt, small-form-factor PCIe card that is optimized for scale-out servers.
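The 320GB/s figure is consistent with a typical GDDR6 configuration. A quick back-of-the-envelope check, assuming a 256-bit memory bus at 10Gbps per pin (neither value appears in the article; both are assumptions):

```python
# Derive the quoted memory bandwidth from assumed GDDR6 parameters:
# a 256-bit bus running at 10 Gbps per pin.

BUS_WIDTH_BITS = 256   # assumed bus width
DATA_RATE_GBPS = 10    # assumed per-pin data rate

bandwidth_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")  # 320
```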
NVIDIA's new Tesla T4 is part of a larger TensorRT Hyperscale Platform. It works in conjunction with NVIDIA's TensorRT 5, an inference optimizer and runtime engine that supports Turing Tensor Cores.
"Our customers are racing toward a future where every product and service will be touched and improved by AI," said Ian Buck, vice president and general manager of Accelerated Business at NVIDIA. "The NVIDIA TensorRT Hyperscale Platform has been built to bring this to reality—faster and more efficiently than had been previously thought possible."
It's easy to see why NVIDIA is excited about the Tesla T4. Machine learning and AI are pervasive technologies. By NVIDIA's estimation, the AI inference industry is poised to grow in the next five years into a $20 billion market.
Several server makers are planning to incorporate the Tesla T4 accelerator into their systems, including Fujitsu, HPE, IBM, and Supermicro.