IBM POWER8-powered Linux Servers Use Blazing-Fast NVIDIA NVLink For AI, Deep Learning
We've been hearing about NVIDIA's NVLink for quite a while -- ever since the original announcement of Pascal -- but we've still not seen it put into broad use. That changes with NVIDIA's super-high-end Tesla P100, though that card still isn't shipping in huge quantities. What might arrive in volume first is IBM's newest line of Linux-based servers, which also employ NVLink to accelerate AI and deep-learning research.
As you might suspect given the use of NVLink, IBM's latest Linux servers are GPU-focused, and because of that, IBM says they can offer up to 80% better performance-per-dollar than x86-only servers. It's no secret that GPUs are highly parallel beasts, and NVIDIA itself has long touted their benefits to AI and deep-learning researchers.
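To give a sense of why that parallelism matters, here's a minimal CUDA sketch -- purely illustrative, not taken from IBM's or NVIDIA's materials -- that spreads a simple vector operation, the basic building block of neural-network math, across roughly a million GPU threads, one per element:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// y = a*x + y across n elements -- the kind of elementwise/matrix math that
// dominates deep-learning training. Each GPU thread handles one element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                      // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // memory visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements at once.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Scale that same pattern up to the enormous matrix multiplications behind neural-network training and the appeal of GPU acceleration becomes obvious.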
While that 80% performance-per-dollar figure is the headline, IBM makes another interesting comparison: with NVLink, data movement can be improved fivefold over "any competing platform". That's not surprising given what NVIDIA has told us in the past, but it's still incredibly impressive.
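As for what that fivefold claim applies to: NVLink's job is shuttling data between GPUs -- and, on these POWER8 boxes, between CPU and GPU -- far faster than PCIe allows. Below is a rough sketch, with device IDs, buffer size, and the omission of error checking all assumed for brevity, that uses the standard CUDA runtime peer-access calls to time a direct GPU-to-GPU copy, the sort of transfer NVLink is built to accelerate:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: assumes at least two GPUs, seen as devices 0 and 1.
int main()
{
    const size_t bytes = 256ULL << 20;          // 256 MB test buffer
    int canAccess = 0;

    // Ask the runtime whether GPU 0 can address GPU 1's memory directly;
    // on peer-connected GPUs (NVLink or PCIe P2P) this returns 1.
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    printf("GPU0 -> GPU1 peer access: %s\n", canAccess ? "yes" : "no");

    float *dst = nullptr, *src = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    if (canAccess)
        cudaDeviceEnablePeerAccess(1, 0);       // let GPU 0 reach GPU 1 directly

    // Time the copy with events on GPU 0's default stream.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpyPeer(dst, 1, src, 0, bytes);      // GPU 0 -> GPU 1
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Copied %zu MB in %.2f ms (%.1f GB/s)\n",
           bytes >> 20, ms, (bytes / 1e9) / (ms / 1e3));

    cudaFree(src);
    cudaSetDevice(1);
    cudaFree(dst);
    return 0;
}
```

On a PCIe-only machine that copy tops out around PCIe bandwidth; route the same call over NVLink and you get the kind of gap that claims like the fivefold figure are built on.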
If you need even more prodding, IBM points to Tencent's experience. Tencent is one of the largest Internet services companies in China, and after testing IBM's new servers, it found it could churn through data-intensive workloads three times faster than with its previous setup. The ROI gains there are obvious: faster processing means quicker results.
The new Power LC lineup is set to become available later this month, with pricing starting at $5,999. Exactly what that $5,999 buys won't be known until the servers hit the shelves, but we do know the base configuration will be a dual-CPU platform that can also house two GPUs.