Today’s opening keynote at the Intel Developer Forum focused on a number of forward-looking AI, deep learning, connectivity, and networking technologies, like 5G and Silicon Photonics. But late in the address, Diane Bryant, Intel’s Vice President and General Manager of its Data Center Group (DCG), dropped a few details regarding the company’s next-generation Xeon Phi processor, codenamed Knights Mill.
Knights Mill is designed for high-performance machine learning and artificial intelligence workloads, and is currently slated for release sometime in 2017. According to Bryant, Knights Mill is optimized for scale-out analytics deployments, and it will include architectural enhancements and new instructions for variable-precision floating-point algorithms, targeted at deep learning training.
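Intel hasn’t disclosed what those variable-precision instructions actually are, but the general idea behind reduced-precision math in deep learning training is easy to demonstrate. The sketch below (plain NumPy, purely illustrative and not Intel-specific) shows why precision flexibility matters: half-precision data is cheap to move and store, but naively accumulating in half precision loses small updates, so training hardware typically keeps a higher-precision accumulator.

```python
# Illustrative only: Knights Mill's actual instructions are unannounced.
# Shows why variable precision matters for training: float16 data is
# compact, but a float16 accumulator silently drops small gradient-sized
# values once the running sum grows, while a float32 accumulator does not.
import numpy as np

values = np.full(100_000, 0.0001, dtype=np.float16)  # many tiny "gradients"

naive = np.float16(0.0)
for v in values:                      # accumulate entirely in float16
    naive = np.float16(naive + v)     # additions eventually round away

# Same float16 data, but summed with a float32 accumulator
mixed = np.float32(values.astype(np.float32).sum())

print(float(naive))   # plateaus far below the true sum of ~10.0
print(float(mixed))   # close to 10.0
```

The true sum is roughly 10.0, but the all-float16 accumulator stalls once each 0.0001 addend falls below half a unit in the last place of the running total, which is exactly the failure mode that mixed- or variable-precision hardware paths are designed to avoid.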
Intel’s Xeon Phi competes with high-end GPUs from NVIDIA and AMD that excel at highly parallel workloads. GPUs in deep learning servers essentially act as co-processors, however, and must be installed alongside traditional CPUs. Knights Mill, by contrast, is a bootable many-core processor – it serves as both the main processor and the co-processor in a single solution. Intel also points out that Xeon Phi holds an advantage over GPUs thanks to its flexible memory configuration.
According to Intel, 7% of all servers deployed last year were in support of machine learning, and Intel processors were in 97% of those servers. How many of those servers also included multiple GPUs wasn’t mentioned, but you can bet it was a large majority. With Knights Mill, Intel aims to change that.
We should know more about Knights Mill in the coming months.