Intel’s AI Future Banks On Nervana And Knights Mill Processors For 100-Fold Performance Boost

Intel is laying out a roadmap to advance artificial intelligence performance across the board. Nervana Systems, a company Intel acquired just a few months ago, will play a pivotal role in the company’s efforts to make waves in an industry dominated by GPU-based solutions.

Intel’s Nervana chips pair purpose-built silicon with a fully optimized software and hardware stack, aimed specifically at reducing the time required to train deep learning models. Nervana hardware will initially be available as an add-in card that plugs into a PCIe slot, which is the quickest way for Intel to get the technology to customers. The first Nervana silicon, codenamed Lake Crest, will make its way to select Intel customers in H1 2017.

Nervana Processor
Nervana claims its processors, which employ HBM technology, achieve "unprecedented compute density and an order of magnitude more raw computing power than today’s state-of-the-art GPUs."

“We expect the Intel Nervana platform to produce breakthrough performance and dramatic reductions in the time to train complex neural networks,” said Diane Bryant, Executive VP of Intel’s Data Center Group. “We expect Nervana’s technologies to produce a breakthrough 100-fold increase in performance in the next three years to train complex neural networks, enabling data scientists to solve their biggest AI challenges faster,” added Intel CEO Brian Krzanich.

Intel AI Ecosystem

Intel is also talking about Knights Mill, the next generation of the Xeon Phi processor family. The company promised that Knights Mill will deliver a 4x increase in deep learning performance compared to existing Xeon Phi processors, and that the combined solution with Nervana technology will offer orders-of-magnitude gains in overall deep learning performance.

3rd Gen Xeon Phi

Intel’s goal is to provide robust hardware to compete with the GPU-based solutions from NVIDIA and the custom accelerators from the likes of Google that have been dominating artificial intelligence computing. Intel would argue that not having a GPU solution doesn’t put it at a disadvantage, and that its scalable architecture is more future-proof.

“GPGPU architecture is not uniquely advantageous for AI, and as AI continues to evolve, both deep learning and machine learning will need highly scalable architectures,” said Krzanich. “Intel architecture can support larger models and offer a consistent architecture from edge to the data center. This is where a broad product portfolio with a holistic ecosystem is a strategic advantage.

Intel Xeon Phi

“Intel has the vision, the technologies and the commitment to harness the power of AI to deliver a better world. Now is the time to inspire and innovate for the future.”

Intel was caught flat-footed when the smartphone age was quickly thrust upon us. The company, which had become a household name with its processors for desktops, notebooks and servers, was simply too late to the party to really take on mobile SoC heavyweights like Qualcomm and Samsung. As a similar surge in demand emerges for hardware to power artificial intelligence projects, Intel doesn’t want to make the same mistake again.