Intel's Raptor Lake Mobile VPU: What Is It And What Does It Do
by
Zak Killian
—
Wednesday, September 14, 2022, 05:05 PM EDT
Machine learning and AI demand enormous amounts of horsepower when run on general-purpose processors, which is why companies like Meta and Alphabet have spent billions developing their own specialized neural network accelerators. Intel wants a piece of the AI-acceleration pie as well, of course, and that's likely part of why the company is going to start shipping such a feature in its upcoming CPUs, beginning with its 13th-generation mobile parts.
Intel already has CPUs with AI acceleration features, but they take the form of specialized SIMD extensions in the general-purpose CPU cores. Known as "DL Boost," these extensions are part of AVX-512, and as a result they're not supported on the company's latest desktop processors, even though they were partially supported on its 11th-gen CPUs and will be supported on AMD's Zen 4.
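For a sense of what those SIMD extensions actually do: the headline AVX-512 VNNI instruction behind DL Boost, VPDPBUSD, fuses a batch of 8-bit multiplies with a 32-bit accumulate. Here's a rough, plain-Python sketch of what a single 32-bit lane of that instruction computes (illustrative only, not real SIMD code):

```python
# Illustrative sketch (plain Python, not actual SIMD): what one 32-bit
# lane of AVX-512 VNNI's VPDPBUSD computes. It multiplies four unsigned
# 8-bit values by four signed 8-bit values and accumulates the products
# into a 32-bit integer -- the inner loop of int8 neural-net inference.

def vpdpbusd_lane(acc: int, a_bytes: list[int], b_bytes: list[int]) -> int:
    """acc (int32) += sum of a[i] (uint8) * b[i] (int8), for i in 0..3."""
    assert len(a_bytes) == len(b_bytes) == 4
    total = acc + sum(a * b for a, b in zip(a_bytes, b_bytes))
    # Wrap to a signed 32-bit result, as the non-saturating form does.
    total &= 0xFFFFFFFF
    return total - 0x100000000 if total >= 0x80000000 else total

# One call stands in for work that would otherwise take several
# multiply, widen, and add instructions on older cores.
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -20, 30, -40]))  # -100
```

A real 512-bit register holds sixteen such lanes, so one instruction retires 64 of these 8-bit multiply-accumulates at once, which is where the inference speedup comes from.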
According to a slide shared on Twitter by Bob O'Donnell from TECHnalysis Research, Intel will be adding a different kind of neural-network processor to its 13th-generation Raptor Lake and 14th-generation Meteor Lake CPUs. The presence of a so-called "VPU" in Meteor Lake has been rumored since late last year, but this is the first we've heard regarding Raptor Lake.
The slide indicates "Lead Partner Co-development," which sounds to us a lot like the VPU will only appear in products co-developed with Intel's partners. That generally means only select laptops will get the feature, so we can expect specific Raptor Lake mobile laptop SKUs to include the VPU as a separate device. Meanwhile, it's expected to be a standard part of Meteor Lake; Intel says it will have "broad market accessibility" and "processor integration."
Intel has created some interesting AI chips of its own, like its Loihi 2 neuromorphic processor, but the company has also made numerous acquisitions in its pursuit of industry-leading AI compute performance: Altera's FPGAs, Nervana's Neural Network Processors, Habana Labs' Gaudi accelerators, and Movidius' computer vision chips, among others.
It seems like the last entry in that list is the source of Intel's VPUs, which means they are most properly known as "Visual Processing Units." However, some sources think Intel will take to calling them "Versatile Processing Units" instead. That name comes from a kernel patch committed by Intel Linux developer Jacek Lawrynowicz at the end of July.
The last thing we heard out of Intel's Movidius division was the release of the Neural Compute Stick 2, a revamped version of the company's original product. That's a flash drive-sized USB device that can be bus-powered on USB 2.0 and offers an impressive amount of neural-network acceleration for a one-watt device.
With that in mind, Movidius' VPU is a perfect fit for a low-power, client-focused AI accelerator. The company's tech is focused on AI inference, not training: it's intended to apply pre-trained AI models to difficult problems in computer vision. This could have applications in image processing and computational photography, or be used to enhance security with user presence detection and more. Presumably the "Neural Compute Subsystem" in the Movidius processors can be applied to other neural-network inferencing tasks, too, so the "versatile" moniker might not be completely off-base.
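To make the inference-versus-training distinction concrete: inference just runs fixed, pre-trained weights forward, typically quantized down to 8-bit integers so a one-watt part can keep up. Here's a toy sketch of that kind of fixed-point math in plain Python (the weights, activations, and scale factors are made up purely for illustration):

```python
# Toy sketch of inference-only int8 math, the kind of fixed-point
# workload a low-power accelerator like a VPU is built for. All the
# numbers here (weights, activations, scales) are invented examples.

def quantize(xs, scale):
    """Map floats to int8 with a simple symmetric scheme."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_linear(x_q, w_q, x_scale, w_scale):
    """All-integer multiply-accumulates, rescaled to float at the end."""
    acc = sum(a * b for a, b in zip(x_q, w_q))  # cheap int8 MACs
    return acc * x_scale * w_scale

x = [0.5, -1.0, 0.25, 2.0]          # pretend activations
w = [0.1, 0.2, -0.3, 0.05]          # pretend pre-trained weights
x_scale, w_scale = 0.02, 0.005      # made-up quantization scales

x_q = quantize(x, x_scale)
w_q = quantize(w, w_scale)
print(int8_linear(x_q, w_q, x_scale, w_scale))  # close to the float dot product (-0.125)
```

All the heavy lifting happens in integer multiplies and adds, which cost far less silicon and power than floating-point units. That trade of a little precision for a lot of efficiency is why inference-only accelerators can be so small.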