At its core, Project Trillium consists of three primary components. The first is the machine learning processor itself, which will be available to ARM partners starting in the second half of 2018 (an early preview arrives in April). The second is an object detection processor, due by the end of Q1, while the third is a set of new software libraries dedicated to processing neural networks.
ARM claims its machine learning processor was designed from the ground up for machine learning tasks, with performance rated at 4.6 TOPs and an efficiency of 3 TOPs per watt. The complementary object detection processor can process objects as small as 50x60 pixels in up to Full HD resolution at 60 frames per second in real time. Because the detection processor first narrows down where objects are in the frame, the machine learning processor has fewer pixels to analyze, enabling faster, fine-grained object recognition, including real-time facial recognition.
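For a sense of scale, the two quoted figures can be combined in a back-of-envelope calculation (assuming both describe the same workload, which ARM's announcement does not explicitly state): 4.6 TOPs of throughput at 3 TOPs per watt implies a power budget of roughly 1.5 W, comfortably within a smartphone's thermal envelope.

```python
# Back-of-envelope check of ARM's quoted figures. Assumption: the
# throughput and efficiency numbers refer to the same workload.
throughput_tops = 4.6          # tera-operations per second (ARM's figure)
efficiency_tops_per_watt = 3.0 # ARM's quoted efficiency

# Power = throughput / efficiency
implied_power_watts = throughput_tops / efficiency_tops_per_watt
print(round(implied_power_watts, 2))  # about 1.53 W
```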
ARM says its object detection processor offers an 80x performance uplift over a traditional smartphone digital signal processor (DSP) and is capable of detecting an "almost unlimited number" of objects in a frame.
ARM's neural network software, which can be used with the ARM Compute Library and CMSIS-NN, is said to "[bridge] the gap between NN frameworks such as TensorFlow, Caffe, and Android NN and the full range of Arm Cortex CPUs, Arm Mali GPUs, and ML processors."
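The "bridging" role ARM describes, where one trained model runs on whichever Arm compute block a device actually has, boils down to backend dispatch. The sketch below illustrates that idea only; every name in it is hypothetical and none of it reflects the real Arm NN or ARM Compute Library APIs.

```python
# Hypothetical sketch of backend dispatch, the idea behind ARM's NN
# software layer. All names here are illustrative inventions, not the
# actual Arm NN API. Backends are listed in order of preference.
BACKEND_PREFERENCE = ["ml_processor", "mali_gpu", "cortex_cpu"]

def select_backend(present: set) -> str:
    """Pick the most preferred compute block present on the device."""
    for backend in BACKEND_PREFERENCE:
        if backend in present:
            return backend
    raise RuntimeError("no supported Arm compute block found")

# A device without the dedicated ML processor falls back to the GPU:
print(select_backend({"cortex_cpu", "mali_gpu"}))  # mali_gpu
```

The point of such a layer is that a TensorFlow or Caffe model need not be rewritten per target; the software picks the fastest available hardware at runtime.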
“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint. To meet this demand, Arm is announcing its new ML platform, Project Trillium,” said Rene Haas, president of ARM's IP Products Group. “New devices will require the high-performance ML and AI capabilities these new processors deliver. Combined with the high degree of flexibility and scalability that our platform provides, our partners can push the boundaries of what will be possible across a broad range of devices.”