AMD Doubles Down On AI With Acquisition Of Open-Source Dev Nod.ai
AMD CEO Dr. Lisa Su has been quite vocal about the company’s intention to aggressively focus on gaining AI compute market share moving forward. In recent weeks, Dr. Su noted that AI will be a part of every product at AMD in the next five years, and that she expects AI to be a significant growth driver. News also broke in September that LaminiAI has been building LLMs (Large Language Models) on AMD GPUs for the past year, and over the last few months we learned that the AMD Instinct MI300A AI accelerator was already sampling, and that the powerful Instinct MI300X was coming down the pipeline as well. AMD’s Ryzen AI engine has also been shipping in Ryzen 7040U series processors for months now, not to mention the wealth of AI accelerators in AMD's Xilinx portfolio.
“The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware,” said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. “The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai’s technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today."
AMD has already made great strides with its ROCm open software platform for GPU compute, which features open source languages, compilers, libraries and tools designed to accelerate code development. In fact, LaminiAI’s CTO stated recently that ROCm has “achieved parity” with NVIDIA’s CUDA for LLMs. AMD also gained significant AI software expertise when it acquired the Vitis AI integrated development environment as part of the Xilinx acquisition. Today’s move to acquire Nod.ai adds yet another piece to the puzzle.
Nod.ai has clearly been working closely with AMD for quite a while. At CES this year, the company showed off Stable Diffusion running on RDNA3 with a Radeon RX 7900 XTX. And at NAB 2023, Nod.ai demoed Generative AI-based rendering in Blender with its SHARK compiler and runtime on the AMD Radeon PRO W7900 GPU. Nod.ai describes SHARK as an “…open source cross platform (Windows, macOS and Linux) Machine Learning Distribution packaged with torch-mlir (for seamless PyTorch integration), LLVM/MLIR for re-targetable compiler technologies, along with IREE (for efficient codegen, compilation and runtime) and Nod.ai’s tuning.”
With AI permeating virtually every part of computing at the moment, and with massive opportunities in the space, we’re sure this isn’t the last we’ll hear from AMD on this front.