Meta Claims Its New MTIA Chip Brings A 3X Performance Lift For The AI Arms Race

Meta continues to invest in custom silicon with the second version of its Meta Training and Inference Accelerator (MTIA). It’s a chip the company says is “designed specifically for our deep learning recommendation models that are improving a variety of experiences across our apps and technologies.” The new processor will be integrated into Meta’s large-scale infrastructure as part of its effort to build a full-stack solution that serves both the company and developers.

Meta says the latest iteration of MTIA “has already improved performance by 3x over our first generation chip across four key models we evaluated.” The company worked to ensure the new silicon strikes the right balance between compute, memory bandwidth, and memory capacity to best serve its models. Those models underpin the ranking and recommendation systems that are central to Meta’s products.


As AI plays an increasingly large role in tech companies’ offerings, Meta is looking to squeeze the most out of the hardware it deploys. MTIA v2 gives the company an opportunity to extract even more efficiency for what it refers to as its “unique workloads.” That matters because the amount of compute needed to deliver the AI experiences users expect will only continue to grow.

Meta sees MTIA as an important part of the company’s success going forward. “We’re designing our custom silicon to work in cooperation with our existing infrastructure as well as with new, more advanced hardware (including next-generation GPUs) that we may leverage in the future,” the company said.

With this newest version of MTIA, Meta is demonstrating its commitment to developing in-house silicon. It also suggests that the path forward for AI hardware may be highly customized silicon capable of next-level efficiency.