OpenAI Taps Amazon AWS To Scale AI Compute On NVIDIA Blackwell In Massive $38B Deal

The AI arms race is in full swing, and in a bid to stay ahead of the pack, OpenAI is forming a multi-year partnership with Amazon Web Services (AWS) to rapidly scale its agentic workloads. As part of the massive $38 billion deal, OpenAI will have access to AWS compute comprising hundreds of thousands of NVIDIA GPUs and potentially tens of millions of CPUs.

In a blog post, Amazon says it's deploying sophisticated infrastructure for OpenAI that's optimized for efficient and performant AI processing. That includes clusters of NVIDIA GB300 and GB200 GPUs via Amazon EC2 UltraServers on the same network for low-latency performance across interconnected systems.

"Scaling frontier AI requires massive, reliable compute," said OpenAI co-founder and CEO Sam Altman. "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone."

The partnership begins in earnest right away, with OpenAI gaining immediate access to AWS compute for tasks like serving inference for ChatGPT and training next-generation AI models (notably, OpenAI launched GPT-5 three months ago). Amazon and OpenAI anticipate deploying the full capacity by the end of next year, with the ability to expand further into 2027 and beyond.

"As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions," said Matt Garman, CEO of AWS. "The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads."

News of OpenAI's deal with AWS comes on the heels of an arrangement that could see OpenAI take up to a 10% stake in AMD, along with a commitment to buy several generations of Instinct AI chips. As part of that arrangement, AMD and OpenAI will strategically plan large-scale deployments of AMD's chips for years to come, including both Instinct MI450 series GPUs and rack-scale AI solutions.

That's not the extent of OpenAI's recent spending spree. In September, OpenAI and NVIDIA announced a partnership worth up to $100 billion, with a plan to deploy at least 10 gigawatts of NVIDIA systems dedicated to training and running OpenAI's future models. It's safe to say that OpenAI is spending big on its future and not putting all of its investment eggs in one basket.