If you thought
AMD would only talk about dollars and cents at its Financial Analyst Day event, think again. Sure, AMD CEO Dr. Lisa Su shared plenty of data on TAM growth opportunities and ongoing investments, but it’s all predicated on a wide-ranging portfolio of compute and graphics products for both consumer and data center clients. We already
covered the consumer side yesterday with details on RDNA 3 chiplets, 3D V-cache for Zen 4, and Phoenix Point laptop chips. In this article, we’ll go over AMD’s data center announcements and claims. Let’s get started.
AMD 4th Gen EPYC Server CPU Roadmap And Performance Claims
Starting with its CPU silicon roadmap, AMD talked about what’s in the pipeline to succeed its 3rd generation EPYC Milan (general purpose) and Milan-X (technical) processors. As you can see in the above roadmap, AMD’s 4th gen EPYC family will consist of Genoa (general purpose), Bergamo (cloud native), Genoa-X (technical), and Siena (telecommunications), followed by its 5th gen EPYC Turin launch in 2024.
Not all of this is new information, as AMD has talked about its
Genoa and Bergamo platforms before. Genoa is intended to extend AMD’s lead both at the socket level and in per-core performance. It will feature up to 96 Zen 4 cores built on a 5nm process, support for 12-channel DDR5 memory and PCI Express 5.0, and a new CXL interface for memory expansion.
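For a sense of what CXL memory expansion means in practice: on Linux, CXL-attached memory typically surfaces as an additional, CPU-less NUMA node, so software can place data on it with the standard NUMA APIs it already uses. The snippet below is a minimal, purely illustrative C++ sketch using libnuma; the node index is a made-up assumption for demonstration, not anything AMD has specified, and a real system's topology would need to be queried first (for example with numactl --hardware).

    // Illustrative sketch: placing an allocation on a CXL-backed NUMA node via libnuma.
    // Assumption: the CXL expansion memory appears as NUMA node 2 on this hypothetical box.
    // Build with: g++ cxl_alloc.cpp -lnuma
    #include <numa.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        if (numa_available() < 0) {
            std::fprintf(stderr, "NUMA support not available\n");
            return 1;
        }

        const int cxl_node = 2;            // hypothetical CXL memory node
        const size_t size  = 1ull << 30;   // 1 GiB

        // Bind this allocation to the CXL-backed node instead of local DRAM.
        void* buf = numa_alloc_onnode(size, cxl_node);
        if (!buf) {
            std::fprintf(stderr, "allocation on node %d failed\n", cxl_node);
            return 1;
        }

        std::memset(buf, 0, size);         // touch the pages so they are actually placed
        std::printf("1 GiB placed on NUMA node %d\n", cxl_node);

        numa_free(buf, size);
        return 0;
    }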
According to AMD, Genoa is poised to be the highest-performance general-purpose server processor on the market—the company claims its top-of-the-stack silicon will deliver a greater than 75 percent uplift in enterprise Java performance compared to its top 3rd gen EPYC processor.
Genoa is on track to launch in the fourth quarter. Then in the first half of 2023, AMD will release its Bergamo stack, powered by what the company is dubbing Zen 4c. Likewise, AMD is pitching Bergamo as the highest-performance server processor for cloud-native computing.
Bergamo is based on the Zen 4 ISA, but will throw more cores and threads at cloud applications. Specifically, Bergamo will max out at 128 cores and 256 threads.
AMD previously said Bergamo will remain fully software compatible and utilize the same socket platform as Genoa, and support the same technologies (like DDR5 and PCIe 5). However, it will also include a set of cloud enhancements, such as a density-optimized cache hierarchy and better power efficiency for cloud-native workloads.
Looking a little further down the line, AMD revealed it will have two other 4th gen EPYC iterations in 2023, those being Genoa-X and Siena. Genoa-X will come with up to 96 cores and 192 threads just like Genoa, and utilize the same SP5 socket (LGA 6096), but will feature 1GB or more of L3 cache per socket. As with Milan-X, AMD is targeting customers who need to power through workloads that thrive on large L3 caches.
As for Siena, it will wield up to 64 Zen 4 cores, though beyond that we don’t have a ton of details to share. All we really know is what’s stated in the slide, which is that it’s optimized for performance-per-watt and will be pitched as a lower-cost platform for intelligent edge and telecommunications markets.
Look for more details on Genoa-X and Siena as their respective launches come closer into view.
AMD Instinct MI300 Fuses Zen 4 CPU Cores With CDNA 3 Graphics
One of the messages AMD delivered loud and clear during the event is that its graphics IP is strongly positioned across practically every market segment, including mobile, embedded, consoles, PC, and of course the data center. Not only does this give AMD a broad market to address, it also enables products that tap into both its CPU and GPU architectures, as is the case with the Instinct MI300, based in part on its upcoming 5nm CDNA 3 architecture.
The Instinct MI300 is being billed as the world's first data center APU. Leveraging what AMD describes as a "groundbreaking 3D chiplet design," the data center APU combines CDNA 3 graphics cores with Zen 4 CPU cores on the same package, along with both cache memory and HBM chiplets to round things out.
According to AMD, the net gain of this combination is a massive eight-fold increase in AI training performance compared to its Instinct MI200 accelerator. Make no mistake, AMD sees itself making a major splash in the high performance computing (HPC) market and it will be interesting to see how it fares against
NVIDIA's Grace Hopper and
Intel's Falcon Shores products.
Just as RDNA 3 is coming to deliver a new generation of desktop and laptop graphics solutions for gaming, CDNA 3 will follow suit in the data center sometime next year, with AMD promising a five-fold jump in performance-per-watt on AI workloads compared to CDNA 2.
CDNA 3 will be manufactured on a 5nm process and utilize 3D chiplet packaging. It will also mark a shift away from a coherent memory architecture to a unified memory APU architecture, which promises greater efficiency by eliminating redundant memory copies; data no longer needs to be shuttled between separate CPU and GPU memory pools.
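To make that distinction concrete, here is a minimal HIP sketch of the difference between staging data across separate pools and working from a single unified allocation. This is purely illustrative code for a generic ROCm system with managed memory support, not MI300-specific programming, and the kernel and buffer names are our own for demonstration.

    // Illustrative HIP sketch: explicit staging copies vs. a unified allocation.
    // Assumes a ROCm/HIP system with managed memory support; compile with hipcc.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void scale(float* data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Discrete model: separate pools, so the host buffer must be copied to
        // device memory before the kernel runs, and copied back afterward.
        float* host = new float[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;
        float* dev = nullptr;
        hipMalloc(reinterpret_cast<void**>(&dev), bytes);
        hipMemcpy(dev, host, bytes, hipMemcpyHostToDevice);
        scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
        hipMemcpy(host, dev, bytes, hipMemcpyDeviceToHost);
        hipFree(dev);
        delete[] host;

        // Unified model: one allocation visible to both CPU and GPU, so the
        // staging copies above simply go away.
        float* unified = nullptr;
        hipMallocManaged(reinterpret_cast<void**>(&unified), bytes);
        for (int i = 0; i < n; ++i) unified[i] = 1.0f;   // CPU writes directly
        scale<<<(n + 255) / 256, 256>>>(unified, n, 2.0f);
        hipDeviceSynchronize();                          // wait, then CPU reads directly
        std::printf("unified[0] = %f\n", unified[0]);
        hipFree(unified);
        return 0;
    }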
Finally, AMD talked a little about XDNA, the foundational architecture IP it possesses after
acquiring Xilinx earlier this year. It consists of key technologies, including the FPGA fabric and AI Engine (AIE).
"The FPGA fabric combines an adaptive interconnect with FPGA logic and local memory, while the AIE provides a dataflow architecture optimized for high performance and energy efficient AI and signal processing applications. AMD plans to integrate AMD XDNA IP across multiple products in the future, starting with AMD Ryzen processors planned for 2023," AMD says.
Looking at the sum of all these parts, it's clear AMD does not intend to take its foot off the gas pedal. Broadly speaking, it has
strong roadmaps in place for both consumer and data center markets, and it will be interesting to see how it all unfolds.