AMD Unveils EPYC 7000 Series Processors And Platform To Take On Intel In the Data Center
AMD EPYC 7000 Performance Claims Vs. Intel Xeon And Our Analysis
AMD is claiming that a 2P EPYC 7601-based system can offer roughly 47% higher integer performance than a 2P Intel Xeon E5-2699A v4-based machine, up to 75% higher floating point performance, and a huge 2.5x increase in memory bandwidth. It's important to point out that the Xeon model referenced here is a 22-core chip with a Turbo Boost frequency of 3.6GHz and 55MB of L3 cache; it's currently one of Intel's fastest processors in the Xeon E5 v4 family for two-socket servers.
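For perspective on the memory bandwidth claim, a quick back-of-the-envelope comparison of theoretical peak DRAM bandwidth per socket is below. The memory speeds (DDR4-2666 on EPYC, DDR4-2400 on the Xeon) are our assumptions about the test configurations, and real-world measured bandwidth will land below these peaks:

```python
# Back-of-the-envelope peak DRAM bandwidth per socket (memory speeds assumed).
BYTES_PER_TRANSFER = 8  # each 64-bit DDR4 channel moves 8 bytes per transfer

def peak_gb_per_s(channels, megatransfers_per_s):
    """Theoretical peak bandwidth in GB/s for a socket's memory subsystem."""
    return channels * megatransfers_per_s * BYTES_PER_TRANSFER / 1000

epyc = peak_gb_per_s(8, 2666)   # EPYC 7601: eight DDR4-2666 channels (assumed speed)
xeon = peak_gb_per_s(4, 2400)   # Xeon E5-2699A v4: four DDR4-2400 channels

print(f"EPYC ~{epyc:.0f} GB/s vs Xeon ~{xeon:.0f} GB/s -> {epyc / xeon:.1f}x on paper")
# ~171 GB/s vs ~77 GB/s, or roughly 2.2x theoretical; AMD's up-to-2.5x claim
# presumably reflects measured results on its specific test configurations.
```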
AMD also showed comparisons at various price points up and down the EPYC 7000 series product stack versus comparable Xeon-based systems. AMD hasn't provided pricing for the entire EPYC 7000 series line-up just yet, but the Y-axis in its comparison chart lists approximate price ranges for the particular CPU configurations. AMD did slip a note about EPYC 7601 1K-unit (1KU) pricing into a back-up slide, which had the processor at $4,200 each.
In both 1P and 2P configurations, at price points ranging from $400 on up to over $4,000, AMD is claiming performance advantages for EPYC in a wide but impressive range of 21% to 70%, with the largest gains coming at the higher-volume $600 - $800 price points. It should be clearly noted, however, that the footnote states these "scores are estimates based on SPECint_rate_base2006." Estimated scores are fairly standard practice for pre-launch server products, but it's worth mentioning.
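For clarity on how those advantage figures are derived, the percentage at a given price point is simply the ratio of the two systems' estimated SPECint_rate_base2006 scores. The sketch below walks through that arithmetic with purely hypothetical scores and prices that are not taken from AMD's slides:

```python
# Hypothetical illustration of how SPECint_rate-based advantages are derived.
# All scores and prices below are made up for the example, not AMD's data.
def advantage_pct(score_a, score_b):
    """Percentage by which system A outscores system B."""
    return (score_a / score_b - 1) * 100

epyc_score, epyc_cpu_price = 1200, 2000   # hypothetical estimate / 1KU price
xeon_score, xeon_cpu_price = 900, 2200    # hypothetical estimate / 1KU price

print(f"Performance advantage: {advantage_pct(epyc_score, xeon_score):.0f}%")
print(f"Score per dollar: {epyc_score / epyc_cpu_price:.2f} "
      f"vs {xeon_score / xeon_cpu_price:.2f}")
```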
AMD's Zen-based EPYC 7000 series processors also offer huge performance benefits over the company's previous-generation Bulldozer-based processors. In virtualization-specific comparisons, EPYC offers a 49% - 50% latency reduction for common operations and task switching.
AMD had a number of live performance demos running at the EPYC launch event today as well. An array of compute and storage demos on hand showed EPYC-based servers offering more compute throughput, more IOPS, and higher bandwidth than comparable Intel-based systems. Various systems highlighted NVMe storage configurations pushing north of 9 million 4K IOPS, EPYC servers offering more memory and compute bandwidth, running aerodynamics simulations in VR, compiling Linux builds dramatically faster, and hosting more virtual machines per server. In short, the application advantages AMD had on display were a direct result of the platform's additional cores per socket, higher memory bandwidth, and broader PCIe connectivity direct to the CPU root complex.
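To put the storage demo in context, here's the simple arithmetic converting 9 million 4K IOPS into aggregate throughput, along with a rough ceiling for what a full socket's worth of PCIe 3.0 lanes can move (the per-lane throughput figure is an approximation):

```python
# Throughput implied by the NVMe demo, plus a rough PCIe 3.0 ceiling per socket.
iops = 9_000_000
block_bytes = 4096                          # 4K random I/O
print(f"~{iops * block_bytes / 1e9:.1f} GB/s of 4K random I/O")    # ~36.9 GB/s

pcie3_lane_gb_per_s = 0.985                 # approx. usable throughput per lane
lanes_per_socket = 128                      # EPYC's lane count in a 1P setup
print(f"~{lanes_per_socket * pcie3_lane_gb_per_s:.0f} GB/s "
      f"of PCIe 3.0 headroom per socket")   # ~126 GB/s
```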
AMD EPYC 7000 Series Platform Adoption
A handful of AMD’s partners also participated in the launch event, and the hardware ecosystem partners AMD is leading with for EPYC read like a who's who of top suppliers, from ASUS, Gigabyte, and Tyan to Dell/EMC, HP, and Microsoft. An array of servers and components were on hand, including a single-socket Tyan motherboard and a number of 2P servers. What's impressive to see is the 2P setup noted above, with all 32 of its DDR4 DIMM sockets loaded up. That's a lot of memory, depending on the size of the registered DIMMs installed, and a lot of aggregate memory bandwidth.
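For a rough sense of what "a lot of memory" means with 32 slots populated, here's the capacity math at a few common RDIMM/LRDIMM sizes (the DIMM capacities are assumptions, not what was installed in the demo system):

```python
# Total memory capacity for a fully populated 32-slot 2P EPYC board.
dimm_slots = 32
for dimm_gb in (16, 32, 64, 128):           # common RDIMM/LRDIMM capacities
    print(f"{dimm_gb:>3} GB DIMMs -> {dimm_slots * dimm_gb / 1024:.1f} TB total")
# 16 GB -> 0.5 TB, 32 GB -> 1.0 TB, 64 GB -> 2.0 TB, 128 GB -> 4.0 TB
```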
For years, data center processing has required not only deep compute resources, but also vast memory pools and endless amounts of storage. Many-core CPUs with larger configurations of higher-bandwidth system memory have solved some of these issues, but it has become clear that specialized co-processing accelerators like GPUs (including AMD's own Radeon Instinct line) will play a major role as well. Here, EPYC's connectivity advantage over Intel, with its copious PCI Express lanes, should lend itself to very flexible, more balanced server configurations, especially in single-socket implementations, but in 2P systems as well.
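As a sketch of the kind of flexibility those lanes can buy, here's one hypothetical way a single-socket EPYC system's 128 PCIe 3.0 lanes might be carved up among accelerators, NVMe storage, and networking; this is an illustrative budget, not a shipping reference design:

```python
# Hypothetical PCIe lane budget for a single-socket EPYC server (128 lanes).
total_lanes = 128
budget = {
    "GPU accelerators (4 x16)": 4 * 16,
    "NVMe SSDs (8 x4)":         8 * 4,
    "Dual-port 25GbE NIC (x8)": 8,
}
used = sum(budget.values())
for item, lanes in budget.items():
    print(f"{item}: {lanes} lanes")
print(f"Used {used} of {total_lanes} lanes; {total_lanes - used} left for other I/O")
```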
Further, though AMD is clearly playing catch-up in the GPU-compute arena versus NVIDIA, it is also the only company with both major building blocks of the modern data center processing platform in its arsenal -- the CPU and the GPU. Intel's alternative approach is a combined FPGA and CPU platform, and it also has Xeon Phi, but GPU adoption, ironically due in no small part to AMD's rival, is currently very strong. So, in reality, the promise of EPYC and AMD's Naples server platform offers a major soup-to-nuts opportunity for both AMD and its customer base, if the company can pull it all together cohesively.
So far, it appears that AMD has all the tools it needs to make major inroads in data center computing and propel the company back to a competitive position versus Intel in the enterprise. Chipping off even 10 percent of Intel's big-iron Xeon business would deliver a major financial impact for the company, on the order of billions of dollars in new business.
Time will tell how the new EPYC 7000 processor family pays off for AMD, and we'll know a lot more as we get opportunities to look at performance in real-world, mission-critical applications. EPYC processors are shipping to AMD's ecosystem partners currently, and the company notes that EPYC is available "today," though full server solution timing in market will vary by OEM. Regardless, from what we've seen so far on paper and in demonstrations from AMD, EPYC seems well-poised to deliver and ultimately could prove to offer a value proposition of "EPYC" proportions in the data center.