Google May Make Disruptive Migration To AMD EPYC In Cloud Data Centers

AMD is on a roll with its Zen CPU and Navi GPU architectures, the former of which is proving particularly compelling to customers across the board. Intel still dominates in overall semiconductor sales, but AMD is not exactly playing second fiddle, or at least it's trying not to. In what could be a major boost to AMD's bottom line, there is speculation that Google may migrate its cloud data centers to EPYC hardware.

Bear in mind that the data center is the real high stakes battlefield for AMD and Intel, where margins are much higher compared to desktop and laptop products. AMD is on the cusp of introducing the world's first 7-nanometer x86 data center CPU, a next-gen EPYC product line based on its recently launched Zen 2 CPU architecture.

As that launch approaches, Lynx Equity Research analysts said they have heard "rumblings" that Google is not happy with Intel's server platform. This prompted Seeking Alpha to posit in a headline, "Google data center switching to AMD?"

It's not clear how much legitimate weight this rumor carries, though Lynx Equity Research believes that Google-specific server boards are being made for AMD's EPYC CPUs, based on its research into the hardware supply chain.

Will it actually happen, though? It's hard to say, and "switching" is probably the wrong word in this case. It's not as if Google is going to retire thousands of Xeon servers overnight, but adopting EPYC going forward seems entirely plausible, especially with PCIe 4.0 support and nearly double the cores per 2P platform at its disposal. Core and thread count per socket is a significant advantage for AMD right now, and the IPC throughput and clock scaling of Zen 2-based EPYC (Rome) are significantly better than those of the previous generation. However, server architecture migrations have a long gestation period (think 9 to 12 months), and even then, legacy hardware will remain in service. Regardless, a nod from Google Cloud (if this rumor is true) would be huge for AMD, to say the least.

As for performance and architecture advantages, AMD has been keen to highlight all of this in the lead-up to the launch of its next-gen EPYC processors. Back at AMD's Next Horizon event, CEO Dr. Lisa Su noted that the top-end Rome part will be a 64-core/128-thread beast that is socket-compatible with previous-gen EPYC platforms, as well as with its next-gen Milan server platform (which also supports PCIe 4.0). AMD's claim is 2X the overall performance per socket and 4X the floating point performance.


AMD also teed up a demo pitting a dual-socket Intel Xeon Platinum 8180M server (56 cores and 112 threads total, 28 cores per socket) against a single-socket, 64-core AMD EPYC Rome server in C-Ray, a ray tracing benchmark designed to showcase floating point CPU performance. As one would expect given the advantage in cores and threads, the single-socket EPYC part edged out Intel's Platinum-class dual-socket server, as you can see in the video embedded above.

How things actually shake out between AMD's next-gen EPYC processors and Intel's Xeon silicon will have to wait for another day. In the meantime, Intel fired back at AMD's demo, claiming in essence that it was misleading because it left out NAMD optimizations on its hardware.

Intel Xeon NAMD Benchmarks (Source: Intel)

With NAMD optimizations enabled, Intel's own benchmarking data shows its Xeon Platinum 8280 setup scoring 30 percent higher, at 12.65 ns/day. AMD still wins in this scenario, but the difference is far less dramatic, especially considering the core and thread disadvantage Intel's platform is working with in this comparison.

Interestingly, Intel notes that its testing may not represent all publicly available security updates. That's important because Spectre and Meltdown mitigations can have a negative impact on performance.

True comparisons by independent reviewers (such as HotHardware) will have to wait until AMD ships its next-gen EPYC processors. That said, the fact that a Google migration to AMD is even a plausible rumor speaks volumes about how far AMD has come.