SK Hynix Flexes 24GB HBM3 DRAM With 819GB/s Of Bandwidth To Boost ML Workloads

SK Hynix is on cloud nine today, claiming it has developed the first-ever High Bandwidth Memory 3 (HBM3) DRAM solution, beating other memory makers to the punch. According to SK Hynix, HBM3 is the world's best-performing DRAM, with the ability to move 819 gigabytes per second for a delightful performance bump over previous iterations.

Speaking of which, HBM3 is technically a fourth-generation implementation of HBM, with the previous three in ascending order being HBM, HBM2, and HBM2E. The latter is an update to the HBM2 specification, with more bandwidth and capacity on tap—SK Hynix introduced its first HBM2E product in August 2019, with 460GB/s of bandwidth, and began mass producing it in July last year.


The sizable bump in bandwidth translates to being able to transmit 163 Full HD 1080p movies at 5GB each in just one second, according to SK Hynix. That is a massive 78 percent jump in data processing speed compared with HBM2E. In addition to being much faster, on-die error correction code (ECC) means it is more reliable to boot, SK Hynix says. It's also faster than what SK Hynix teased this past summer, when it tossed out 665GB/s as a figure.
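SK Hynix's headline figures check out with a bit of back-of-the-envelope arithmetic. Here's a quick sketch (the 5GB movie size is SK Hynix's own assumption, not a standard):

```python
# Sanity-check SK Hynix's HBM3 marketing math.
HBM3_BW = 819    # GB/s per stack, per SK Hynix
HBM2E_BW = 460   # GB/s, SK Hynix's 2019 HBM2E figure
MOVIE_GB = 5     # assumed size of one Full HD 1080p movie

movies_per_second = HBM3_BW // MOVIE_GB               # whole movies moved in 1s
speedup_pct = round((HBM3_BW / HBM2E_BW - 1) * 100)   # gain over HBM2E

print(movies_per_second, speedup_pct)  # → 163 78
```

That works out to 163 movies per second and a 78 percent uplift, matching the company's claims.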

"Since its launch of the world’s first HBM DRAM, SK hynix has succeeded in developing the industry’s first HBM3 after leading the HBM2E market," said Seon-yong Cha, Executive Vice President in charge of the DRAM development. "We will continue our efforts to solidify our leadership in the premium memory market and help boost the values of our customers by providing products that are in line with the ESG management standards."

Part of that effort involves offering HBM3 in 16GB and 24GB capacities, the latter of which is the industry's biggest. This involves stacking a dozen 16-gigabit (Gb) DRAM chips using through-silicon via (TSV) technology. Each die is around 30 micrometers thick, which is about a third of the thickness of a standard piece of paper.
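The capacity math is straightforward: twelve 16Gb dies add up to 24GB. A minimal sketch, assuming the 16GB variant uses an eight-die stack of the same 16Gb chips (SK Hynix hasn't spelled out that configuration):

```python
# Stack capacity arithmetic: gigabits per die -> gigabytes per stack.
DIE_GBIT = 16    # each DRAM die stores 16 gigabits
DIES_24GB = 12   # stack height for the 24GB part, per SK Hynix
DIES_16GB = 8    # assumed stack height for the 16GB part

capacity_24 = DIES_24GB * DIE_GBIT // 8  # divide by 8 bits per byte
capacity_16 = DIES_16GB * DIE_GBIT // 8

print(capacity_24, capacity_16)  # → 24 16
```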

These stacks pipe data to and from the host over an ultra-wide 1,024-bit memory bus. Bearing in mind products are bound to feature multiple stacks, we're looking at gobs of bandwidth, perhaps as much as 4.9TB/s (if employing six HBM3 stacks).
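The 4.9TB/s figure follows directly from multiplying out the per-stack numbers. A quick sketch of the implied per-pin data rate and the six-stack aggregate (the six-stack layout is a hypothetical, not an announced product):

```python
# Aggregate bandwidth across multiple HBM3 stacks.
PER_STACK_GBS = 819     # GB/s per stack
BUS_WIDTH_BITS = 1024   # pins per stack interface

# 819 GB/s over a 1,024-pin bus implies ~6.4 Gb/s per pin
pin_rate_gbps = PER_STACK_GBS * 8 / BUS_WIDTH_BITS

# Six stacks side by side on one package
six_stacks_tbs = 6 * PER_STACK_GBS / 1000

print(round(pin_rate_gbps, 1), round(six_stacks_tbs, 1))  # → 6.4 4.9
```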

The mind boggles at the prospect of HBM3 on next-gen GPUs for gaming, though SK Hynix says the primary destination will be high-performance data centers and machine learning platforms tasked with AI and supercomputing chores such as analyzing climate change.