Back in October of last year,
SK hynix announced it was developing fourth-generation high bandwidth memory DRAM dubbed HBM3, and now, just seven months later, it has entered mass production. NVIDIA will be the first to deploy HBM3, having completed its performance evaluation, though don't expect to see HBM3 on its next-gen GeForce RTX 40 series.
HBM never gained much traction in the consumer space because of its relatively high cost compared to GDDR memory. However, it's a different story in the data center. AI, machine learning, and intense simulations feast on memory bandwidth, making it far easier to justify the added cost.
As such, NVIDIA is bolting SK hynix's HBM3 memory to its
Hopper H100 accelerators and DGX H100 systems that were formally introduced a few months ago. Technically, a full-fat H100 GPU sports 96GB of HBM3, though the accessible amount is 80GB of ECC-supported HBM3 tied to a 5120-bit bus on the same package.
"We aim to become a solution provider that deeply understands and addresses our customers’ needs through continuous open collaboration," said Kevin (Jongwon) Noh, president and chief marketing officer at SK hynix.
HBM3 is considered a fourth-generation product because it follows HBM, HBM2, and HBM2E, the last of which was an update to the HBM2 specification with increased bandwidth and capacities. HBM3 serves up a whopping 819GB/s of memory bandwidth, a nearly 78 percent increase over HBM2E. To put that in perspective, it's the equivalent of transmitting 163 Full HD movies (5GB each) in a single second.
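Those figures check out with some quick back-of-the-envelope math. A minimal sketch, assuming the commonly cited HBM2E peak of 460GB/s per stack as the baseline (the article doesn't state it explicitly):

```python
# Sanity check of the bandwidth figures quoted above.
hbm3_bw_gbps = 819   # GB/s, per the SK hynix announcement
hbm2e_bw_gbps = 460  # GB/s, assumed HBM2E baseline (not stated in the article)

# Percentage uplift of HBM3 over HBM2E
increase_pct = (hbm3_bw_gbps / hbm2e_bw_gbps - 1) * 100
print(f"Uplift over HBM2E: {increase_pct:.0f}%")  # ~78%

# How many 5GB Full HD movies fit through that pipe per second
movie_size_gb = 5
print(f"Movies per second: {hbm3_bw_gbps // movie_size_gb}")  # 163
```

Both results line up with the quoted "nearly 78 percent" and "163 Full HD movies" claims.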
SK hynix says it will expand its HBM3 production volume in the first half of next year in accordance with NVIDIA's schedule.