Samsung HBM3 Gets Faster, Wider, More Efficient And 64GB On-Package For 2019 Debut
While HBM2 isn’t even shipping in volume for mainstream (or even enthusiast-class) applications, that isn’t stopping Samsung from laying out its roadmap for HBM3. HBM3 is expected to hit production in the 2019 to 2020 time frame and takes a “bigger, faster, stronger” approach compared to HBM2. Each layer will hold 16Gb (2GB) of memory, and more than eight layers could be stacked within a single package. With that in mind, we could see high-end graphics cards with up to a whopping 64GB of memory installed.
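The capacity math works out quickly. Note the four-stack configuration below is our assumption (it mirrors typical high-end HBM2 card layouts), not a figure from Samsung:

```python
# Hypothetical HBM3 capacity math; four stacks per card is an assumption.
GB_PER_LAYER = 2       # 16Gb (2GB) per layer, per the roadmap
LAYERS_PER_STACK = 8   # "more than eight" are possible; eight used here
STACKS_PER_CARD = 4    # assumption: matches typical HBM2 card layouts

capacity_gb = GB_PER_LAYER * LAYERS_PER_STACK * STACKS_PER_CARD
print(capacity_gb)  # 64
```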
While HBM2 offers 256GB/sec of bandwidth per stack, HBM3 doubles that to 512GB/sec. That could lead to graphics cards with aggregate bandwidth upwards of 2 terabytes per second. If that weren’t enough, core voltage will also be lower than the 1.2V seen with HBM2.
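The aggregate-bandwidth figure follows from the same assumed four-stack layout:

```python
# Aggregate bandwidth sketch; four stacks per card is an assumption.
HBM3_GB_PER_SEC_PER_STACK = 512  # double HBM2's 256GB/sec
STACKS_PER_CARD = 4              # assumption, not a Samsung figure

aggregate_gb_per_sec = HBM3_GB_PER_SEC_PER_STACK * STACKS_PER_CARD
print(aggregate_gb_per_sec / 1024)  # 2.0 (TB/sec)
```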
In the meantime, while HBM2 remains prohibitively expensive (as witnessed by its appearance in NVIDIA’s Tesla P100), Samsung is looking to bring the tech closer to a mainstream audience thanks to some architectural changes. Namely, the company will axe ECC support and the buffer die, and reduce the number of through-silicon vias (TSVs). It will partially offset those reductions with an increase in per-pin speeds from 2Gb/sec to 3Gb/sec. Overall bandwidth will fall from 256GB/sec per stack to just 200GB/sec per stack, but costs will drop much further, which is all that gamers want to hear at this point when it comes to next-generation graphics cards.
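A back-of-the-envelope calculation shows how the TSV reduction and the faster pins interact. The simple model below (bandwidth = pin count × per-pin rate) is our assumption; it does reproduce standard HBM2’s known 1024-bit interface, which suggests the cost-reduced variant would need roughly half as many data pins:

```python
# Implied data-pin counts under a simple bandwidth = pins * rate model
# (an assumption for illustration, not Samsung's published design).
def data_pins(bandwidth_gbytes_per_sec, pin_rate_gbits_per_sec):
    # Convert GB/sec to Gb/sec, then divide by the per-pin rate.
    return bandwidth_gbytes_per_sec * 8 / pin_rate_gbits_per_sec

hbm2_pins = data_pins(256, 2)      # 1024 -> matches HBM2's 1024-bit bus
low_cost_pins = data_pins(200, 3)  # ~533, roughly half the data TSVs
print(hbm2_pins, round(low_cost_pins))
```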
Last but not least, Samsung is also working on the successor to GDDR5X: GDDR6, which will improve power efficiency while boosting per-pin bandwidth from 10Gbps to 14+ Gbps. This memory technology is on track for a 2018 release.