GeForce RTX 5090 Tipped For A Huge Memory Bandwidth Boost With A 512-Bit Bus
by
Zak Killian
—
Thursday, July 27, 2023, 02:47 PM EDT
Remember the GeForce GTX 280? The "GT200" GPU was a gigantic beast that pushed the "Tesla" design it was based on to its limits. It boasted 32 ROPs and something almost unseen in graphics cards before or since: a 512-bit GDDR memory bus. Well, NVIDIA could be bringing the half-kilobit bus back when the next generation of GeForce cards hits the market, at least if the latest leak from kopite7kimi is accurate.
You can see the tweet below in its entirety, but it's fairly simple on the face of it: the top-end card in NVIDIA's next GPU family will purportedly pack a 512-bit memory bus. That's wider than anything from the green team since the GeForce GTX 285, and a sizable step up from the GeForce RTX 4090. That card uses a 384-bit memory bus with hot-clocked GDDR6X memory to achieve 1008 GB/sec of memory bandwidth, just over 1TB per second of throughput.
If you'll allow us to wax hypothetical for a bit, let's examine the possibilities. A potential GeForce RTX 5090 with a 512-bit bus and the very same GDDR6X memory would have 1,344 GB/sec of memory bandwidth, the highest-bandwidth client GPU memory interface to date.
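The back-of-the-envelope math here is simple: peak bandwidth is the bus width in bytes multiplied by the per-pin data rate. A minimal Python sketch, assuming (on our part) that the RTX 4090's 21 Gbps GDDR6X data rate carries over unchanged:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (per-pin data rate in Gbps)."""
    return (bus_width_bits / 8) * data_rate_gbps

# GeForce RTX 4090: 384-bit bus with 21 Gbps GDDR6X
print(peak_bandwidth_gb_s(384, 21))  # 1008.0 GB/s

# Hypothetical 512-bit bus at the same 21 Gbps data rate
print(peak_bandwidth_gb_s(512, 21))  # 1344.0 GB/s
```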
We have to specify "client" because NVIDIA's H100 "Hopper" GPU tops out at 3,352 GB/sec, and AMD's Instinct MI250X module hits a surprisingly similar 3,277 GB/sec. Those products use the descriptively named High Bandwidth Memory, which, while expensive, offers exactly what it says on the tin: extremely high bandwidth. It would be excellent for graphics cards, too, but it's not cost-effective for consumer GPUs that have to sell at prices mere mortals can afford.
However, a 512-bit memory bus is also going to be very expensive. Fabricating the complex multi-layer PCBs required to maintain signal integrity with a bus that wide is not cheap even at GDDR6 transfer rates, and it only gets more difficult when we start talking about GDDR6X or the new hotness, GDDR7. It's also possible NVIDIA will use Samsung's GDDR6W, which doubles the interface width of each memory package and offers improved efficiency.
The really interesting part of this rumor is that, if NVIDIA's biggest next-generation GPU uses a 512-bit memory bus, it could mean wider memory interfaces for the lower-end graphics cards once again. NVIDIA's current-generation Ada Lovelace GPUs have faced harsh criticism for their relatively meager memory bandwidth, which NVIDIA notes is compensated for to some degree by Ada's extremely large caches.
If we imagine the top-end card with a 512-bit bus, then we'd likely see a 384-bit bus on the x80 card and a 256-bit bus on the x70 and possibly "x60 Ti" cards, assuming the same hierarchy as Ada Lovelace. That would be a huge step up from the narrow 128-bit bus found on the GeForce RTX 4060 Ti and RTX 4060, and it should drastically improve performance at higher resolutions compared to those cards.
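To put rough numbers on that hypothetical lineup, here's another small Python sketch reusing the same bandwidth math. Every next-gen bus width below is our speculation, and the 21 Gbps GDDR6X data rate is an assumption rather than anything from the leak; the RTX 4060 Ti line uses that card's actual 128-bit bus and 18 Gbps GDDR6.

```python
# Speculative next-gen bus widths (assumptions, not leaked specs), each paired
# with an assumed 21 Gbps GDDR6X data rate, versus the RTX 4060 Ti's actual
# 128-bit bus and 18 Gbps GDDR6.
lineup = [
    ("hypothetical x90", 512, 21),
    ("hypothetical x80", 384, 21),
    ("hypothetical x70 / x60 Ti", 256, 21),
    ("GeForce RTX 4060 Ti (actual)", 128, 18),
]

for name, bus_bits, rate_gbps in lineup:
    bandwidth = bus_bits / 8 * rate_gbps  # peak bandwidth in GB/s
    print(f"{name}: {bus_bits}-bit @ {rate_gbps} Gbps -> {bandwidth:.0f} GB/s")
```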