Micron Samples 256GB DDR5 RDIMMs With Blistering 9200 MT/s Speeds for AI

If you thought your shiny new 192GB DDR5 rig was bleeding-edge, it's time to take a seat. Micron just announced that it is sampling 256GB DDR5 RDIMM modules pushing a blistering 9200 MT/s. Yes, that is indeed a single stick of memory packing a quarter of a terabyte of RAM while operating at speeds that make current-generation server hardware look like it is walking through molasses.

The massive leaps in both density and speed are coming because AI is hungry. Large Language Models (LLMs), agentic AI, and other real-time inference workloads are constantly begging for more memory capacity and bandwidth per CPU socket. To keep up, hyperscalers need memory that doesn't just store more data, but also moves it much faster. According to Micron, these new modules offer a massive 40% performance leap over the 6400 MT/s modules currently ruling volume production in datacenters around the globe.

How do you cram 256GB onto a standard server memory format without melting the motherboard? Micron achieved this density by utilizing its latest 1-gamma DRAM technology, of course, but the real magic lies in the advanced packaging. Micron is utilizing 3D stacking, connecting multiple memory dies vertically using through-silicon vias (TSVs), just like with HBM or AMD's 3D V-Cache. This essentially builds little skyscrapers of memory on a single footprint, allowing for monstrous data density without breaking standard server dimensional limits.

[Image: Micron SOCAMM2 module]
These SOCAMM2 modules run even faster, at 9600 MT/s.

Interestingly, these modules don't appear to use onboard clock drivers (CUDIMM), rank multiplexing (MRDIMM), or any kind of exotic form factor (LPCAMM, SOCAMM). As far as we can tell, these are simply standard DDR5 RDIMMs that deliver both massive density and furious speed. They do it without sucking down massive power, too, which is important, because in the server world, power efficiency is just as critical as raw speed.

Scaling up AI infrastructure usually means your electricity consumption scales up dramatically, too. Micron's new modules do draw more power than previous-generation 128GB modules, but drastically less power per bit. By replacing two 128GB modules (a combined draw of 19.4W, according to Micron) with a single 256GB module operating at just 11.1W, server architects can achieve the exact same memory capacity while slashing operating power by more than 40%. For hyperscalers running tens or hundreds of thousands of these modules, that power savings translates to millions of dollars per annum and vastly improved thermal management.
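For the curious, that "more than 40%" claim checks out. Here's a quick back-of-the-envelope sketch using only the wattage figures quoted above (the variable names are ours, not Micron's):

```python
# Sanity check of the capacity-for-capacity power savings, using the
# figures from the article (per Micron's own numbers).
two_128gb_watts = 19.4   # combined draw of two 128GB DDR5 RDIMMs
one_256gb_watts = 11.1   # draw of a single 256GB DDR5 RDIMM

# Fractional savings at identical total capacity (256GB either way).
savings = 1 - one_256gb_watts / two_128gb_watts
print(f"Power saved at equal capacity: {savings:.1%}")  # prints "Power saved at equal capacity: 42.8%"
```

A 42.8% reduction per 256GB of installed memory, multiplied across an entire fleet of AI servers, is where those seven-figure annual savings come from.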

Right now, Micron is shipping samples of these 9200 MT/s monsters to key ecosystem partners for co-validation across current and next-gen server platforms. Expect to see these powering the next wave of AI breakthroughs when they hit volume production later this year.
Zak Killian

A 30-year PC building veteran, Zak is a modern-day Renaissance man who may not be an expert on anything, but knows just a little about nearly everything.