Modern PCs are inching closer and closer to 10 GB/sec or more of usable memory bandwidth, and we haven't really had any complaints about the steady increase — until we heard that Rambus was working on technology that could enable 1 TB/sec of memory bandwidth.
The applications for graphics cards and consoles are certainly interesting, but we think that desktop and server CPUs might also end up benefiting from such an increase. Consider that when Intel released the P4, its theoretical memory bandwidth was 6.4 GB/sec, and that was four years ago. Today's CPUs have four cores, and it is safe to say each core is faster overall, so the appetite for bandwidth has only grown.
So how does Rambus plan to increase bandwidth? Obviously increasing memory clock speeds that dramatically would be very difficult, so they've come up with another idea.
“Rather than simply increasing the clock speed of memory to achieve higher output, Rambus looks to boost bandwidth with a 32X data rate. Just as DDR memory technology doubles transfers within a single clock cycle, Rambus’ proposed technology is able to transfer data at 32 times the reference clock frequency. With 32X technology, the memory company is targeting a bandwidth of 16Gbps per DQ link with memory running at 500MHz. In contrast, today’s DDR3 at 500MHz achieves a bandwidth of 1Gbps per link.”
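The per-link numbers in the quote follow directly from the reference clock and the number of transfers per cycle. A minimal sketch of that arithmetic (the 500MHz clock and the 2X/32X multipliers come from the article; the helper name is our own):

```python
def dq_link_bandwidth_gbps(ref_clock_mhz: float, transfers_per_cycle: int) -> float:
    """Bandwidth of one DQ link in Gbps = reference clock * transfers per cycle."""
    return ref_clock_mhz * 1e6 * transfers_per_cycle / 1e9

# DDR transfers twice per clock cycle (2X); Rambus' proposal is 32X.
print(dq_link_bandwidth_gbps(500, 2))   # DDR3 at 500MHz -> 1.0 Gbps per link
print(dq_link_bandwidth_gbps(500, 32))  # 32X at 500MHz  -> 16.0 Gbps per link
```

In other words, the clock itself stays at 500MHz; only the number of bits moved per clock period grows.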
“We're really excited about the Terabyte Bandwidth Initiative and the technologies that we've developed,” said Steven Woo, a senior principal engineer at Rambus. “The work of a large team of our scientists and engineers is pushing memory signaling technology to new levels of performance.”
Of course, a little explanation is needed as to how a technology that enables 16Gbps per DQ link could result in a terabyte of throughput. Rambus’ aim is to deliver terabyte bandwidth to a system-on-a-chip (SoC) architecture, and that may be achieved with 16 DRAMs operating at 16Gbps, each device 4 bytes wide.
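A quick sanity check shows how the quoted SoC configuration sums to a terabyte per second, assuming each 4-byte-wide device exposes 32 DQ links:

```python
# 16 DRAMs, each 4 bytes (32 DQ links) wide, 16Gbps per DQ link.
GBPS_PER_LINK = 16
LINKS_PER_DEVICE = 4 * 8   # 4-byte-wide device = 32 DQ links
NUM_DEVICES = 16

total_gbps = GBPS_PER_LINK * LINKS_PER_DEVICE * NUM_DEVICES
total_gbytes_per_sec = total_gbps / 8  # bits -> bytes
print(total_gbytes_per_sec)  # 1024.0 GB/s, i.e. a terabyte per second
```

Put another way, each device contributes 64 GB/sec, and sixteen of them together reach the 1 TB/sec headline figure.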