Samsung's Next-Gen HBM3 And DDR5 Modules To Incorporate In-Memory AI Processing

Samsung HBM-PIM
Do you think maybe the tech industry is hyper-focused on artificial intelligence (AI) technologies? It certainly is; AI is practically everywhere these days, from servers and high-performance computing to autonomous vehicles and everyday consumer devices, and everything in between. So it's not really shocking that Samsung is touting its latest advancements in processing-in-memory (PIM) technology at the Hot Chips 33 conference.

Some of what Samsung is discussing builds on past announcements. For example, back in February Samsung introduced the industry's first high bandwidth memory with processing-in-memory, Aquabolt-XL, which builds AI processing directly into its HBM2 Aquabolt memory to bolster high-speed data processing in supercomputers and AI applications.

Since making that announcement, Samsung says its HBM-PIM has been tested in the Xilinx Virtex UltraScale+ (Alveo) AI accelerator, where the company claims a nearly 2.5x gain in system performance while cutting energy consumption by a whopping 60 percent.
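
To see why pushing even simple operations into the memory banks can slash energy use, consider the data-movement arithmetic. The sketch below is a back-of-envelope model only, not Samsung's actual architecture: the bank count, chunk size, and dot-product workload are all hypothetical, chosen just to show how in-bank reduction shrinks bus traffic.

```python
# Illustrative back-of-envelope sketch (not Samsung's design): estimate how much
# bus traffic a PIM-style in-bank multiply-accumulate can remove for a dot product.

NUM_BANKS = 16          # hypothetical bank count
ELEMS_PER_BANK = 4096   # hypothetical elements stored per bank
BYTES_PER_ELEM = 2      # FP16, the datatype Samsung cites for Aquabolt-XL

# Conventional path: both operand vectors cross the memory bus to the host.
conventional_bytes = 2 * NUM_BANKS * ELEMS_PER_BANK * BYTES_PER_ELEM

# PIM path: one operand already lives in the banks, the other is streamed in,
# and each bank ships back only its partial sum.
pim_bytes = NUM_BANKS * ELEMS_PER_BANK * BYTES_PER_ELEM  # streamed-in operand
pim_bytes += NUM_BANKS * BYTES_PER_ELEM                  # one partial sum per bank

print(f"conventional: {conventional_bytes} bytes over the bus")
print(f"pim:          {pim_bytes} bytes over the bus")
print(f"traffic cut:  {1 - pim_bytes / conventional_bytes:.0%}")
```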

According to Samsung, this kind of testing paves the way for AI to be injected into its future HBM3 products, as well as mobile memory solutions.

"HBM-PIM is the industry’s first AI-tailored memory solution being tested in customer AI-accelerator systems, demonstrating tremendous commercial potential," said Nam Sung Kim, senior vice president of DRAM Product & Technology at Samsung Electronics. "Through standardization of the technology, applications will become numerous, expanding into HBM3 for next-generation supercomputers and AI applications, and even into mobile memory for on-device AI as well as for memory modules used in data centers."

Samsung Touts DDR5 With Processing-In-Memory (PIM)

Samsung AXDIMM Buffer
AI capabilities are also being infused into next-generation DDR5 memory products. Samsung's Acceleration DIMM (AXDIMM) performs processing on the DRAM module itself, reducing the movement of large pools of data between the CPU and DRAM and ultimately boosting the energy efficiency of AI accelerator systems, Samsung says.

The AI engine that helps drive the tech is built right inside the buffer chip. This enables the AXDIMM to perform parallel processing of multiple memory ranks, or sets of DRAM chips, rather than being limited to accessing them one at a time. And importantly, it's compatible with existing systems.
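
As a rough mental model of that rank-level parallelism, the sketch below fans a single reduction command out to several simulated ranks at once and merges the results. It is purely conceptual: the Rank class, the four-rank count, and the toy workload are all assumptions for illustration, and real AXDIMM buffer-chip firmware is not programmed this way.

```python
# Conceptual sketch of rank-parallel dispatch, loosely modeled on the AXDIMM
# description above. All names and counts here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

class Rank:
    """Stands in for one set of DRAM chips plus its slice of the data."""
    def __init__(self, rank_id, data):
        self.rank_id = rank_id
        self.data = data

    def local_sum(self):
        # In a PIM design this reduction would run near the DRAM,
        # so only the scalar result crosses back to the host.
        return sum(self.data)

ranks = [Rank(i, list(range(i * 1000, (i + 1) * 1000))) for i in range(4)]

# Conventional DIMM access: ranks are visited one at a time.
serial_total = sum(r.local_sum() for r in ranks)

# AXDIMM-style dispatch: the buffer-chip engine issues the same command
# to every rank at once and merges the per-rank results.
with ThreadPoolExecutor(max_workers=len(ranks)) as pool:
    parallel_total = sum(pool.map(Rank.local_sum, ranks))

assert serial_total == parallel_total  # same answer, less serialization
```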

"Since the module can retain its traditional DIMM form factor, the AXDIMM facilitates drop-in replacement without requiring system modifications. Currently being tested on customer servers, the AXDIMM can offer approximately twice the performance in AI-based recommendation applications and a 40 percent decrease in system-wide energy usage," Samsung says.

One of Samsung's partners in testing its AXDIMM solutions is SAP. The extent of that collaboration is not known, but SAP said it expects finalized solutions and implementations to deliver "significant performance improvements for in-memory database management system (IMDBMS)" tasks.

Samsung also claims that its LPDDR5-PIM mobile memory will be able to handle AI workloads on-device, without any data center connectivity. Simulation tests apparently show more than double the performance in tasks like voice recognition and translation (presumably in real time), alongside a 60 percent reduction in energy use.
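
Taken at face value, those two figures compound. The quick arithmetic below uses Samsung's claimed numbers rather than any independent measurement, and shows performance per unit of energy improving roughly fivefold.

```python
# Back-of-envelope math on Samsung's stated LPDDR5-PIM figures:
# these are vendor claims, not independent measurements.
perf_gain = 2.0          # "more than double the performance"
energy_remaining = 0.40  # a 60 percent reduction leaves 40 percent

perf_per_energy_gain = perf_gain / energy_remaining
print(f"~{perf_per_energy_gain:.1f}x performance per unit of energy")  # ~5.0x
```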

According to Samsung, these solutions are expected to earn a place in its product portfolio in the first half of next year.