AMD & Samsung deepen AI memory, packaging alliance
AMD and Samsung Electronics have broadened their partnership on memory and packaging for future AI and data centre systems. The agreement covers HBM4 supply for a forthcoming AMD accelerator and DDR5 memory work for the next generation of AMD EPYC server processors.
A memorandum of understanding sets out joint work on advanced memory for AMD's upcoming AI and data centre platforms. The scope includes HBM4 for the next-generation AMD Instinct MI455X GPU and DRAM work tied to 6th Gen AMD EPYC processors, codenamed "Venice".
The agreement also ties the memory work to AMD's rack-scale system plans, including the AMD Helios platform, which combines AMD Instinct GPUs and AMD EPYC CPUs in a single rack-scale design.
The announcement coincided with a visit by AMD Chief Executive Dr Lisa Su to South Korea. The companies held a signing ceremony at Samsung's semiconductor campus in Pyeongtaek, attended by Dr Su and Samsung Vice Chairman and Chief Executive Young Hyun Jun.
Memory focus
The collaboration centres on memory bandwidth and power efficiency, which have grown in importance as AI models and data centre deployments scale. The companies say the work will enable tighter optimisation for AI training and inference across future platforms that use both AMD accelerators and server CPUs.
HBM, or high-bandwidth memory, is used in AI accelerators to feed data to processors at high speed. HBM4 is the next iteration of the technology and is expected to be used in future generations of AI hardware.
Samsung said its HBM4 uses the company's sixth-generation 10-nanometre-class DRAM process, known as "1c", and a 4nm logic base die. It cited per-pin speeds of up to 13 gigabits per second and per-stack bandwidth of up to 3.3 terabytes per second.
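The two figures are consistent with a wider HBM4 interface. A back-of-the-envelope check, assuming a 2048-bit interface per stack (the direction of the JEDEC HBM4 standard; the interface width is not stated in the announcement):

```python
# Sanity-check the cited HBM4 numbers: per-pin rate x interface width.
# The 2048-bit-per-stack interface width is an assumption based on the
# JEDEC HBM4 direction; the 13 Gb/s per-pin rate is from the article.

PIN_RATE_GBPS = 13            # cited per-pin speed, gigabits per second
INTERFACE_WIDTH_BITS = 2048   # assumed HBM4 interface width per stack

# bits/s across the interface, divided by 8 to get bytes/s
bandwidth_gb_per_s = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8

print(f"{bandwidth_gb_per_s / 1000:.2f} TB/s per stack")  # ≈ 3.33 TB/s
```

Under that assumption the arithmetic lands on roughly 3.33 TB/s, matching the "up to 3.3 terabytes per second" figure Samsung cited.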
Under the agreement, Samsung and AMD will align on primary HBM4 supply for the AMD Instinct MI455X, described by the companies as AMD's next-generation AI accelerator GPU.
Server roadmap
Alongside HBM4, the memorandum covers advanced DRAM solutions for AMD's next EPYC generation. DDR5 remains the mainstream memory standard for current server platforms, and the companies plan to work on DDR5 optimisation for the "Venice" EPYC line.
In large-scale AI systems, server CPUs coordinate data movement and manage a wide range of workloads around accelerator clusters. Memory performance and power consumption at the CPU platform level can affect overall system efficiency, particularly in racks configured for dense compute deployments.
The companies linked the DDR5 work to systems built around the Helios rack-scale architecture, positioning Helios as a system-level building block for next-generation AI infrastructure.
Packaging and foundry
Samsung also highlighted advanced packaging as part of the broader relationship. Packaging has become a key differentiator in AI chips and systems because it enables dense integration of compute, memory, and interconnect while meeting yield and thermal requirements.
The memorandum also covers discussions on a potential foundry partnership under which Samsung could manufacture future AMD products. Neither company disclosed specific products, timelines, or manufacturing volumes.
Samsung and AMD have worked together for close to two decades across graphics, mobile, and computing. More recently, Samsung has served as a primary HBM3E partner for AMD's Instinct MI350X and MI355X accelerators, according to the companies.
Young Hyun Jun described the new agreement as a sign of a broader relationship that spans memory, foundry, and packaging.
"Samsung and AMD share a commitment to advancing AI computing, and this agreement reflects the growing scope of our collaboration," said Young Hyun Jun, Vice Chairman & CEO, Samsung Electronics. "From industry-leading HBM4 and next-generation memory architectures to cutting-edge foundry and advanced packaging, Samsung is uniquely positioned to deliver unrivaled turnkey capabilities that support AMD's evolving AI roadmap."
Dr Su described the partnership as part of a wider industry effort to build next-generation AI systems that span components and system design.
"Powering the next generation of AI infrastructure requires deep collaboration across the industry," said Dr Lisa Su, Chair and CEO, AMD. "We are thrilled to expand our work with Samsung, bringing together their leadership in advanced memory with our Instinct GPUs, EPYC CPUs and rack-scale platforms. Integration across the full computing stack, from silicon to system to rack, is essential to accelerating AI innovation that translates into real-world impact at scale."
Engineering work will cover HBM4 for the MI455X, DDR5 memory for the 6th Gen EPYC "Venice" processors, and system-level alignment with AMD's Helios rack-scale architecture, the companies said.