Samsung has begun shipping its HBM4 memory designed for AI computing. Built on its 6th-generation 10nm-class (1c) DRAM and a 4nm logic base die, the chips reach speeds up to 11.7 Gbps, well above the industry standard. Positioned to support advanced AI platforms such as NVIDIA’s upcoming Vera Rubin chips, this release marks a significant advance in memory technology.  

Main details of Samsung’s HBM4: 

  • Performance: Achieves 11.7 Gbps per pin, with potential for 13 Gbps. This is 46% faster than the industry standard for HBM4 (8 Gbps per pin).  
  • Capacity: 12-layer stacks offer 24GB to 36GB; 16-layer stacks may reach 48GB.  
  • Process: Built on a 6th-gen 10nm-class (1c) DRAM core and a 4nm logic base die. Bandwidth per stack is 2.7× that of HBM3E, up to 3.3 TB/s.  
  • Efficiency: 40% better power efficiency, 10% better thermal resistance, and 30% better heat dissipation than HBM3E.  
  • Manufacturing: The first shipments have gone out to customers who will use them in advanced accelerators, marking a milestone in adoption.  

Samsung aims to lead the AI memory market with HBM4, having solved earlier production issues to meet high demand. This strategic focus sets the context for its recent progress.  

Samsung is expected to officially launch HBM4 at NVIDIA GTC 2026 in March, following the successful completion of NVIDIA’s final tests and as part of the major Rubin AI platform announcement.  


Over the past few years, Samsung has experienced issues with its HBM memory division. During this period, its South Korean competitor SK Hynix became NVIDIA’s exclusive supplier of HBM3 and HBM3E memory, highlighting a shift in competitive dynamics. In response, Samsung has overhauled its HBM and semiconductor divisions and is now seeing results from those changes.  

NVIDIA will reportedly use its first allotment of Samsung’s HBM4 memory in Vera Rubin, as Samsung’s new HBM4 is the strongest of the HBM4 offerings compared to rivals SK Hynix and US-based Micron. Samsung’s HBM4 is rated above 11 Gbps, well above JEDEC standards for HBM4, and NVIDIA specifically requested these higher pin speeds.  

Samsung and NVIDIA are collaborating on HBM4. Reflecting on this, Samsung stated in a press release that its cutting-edge HBM solutions’ high bandwidth and energy efficiency should help accelerate future AI development and support the manufacturing infrastructure built on these technologies.  

Samsung employs 6th-generation 10nm-class DRAM and a 4nm logic base die, with HBM4 speeds up to 11.7 Gbps. The company plans to continue advancing its memory and foundry services to support global AI expansion.  

Samsung announced mass production of HBM4 and initial customer shipments, becoming the industry’s first to do so.  

Samsung used its cutting-edge 6th-generation 10nm-class DRAM process (1c) to achieve stable yields and top performance right from the start of mass production. This was done smoothly, without any extra redesigns.  

“Instead of taking the conventional path of using proven designs, Samsung took the leap and adopted the most advanced nodes, such as 1c DRAM and a 4nm logic process, for HBM4,” said Sang Joon Hwang, Executive Vice President and Head of Memory Development at Samsung Electronics. “By leveraging our process competitiveness and design optimization, we can deliver significant performance gains, enabling us to meet our customers’ escalating demands for higher performance when they need it.”  

Raising the Standard for Effectiveness and Efficiency 

Samsung’s HBM4 runs at a steady 11.7 Gbps, about 46% faster than the industry-standard 8 Gbps. This is a 1.22x increase over HBM3E’s maximum of 9.6 Gbps. HBM4 can potentially reach up to 13 Gbps, helping to reduce data bottlenecks as AI models grow.  

The memory bandwidth per stack is now 2.7x that of HBM3E (1.2 TB/s), reaching up to 3.3 TB/s. HBM4 comes in capacities ranging from 24 GB to 36 GB and will expand to 48 GB with 16-layer stacking, further exceeding HBM3E’s previous capacities.  
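The quoted figures can be sanity-checked with simple arithmetic. This is a minimal sketch, assuming a 2048-bit interface per stack (per the JEDEC HBM4 specification; the article itself does not state the interface width):

```python
# Sanity-check the HBM4 per-stack bandwidth and speed-ratio figures.
# Assumption: 2048-bit interface per stack (JEDEC HBM4 spec; not stated in the article).

INTERFACE_BITS = 2048  # assumed HBM4 interface width per stack

def stack_bandwidth_tbs(pin_speed_gbps: float) -> float:
    """Per-stack bandwidth in TB/s: pin speed (Gb/s) x width (bits) / 8 bits-per-byte / 1000."""
    return pin_speed_gbps * INTERFACE_BITS / 8 / 1000

print(round(stack_bandwidth_tbs(11.7), 2))  # ~3.0 TB/s at the shipping 11.7 Gbps
print(round(stack_bandwidth_tbs(13.0), 2))  # ~3.33 TB/s at the potential 13 Gbps peak
print(round(11.7 / 8.0, 2))   # ~1.46: the ~46% gain over the 8 Gbps standard
print(round(11.7 / 9.6, 2))   # ~1.22: the 1.22x gain over HBM3E's 9.6 Gbps
```

At the potential 13 Gbps pin speed, the assumed 2048-bit interface yields about 3.3 TB/s per stack, matching the article’s headline figure.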

To manage higher power and heat, Samsung added a low-power design to the core die. HBM4 is 40% more efficient, offers 10% better thermal resistance, and provides 30% better heat dissipation than HBM3E.  

By delivering leading performance, energy efficiency, and reliability, Samsung’s HBM4 positions customers to maximize their GPU investments and meet growing data center demands with confidence.

Source: Samsung Ships Industry-First Commercial HBM4 With Ultimate Performance for AI Computing 
