Samsung Electronics unveiled its 7th-generation high-bandwidth memory, HBM4E, at NVIDIA GTC 2026 in San Jose on Monday and plans to ship samples by mid-2026.  

This marks Samsung’s first demonstration of its next-generation memory technology ahead of mass production, strengthening its position as a supplier of advanced memory for AI systems.  

According to Samsung, its HBM4E chips each support a read data rate of 16 Gbps per pin. This is 36% faster than the previous generation’s 11.7 Gbps and twice as fast as the current industry standard of 8 Gbps. The chips also offer 4 TB/s of bandwidth, the maximum rate at which each stack can transfer data.  
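As a rough sanity check, the per-stack bandwidth figure follows directly from the per-pin rate (a back-of-envelope sketch; the 2,048-pin interface width is an assumption taken from the HBM4 figures quoted later in this article):

```python
# Hedged sketch: deriving HBM4E per-stack bandwidth from the quoted
# per-pin rate. The 2,048-pin interface width is an assumption taken
# from the HBM4 figures elsewhere in this article.
PINS = 2048            # I/O pins per HBM4-class stack (assumed)
PIN_RATE_GBPS = 16     # HBM4E per-pin data rate (Gbps), as quoted

# Gbps per pin * pins = total Gbps; divide by 8 for GB/s, by 1000 for TB/s
bandwidth_tbps = PINS * PIN_RATE_GBPS / 8 / 1000
print(f"{bandwidth_tbps:.1f} TB/s")          # ~4 TB/s, matching the claim

# The quoted generational comparisons also roughly check out:
print(f"{(16 / 11.7 - 1) * 100:.0f}% faster than HBM4's 11.7 Gbps")
print(f"{16 / 8:.0f}x the 8 Gbps industry standard")
```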

Samsung expects to deliver samples of the standard HBM4E products to customers in mid-2026. Custom versions of the chips, designed for specific processors as application-specific integrated circuit (ASIC) solutions, are set to begin initial production on several projects in the second half of the year.  

Samsung also confirmed that its 6th-generation HBM4 memory, distinct from the newer HBM4E, will be used in NVIDIA’s Vera Rubin AI computing platform. Mass production of HBM4 (not HBM4E) began in February.  

HBM4 pairs advanced DRAM with a logic-process base die for memory control.  

The sixth-generation chips reach 11.7 Gbps and can be boosted to 13 Gbps.  

Samsung also demonstrated hybrid copper bonding, a new packaging technology that improves die-to-die connectivity and heat dissipation compared to current approaches.  

Samsung’s booth featured a special media gallery highlighting its partnership with NVIDIA. In addition to HBM4, Samsung displayed several products for AI infrastructure, including SoCAMM2 low-power server DRAM modules and the PM1763 solid-state drive.  

The company also introduced LPDDR5X and LPDDR6 mobile memory for AI workloads on personal devices. LPDDR5X reaches speeds up to 25 Gbps and reduces power use by up to 15%. LPDDR6 increases per-pin bandwidth to 32–35 Gbps.  

For a hands-on experience and to learn more about Samsung’s innovative AI solutions, we invite you to visit booth 1207 at GTC 2026.  

The highlight of Samsung’s display at NVIDIA GTC 2026 is the new 6th-generation HBM4, now in mass production and made for NVIDIA’s Vera Rubin platform. HBM4 is set to speed up future AI applications, offering steady data rates of 11.7 gigabits per second (Gbps), well above the industry standard of 8 Gbps, and can be boosted to 13 Gbps.  

Samsung’s advanced 10nm-class DRAM process powers HBM4 and the next-generation HBM4E, delivering top-tier quality and performance. Both products will be on display at GTC 2026.  

Samsung will showcase hybrid copper bonding technology, which enables next-generation HBM to scale to more layers and lowers thermal resistance.  

An Alliance Taking the AI Era to a New Level 

Samsung and NVIDIA’s close partnership will be featured in a special NVIDIA gallery at the booth. The gallery showcases a wide range of Samsung’s latest technologies designed for AI infrastructure, including HBM4, SoCAMM2, and the PM1763 SSD.  

Samsung’s SoCAMM2, now in mass production, is the industry’s first low-power DRAM server memory module, delivering efficient, scalable AI system performance with high bandwidth and flexible integration for next-gen infrastructure.  

The PM1763 SSD leverages PCIe 6.0 for fast data transfers and high capacity, and will be demonstrated on servers using NVIDIA’s programming model.  

Samsung’s PM1763 SSD is part of the new NVIDIA BlueField-4 STX Reference Architecture, a blueprint for building AI-optimized storage systems that deliver faster storage on the NVIDIA Vera Rubin platform. The demonstration will show how it improves energy efficiency and system performance for inference workloads, the tasks where AI systems generate outputs based on data.  

Memory Architecture To Scale, Intelligent Manufacturing 

At GTC 2026, Samsung will showcase its work with NVIDIA on developing an AI factory and an automated, AI-assisted chip manufacturing system. This includes plans to use NVIDIA-accelerated computing and specialized AI processing to expand Samsung’s AI factory and speed up digital-twin manufacturing, in which digital copies of factories are simulated with NVIDIA libraries and tools for virtual collaboration. Together, the two companies support one of the world’s most complete chip manufacturing systems, covering memory, logic, foundry, and advanced packaging.  

Yong Ho Song, Executive Vice President at Samsung Electronics, will discuss the company’s strategic partnership and its AI and digital twin initiatives in transforming semiconductor manufacturing in his GTC 2026 talk.  

Effective Memory For Local Intelligence 

Samsung’s memory solutions also enable high-efficiency local AI on personal devices. At GTC 2026, Samsung will present customized and efficient options for personal AI supercomputers, including the PM9E3 and PM9E1 NAND flash SSDs for the NVIDIA DGX Spark.  

Samsung will also showcase DRAM (dynamic random access memory) solutions, including LPDDR5X and LPDDR6, low-power DDR (double data rate) memory designed for smartphones, tablets, and other devices. These offer faster data speeds and lower latency. LPDDR5X reaches up to 25 Gbps and reduces power use by up to 15%, enabling fast mobile experiences, high-resolution gaming, and AI features without draining the battery.  

Building on this, LPDDR6 increases bandwidth to a scalable 32–35 Gbps per pin and adds advanced power-management features, including adaptive voltage scaling and dynamic refresh control. These features deliver the performance required for upcoming edge AI workloads. 

Source: Samsung Unveils HBM4E, Showcasing Comprehensive AI Solutions, NVIDIA Partnership and Vision at NVIDIA GTC 2026

Samsung unveils HBM4E at Nvidia GTC, raises bar for AI memory

Mass production of HBM4 commences with a consistent transfer speed of 11.7 Gbps and a maximum of 13 Gbps.  

Leading-edge DRAM with a 4nm logic base die maximizes performance, reliability, and energy efficiency for next-gen data centers.  

Secure process technology and supply capabilities strengthen Samsung’s HBM roadmap beyond HBM4.  

Samsung Electronics announced it has started mass production of its HBM4 memory and has shipped products to its customers. This makes Samsung the first company in the industry to reach this milestone and take an early lead in the HBM4 market.  

By using its cutting-edge 6th-generation 10-nanometer (nm)-class DRAM process (1C), Samsung achieved stable yields and top performance from the start of mass production without needing any extra redesigns.  

“Instead of using proven designs, Samsung chose the most advanced nodes, such as 1C DRAM and a 4nm logic process, for HBM4,” said Sang Joon Hwang, Executive Vice President and Head of Memory Development. “By leveraging our process strengths and design optimization, we deliver greater performance and can meet customers’ growing needs for higher performance when they need it.”  

Setting the Bar for Maximum Effectiveness and Efficiency 

Samsung’s HBM4 operates at a data rate of 11.7 Gbps, which is approximately 46% faster than the prevailing industry standard of 8 Gbps. This represents a 22% increase over HBM3E’s maximum data rate of 9.6 Gbps. HBM4 can achieve peak speeds of up to 13 Gbps, alleviating bandwidth constraints as AI model sizes increase.  

The aggregate bandwidth per HBM4 stack is now 2.7 times that of HBM3E, reaching up to 3.3 terabytes per second (TB/s) across all I/O pins. Samsung’s HBM4 uses a 12-layer 3D stacking approach with capacities ranging from 24 GB to 36 GB per stack. Future 16-layer stacks will support modules up to 48 GB, enabling scalability based on system and customer requirements.  
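The data-rate and bandwidth comparisons above can be reproduced from the per-pin figures in this release (a back-of-envelope sketch; the HBM3E values of 1,024 pins at 9.6 Gbps are taken from the surrounding text):

```python
# Hedged sketch: reproducing the quoted HBM4 figures from per-pin rates.
# Pin counts (1,024 for HBM3E, 2,048 for HBM4) are as stated in this release.

def stack_bandwidth_tbps(pins: int, gbps_per_pin: float) -> float:
    """Aggregate stack bandwidth in TB/s: pins * Gbps / 8 bits / 1000."""
    return pins * gbps_per_pin / 8 / 1000

# Generational data-rate comparisons (per pin)
print(f"{(11.7 / 8 - 1) * 100:.0f}% over the 8 Gbps standard")   # -> 46%
print(f"{(11.7 / 9.6 - 1) * 100:.0f}% over HBM3E's 9.6 Gbps")    # -> 22%

# Aggregate bandwidth per stack
hbm3e = stack_bandwidth_tbps(1024, 9.6)   # ~1.23 TB/s
hbm4 = stack_bandwidth_tbps(2048, 13.0)   # at the 13 Gbps peak rate
print(f"HBM4 peak: {hbm4:.2f} TB/s")      # -> 3.33 TB/s, the quoted ~3.3
print(f"vs HBM3E:  {hbm4 / hbm3e:.1f}x")  # -> 2.7x, the quoted 2.7x
```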

To support the increased power and heat from doubling data I/O from 1,024 to 2,048 signal pins, Samsung integrated advanced low-power circuitry at the core die. HBM4 achieves 40% higher power efficiency through low-voltage operation, through-silicon via (TSV) design, and power distribution network (PDN) tuning, and realizes a 10% reduction in thermal resistance and 30% better heat dissipation versus HBM3E. Samsung’s HBM4 delivers high performance, efficiency, and reliability, helping customers get more from their GPUs and control costs in new data centers.  

Comprehensive Yet Agile Manufacturing Capabilities 

Samsung will build on its large-scale manufacturing resources to advance HBM4 and future technologies. The company will roll out enhancements and revisions to the roadmap in upcoming product generations.  

Close collaboration between Samsung’s foundry and memory teams through design-technology co-optimization (DTCO) helps maintain high quality and yield. Their in-house expertise in advanced packaging also shortens production cycles and lead times.  

Samsung also plans to expand its technical partnerships. The company works closely with global GPU makers and hyperscalers on next-generation ASIC development.  

Samsung expects its HBM sales to more than triple in 2026 compared to 2025. The company is expanding HBM4 production, with HBM4E sampling to begin in the second half of 2026. Custom HBM samples will be delivered to customers in 2027 as needed, outlining a sequential roadmap: HBM4 launch, then HBM4E sampling, then custom HBM delivery.

Source: Samsung Ships Industry-First Commercial HBM4 With Ultimate Performance for AI Computing