At NVIDIA GTC 2026, Samsung Electronics made waves by unveiling its next-generation HBM4E (High Bandwidth Memory 4E) and announcing mass production of its 6th-generation HBM4 for NVIDIA’s upcoming Vera Rubin platform. This milestone marks a major technical triumph for Samsung, boosting AI processing speeds and securing a strong memory supply for NVIDIA’s next-gen AI infrastructure.
Key Breakthroughs and Technical Specifications
- HBM4 6th Gen now in production. Samsung’s HBM4 delivers 11.7 Gbps per pin for NVIDIA’s Vera Rubin AI platform.
- HBM4E 7th Gen recently unveiled. HBM4E reaches 16 Gbps per pin and 4.0 TB/s bandwidth.
- Hybrid copper bonding enables 16-plus layers and cuts heat resistance by over 20%, improving efficiency.
- Process node: both HBM products use an advanced 10nm-class DRAM (1C) process for high performance.
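The bandwidth figures above follow from simple arithmetic: total stack bandwidth is the per-pin data rate multiplied by the interface width. A minimal sketch in Python, assuming the 2048-bit interface that JEDEC defines for HBM4 (the announcement itself does not state the bus width):

```python
# Back-of-the-envelope HBM bandwidth: total bandwidth = per-pin data rate
# times interface width. The 2048-bit interface is the JEDEC HBM4 figure
# and is an assumption here, not stated in the announcement.

def hbm_bandwidth_tbps(gbps_per_pin: float, interface_bits: int = 2048) -> float:
    """Total stack bandwidth in TB/s from a per-pin rate (Gbps) and bus width."""
    return gbps_per_pin * interface_bits / 8 / 1000  # bits -> bytes, GB -> TB

print(hbm_bandwidth_tbps(11.7))  # HBM4:  ~3.0 TB/s per stack
print(hbm_bandwidth_tbps(16.0))  # HBM4E: ~4.1 TB/s, matching the quoted 4.0 TB/s
```

Running the numbers this way shows the quoted 4.0 TB/s HBM4E figure is consistent with 16 Gbps per pin on a 2048-bit bus.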
Strategic Impact on NVIDIA Partnership
- Enhanced AI accelerators. Samsung’s introduction of HBM4 and HBM4E is strategically aligned to enhance NVIDIA’s high-performance AI accelerators, ensuring improved AI training and inference capabilities for NVIDIA’s Vera Rubin platform and directly supporting NVIDIA’s competitive edge in AI infrastructure.
- Diversified supply chain. By adopting Samsung’s HBM4, NVIDIA fortifies its next-generation GPU platforms with a more resilient and diversified supply chain, reducing the strategic risk of dependence on a single supplier.
- Total AI solution. By offering an integrated turnkey service spanning HBM4 and HBM4E memory, logic design, foundry, and advanced packaging, Samsung aims to position itself as a total AI solution provider for NVIDIA, enabling NVIDIA to accelerate development and streamline supply with a single partner.
GTC 2026 Showcase
Samsung’s presence at GTC 2026 highlighted a comprehensive AI alliance featuring:
- NVIDIA Gallery: a special section featuring Samsung’s HBM4, SoCAMM2, and PM1763 SSDs, all optimized for NVIDIA AI infrastructure.
- AI Factory Cooperation: use of NVIDIA accelerated computing to scale Samsung’s AI Factory and expedite manufacturing with digital twins powered by NVIDIA Omniverse.
- These showcased products improve energy efficiency and system performance for inference workloads.
Samsung Electronics, recognized for its leadership in advanced microchip technology, has announced the AI computing technologies it will present at NVIDIA GTC 2026 in San Jose, California, from March 16 to 19. As the only semiconductor company in the industry to supply a comprehensive AI solution encompassing memory, logic, foundry, and advanced packaging, Samsung will display a complete portfolio of products and solutions that support the design and development of advanced AI systems. Additional information about Samsung’s AI solutions will be available at the company’s GTC 2026 booth (#1207).
The primary focus of Samsung’s presentation at NVIDIA GTC 2026 will be the 6th-generation HBM4, now in mass production and designed for the NVIDIA Vera Rubin platform. Samsung’s HBM4 is projected to advance the development of future AI applications by delivering consistent data rates of 11.7 Gbps per pin, surpassing the industry-standard 8 Gbps and enabling potential speeds of up to 13 Gbps.
Furthermore, by utilizing the 6th-generation 10nm-class (1c) DRAM process, Samsung has achieved stable yields and high performance. The company’s next-generation HBM4E, which delivers 16 Gbps per pin and 4.0 TB/s of bandwidth, will also be exhibited for the first time at GTC 2026.
In addition to its HBM portfolio, Samsung will present its hybrid copper bonding (H3B) technology, a new chip-interconnect method that lets next-generation HBM reach 16 or more stacked memory layers while lowering thermal resistance by more than 20% compared to the traditional thermal compression bonding (TCB) method, making cooling more effective.
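The practical effect of lower thermal resistance can be sketched with the basic relation ΔT = P × R_th (temperature rise equals power dissipated times thermal resistance). The power and resistance values below are illustrative assumptions, not Samsung specifications; only the ">20% lower resistance" ratio comes from the announcement:

```python
# Temperature rise across a memory stack: delta_T = power * thermal_resistance.
# A >20% cut in thermal resistance lowers the temperature rise proportionally
# at the same power. All numeric values here are illustrative assumptions.

def temp_rise_c(power_w: float, r_th_c_per_w: float) -> float:
    """Temperature rise (degrees C) for a given power draw and thermal resistance."""
    return power_w * r_th_c_per_w

power = 30.0               # assumed stack power draw, W
r_th_tcb = 1.0             # assumed thermal resistance with TCB, C/W
r_th_hcb = 0.8 * r_th_tcb  # hybrid copper bonding: >20% lower resistance

print(temp_rise_c(power, r_th_tcb))  # 30.0 C rise with TCB
print(temp_rise_c(power, r_th_hcb))  # 24.0 C rise with hybrid copper bonding
```

At a fixed power budget, the same 20% reduction shows up directly as a cooler stack; equivalently, the stack can dissipate proportionally more power at the same temperature limit.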
Advancing AI Through Tactical Collaboration
The teamwork between Samsung and NVIDIA will be highlighted in a special NVIDIA gallery inside the booth. This area will showcase a range of Samsung technologies, including HBM4; SoCAMM2, a server memory module; and the PM1763 SSD, a storage device, all built for NVIDIA AI systems.
To further meet the requirements for efficiency and expandability in AI systems, Samsung’s SoCAMM2, based on low-power DRAM, serves as a server memory module offering high bandwidth and flexible system integration for next-gen AI infrastructure. SoCAMM2 is currently in mass production, denoting an industry-first achievement.
Samsung’s PM1763 SSD, designed for next-gen AI storage solutions, uses the PCIe 6.0 interface to deliver fast data transfers and high capacities. The performance of the PM1763 will be demonstrated on servers running the NVIDIA BlueField-4 STX reference architecture for accelerated storage infrastructure on the NVIDIA Vera Rubin platform, showing its contribution to increased energy efficiency and system performance for inference workloads.
Scaling the AI Factory
At GTC 2026, Samsung will present its collaboration with NVIDIA on AI factory development, including plans to utilize NVIDIA accelerated computing to expand Samsung’s AI factory and expedite digital twin manufacturing using NVIDIA Omniverse libraries. This partnership supports a comprehensive chip manufacturing infrastructure that includes memory, logic, foundry, and advanced packaging.
Yong Ho Song, Executive Vice President and Head of AI Center at Samsung Electronics, will discuss the strategic cooperation between the two companies during his speaker session on March 17, 2026. The session, titled “Transforming Semiconductor Manufacturing with Agentic AI from Design and Engineering to Production,” will detail the AI Factory and present real-world use cases where AI and digital twins are advancing semiconductor manufacturing, including the development chain, electronic design automation (EDA), computational lithography, and the operation of advanced manufacturing facilities powered by NVIDIA.
Turning to local AI, Samsung’s memory solutions are engineered to maximize efficiency for local AI workloads on personal devices. At GTC 2026, Samsung will present customized solutions for personal AI supercomputers, including the PM9E3 and PM9E1 NAND for NVIDIA DGX Spark, as well as the LPDDR5X and LPDDR6 DRAM solutions designed for embedding in smartphones, tablets, and wearable devices, providing increased data throughput and reduced latency. LPDDR5X achieves speeds up to 25 Gbps per pin and reduces power consumption by up to 15%, supporting responsive mobile experiences, high-resolution gaming, and advanced AI-enhanced applications while maintaining battery life. LPDDR6 offers further bandwidth, scaling to 30–35 Gbps per pin, and provides advanced power management features, such as adaptive voltage scaling and dynamic refresh control, which together deliver the performance needed for next-gen edge AI workloads.