At the Open Compute Project (OCP) Global Summit in San Jose, California, Meta announced new specifications for an open AI rack with an Open Rack Wide (ORW) form factor. This constitutes a significant step forward in open infrastructure innovation.
The OCP specification defines an open, double-wide rack designed for the power, cooling, and service needs of next-generation AI systems. This shift helps standardize rack-scale data center design across the industry.
AMD is working with Meta and the Open Compute Project community to advance this vision through Helios, AMD's most advanced rack-scale reference system. Built on the ORW open standard, Helios extends AMD's commitment to openness from silicon to large-scale clusters, embodying the open hardware principles of the ORW specification.
Helios: Turning Open Standards into Rack Scale Reality
The AMD Helios AI rack follows the open design blueprint Meta submitted to OCP in 2025, aiming for optimized, ready-to-deploy performance in AI data centers. With next-generation AMD Instinct MI450 series GPUs, Helios sets a new standard for open, rack-scale AI infrastructure.
Each MI450 series GPU, built on AMD CDNA architecture, offers up to 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth, supporting demanding AI models. A Helios rack with 72 GPUs can reach up to 1.4 exaflops of FP8 and 2.8 exaflops of FP4 performance, with 31 TB of memory and 1.4 PB/s of aggregate bandwidth. This leap enables training trillion-parameter models and large-scale AI inference.
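The rack-level figures above follow directly from the per-GPU numbers. A quick sanity check, assuming the rack aggregates are simple sums across the 72 GPUs (an illustrative simplification, not an official AMD sizing method):

```python
# Sanity check: scale per-GPU Instinct MI450 figures to a 72-GPU Helios rack.
# Per-GPU values come from the text; aggregating by plain multiplication
# is an assumption made for illustration.

GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 432        # up to 432 GB of HBM4 per GPU
HBM_BW_PER_GPU_TBS = 19.6   # up to 19.6 TB/s of memory bandwidth per GPU

rack_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000       # GB -> TB
rack_bw_pbs = GPUS_PER_RACK * HBM_BW_PER_GPU_TBS / 1000   # TB/s -> PB/s

print(f"Rack HBM capacity:  ~{rack_hbm_tb:.0f} TB")    # ~31 TB
print(f"Rack HBM bandwidth: ~{rack_bw_pbs:.1f} PB/s")  # ~1.4 PB/s
```

Both results line up with the quoted rack totals: 72 × 432 GB ≈ 31 TB of memory and 72 × 19.6 TB/s ≈ 1.4 PB/s of bandwidth.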
Helios includes up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, ensuring smooth communication between GPUs, nodes, and racks. It delivers up to 36 times higher performance than previous generations and offers 50% more memory capacity than NVIDIA's Vera Rubin system.
Rack-scale systems like Helios are key to the next generation of AI, where performance relies on optimized communication between thousands of accelerators. AMD's leadership in open standards such as UALink (Ultra Accelerator Link) and the Ultra Ethernet Consortium (UEC) supports industry collaboration. These efforts help create interoperable, energy-efficient infrastructure for the AI era.
Driving Open Innovation Across the Ecosystem
The Helios Rack is more than simply a hardware reference; it functions as a blueprint for collaboration in the AI ecosystem.
Built on the ORW specification submitted by Meta to OCP, Helios enables OEM and ODM partners to:
- Adapt and extend the Helios reference design to accelerate time-to-market for new AI systems.
- Integrate AMD Instinct GPUs, EPYC CPUs, and Pensando DPUs with their own differentiated solutions.
- Participate in an open standards-based ecosystem that drives interoperability, scalability, and long-term innovation.
By adopting the ORW specification, the industry gains a shared open foundation for rack-scale AI deployments. This reduces fragmentation and eliminates the inefficiencies of proprietary one-off designs.
Purpose-Built For Contemporary Data Center Realities
AI data centers are evolving rapidly and require architectures that deliver better performance, efficiency, and serviceability at scale. Helios is designed to meet these needs with features that make deployment faster, improve management, and sustain performance in dense AI environments:
- Higher scale-out throughput and HBM bandwidth than previous generations, enabling faster model training and inference.
- Double-wide layout reduces weight density and improves serviceability.
- Standards-based Ethernet scale-out ensures multi-path resiliency and seamless interoperability.
- Backside quick disconnect liquid cooling provides sustained, efficient thermal performance at high density.
These features make the AMD Helios Rack a deployable, production-ready system for customers scaling to exascale AI. It delivers breakthrough performance, operational capability, and sustainability.
Enabling Openness in the AI Infrastructure Revolution
With Helios, AMD brings its open hardware and software leadership to the forefront, combining silicon innovation with open, industry-driven design principles.
For OEMs and ODMs, Helios provides a ready-made OCP-aligned system to build differentiated AI infrastructure.
For customers, it means faster deployment, lower risk, and more flexibility in scaling compute for AI, HPC, and sovereign AI initiatives. Volume deployment is expected in 2026. As an open, OCP-aligned design, Helios creates new opportunities for the ecosystem to collaborate on the future of AI infrastructure, one built on openness, interoperability, and shared innovation.
Built on the ORW specification submitted by Meta to the Open Compute Project, Helios demonstrates AMD's commitment to open, collaborative innovation. It helps drive the next phase of AI infrastructure and shows that when the industry works together, everyone moves forward.