Austin, TX
Atomic answer: Oracle has made rear-door heat exchangers (RDHX) compulsory for all new OCI Supercluster builds to handle the 120kW rack densities of NVIDIA Blackwell deployments. The shift replaces Oracle’s former raised-floor air cooling with direct liquid cooling to prevent catastrophic overheating.
AI infrastructure is transforming data center design. As enterprises deploy ever-larger GPU clusters for training and inference workloads, conventional cooling systems have reached their limit: they were never designed for the extreme densities of today’s AI environments. Oracle’s OCI Supercluster RDHX liquid-cooling initiative is redefining how hyperscale cloud infrastructure handles thermal management.
Oracle has decided to address this problem by upgrading its infrastructure as part of its new cloud rollout strategy. The company revealed that its upcoming OCI Supercluster environments for next-generation AI applications will require liquid-cooling systems to handle the thermal density generated by modern GPU clusters.
This decision is directly related to the emergence of NVIDIA Blackwell racks, which generate extreme heat while performing large-scale inference and AI training.
Oracle’s infrastructure upgrade is indicative of the importance of cooling systems in future cloud growth.
Why AI Factories Are Revolutionizing Data Centers
As generative AI rapidly advances, a new generation of data centers, referred to as AI factories, is emerging.
Whereas enterprise data centers were designed for general computing operations, AI factories focus on dense GPU configurations, ultra-fast networks, and non-stop, high-performance environments.
AI factories operate at significantly higher power and heat levels than existing cloud data center designs.
Key aspects of AI infrastructure include:
- High-density GPU racks
- Continuously running inference workloads
- Specialized networking requirements
- Higher data center power consumption
- Intensive heat management
The emergence of 120kW Blackwell racks, and the HVAC upgrade mandates they trigger, highlights how existing raised-floor cooling architectures are becoming insufficient for modern AI deployments.
This is particularly applicable to Blackwell data centers, which may require higher rack density than before.
The recent infrastructure shift by Oracle underscores that even cloud providers have no choice but to redesign their data center setups.
NVIDIA Blackwell Intensifies Heat Stress
One key factor driving Oracle’s need to switch from standard cooling to more advanced solutions is the use of NVIDIA Blackwell technology.
Blackwell technology delivers remarkable AI performance gains, yet it also produces heat levels that standard cooling solutions struggle to handle. This heightens the importance of the liquid-to-chip cooling retrofits Oracle is now rolling out across its sovereign cloud and other advanced facilities.
Oracle’s new Supercluster facilities will have to accommodate rack heat densities of up to 120kW, which is much higher than what is typically seen in enterprise IT infrastructure.
There are several issues associated with running such an environment:
- Heat stress increases
- Higher power costs for cooling
- Greater stress on infrastructure
- Risk of performance throttling
- Higher likelihood of hardware malfunction
To operate such facilities safely, Oracle is mandating advanced liquid-cooling systems integrated into its AI infrastructure.
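To put the 120kW figure in perspective, a simple heat balance (Q = ṁ · c_p · ΔT) estimates the coolant flow a liquid loop would need per rack. The 10°C water temperature rise below is an illustrative assumption, not an Oracle specification:

```python
# Estimate the coolant flow needed to remove a rack's heat load.
# Heat balance: Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)

RACK_HEAT_W = 120_000   # 120 kW Blackwell-class rack (figure from the article)
CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0        # assumed coolant temperature rise across the rack

mass_flow_kg_s = RACK_HEAT_W / (CP_WATER * DELTA_T_K)   # kg/s
volume_flow_lpm = mass_flow_kg_s * 60                   # ~1 kg of water per litre

print(f"Required flow: {mass_flow_kg_s:.2f} kg/s (~{volume_flow_lpm:.0f} L/min)")
```

Roughly 170 litres of water per minute per rack, every minute of operation, makes it clear why facility plumbing becomes a first-class design concern at these densities.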
Introduction of Rear Door Heat Exchanger Systems
Perhaps one of the most significant infrastructure changes with the rollout of Oracle’s Superclusters is the requirement that all new builds feature rear-door heat exchanger (RDHX) systems.
Such systems capture and dissipate heat from high-density racks before it spreads through the rest of the facility. Oracle’s planned 2026 OCI Supercluster RDHX deployments reflect the growing importance of localized thermal management within hyperscale AI environments.
A number of benefits arise from such a system:
- Greater energy efficiency
- Support for ultra-dense AI racks
- Lower risk of overheating hardware components
- Reduced cooling load
- Increased stability during prolonged inference runs
The installation of RDHX systems enables operators to deploy more powerful GPU clusters without relying solely on air-cooled facilities.
The shift is also accelerating partnerships with Vertiv and Schneider Electric around OCI-compatible liquid-loop infrastructure designed specifically for hyperscale AI deployments.
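The energy-efficiency and cooling-load benefits above can be illustrated with a rough power usage effectiveness comparison (PUE = total facility power ÷ IT power). The 1.5 air-cooled and 1.15 liquid-cooled PUE values and the 1 MW IT load below are illustrative industry-typical assumptions, not Oracle figures:

```python
# Rough annual facility-energy comparison for an assumed 1 MW IT load
# under two illustrative PUE values (PUE = total facility power / IT power).

IT_LOAD_KW = 1000.0      # assumed 1 MW of IT load
HOURS_PER_YEAR = 8760
PUE_AIR = 1.5            # illustrative raised-floor air-cooled PUE
PUE_LIQUID = 1.15        # illustrative RDHX/direct-liquid PUE

def annual_mwh(pue: float) -> float:
    """Total facility energy per year in MWh for the assumed IT load."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR / 1000.0

savings = annual_mwh(PUE_AIR) - annual_mwh(PUE_LIQUID)
print(f"Air-cooled:    {annual_mwh(PUE_AIR):,.0f} MWh/yr")
print(f"Liquid-cooled: {annual_mwh(PUE_LIQUID):,.0f} MWh/yr")
print(f"Savings:       {savings:,.0f} MWh/yr")
```

Under these assumptions, the lower cooling overhead saves on the order of 3,000 MWh per megawatt of IT load per year, which is why thermal design now shows up directly in cloud operating economics.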
Liquid Cooling Increases Thermal Capital Expenditures
The transition to liquid-cooled infrastructure is also forcing companies to allocate more capital to thermal systems.
Traditionally, companies spent less on cooling infrastructure than on computing equipment. But things have changed with the implementation of AI.
Today’s AI infrastructure requires organizations to make significant investments in the following cooling components:
- Direct-to-chip cooling equipment
- Advanced liquid loops
- Plumbing and infrastructure modifications
- Thermal monitoring software
- High-capacity HVAC infrastructure
This is illustrated by Oracle’s Supercluster cooling requirements. The 120kW Blackwell rack thermal mandate is forcing operators to treat thermal engineering as a strategic infrastructure priority rather than a secondary operational concern.
Without thermal engineering upgrades, companies will find it challenging to meet future AI workloads.
Thus, thermal engineering has become a critical aspect of competition in cloud infrastructure operations.
Infrastructure Retrofits Pose Deployment Issues
Although enhanced cooling solutions improve operational efficiency, retrofitting legacy systems for AI readiness poses several operational challenges.
Some key issues in deployment are:
- Higher costs of modern HVAC solutions
- Temporary downtime during the retrofit process
- Need for plumbing and facility remodeling
- Limited compatibility with older systems
- Difficulties in maintaining infrastructure over time
Oracle’s approach to infrastructure is predicted to shape industry-wide purchasing behavior, especially as more companies invest in creating AI islands and inference centers.
Moreover, Oracle’s emphasis on liquid-ready infrastructure is well-suited to the growing need for AI clouds capable of continuous inference.
Conclusion
Oracle is gearing up its cloud infrastructure to meet next-gen AI deployment needs through the development of OCI Superclusters, the introduction of advanced liquid-cooling technology, and the implementation of rear-door heat exchangers.
Industry experts are increasingly examining how Oracle’s OCI Supercluster RDHX mandate will force data center operators to replace raised-floor air cooling with direct-liquid loops for 120kW Blackwell AI racks as AI factories become more common across global cloud infrastructure.
Oracle’s close collaboration with NVIDIA on Blackwell deployments, the rise of AI factories, and growing thermal capital expenditures all underscore the importance of cooling infrastructure for enterprise AI scalability.
The thermal scaling pressure on OCI Superclusters heading into 2026 shows that the future of competitive cloud computing will be defined by thermal as well as computational efficiency.
In the future, as AI infrastructure worldwide continues to expand, liquid-cooled data centers could emerge as the foundation of next-gen cloud computing platforms.
Enterprise Procurement Checklist
- ORCL Compliance: Verify that all sovereign cloud regions support “liquid-to-chip” connectivity before migration.
- Infrastructure Cost: Budget for a 25% increase in facility HVAC CapEx for all “Supercluster-ready” zones.
- Deployment Impact: RDHX retrofits may cause 48-hour localized downtime for non-contained rack rows.
- Procurement Effect: Standardize on Vertiv (VRT) or Schneider Electric liquid loops to ensure OCI compatibility.
- Operational Step: Implement “Thermal Shadow” monitoring to detect hotspots in high-density AI factories.
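In its simplest form, the “Thermal Shadow” monitoring step from the checklist could be a periodic scan of rack inlet-temperature sensors that flags hotspots. The sketch below is hypothetical: the 35°C threshold, rack IDs, and readings are illustrative assumptions, not part of any OCI API or Oracle guidance:

```python
# Hypothetical hotspot scan: flag racks whose inlet temperature meets or
# exceeds a threshold. Sensor names and the 35 C limit are assumptions.

HOT_INLET_C = 35.0  # assumed alert threshold, not an Oracle specification

def find_hotspots(inlet_temps_c: dict[str, float],
                  threshold_c: float = HOT_INLET_C) -> list[str]:
    """Return rack IDs (sorted) whose inlet temperature is at or above the threshold."""
    return sorted(rack for rack, t in inlet_temps_c.items() if t >= threshold_c)

# Example readings from three hypothetical racks:
readings = {"rack-a01": 27.5, "rack-a02": 36.2, "rack-b07": 41.0}
print(find_hotspots(readings))  # → ['rack-a02', 'rack-b07']
```

A production version would pull readings from the facility’s DCIM or BMS telemetry and feed flagged racks into an alerting pipeline rather than printing them.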
Source: Oracle Blogs