OpenAI plans to address one of the biggest challenges in scaling AI by funding power generation and transmission for its large‑scale data centers.  

This marks a change in how electricity access now shapes data center planning, as AI data centers require far more power than traditional facilities, altering the cost structure of AI infrastructure.  

Deloitte estimates that US AI data center power demand may rise over thirtyfold by 2035: from about 4 GW in 2024 to 123 GW.  
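As a quick sanity check on those figures (using only the ~4 GW 2024 baseline and the 123 GW 2035 projection cited above), the implied growth multiple and annual rate work out roughly as follows:

```python
# Back-of-the-envelope check on the Deloitte figures cited above
# (assumes the ~4 GW 2024 baseline and 123 GW 2035 projection).
baseline_gw = 4.0       # estimated US AI data center demand, 2024
projected_gw = 123.0    # projected demand, 2035
years = 2035 - 2024

multiple = projected_gw / baseline_gw                  # ~31x, i.e. "over thirtyfold"
cagr = (projected_gw / baseline_gw) ** (1 / years) - 1

print(f"growth multiple: {multiple:.0f}x")             # 31x
print(f"implied annual growth: {cagr:.0%}")            # roughly 37% per year
```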

Last week, Microsoft also announced it would fund extra power and water infrastructure to ease pressure on local utilities.  

Each OpenAI target site will have its own energy plan, possibly building dedicated generation, storage, and transmission infrastructure rather than relying on a community grid.  

"Every community and region has unique energy needs and grid conditions. Our commitment will be customized to the region, depending on the site," OpenAI said in an internal statement. "This can range from bringing new dedicated power and storage that the project fully funds, to adding and paying for new energy generation and transmission resources."  

A Move Toward Energy Independence 

Analysts say this signals a major shift, with companies now choosing data center sites for power rather than just network access.  

"Historically, data centers were built near internet exchange points and city centers to decrease latency," said Ashish Banerjee, senior principal analyst at Gartner. "However, as training requirements reach the gigawatt scale, OpenAI is signaling that it will favor regions with energy sovereignty. These are places where they can build their own generation and transmission infrastructure rather than fighting for scraps from an overtaxed public good."  

For network design, this means expanding connections between the core and the edge. Large data centers in remote, energy-rich areas require long-distance, high-bandwidth fiber to connect these power islands to the network.  
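To put "long-distance" in perspective, a rough propagation estimate shows why latency-sensitive work still has to sit near users; the 2,000 km distance below is a hypothetical example, and the two-thirds-of-c figure is the typical speed of light in optical fiber:

```python
# One-way propagation delay over long-haul fiber.
# The 2,000 km distance is a hypothetical example; ~2/3 of c is the typical
# speed of light in optical fiber.
SPEED_OF_LIGHT_KM_PER_S = 300_000
FIBER_VELOCITY_FACTOR = 0.67

def one_way_delay_ms(distance_km: float) -> float:
    """Return one-way propagation delay in milliseconds."""
    return distance_km / (SPEED_OF_LIGHT_KM_PER_S * FIBER_VELOCITY_FACTOR) * 1000

print(f"{one_way_delay_ms(2000):.1f} ms one way")   # ~10 ms for 2,000 km of fiber
```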

"We should expect a bifurcated network: a massive, centralized core for cold model training (large-scale training, not done in real time) located in the wilderness, and a highly distributed edge for hot, real-time inference (immediate use of AI results) located near users," Banerjee added.  
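A minimal sketch of that bifurcation, assuming a hypothetical placement function that sends cold training jobs to a remote core site and hot, latency-sensitive inference to an edge site; all names and thresholds are illustrative, not OpenAI's actual scheme:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    realtime: bool            # True for "hot" inference, False for "cold" training
    latency_budget_ms: float  # acceptable delay to the end user

# Illustrative site names only; not actual OpenAI locations.
CORE_SITE = "remote-energy-rich-core"   # large training cluster near dedicated power
EDGE_SITE = "metro-edge-pop"            # inference capacity near users

def place(workload: Workload) -> str:
    """Send hot, latency-sensitive work to the edge and cold training to the core."""
    if workload.realtime or workload.latency_budget_ms < 50:
        return EDGE_SITE
    return CORE_SITE

print(place(Workload("chat-inference", realtime=True, latency_budget_ms=30)))            # edge
print(place(Workload("foundation-training", realtime=False, latency_budget_ms=10_000)))  # core
```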

Manish Rawat, a semiconductor analyst at TechInsights, also notes that the benefits may come with greater overall complexity.  

"On the network side, this pushes architectures toward fewer mega hubs and more regionally distributed inference and training clusters connected via high-capacity backbone links," Rawat said. The trade-off, he added, is a greater upfront capex burden in exchange for greater control over scalability timelines, reducing dependence on slow-moving utility upgrades.  

For businesses, this could affect cost predictability and service locations, as platforms increasingly rely on power-rich areas rather than city data centers.  

What This Means for Data Center Design 

By managing their own power supply and transmission, AI firms are acting like utility providers.  

For data center interconnect design, the focus shifts from basic redundancy to energy-aware load balancing. If an AI provider owns the power plant, it can align compute cycles with energy output, creating a new level of hardware integration.  
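A minimal sketch of what aligning compute cycles with energy output could look like, assuming a hypothetical hourly generation forecast and cluster draw (the numbers are illustrative, not from the article):

```python
# Energy-aware scheduling sketch: dispatch deferrable training work only in
# hours where forecast on-site generation covers the cluster's draw.
# The forecast values and the 80 MW draw are hypothetical.
hourly_generation_mw = [60, 75, 95, 110, 120, 90, 70]   # forecast owned output
CLUSTER_DRAW_MW = 80

def dispatch_hours(forecast_mw, draw_mw):
    """Return the hour indices when owned generation can carry the cluster."""
    return [hour for hour, output in enumerate(forecast_mw) if output >= draw_mw]

print(dispatch_hours(hourly_generation_mw, CLUSTER_DRAW_MW))   # [2, 3, 4, 5]
```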

Analysts say it is a misconception that these large sites handle all AI processing; in reality, the energy investments target broad‑purpose model training, not real-time inference.  

"This move actually relaxes the latency requirements for the training site itself, allowing them to aim for a more robust, albeit distant, interconnect," Banerjee added. "The real innovation here isn't just faster chips. It's the synchronization of the electrical grid with the compute fabric to ensure a power fluctuation doesn't kill a multi-month training run."  
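One way to read that grid-to-compute synchronization is a training loop that checkpoints whenever the power signal degrades, so a fluctuation costs minutes rather than months. A sketch under that assumption, where power_ok() and the checkpoint routine are hypothetical stubs:

```python
import time

def power_ok() -> bool:
    """Hypothetical hook into site power telemetry; a stub here."""
    return True

def save_checkpoint(step: int) -> None:
    """Hypothetical checkpoint routine; would persist model state in practice."""
    print(f"checkpoint saved at step {step}")

CHECKPOINT_EVERY = 1_000   # normal checkpoint cadence, in training steps

def training_loop(total_steps: int) -> None:
    for step in range(total_steps):
        # ... one training step would run here ...
        if step % CHECKPOINT_EVERY == 0 or not power_ok():
            # Checkpoint on schedule, or immediately when power quality degrades,
            # so a fluctuation never erases more than a few steps of progress.
            save_checkpoint(step)
        while not power_ok():
            time.sleep(60)   # pause until stable power returns before resuming

training_loop(total_steps=3_000)   # prints checkpoints at steps 0, 1000, 2000
```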

This shift changes the approach to data center resilience, moving away from relying on grid diversity and toward models that combine owned power resources with network redundancy.  

This change places greater demands on network design, requiring stronger resilience across distributed facilities and tighter control over latency (minimizing delays in data transfer) and traffic flows, Rawat said, especially for latency-sensitive AI workloads. The likely result is a tiered architecture, with large training clusters positioned near dedicated power assets, while inference infrastructure, which delivers results to users, stays closer to end users.  

Source: OpenAI shifts AI data center strategy toward power-first design