OpenAI and Amazon announced a multi-year partnership to speed up AI innovation for businesses, startups, and consumers worldwide. Amazon plans to invest $50 billion in OpenAI, starting with $15 billion now and an additional $35 billion over the next 4 months, subject to certain conditions.
Working together to deliver advanced AI tools to businesses worldwide, OpenAI and Amazon are building a stateful runtime environment using OpenAI’s models. The new environment will be available through Amazon Bedrock.
Stateful developer environments represent the next step in applying advanced AI models, giving models access to resources such as computing power, memory, and identity. With a stateful runtime environment, developers can retain context, preserve previous work, and draw on multiple software tools and data sources. These environments are built to support active projects and workflows.
These stateful developer environments will be optimized for AWS's infrastructure and will work with Amazon Bedrock AgentCore and other AWS services, so customers' AI applications and agents will run smoothly alongside their other AWS applications. The stateful runtime environment is expected to launch in the next few months.
Making OpenAI’s most advanced enterprise platform available to AWS customers
AWS will be the only third-party cloud provider for OpenAI Frontier. This will give more businesses access to OpenAI’s most advanced enterprise platform as demand for AI grows across industries.
Frontier lets organizations build, deploy, and manage teams of AI agents that work across real business systems with shared context, built-in governance, and strong security. Companies do not need to manage the underlying infrastructure as they move AI from testing to production, and Frontier makes it easy to add powerful AI to existing workflows quickly, securely, and at scale.
OpenAI will use Trainium's computing power to meet growing demand from Amazon customers. OpenAI and AWS are expanding their current $38 billion multi-year agreement by another $100 billion over the next 8 years. As part of this, OpenAI will use about 2 GW of Trainium capacity through AWS to support the stateful runtime environment, Frontier, and other advanced workloads. This deal will help lower costs and make large-scale AI production more efficient.
With this agreement, OpenAI secures long-term computing capacity and works with AWS to integrate custom-built silicon into its larger computing footprint. This setup lets businesses use AI on demand without managing the underlying infrastructure.
This commitment covers both Trainium 3 and the upcoming Trainium 4 chips, which will support many advanced AI workloads. Trainium 4 is expected to be available in 2027 and will offer significantly better performance, including higher FP4 compute, more memory bandwidth, and greater high-bandwidth memory capacity to support more powerful AI systems.
Custom models will be available to support Amazon's customer-facing applications
OpenAI and Amazon will collaborate to develop custom models for Amazon developers to use in customer-facing applications. Amazon teams will be able to adapt OpenAI models for different AI products and agents that serve customers directly. These new models will add to the options already available to Amazon developers, like the Nova family, giving teams more tools to build and deliver at scale.
"OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people," said Sam Altman, co-founder and CEO of OpenAI. "Combining OpenAI's models with Amazon's infrastructure and worldwide reach helps us put powerful AI into the hands of businesses and users at real scale."
"We have many developers and companies eager to run devices powered by OpenAI models on AWS. Our unique collaboration with OpenAI to provide stateful runtime environments will change what's possible for customers building AI apps and agents," said Andy Jassy, President and CEO of Amazon. "We continue to be impressed with what OpenAI is building, and we're excited not only about their decision to go big on our custom AI silicon (Trainium), but also about our opportunity to invest in the company and partner with them over the long term."