OpenAI is rapidly transforming from a research lab into a global infrastructure powerhouse. Once reliant on Microsoft as its sole cloud provider, the ChatGPT-maker is now partnering with various tech giants, chip suppliers and cloud providers — all in a race to secure enough computing power to fuel its artificial intelligence ambitions.
The latest step in that expansion is a seven-year, $38 billion partnership with Amazon Web Services. The deal gives OpenAI access to AWS’s state-of-the-art cloud computing services, including its Amazon EC2 UltraServers equipped with hundreds of thousands of Nvidia GPUs, and the capacity to scale up to tens of millions of CPUs for its most demanding generative AI workloads. Once that hardware is fully online, expected by the end of 2026, it will allow the company to rapidly train its next generation of AI models and scale core products like ChatGPT and DALL·E.
Until recently, OpenAI’s compute operations were fairly straightforward. From 2019 to 2023, Microsoft was both its largest investor and sole cloud provider under a tightly bound agreement, which required all training and inference workloads to run on Azure. As demand for computing power surged, this exclusivity became a constraint. When OpenAI rolled out its GPT-4.5 model in February 2025, the launch had to be staggered because the company had “run out of GPUs,” according to CEO Sam Altman. Several months later, the two companies renegotiated their contract entirely, freeing OpenAI to buy cloud capacity wherever it can find it.
Who Is OpenAI Partnering With?
OpenAI has struck major cloud deals with AWS, Oracle, CoreWeave and Google Cloud Platform, in addition to its long-term commitment to Microsoft, allowing the company to split its AI workloads across multiple channels. It’s also working with SoftBank, Oracle and the UAE to build new data centers worldwide, and has inked partnerships with chip suppliers Nvidia, AMD and Broadcom. In all, OpenAI has reportedly committed more than $1 trillion in hardware and cloud infrastructure agreements over the next ten years.
Since then, OpenAI has spent hundreds of billions of dollars on cloud agreements, and it plans to spend about $1 trillion more over the next ten years, creating a diverse constellation of channels through which it can split its AI workloads. At the same time, the company is teaming up with Oracle, Tokyo-based holding company SoftBank, the United Arab Emirates and others to construct new data centers worldwide, effectively building out its own global network of AI infrastructure. To power these data centers, OpenAI has also secured chip supply deals with Nvidia, AMD and Broadcom.
One striking feature of the financial structure behind these efforts is how circular it has all become. The same chipmakers that supply OpenAI’s critical hardware needs are also helping fund and profit from the very data centers that rely on them. That loop guarantees ongoing access to cutting-edge components, but it also concentrates risk within a tight nexus of interdependent tech giants. A disruption anywhere — whether from supply chain bottlenecks, energy shortages or regulatory hurdles — could send ripples across the entire system.
Yet, this rapid expansion cements OpenAI’s spot among the industry’s biggest spenders. For context, Amazon, Google, Meta and Microsoft have collectively poured more than $360 billion in the past year alone into capital expenditures, most of it on AI infrastructure. OpenAI’s new partnerships signal that it wants to compete not only in software but in the physical machinery of artificial intelligence itself, from the chips and clusters to the energy pipelines that make it all possible.
In its new phase, OpenAI isn’t just renting space in the cloud anymore — it’s trying to own a piece of the sky.
OpenAI’s Collection of Cloud Deals
Here’s a snapshot of OpenAI’s growing cloud ecosystem. Together, these deals reveal how the company is locking in the compute power it needs to drive its AI models forward.
Amazon Web Services (AWS)
OpenAI’s $38 billion partnership with AWS, announced in late 2025, marks its first major infrastructure deal with Amazon. The seven-year agreement gives OpenAI access to AWS’s large-scale cloud resources — including Nvidia GPU clusters and other AI-focused hardware — enabling the company to distribute workloads across multiple providers and reduce the risk of downtime from any single source. The deal also grants access to Amazon’s global compute and networking footprint, with the goal of boosting the reliability of popular services like ChatGPT. As AWS continues to expand its own AI infrastructure, the partnership speaks to both companies’ interest in building resilient, high-performance computing networks.
Microsoft Azure
Microsoft remains OpenAI’s main cloud and infrastructure partner, powering the core compute needed to train and run its large language models. The partnership, which started in 2019, has grown through several major investments, including a reported $250 billion purchase commitment in a 2025 restructuring. That deal ended Azure’s exclusivity, giving OpenAI the freedom to work with other cloud providers while keeping Microsoft as a key part of its infrastructure. Microsoft is also OpenAI’s largest investor, and has integrated its models across products like Copilot and Azure AI services. The new structure reflects a more complicated relationship between the two companies, where OpenAI gains operational independence without severing its long-standing, strategic ties to Microsoft.
Oracle
OpenAI’s deal with data management giant Oracle is one of its largest reported infrastructure commitments ever. The $300 billion, five-year agreement allows OpenAI to tap into Oracle’s massive data center and compute capacity for AI workloads. Part of America’s larger $500 billion Stargate Project, the initiative aims to build hyperscale facilities to support the company’s next-generation models. The agreement marks a shift from traditional cloud contracts toward long-term infrastructure leasing, reflecting just how resource-intensive artificial intelligence has become, while positioning Oracle alongside Microsoft and Amazon as a major player in the AI cloud market.
CoreWeave
CoreWeave, a specialist AI cloud provider known for its high-performance GPU infrastructure, signed a $22.4 billion deal with OpenAI to deliver dedicated compute capacity. This agreement builds on several earlier contracts, and indicates an increasing reliance on so-called “neocloud” GPU-as-a-service providers that cater specifically to large-scale AI workloads, unlike general-purpose hyperscalers. The partnership underscores OpenAI’s strategy of mixing large, traditional suppliers like AWS with smaller firms to balance cost and performance. It also shows how newcomers are carving out their piece of the AI boom by offering more agile solutions that big cloud providers can’t always match.
Google Cloud Platform (GCP)
Despite being direct competitors in the generative AI space, OpenAI turned to Google Cloud Platform in June 2025 when it needed more computing resources to support ChatGPT’s global expansion. While financial terms haven’t been disclosed, Google’s infrastructure helps supplement OpenAI’s existing network and distribute workloads more efficiently. The surprising alliance shows the pragmatism driving collaborations across the industry. Adding GCP to its infrastructure portfolio deepens OpenAI’s multi-cloud strategy as the company’s computing needs continue to evolve.
What Happens If Something Goes Wrong?
If one of OpenAI’s cloud deals goes down — whether due to a service outage or a contractual snag — the impact depends on how its workloads and commitments are spread across different providers.
With multiple partners, OpenAI can reroute tasks like model training or running ChatGPT to other infrastructure, so a single technical outage or missed delivery doesn’t halt operations. This setup also helps the company keep continued access to critical hardware, like scarce, high-performance GPUs, even if one provider’s system goes offline or a partnership falls through. That said, if several providers hit problems at the same time, training and service performance will likely hit a bottleneck.
Overall, OpenAI’s multi-cloud approach acts as both a built-in safety net and a strategic hedge, giving the company flexibility to manage both infrastructure hiccups and business-level risks.
What These Deals Mean for the AI Industry
All these new partnerships mark a turning point — not just for OpenAI itself, but for the vast infrastructure underpinning the entire AI industry. Processing power has shifted from a back-end detail to a strategic asset, with the rapid buildout of data centers becoming a key engine driving U.S. economic growth. Meanwhile, by spreading hundreds of thousands of GPUs and tens of millions of CPUs across multiple cloud providers, OpenAI is moving beyond its longtime dependence on Microsoft Azure, helping to ensure its models can train and run without interruption. The approach isn’t entirely new, but its sheer scale and speed are unprecedented, underscoring just how valuable access to vast, reliable compute has become.
For the major cloud providers, landing an OpenAI partnership is a huge win. These companies are becoming central hubs of modern AI development because they can concentrate resources in ways that fast-track innovation. But the financial arrangements backing these contracts have grown unusually circular. OpenAI receives billions in investments only to funnel much of that money back into the same companies that supply its chips and cloud infrastructure. It’s an ecosystem that fuels itself, creating systemic risk if AI growth fails to keep pace with all the spending. And the stakes are enormous: OpenAI’s total cloud commitments now approach $600 billion, highlighting just how capital-intensive building advanced AI has become — and how fragile the system could be if it all falls apart.
For businesses, the knock-on effects will be felt across sectors, particularly in industries like healthcare and fintech, where faster, more reliable AI capabilities could supercharge everything from fraud detection to drug discovery. At the same time though, running these massive cloud operations raises tough questions about energy sourcing, efficiency and sustainability, since training and operating AI models now consumes power on a scale once reserved for national research labs.
Ultimately, OpenAI’s expansion shows that cloud infrastructure isn’t just a back-end service anymore — it’s the strategic engine powering the AI race as a whole. Whoever controls the most compute will shape not only who wins, but how quickly the technology itself advances.
Frequently Asked Questions
Why is OpenAI partnering with so many different cloud providers?
OpenAI appears to be diversifying its compute sources to avoid bottlenecks, reduce dependency on long-time partner Microsoft Azure and ensure it has enough GPUs and CPUs to train and deploy its increasingly advanced AI models.
What does OpenAI’s deal with AWS include?
OpenAI has entered into a seven-year, $38 billion partnership with Amazon’s cloud service, AWS. Through the deal, OpenAI can use hundreds of thousands of Amazon’s Nvidia GPUs and expand its computing power to tens of millions of CPUs as needed. OpenAI will begin using AWS infrastructure immediately through this deal, with full capacity expected by 2026 and the option to expand into 2027 and beyond, according to Amazon.
Why does AI need so much computing power, and what role do data centers play?
Training large AI models involves processing enormous datasets through billions (or even trillions) of parameters, which requires massive compute clusters made up of specialized AI chips. This all happens inside data centers — large facilities filled with servers that store and process information for digital services, such as artificial intelligence. Most companies rely on cloud providers like Amazon, Google and Microsoft to access these data centers, allowing them to scale their AI systems quickly without having to build their own infrastructure.
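To get a feel for the scale involved, here is a hedged back-of-envelope sketch using the widely cited approximation that training takes roughly 6 floating-point operations per model parameter per training token. The model size, token count and per-GPU throughput below are hypothetical illustrations, not figures from this article or any specific OpenAI model:

```python
# Rough estimate of training compute using the common ~6 FLOPs
# per parameter per token approximation. All inputs are hypothetical.
def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total floating-point operations to train a model."""
    return 6 * parameters * tokens

# A hypothetical 70-billion-parameter model trained on 1.4 trillion tokens:
flops = training_flops(70e9, 1.4e12)
print(f"{flops:.2e} FLOPs")  # on the order of 10^23 operations

# Assuming a (hypothetical) sustained 1e15 FLOPs/second per GPU,
# convert that total into GPU-days of work:
gpu_days = flops / 1e15 / 86400
print(f"{gpu_days:,.0f} GPU-days")  # thousands of GPU-days for one run
```

Numbers like these are why a single training run can occupy thousands of GPUs for weeks, and why companies compete so fiercely for data center capacity.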
