OpenAI Aims for Massive GPU Expansion, Signaling Strategic Shift in AI Infrastructure
San Francisco, CA – OpenAI CEO Sam Altman has reportedly outlined an ambitious goal for the artificial intelligence research company: to acquire 100 million Graphics Processing Units (GPUs) to power its future AI models. This staggering number, which could cost upwards of $3 trillion, underscores OpenAI’s commitment to scaling its operations and maintaining its leadership in the rapidly evolving AI landscape.
While Microsoft’s Azure remains OpenAI’s primary cloud provider, the company is actively diversifying its infrastructure partnerships. Recent reports indicate collaborations with Oracle and exploration of Google’s Tensor Processing Unit (TPU) accelerators. This strategic move mirrors a broader industry trend in which major tech players like Meta, Amazon, and Google are increasingly investing in in-house chip development and high-bandwidth memory (HBM) solutions.
The surge in demand for GPUs directly benefits companies like SK Hynix, a key supplier of HBM, a critical component for AI training. Industry insiders suggest that customers like OpenAI are playing a meaningful role in defining the specifications for GPUs and HBM, tailoring them to their specific needs. SK Hynix is poised for considerable growth, with forecasts predicting a record operating profit in the second quarter of 2025, driven in part by this escalating demand.
OpenAI’s relationship with the SK Group appears to be strengthening. Recent meetings between SK Group Chairman Chey Tae-won, SK Hynix CEO Kwak Noh-jung, and Sam Altman suggest a concerted effort to solidify their position within the AI infrastructure supply chain. This collaboration builds upon previous engagements, including SK Telecom’s AI competition with ChatGPT and its participation in the MIT GenAI Impact Consortium.
However, OpenAI’s rapid expansion is not without its challenges. Reports suggest that SoftBank may be reconsidering its investment amid concerns about the financial sustainability of such ambitious scaling. Achieving the 100 million GPU target would necessitate not only substantial capital but also significant advancements in compute efficiency, manufacturing capacity, and global energy infrastructure. For now, the goal is viewed as an aspirational statement of intent rather than a concrete roadmap.