2 Sources
[1]
While OpenAI races to build AI data centers, Nadella reminds us that Microsoft already has them | TechCrunch
Microsoft CEO Satya Nadella on Thursday tweeted a video of his company's first deployed massive AI system -- or AI "factory," as Nvidia likes to call them. He promised this is the "first of many" such Nvidia AI factories that will be deployed across Microsoft Azure's global data centers to run OpenAI workloads. Each system is a cluster of more than 4,600 Nvidia GB300 rack computers sporting the much-in-demand Blackwell Ultra GPU chip and connected via Nvidia's super-fast networking tech, InfiniBand. (Besides AI chips, Nvidia CEO Jensen Huang also had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.) Microsoft promises that it will be deploying "hundreds of thousands of Blackwell Ultra GPUs" as it rolls out these systems globally. While the size of these systems is eye-popping (and the company shared plenty more technical details for hardware enthusiasts to peruse), the timing of this announcement is also noteworthy. It comes just after OpenAI, its partner and well-documented frenemy, inked two high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has racked up, by some estimates, $1 trillion in commitments to build its own data centers. And CEO Sam Altman said this week that more were coming. Microsoft clearly wants the world to know that it already has the data centers -- more than 300 in 34 countries -- and that they are "uniquely positioned" to "meet the demands of frontier AI today," the company said. These monster AI systems are also capable of running the next generation of models with "hundreds of trillions of parameters," it said. We expect to hear more about how Microsoft is ramping up to serve AI workloads later this month: Microsoft CTO Kevin Scott will be speaking at TechCrunch Disrupt, which will be held October 27 to October 29 in San Francisco.
[2]
Microsoft's answer to OpenAI's data centers: An AI factory
Microsoft has deployed its first large-scale "AI factory": a massive cluster of over 4,600 Nvidia GB300 rack computers dedicated to running OpenAI's workloads. Microsoft CEO Satya Nadella announced on Thursday the deployment of the company's first large-scale AI system, or "AI factory," which will be used to run OpenAI workloads. In a video post, Nadella stated this is the first of many such systems that will be installed across Microsoft's global Azure data centers. The deployed system is a cluster of more than 4,600 Nvidia GB300 rack computers, which feature Nvidia's Blackwell Ultra GPUs and are connected with InfiniBand networking technology. Microsoft stated its plans include deploying "hundreds of thousands" of these GPUs globally in systems capable of running next-generation AI models with "hundreds of trillions of parameters." The announcement follows recent high-profile deals made by Microsoft's partner, OpenAI, with Nvidia and AMD to build its own AI data centers. In 2025, OpenAI has reportedly made commitments estimated at up to $1 trillion for this effort, and CEO Sam Altman indicated this week that more deals were forthcoming. In its announcement, Microsoft highlighted its existing infrastructure of more than 300 data centers in 34 countries, stating they are positioned to meet the current demands of frontier AI.
Microsoft deploys its first large-scale AI system, showcasing existing infrastructure capabilities amidst OpenAI's push to build its own data centers. The move highlights the intensifying competition in AI computing power.
Microsoft has made a significant move in the AI computing landscape by deploying its first large-scale AI system, dubbed an "AI factory." This development comes as a strategic response to recent initiatives by its partner and competitor, OpenAI, to build its own data centers [1].

Microsoft CEO Satya Nadella announced the deployment in a video tweet, showcasing a massive cluster of over 4,600 Nvidia GB300 rack computers. These racks are equipped with the highly sought-after Blackwell Ultra GPU chips and are interconnected using Nvidia's ultra-fast InfiniBand networking technology [2].

The company has ambitious plans to roll out "hundreds of thousands of Blackwell Ultra GPUs" globally, positioning these systems to run next-generation AI models with "hundreds of trillions of parameters" [1].

In a clear message to the industry, Microsoft emphasized its existing infrastructure of over 300 data centers spread across 34 countries. The company states that these facilities are "uniquely positioned" to "meet the demands of frontier AI today" [1].

The timing of Microsoft's announcement is particularly noteworthy, coming on the heels of OpenAI's recent high-profile deals with Nvidia and AMD to construct its own data centers. In 2025, OpenAI has reportedly committed up to $1 trillion towards building its AI infrastructure, with CEO Sam Altman hinting at more deals to come [2].

Nvidia plays a crucial role in this AI infrastructure race: besides providing the advanced GPU chips, its $6.9 billion acquisition of Mellanox in 2019 gave it control over the InfiniBand technology that is critical to these high-performance AI systems [1].

As the AI computing landscape continues to evolve rapidly, industry watchers are keen to learn more about Microsoft's plans. The company's CTO, Kevin Scott, is scheduled to speak at the upcoming TechCrunch Disrupt event in San Francisco, where he may provide further insights into Microsoft's AI infrastructure strategy [1].
Summarized by Navi
[1]
18 Sept 2025 • Technology
Today • Technology
02 Oct 2025 • Technology