7 Sources
[1]
Microsoft building datacenter superclusters
The 100-trillion-parameter models of the near future can't be built in one place.

Microsoft believes the next generation of AI models will use hundreds of trillions of parameters. To train them, it's not just building bigger, more efficient datacenters - it's started connecting distant facilities using high-speed networks spanning hundreds or thousands of miles. The first node of this multi-datacenter cluster came online in October, connecting Microsoft's datacenter campus in Mount Pleasant, Wisconsin, to a facility in Atlanta, Georgia. The software giant's goal is to eventually scale AI workloads across datacenters using methods similar to those employed to distribute high-performance computing and AI workloads across multiple servers today.

"To make improvements in the capabilities of the AI, you need to have larger and larger infrastructure to train it," said Microsoft Azure CTO Mark Russinovich in a canned statement. "The amount of infrastructure required now to train these models is not just one datacenter, not two, but multiples of that."

These aren't any ordinary datacenters, either. The facilities are the first in a family of bit barns Microsoft is calling its "Fairwater" clusters. These facilities are two stories tall, use direct-to-chip liquid cooling, and consume "almost zero water," Microsoft boasts. Eventually, Microsoft envisions this network of datacenters will scale to hundreds of thousands of diverse GPUs chosen to match workloads and availability. At its Atlanta facility, Microsoft will deploy Nvidia's GB200 NVL72 rack systems, each rated to host over 120 kilowatts of kit and to offer 720 petaFLOPS of sparse FP8 compute for training, helped by the presence of 13TB of HBM3e memory.

By connecting its datacenters, Microsoft will be able to train much larger models and give itself the chance to choose different locations for its facilities - meaning it can choose places with cheap land, cooler climates, and - perhaps most importantly - access to ample power.
Microsoft doesn't specify what technology it's using to bridge the roughly 1,000 kilometer (as the vulture flies) distance between the two datacenters, but it has plenty of options. Last month, Cisco revealed the Cisco 8223, a 51.2 Tbps router designed to connect AI datacenters up to 1,000 kilometers apart. Broadcom intends its Jericho 4 hardware, announced in August, to do the same job and provide similar bandwidth. Meanwhile, Nvidia, which has quietly become one of the largest networking vendors in the world on the back of the AI boom, has teased its Spectrum-XGS network switches, with crypto-miner-turned-rent-a-GPU outfit CoreWeave signed up as an early adopter.

We've asked Microsoft to comment on which of these technologies it's using at its Fairwater facilities, and will update this story if we hear back. But Redmond's close ties to Nvidia certainly make Spectrum-XGS a likely contender. Microsoft is famously one of the few hyperscalers that's standardized on Nvidia's InfiniBand network protocol over Ethernet or a proprietary data fabric like Amazon Web Services' EFA for its high-performance compute environments.

While Microsoft has no shortage of options for stitching datacenters together, distributing AI workloads without incurring bandwidth- or latency-related penalties remains a topic of interest to researchers. They're making good progress: Readers may recall that earlier this year, Google's DeepMind team published a report showing that many of the challenges can be overcome by compressing models during training and strategically scheduling communications between datacenters. ®
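To give a sense of what "compressing models during training" buys you over a wide-area link: one common technique in this family (not necessarily the one DeepMind used) is top-k gradient sparsification, where each site transmits only the largest-magnitude fraction of gradient entries per exchange. A minimal illustrative sketch in NumPy, with all names and the 1% keep-rate chosen here for illustration:

```python
import numpy as np

def topk_sparsify(grad, k_frac=0.01):
    """Keep only the largest-magnitude fraction of gradient entries.

    Returns (indices, values) -- the compressed payload that would
    actually cross the wide-area link between datacenters.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * k_frac))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # positions of top-k entries
    return idx, flat[idx]

def desparsify(idx, vals, shape):
    """Rebuild a dense gradient from the compressed payload."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

# Simulate one cross-site exchange for a 10M-parameter gradient.
rng = np.random.default_rng(0)
grad = rng.standard_normal(10_000_000)
idx, vals = topk_sparsify(grad, k_frac=0.01)

full_bytes = grad.nbytes
compressed_bytes = idx.nbytes + vals.nbytes
print(f"wire traffic reduced: {full_bytes / compressed_bytes:.0f}x smaller")
```

In practice, schemes like this pair the sparsification with error feedback (accumulating the dropped entries locally for later rounds) so that training accuracy doesn't suffer; the point here is simply that the bytes crossing the inter-datacenter link can shrink by orders of magnitude.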
[2]
What is an AI 'superfactory'? Microsoft unveils new approach to building and linking data centers
Microsoft says it has linked massive data centers in Wisconsin and Atlanta -- roughly 700 miles and five states apart -- through a high-speed fiber-optic network to operate as a unified system. The announcement Wednesday morning marks the debut of what the company is calling its AI "superfactory," a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites -- a setup that Microsoft describes as the world's first "planet-scale AI superfactory."

Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft says the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time. Microsoft says it's using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system.

By linking sites across regions, the company says it's able to pool computing capacity, redirect workloads dynamically, and distribute the massive power requirements across the grid so that it isn't dependent on available energy resources in one part of the country. This unified supercomputer will train and run the next generation of AI models for key partners like OpenAI, France's Mistral AI, and xAI, and for Microsoft's own internal models.

The new approach shows the rapid pace of the AI infrastructure race among the world's largest tech companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter -- much of it on data centers and GPUs -- to keep up with what it sees as soaring AI demand.
Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets, collectively putting hundreds of billions into new facilities, chips, and systems to train and deploy AI models. Some analysts and investors see echoes of a tech bubble in the rush to build AI infrastructure, which could sour if business customers don't realize enough value from AI in the near term. Microsoft, Amazon and others say the demand is real, not speculative, pointing to long-term contracts as evidence.
[3]
Microsoft links 700-mile-apart data centers for first AI superfactory
On Wednesday, the Redmond-based company announced it is building a special class of data centers, known as Fairwater data centers, designed to work as a cohesive unit sharing common architecture and design. The Fairwater data centers are interconnected to ensure a seamless and rapid data flow between them. This interconnection enables Fairwater sites situated across different geographies to train AI models in weeks, rather than months. Microsoft's network of Fairwater data centers is designed to help OpenAI and the Microsoft AI Superintelligence Team use advanced GPUs for operational compute tasks. "This is about building a distributed network that can act as a virtual supercomputer for tackling the world's biggest challenges in ways that you just could not do in a single facility," said Alistair Speirs, Microsoft general manager focusing on Azure infrastructure. As AI models become increasingly large, every new generation requires more parameters -- the internal settings that enable AI to understand and process information, and produce accurate answers.
[4]
Microsoft brings Atlanta-based AI 'super factory' online - SiliconANGLE
Microsoft Corp. said today it's bringing online its first artificial intelligence "super factory" with the grand opening of its newest multibillion-dollar data center facility in Atlanta. The company said it's going to link the massive new server farm with existing data center infrastructure elsewhere in the country to ensure it can provide the immense computing power its customers require without energy constraints. The announcement is a key milestone for Microsoft, which has ambitions to double its data center footprint over the next two years.

The new Atlanta site is said to be a part of the company's larger "Fairwater" network, a new class of server hubs dedicated to training and running AI models. Microsoft said the facility will host hundreds of thousands of Nvidia Corp.'s most powerful graphics processing units and other AI accelerators, and have dedicated high-speed links to other facilities that make up the greater Fairwater network.

The complex spans more than one million square feet and sits on 85 acres of land, and it's also the company's first data center to feature two floors, which means it can squeeze in double the amount of computing infrastructure compared to older facilities. Microsoft said the design supports a greater degree of efficiency in terms of speed and power consumption. The facility also benefits from a new liquid-cooling system that allows it to pack thousands of individual GPUs much more tightly than before. Because it recycles virtually all of the water it uses, its consumption is equivalent to that of approximately 20 U.S. households, the company added.

To connect all of those GPUs, Microsoft has installed more than 120,000 miles of fiber-optic cables at the facility, allowing them to send and receive data at close to the speed of light.
Microsoft Executive Vice President of Cloud and AI Scott Guthrie said this network can be likened to a "cutting-edge express lane" dedicated to AI traffic. Microsoft didn't say how much the Atlanta data center cost to build, nor did it comment on its annual electricity consumption, but it said the broader Fairwater network will use "multigigawatts" of power.

To ensure it can access enough power, Fairwater has been distributed across multiple geographic regions, the company said. For instance, one of the other data centers is located in Wisconsin, roughly 700 miles from Atlanta. This means Microsoft can tap into various power sources and avoid exhausting the available energy in one location.

One of the Atlanta facility's biggest customers will be OpenAI, which is already heavily reliant on Microsoft for its compute infrastructure needs, but it will also cater to other AI firms, including the French startup Mistral AI and Elon Musk's xAI Corp. In addition, Microsoft will reserve some of the capacity for its own needs, including training its proprietary models.

The Redmond-based company recently revealed that it spent a staggering $34 billion on capital expenditures during its fiscal first quarter, with the bulk of that capital going towards its data center build-out. It also said it's planning to increase its investments in infrastructure over the remainder of the fiscal year. It's far from the only company doing this - in fact, a number of technology giants, including Google LLC, Oracle Corp., Meta Platforms Inc. and Amazon Web Services Inc., have similar plans. Combined, those companies are set to spend around $400 billion on AI infrastructure this year.
[5]
Microsoft Builds Massive AI 'Super Factory' In Atlanta To Power OpenAI And Elon Musk's xAI - Microsoft (NASDAQ:MSFT)
Microsoft Corp (NASDAQ:MSFT) is accelerating its data center expansion with the launch of an artificial intelligence "super factory" in Atlanta -- a massive two-story complex designed to link seamlessly with similar hubs across its Fairwater network to deliver enormous computing power. The company plans to double its global data center footprint within two years, with the Atlanta site emerging as one of its most important new facilities. Built specifically for AI training, the Fairwater hub will house hundreds of thousands of Nvidia Corp (NASDAQ:NVDA) GPUs and feature high-speed interconnections with other Microsoft data centers, the Wall Street Journal reported on Wednesday.

OpenAI, Mistral AI, and Elon Musk's xAI will use the facility alongside Microsoft's own AI model training operations. Microsoft, with a market cap of nearly $3.78 trillion, has gained 19% year-to-date. Microsoft spent over $34 billion in capital expenditures during its fiscal first quarter and plans to boost infrastructure investments further next year as part of a broader industry surge -- one that's expected to reach $400 billion in AI spending this year.

The company says Fairwater's two-story design improves network efficiency and reduces latency, while a liquid-cooling system enables GPUs to be densely packed. The one-million-square-foot Atlanta complex spans 85 acres and consumes about as much water annually as 20 U.S. households. Microsoft is also laying 120,000 miles of fiber-optic cables to link Fairwater locations, enabling data to travel "nearly at the speed of light."

Bank of America Securities analyst Brad Sills highlighted renewed momentum in Microsoft's cloud business, driven by strong enterprise demand for Azure security services and ongoing AI adoption.
He cited consistent deal activity from channel partners and rising investment in AI and data infrastructure as evidence of the company's growing role in corporate technology roadmaps. Sills identified Azure growth, enterprise Office adoption, and strategic AI infrastructure expansion -- including Microsoft's work with Aligned Data Centers, BlackRock Inc (NYSE:BLK) and Nvidia -- as key growth catalysts. The analyst projected fiscal 2026 revenue of $322.1 billion and EPS of $15.24.

MSFT Price Action: MSFT stock traded 1.25% lower to $502.24 at last check on Wednesday.
[6]
Microsoft fires up 'AI superfactory' powered by hundreds of thousands of Nvidia GPUs (MSFT:NASDAQ)
Microsoft (MSFT) has brought its latest datacenter online in Atlanta, which it dubs an 'AI superfactory' as it directly connects with other datacenters to deliver the power of hundreds of thousands of Nvidia (NVDA) GPUs to support AI workloads. The Atlanta AI superfactory connects with other datacenters to deliver massive GPU power and support large-scale, unified AI workloads for Microsoft and partners like OpenAI. Interconnected Fairwater datacenters form a 'fungible fleet,' seamlessly running AI workloads across sites for maximum efficiency and performance, and preventing GPU idleness. Microsoft uses dedicated fiber networks and clusters of NVIDIA GPUs to flexibly allocate hardware for diverse AI workloads, ensuring resources are optimized and readily available.
[7]
Microsoft builds AI super factory in Atlanta as part of data center expansion - WSJ By Investing.com
Investing.com -- Microsoft is unveiling a new AI "super factory" in Atlanta on Wednesday as part of its plan to double its data center footprint over the next two years, according to a report from the Wall Street Journal.

The Atlanta site belongs to Microsoft's Fairwater network of artificial-intelligence centers, a new class of hubs built specifically for AI training. The facility will house hundreds of thousands of Nvidia graphics processing units and feature dedicated high-speed connections to other Fairwater locations. Major AI companies including OpenAI, France's Mistral AI, and Elon Musk's xAI are among Fairwater's customers. Microsoft will also use the site for training its own proprietary models.

Microsoft invested more than $34 billion in capital expenditures during its fiscal first quarter and plans to increase its total infrastructure investments over the next fiscal year. The company is part of a tech industry trend that will see companies collectively invest about $400 billion in AI efforts this year, driven by high demand for AI computing capacity.

The company claims its Fairwater data centers achieve greater efficiency in both speed and power consumption. The two-story design allows for better networking and reduced latency, while a novel liquid-cooling system enables GPUs to be placed close together. The Atlanta complex covers more than 1 million square feet across 85 acres. Microsoft says the cooling system uses approximately the same amount of water annually as 20 average U.S. households, though data centers generally require significant energy and water resources.
Microsoft debuts its first AI 'superfactory' by linking data centers in Wisconsin and Atlanta through high-speed networks, creating a distributed supercomputer capable of training massive AI models with hundreds of trillions of parameters.
Microsoft has officially launched its first artificial intelligence "superfactory," marking a significant milestone in the company's ambitious infrastructure expansion. The tech giant announced on Wednesday that it has successfully connected massive data centers in Mount Pleasant, Wisconsin, and Atlanta, Georgia—roughly 700 miles apart—through a high-speed fiber-optic network to operate as a unified supercomputing system [1][2].

This groundbreaking approach represents what Microsoft describes as the world's first "planet-scale AI superfactory," designed specifically to train and run advanced AI models across connected sites. Unlike traditional cloud data centers that handle millions of separate applications for different customers, these new facilities are engineered to manage single, massive AI workloads distributed across multiple locations [2].
The connected facilities represent the first nodes in Microsoft's "Fairwater" network, a new class of data centers built exclusively for AI operations. These two-story facilities utilize direct-to-chip liquid cooling systems and consume "almost zero water," according to Microsoft's specifications [1]. The Atlanta facility alone spans over one million square feet across 85 acres, featuring a design that allows for double the computing infrastructure compared to traditional single-story data centers [4].
Each facility houses hundreds of thousands of Nvidia GPUs, including the company's most powerful GB200 NVL72 rack systems. These systems are rated to host over 120 kilowatts of equipment and offer 720 petaFLOPS of sparse FP8 compute for training, supported by 13TB of HBM3e memory [1]. The facilities are interconnected through what Microsoft calls an AI Wide Area Network (AI-WAN), featuring more than 120,000 miles of fiber-optic cables that enable data transmission at nearly the speed of light [4].

Microsoft's commitment to this infrastructure represents a substantial financial investment, with the company spending over $34 billion on capital expenditures in its most recent quarter alone, primarily focused on data centers and GPUs [2][5]. The company plans to double its global data center footprint within the next two years as part of a broader industry surge expected to reach $400 billion in AI infrastructure spending this year [5].

This massive investment reflects the rapid escalation in AI infrastructure requirements. As Azure CTO Mark Russinovich explained, "To make improvements in the capabilities of the AI, you need to have larger and larger infrastructure to train it. The amount of infrastructure required now to train these models is not just one datacenter, not two, but multiples of that" [1].

The Fairwater network will serve several high-profile AI companies and initiatives. OpenAI, already heavily reliant on Microsoft's infrastructure, will be one of the primary users of the new facility. Other major clients include France's Mistral AI and Elon Musk's xAI, alongside Microsoft's own AI model training operations [4][5].

The distributed approach enables Microsoft to train much larger models while providing flexibility in facility placement. By connecting datacenters across different regions, the company can choose locations with advantages such as cheap land, cooler climates, and crucially, access to ample power resources [1]. This geographic distribution allows Microsoft to pool computing capacity, redirect workloads dynamically, and distribute massive power requirements across the electrical grid [2].
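Even at "nearly the speed of light," a 700-mile separation imposes a physical floor on latency, which is why cross-site AI workloads need careful communication scheduling. A back-of-the-envelope sketch, assuming the straight-line 700-mile figure from the sources and the typical ~2/3 c propagation speed of light in optical fiber (real fiber routes are longer, so actual latency is higher):

```python
# Rough lower bound on inter-site latency between the Wisconsin
# and Atlanta facilities, from distance and fiber propagation speed.
MILES_TO_KM = 1.609344
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_FRACTION = 2 / 3        # light in glass travels at roughly 2/3 c

distance_km = 700 * MILES_TO_KM
one_way_ms = distance_km / (C_VACUUM_KM_S * FIBER_FRACTION) * 1000
print(f"one-way: {one_way_ms:.1f} ms, round trip: {2 * one_way_ms:.1f} ms")
# → one-way: 5.6 ms, round trip: 11.3 ms
```

A round trip of roughly 11 milliseconds is an eternity relative to the microsecond-scale exchanges inside a GPU rack, which is why naive synchronous gradient exchange across sites would stall the GPUs, and why the compression and scheduling techniques mentioned in source [1] matter.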