10 Sources
[1]
Microsoft building datacenter superclusters
The 100 trillion-parameter models of the near future can't be built in one place

Microsoft believes the next generation of AI models will use hundreds of trillions of parameters. To train them, it's not just building bigger, more efficient datacenters - it's started connecting distant facilities using high-speed networks spanning hundreds or thousands of miles.

The first node of this multi-datacenter cluster came online in October, connecting Microsoft's datacenter campus in Mount Pleasant, Wisconsin, to a facility in Atlanta, Georgia. The software giant's goal is to eventually scale AI workloads across datacenters using methods similar to those employed to distribute high-performance computing and AI workloads across multiple servers today.

"To make improvements in the capabilities of the AI, you need to have larger and larger infrastructure to train it," said Microsoft Azure CTO Mark Russinovich in a canned statement. "The amount of infrastructure required now to train these models is not just one datacenter, not two, but multiples of that."

These aren't any ordinary datacenters, either. The facilities are the first in a family of bit barns Microsoft is calling its "Fairwater" clusters. These facilities are two stories tall, use direct-to-chip liquid cooling, and consume "almost zero water," Microsoft boasts. Eventually, Microsoft envisions this network of datacenters will scale to hundreds of thousands of diverse GPUs chosen to match workloads and availability.

At its Atlanta facility, Microsoft will deploy Nvidia's GB200 NVL72 rack systems, each rated to host over 120 kilowatts of kit and to offer 720 petaFLOPS of sparse FP8 compute for training, helped by the presence of 13 TB of HBM3e memory.

By connecting its datacenters, Microsoft will be able to train much larger models and gains more freedom in siting its facilities - places with cheap land, cooler climates, and - perhaps most importantly - access to ample power.

Microsoft doesn't specify what technology it's using to bridge the roughly 1,000 kilometer (as the vulture flies) distance between the two datacenters, but it has plenty of options. Last month, Cisco revealed the Cisco 8223, a 51.2 Tbps router designed to connect AI datacenters up to 1,000 kilometers away. Broadcom intends its Jericho 4 hardware, announced in August, to do the same job and provide similar bandwidth. Meanwhile, Nvidia, which has quietly become one of the largest networking vendors in the world on the back of the AI boom, has teased its Spectrum-XGS network switches, with crypto-miner-turned-rent-a-GPU outfit CoreWeave signed up as an early adopter.

We've asked Microsoft to comment on which of these technologies it's using at its Fairwater facilities, and will update this story if we hear back. But Redmond's close ties to Nvidia certainly make Spectrum-XGS a likely contender. Microsoft is famously one of the few hyperscalers that's standardized on Nvidia's InfiniBand network protocol over Ethernet or a proprietary data fabric like Amazon Web Services' EFA for its high-performance compute environments.

While Microsoft has no shortage of options for stitching datacenters together, distributing AI workloads without incurring bandwidth- or latency-related penalties remains a topic of interest to researchers.
They're making good progress: Readers may recall that earlier this year, Google's DeepMind team published a report showing that many of the challenges can be overcome by compressing models during training and strategically scheduling communications between datacenters. ®
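To make the compression idea concrete, here is a minimal sketch using PyTorch's stock DDP communication hooks. It assumes the process group has already been initialized (e.g. via torchrun) and spans both sites; it illustrates the general technique of shrinking gradients before they cross a slow inter-site link, not Microsoft's or DeepMind's actual training stack.

```python
# Sketch: cut cross-datacenter traffic by compressing gradients in
# data-parallel training. Assumes torch.distributed is already
# initialized and the process group spans both sites.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

def train(model: torch.nn.Module, loader, device):
    model = DDP(model.to(device))
    # Roughly halve all-reduce bandwidth by shipping gradients as FP16
    # over the wide-area link; DDP decompresses before the optimizer step.
    # None means "use the default (global) process group".
    model.register_comm_hook(None, default_hooks.fp16_compress_hook)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for batch, target in loader:
        opt.zero_grad(set_to_none=True)
        loss = torch.nn.functional.cross_entropy(
            model(batch.to(device)), target.to(device))
        loss.backward()  # compressed all-reduce overlaps with backward
        opt.step()
```

In practice, research systems layer further tricks on top of this (sparsification, quantization below FP16, topology-aware scheduling), but the hook mechanism shows where such compression plugs into an ordinary training loop.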
[2]
Powering AI Superfactories, NVIDIA and Microsoft Integrate Latest Technologies for Inference, Cybersecurity, Physical AI
Timed with the Microsoft Ignite conference running this week, NVIDIA is expanding its collaboration with Microsoft, including through the adoption of next-generation NVIDIA Spectrum-X Ethernet switches for the new Microsoft Fairwater AI superfactory, powered by the NVIDIA Blackwell platform. The collaboration brings new integrations across Microsoft 365 Copilot, as well as the public preview of next-generation Azure NC Series VMs powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, NVIDIA Nemotron integrations to accelerate AI for Microsoft SQL Server 2025, capabilities for onboarding AI agents in Microsoft 365, and optimizations for high-performance inference, cybersecurity and physical AI.

Microsoft's AI Superfactory connects the landmark Fairwater data center in Wisconsin with a new, state-of-the-art facility in Atlanta, Georgia. This massive-scale infrastructure will integrate hundreds of thousands of NVIDIA Blackwell GPUs for large-scale training. In addition, Microsoft is deploying more than 100,000 Blackwell Ultra GPUs in NVIDIA GB300 NVL72 systems globally for inference.

"Our collaboration with NVIDIA is built on driving innovation across the entire system and full stack, from silicon to services," said Nidhi Chappell, corporate vice president of product management at Microsoft. "By coupling Microsoft Azure's unmatched data center scale with NVIDIA's accelerated computing, we are maximizing AI data center performance and efficiency, which is of paramount importance for our customers leading the new AI era."

The most demanding workloads for OpenAI, the Microsoft AI Superintelligence Team, Microsoft 365 Copilot and Microsoft Foundry services will be powered by this infrastructure. Customers like Black Forest Labs are also using NVIDIA GB200 NVL72 systems to train next-generation multimodal FLUX models that power visual intelligence.

To connect this massive infrastructure, Microsoft is deploying next-generation NVIDIA Spectrum-X Ethernet switches in its Fairwater AI data centers -- the largest and most sophisticated AI factories ever built -- delivering the performance, scale and efficiency required for OpenAI to run large-scale AI models and applications.

New Azure NCv6 Series VMs with NVIDIA RTX PRO 6000 Blackwell GPUs are now in public preview on Azure, expanding the Blackwell platform to provide right-sized acceleration for multiple workloads, including multimodal agentic AI, industrial digitalization with NVIDIA Omniverse libraries, scientific simulation and visual computing. This flexibility extends from the cloud to the edge with Azure Local, enabling powerful sovereign AI solutions while bringing low-latency, real-time AI to wherever data needs to reside. This allows enterprises to seamlessly develop, deploy and manage AI-powered digital twins and generative AI applications with NVIDIA RTX PRO 6000 Blackwell GPUs from the Azure cloud directly to their factory floors, on-premises data centers or secure edge locations.

The NVIDIA platform on Azure, spanning NVIDIA Blackwell and Hopper GPUs, accelerates the latest models from the Microsoft AI Superintelligence Team, including text (MAI-1-preview), real-time voice (MAI-Voice-1) and high-fidelity image generation (MAI-Image-1) -- bringing new multimodal experiences across Bing Image Creator and Microsoft Copilot.

Central to NVIDIA's collaboration with Microsoft is building a fungible fleet -- a flexible, continuously modernized infrastructure that can accelerate any workload with maximum efficiency.
This is achieved through continuous, full-stack software optimizations that deliver compounding performance gains and maximize throughput across the entire AI lifecycle and across multiple NVIDIA architectures on Azure. The gains also extend to workloads beyond generative AI, including data processing, vector search, databases, digital twins, scientific computing and 3D design. This co-engineering saves significant costs for customers, making AI projects that were once theoretical now economically viable. For example, the continuous full-stack optimization work has directly contributed to an over 90% drop in the price of popular GPT models for end users on Azure in two years.

Ongoing optimization work now extends to Microsoft Foundry, where the NVIDIA TensorRT-LLM library helps boost throughput, reduce latency and lower costs for a wide range of popular open models.

NVIDIA and Microsoft have also partnered to optimize their fleet for AI workload performance through the NVIDIA DGX Cloud Benchmarking suite. Engineering teams from both companies worked closely together to identify bottlenecks and implement infrastructure tuning, driving performance gains. By achieving 95% of the performance possible using the NVIDIA reference architecture, Microsoft was named an Exemplar Cloud for H100 training.

NVIDIA and Microsoft are integrating AI into the core of the enterprise, unlocking decades of proprietary data stored in one of the world's most trusted databases. NVIDIA is accelerating AI in the new Microsoft SQL Server 2025 by integrating it with NVIDIA Nemotron open models and NVIDIA NIM microservices. This solution delivers GPU-optimized, secure and scalable retrieval-augmented generation directly where enterprise data lives, in the cloud or on premises.

Plus, the collaboration extends to the new frontier of agentic AI in the workplace. The NVIDIA NeMo Agent Toolkit now connects with Microsoft Agent 365, enabling developers to build, deploy and onboard compliant, enterprise-ready AI agents directly into the Microsoft 365 app ecosystem, including Outlook, Teams, Word and SharePoint. To power these new enterprise agents, Microsoft Foundry now offers NVIDIA Nemotron models for digital AI and NVIDIA Cosmos models for physical AI as secure NIM microservices. Developers can use them to build enterprise-grade agentic AI for a vast range of applications that benefit from multimodal intelligence, multilingual reasoning, math, coding and physical AI capabilities.

The collaboration is also tackling cyber threats for enterprises. Microsoft and NVIDIA are collaborating on research for new adversarial learning models, built on the NVIDIA Dynamo-Triton framework and the NVIDIA TensorRT suite of tools, that can help enterprises defend against real-time cybersecurity threats with a 160x performance speedup compared with CPU methods.

NVIDIA and Microsoft are building the future of physical AI. With NVIDIA Omniverse libraries available on Microsoft Azure, NVIDIA is unlocking end-to-end reindustrialization in the cloud through its developer ecosystem. Developers are transforming industrial workflows, from computer-aided engineering with Synopsys to factory operations with Sight Machine and SymphonyAI. Robotics developers can tap into the NVIDIA Isaac Sim open-source robotics simulation framework to unlock critical workflows, from synthetic data generation to software-in-the-loop testing for all types of robot embodiments.
Hexagon is building its AEON humanoid robot primarily using NVIDIA's full robotics stack on Azure. Similarly, the robotics platform Wandelbots NOVA, running on Azure, integrates Isaac Sim and Isaac Lab to simplify and speed up the path from simulation to real-world deployment. In addition, NVIDIA and Microsoft are using a standardized approach for digital engineering to enable seamless OpenUSD interoperability across 3D workflows, making simulation and digital content creation accessible in the cloud.

This expanded collaboration comes on the heels of a partnership among NVIDIA, Anthropic and Microsoft announced earlier today. NVIDIA and Anthropic will collaborate on design and engineering to optimize Anthropic models for performance, efficiency and total cost of ownership, as well as optimize future NVIDIA architectures for Anthropic workloads.
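As a rough illustration of the NIM microservices mentioned above: NIM containers expose an OpenAI-compatible REST API, so a deployed Nemotron endpoint can be queried with the standard openai Python client. The endpoint URL and model name below are placeholder assumptions for illustration, not details NVIDIA or Microsoft have published for Foundry deployments.

```python
# Sketch: querying a Nemotron model served as a NIM microservice.
# NIM exposes an OpenAI-compatible API; URL and model id are assumed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # local NIM endpoint (assumption)
    api_key="not-needed-for-local-nim",    # local deployments ignore the key
)

resp = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # illustrative model id
    messages=[{"role": "user", "content": "Summarize Q3 revenue by region."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```

Because the interface is OpenAI-compatible, the same client code works whether the model runs on-premises next to a SQL Server instance or behind a cloud endpoint, which is the portability point the RAG integration above relies on.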
[3]
What is an AI 'superfactory'? Microsoft unveils new approach to building and linking data centers
Microsoft says it has linked massive data centers in Wisconsin and Atlanta -- roughly 700 miles and five states apart -- through a high-speed fiber-optic network to operate as a unified system. The announcement Wednesday morning marks the debut of what the company is calling its AI "superfactory," a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites -- a setup that Microsoft describes as the world's first "planet-scale AI superfactory."

Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft says the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time. Microsoft says it's using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system. By linking sites across regions, the company says it's able to pool computing capacity, redirect workloads dynamically, and distribute the massive power requirements across the grid so that it isn't dependent on available energy resources in one part of the country.

This unified supercomputer will train and run the next generation of AI models for key partners like OpenAI, France's Mistral AI, and xAI, and for Microsoft's own internal models.

The new approach shows the rapid pace of the AI infrastructure race among the world's largest tech companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter -- much of it on data centers and GPUs -- to keep up with what it sees as soaring AI demand. Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar bets, collectively putting hundreds of billions into new facilities, chips, and systems to train and deploy AI models.

Some analysts and investors see echoes of a tech bubble in the rush to build AI infrastructure, warning that the bets could sour if business customers don't realize enough value from AI in the near term. Microsoft, Amazon and others say the demand is real, not speculative, pointing to long-term contracts as evidence.
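Some back-of-the-envelope arithmetic suggests why sharing tasks "in real time" across a 700-mile link is at least physically plausible: light in optical fiber propagates at roughly two-thirds of its vacuum speed, which puts a hard floor of a few milliseconds on cross-site latency before any switching or routing overhead. The figures below are approximations, not Microsoft's numbers.

```python
# Rough propagation-delay floor for the Wisconsin-Atlanta link.
miles = 700
km = miles * 1.609
speed_in_fiber_km_s = 200_000        # ~2/3 of c (c is ~299,792 km/s)

one_way_ms = km / speed_in_fiber_km_s * 1_000
print(f"one-way propagation: {one_way_ms:.1f} ms")     # ~5.6 ms
print(f"round trip:          {2 * one_way_ms:.1f} ms")  # ~11.3 ms
```

A round trip on the order of 11 ms is far slower than intra-rack NVLink, but it is short enough that, with careful scheduling and overlap, synchronization between sites need not stall a training step that itself takes hundreds of milliseconds.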
[4]
Microsoft links 700-mile-apart data centers for first AI superfactory
As announced on Wednesday, the Redmond-based company is building a special class of data centers, known as Fairwater data centers, designed to work as a cohesive unit sharing common architecture and design. The Fairwater data centers are interconnected to ensure seamless and rapid data flow between them. This connectivity enables Fairwater sites situated across different geographies to train AI models in weeks rather than months.

Microsoft's network of Fairwater data centers is designed to help OpenAI and the Microsoft AI Superintelligence Team use advanced GPUs for operational compute tasks. "This is about building a distributed network that can act as a virtual supercomputer for tackling the world's biggest challenges in ways that you just could not do in a single facility," said Alistair Speirs, Microsoft general manager focusing on Azure infrastructure.

As AI models become increasingly large, every new generation requires more parameters -- the internal settings that enable AI to understand and process information, and produce accurate answers.
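A rough worked example shows the pressure that parameter growth creates. Using the roughly 13 TB of HBM3e per GB200 NVL72 rack quoted in source [1], the memory footprint of a hypothetical 100-trillion-parameter model spans many racks before any throughput considerations; the per-parameter byte counts below are common rules of thumb for mixed-precision training, not vendor specifications.

```python
# Order-of-magnitude illustration of the parameter-count pressure.
# All figures are rough estimates, not vendor specs.
params = 100e12                 # a 100-trillion-parameter model
weight_bytes = 2 * params       # FP16/BF16 weights only
train_bytes = 16 * params       # weights + gradients + Adam optimizer state

hbm_per_rack_tb = 13            # GB200 NVL72 HBM3e, per source [1]
print(f"weights alone:  {weight_bytes / 1e12:,.0f} TB")   # ~200 TB
print(f"training state: {train_bytes / 1e12:,.0f} TB")    # ~1,600 TB
print(f"racks just to hold training state: "
      f"{train_bytes / 1e12 / hbm_per_rack_tb:,.0f}")     # ~123 racks
```

And memory is only the floor: training such a model in weeks rather than years multiplies the rack count further through data parallelism, which is the gap the multi-site design aims to close.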
[5]
Microsoft unveils a huge two-state AI engine designed to merge datacenters
Two-story chip density increases compute power while lowering inter-rack latency

Microsoft has unveiled its first AI superfactory, linking large AI datacenters in Wisconsin and Atlanta through a dedicated fiber network designed for high-speed movement of training data. The design places chips close together across two floors to increase density and reduce lag. It also uses extensive cabling and liquid systems arranged to manage the weight and heat produced by large clusters of hardware.

In a blog post, Microsoft said this configuration will support vast AI workloads that differ from the smaller and more isolated tasks common in cloud environments. "This is about building a distributed network that can act as a virtual supercomputer for tackling the world's biggest challenges," said Alistair Speirs, Microsoft general manager focusing on Azure infrastructure. "The reason we call this an AI superfactory is it's running one complex job across millions of pieces of hardware...it's not just a single site training an AI model, it's a network of sites supporting that one job."

The AI WAN system moves information across thousands of miles using dedicated fiber, part newly built and part repurposed from earlier acquisitions. Network protocols and architecture have been adjusted to shorten pathways and keep data moving with minimal delay. Microsoft claims this allows distant sites to cooperate on the same model training process in near real time, with each location contributing its share of computation. The focus is on maintaining continuous activity across large numbers of GPUs so that no unit pauses while waiting for results from another location.

"Leading in AI isn't just about adding more GPUs - it's about building the infrastructure that makes them work together as one system," said Scott Guthrie, Microsoft executive vice president of Cloud + AI.

Microsoft uses the Fairwater layout to support high-throughput rack systems, including Nvidia GB200 NVL72 units designed to scale to very large clusters of Blackwell GPUs. The company pairs this hardware with liquid cooling systems that send heated fluid outside the building and return it at lower temperatures. Microsoft states that the operational cooling uses almost no new water, aside from periodic replacement when needed for chemistry control. The Atlanta site mirrors the Wisconsin layout, providing a consistent architecture across multiple regions as more facilities come online.

"To make improvements in the capabilities of the AI, you need to have larger and larger infrastructure to train it," said Mark Russinovich, CTO, deputy CISO, and technical fellow, Microsoft Azure. "The amount of infrastructure required now to train these models is not just one datacenter, not two, but multiples of that."

The company positions these sites as purpose-built for training advanced AI tools, citing rising parameter counts and larger training datasets as key pressures driving expansion. The facilities incorporate exabytes of storage and millions of CPU cores for supporting tasks around the primary training workflows. Microsoft suggests that this scale is necessary for partners such as OpenAI and its own AI Superintelligence Team to continue model development.
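The "no unit pauses" goal is typically pursued by overlapping wide-area communication with local computation. Below is a minimal sketch of that pattern, assuming an initialized torch.distributed process group; it illustrates the general idea of hiding link latency behind useful work, not Microsoft's AI-WAN implementation.

```python
# Sketch: overlap a cross-site all-reduce with local compute so GPUs
# keep working while bytes are in flight between datacenters.
import torch
import torch.distributed as dist

def step(local_grads: torch.Tensor, next_microbatch_fn):
    # Kick off the wide-area reduction without waiting for it...
    work = dist.all_reduce(local_grads, op=dist.ReduceOp.SUM, async_op=True)

    # ...and do useful local work (e.g. the next microbatch's forward
    # pass) while the reduction travels over the long-haul link.
    local_result = next_microbatch_fn()

    work.wait()                            # block only at the true dependency
    local_grads /= dist.get_world_size()   # turn the sum into an average
    return local_grads, local_result
```

The same overlap principle applies at every level of the hierarchy, from NVLink within a rack to the fiber between Wisconsin and Atlanta; only the amount of work needed to hide the latency changes.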
[6]
Microsoft brings Atlanta-based AI 'super factory' online - SiliconANGLE
Microsoft Corp. said today it's bringing online its first artificial intelligence "super factory" with the grand opening of its newest multibillion-dollar data center facility in Atlanta. The company said it's going to link the massive new server farm with existing data center infrastructure elsewhere in the country to ensure it can provide the immense computing power its customers require without energy constraints.

The announcement is a key milestone for Microsoft, which has ambitions to double its data center footprint over the next two years. The new Atlanta site is said to be part of the company's larger "Fairwater" network, a new class of server hubs dedicated to training and running AI models. Microsoft said the facility will host hundreds of thousands of Nvidia Corp.'s most powerful graphics processing units and other AI accelerators, and have dedicated high-speed links to other facilities that make up the greater Fairwater network.

The complex spans more than one million square feet and sits on 85 acres of land, and it's also the company's first data center to feature two floors, which means it can squeeze in double the amount of computing infrastructure compared to older facilities. Microsoft said the design supports a greater degree of efficiency in terms of speed and power consumption. The facility's energy requirements also benefit from a new liquid-cooling system that allows it to pack the thousands of individual GPUs much more tightly than before. Because it recycles virtually all of the water it uses, its consumption is equivalent to that of approximately 20 U.S. households, the company added.

To connect all of those GPUs, Microsoft has installed more than 120,000 miles of fiber-optic cables at the facility, allowing them to send and receive data at close to the speed of light. Microsoft Executive Vice President of Cloud and AI Scott Guthrie said this network can be likened to a "cutting-edge express lane" dedicated to AI traffic.

Microsoft didn't say how much the Atlanta data center cost to build, nor did it comment on its annual electricity consumption, but it said the broader Fairwater network will use "multigigawatts" of power. To ensure it can access enough power, Fairwater has been distributed across multiple geographic regions, the company said. For instance, one of the other data centers is located in Wisconsin, roughly 700 miles from Atlanta. This means Microsoft can tap into various power sources and avoid exhausting the available energy in one location.

One of the Atlanta facility's biggest customers will be OpenAI, which is already heavily reliant on Microsoft for its compute infrastructure needs, but it will also cater to other AI firms, including the French startup Mistral AI and Elon Musk's xAI Corp. In addition, Microsoft will reserve some of the capacity for its own needs, including training its proprietary models.

The Redmond-based company recently revealed that it spent a staggering $34 billion on capital expenditures during its fiscal first quarter, with the bulk of that capital going towards its data center build-out. It also said it's planning to increase its investments in infrastructure over the remainder of the fiscal year. It's far from the only company doing this - in fact, a number of technology giants, including Google LLC, Oracle Corp., Meta Platforms Inc. and Amazon Web Services Inc., have similar plans.
Combined, those companies are set to spend around $400 billion on AI infrastructure this year.
[7]
Microsoft Builds Massive AI 'Super Factory' In Atlanta To Power OpenAI And Elon Musk's xAI - Microsoft (NASDAQ:MSFT)
Microsoft Corp (NASDAQ:MSFT) is accelerating its data center expansion with the launch of an artificial intelligence "super factory" in Atlanta -- a massive two-story complex designed to link seamlessly with similar hubs across its Fairwater network to deliver enormous computing power.

The company plans to double its global data center footprint within two years, with the Atlanta site emerging as one of its most important new facilities. Built specifically for AI training, the Fairwater hub will house hundreds of thousands of Nvidia Corp (NASDAQ:NVDA) GPUs and feature high-speed interconnections with other Microsoft data centers, the Wall Street Journal reported on Wednesday.

OpenAI, Mistral AI, and Elon Musk's xAI will use the facility alongside Microsoft's own AI model training operations. Microsoft, with a market cap of nearly $3.78 trillion, has gained 19% year to date.

Microsoft spent over $34 billion in capital expenditures during its fiscal first quarter and plans to boost infrastructure investments further next year as part of a broader industry surge -- one that's expected to reach $400 billion in AI spending this year.

The company says Fairwater's two-story design improves network efficiency and reduces latency, while a liquid-cooling system enables GPUs to be densely packed. The one-million-square-foot Atlanta complex spans 85 acres and consumes about as much water annually as 20 U.S. households. Microsoft is also laying 120,000 miles of fiber-optic cables to link Fairwater locations, enabling data to travel "nearly at the speed of light."

Bank of America Securities analyst Brad Sills highlighted renewed momentum in Microsoft's cloud business, driven by strong enterprise demand for Azure security services and ongoing AI adoption. He cited consistent deal activity from channel partners and rising investment in AI and data infrastructure as evidence of the company's growing role in corporate technology roadmaps. Sills identified Azure growth, enterprise Office adoption, and strategic AI infrastructure expansion -- including Microsoft's work with Aligned Data Centers, BlackRock Inc (NYSE:BLK) and Nvidia -- as key growth catalysts. The analyst projected fiscal 2026 revenue of $322.1 billion and EPS of $15.24.

MSFT Price Action: MSFT stock traded 1.25% lower to $502.24 at last check on Wednesday.
[8]
Microsoft fires up 'AI superfactory' powered by hundreds of thousands of Nvidia GPUs (MSFT:NASDAQ)
Microsoft (MSFT) has brought its latest datacenter online in Atlanta, dubbing it an 'AI superfactory' because it connects directly with other datacenters to deliver the power of hundreds of thousands of Nvidia (NVDA) GPUs in support of AI workloads.

The Atlanta AI superfactory connects with other datacenters to deliver massive GPU power and support large-scale, unified AI workloads for Microsoft and partners like OpenAI. Interconnected Fairwater datacenters form a 'fungible fleet,' seamlessly running AI workloads across sites for maximum efficiency and performance, and preventing GPU idleness. Microsoft uses dedicated fiber networks and clusters of NVIDIA GPUs to flexibly allocate hardware for diverse AI workloads, ensuring resources are optimized and readily available.
[9]
Microsoft builds AI super factory in Atlanta as part of data center expansion - WSJ By Investing.com
Investing.com -- Microsoft is unveiling a new AI "super factory" in Atlanta on Wednesday as part of its plan to double its data center footprint over the next two years, according to a report from the Wall Street Journal.

The Atlanta site belongs to Microsoft's Fairwater network of artificial-intelligence centers, a new class of hubs built specifically for AI training. The facility will house hundreds of thousands of Nvidia graphics processing units and feature dedicated high-speed connections to other Fairwater locations.

Major AI companies including OpenAI, France's Mistral AI, and Elon Musk's xAI are among Fairwater's customers. Microsoft will also use the site for training its own proprietary models.

Microsoft invested more than $34 billion in capital expenditures during its fiscal first quarter and plans to increase its total infrastructure investments over the next fiscal year. The company is part of a tech industry trend that will see companies collectively invest about $400 billion in AI efforts this year, driven by high demand for AI computing capacity.

The company claims its Fairwater data centers achieve greater efficiency in both speed and power consumption. The two-story design allows for better networking and reduced latency, while a novel liquid-cooling system enables GPUs to be placed close together. The Atlanta complex covers more than 1 million square feet across 85 acres. Microsoft says the cooling system uses approximately the same amount of water annually as 20 average U.S. households, though data centers generally require significant energy and water resources.
[10]
Microsoft's AI Super Factory explained: The mega data center for the next generation of intelligence
The mega infrastructure shaping the future of artificial intelligence

There's an irony in calling it a "super factory." Artificial intelligence is supposed to live in code, not concrete. Yet outside Atlanta, Georgia, Microsoft has built something very real: a one-million-square-foot complex stretching across 85 acres. It's part of a new class of infrastructure the company calls the Fairwater network, a system of AI-optimized data centers designed not to store photos or emails, but to manufacture intelligence itself.

This new AI Super Factory marks Microsoft's latest leap in the race for computing dominance. Instead of hosting smaller, cloud-based workloads, Fairwater is built for frontier-scale AI training - the kind needed by OpenAI, Mistral AI, and xAI. Each facility in the network links together through a 120,000-mile fiber-optic web, allowing them to function as a single, distributed brain. It's not a data center anymore; it's an assembly line for algorithms.

Most data centers are flat, single-floor halls filled with racks of servers. Fairwater isn't. Microsoft has built a two-story structure, an unusual move in an industry where heat and weight management typically limit vertical design. The benefit? Speed. By stacking thousands of NVIDIA GPUs closer together and interlinking them through short, dense fiber runs, the system dramatically reduces latency, a crucial factor when massive language or vision models need to exchange data across millions of parameters.

Each rack reportedly handles up to 140 kW of power, and rows can draw more than 1,300 kW, cooled through a closed-loop liquid system. That system, Microsoft claims, uses about as much water per year as 20 U.S. households. While that sounds sustainable, critics note that scaling this model across dozens of such sites could multiply water use into significant volumes. The trade-off between compute and consumption remains the industry's uneasy secret.

Fairwater symbolizes a deeper evolution inside Microsoft's cloud empire. Traditional Azure data centers handle millions of smaller client requests. The new generation, however, is specialized for AI workloads: dense racks of GPUs focused purely on training large models, reinforcement learning, and fine-tuning loops. In other words, Microsoft isn't just renting servers anymore; it's building dedicated AI manufacturing plants. And the strategy is deliberate. With rivals like Amazon's Project Rainier and Meta's GPU-rich facilities, the company knows that compute capacity, not just algorithms, is the new currency of dominance.

Executives have hinted that Microsoft plans to double its data-center footprint within two years, pouring billions into infrastructure that will power everything from Copilot in Office to advanced research at OpenAI. Fairwater's distributed network has a second advantage: it spreads the enormous energy demand across multiple states. Sites like Atlanta and Wisconsin were chosen for their resilient local grids and access to renewable energy. This helps Microsoft manage the power burden, and the optics, of running what are effectively digital steel mills. Still, the math is staggering. Each site can consume hundreds of megawatts of electricity, and the chips themselves require specialized cooling and continuous maintenance.
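Taking the reported figures at face value, the rack and row numbers imply roughly nine racks per row. The hall-level extrapolation below assumes a hypothetical 100 MW power budget purely for illustration; it is not a Microsoft figure.

```python
# Quick sanity check on the reported density figures.
rack_kw = 140                  # reported per-rack draw
row_kw = 1_300                 # reported per-row draw
print(f"~{row_kw / rack_kw:.0f} racks per row")        # ~9

# At this density, a hypothetical 100 MW hall would host roughly:
hall_mw = 100                  # assumed budget, not a Microsoft figure
print(f"~{hall_mw * 1000 / rack_kw:,.0f} racks, "
      f"~{hall_mw * 1000 / row_kw:,.0f} rows")         # ~714 racks, ~77 rows
```

Arithmetic like this is why "hundreds of megawatts per site" follows almost mechanically from the per-rack numbers once the GPU count climbs into the hundreds of thousands.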
The company says the closed-loop system and fiber design make the operation more efficient, but environmental groups argue that large-scale AI compute remains resource-intensive by nature. There's also the geopolitical angle: concentrating massive AI power in U.S. hubs could deepen the gap for international users, including those in markets like India, where data-locality laws and latency issues might complicate access to frontier AI models.

Beyond the engineering spectacle, Fairwater represents something larger: the physical infrastructure of the next generation of intelligence. These machines will train the models that simulate traffic systems, design autonomous vehicles, and predict battery behavior for EVs, areas that link directly to future industries and research. For media and journalism, too, this is where the algorithms behind recommendation systems, automated writing, and analytics are born.

The "factory" metaphor isn't just poetic - it's literal. If the last century's factories built machines, this one builds minds. Microsoft's AI Super Factory is where the intangible becomes tangible, where intelligence takes shape in silicon and steel. Behind every chatbot, translation engine, or creative AI tool sits a humming warehouse like Fairwater: cables glowing, coolants cycling, and GPUs crunching numbers at unimaginable speeds. It's a reminder that the future of intelligence won't float in the cloud. It will be forged on the ground, one data rack at a time.
Microsoft has launched its pioneering AI superfactory, linking massive data centers in Wisconsin and Atlanta, roughly 700 miles apart. The unified system will train next-generation AI models with hundreds of trillions of parameters, marking a new era in AI infrastructure development.
Microsoft has unveiled its pioneering AI "superfactory," an innovative infrastructure linking geographically dispersed data centers into a single, unified supercomputer. This initiative connects facilities in Mount Pleasant, Wisconsin, and Atlanta, Georgia (approximately 700 miles apart) via a high-speed fiber-optic network optimized for intense AI workloads [1][3]. This shifts focus from traditional cloud centers to handling immense, singular AI tasks distributed across multiple locations, crucial for next-generation models.
Source: GeekWire
The new "Fairwater" clusters are purpose-built AI data centers with advanced designs. These two-story structures use direct-to-chip liquid cooling, minimizing water consumption. Their vertical layout boosts chip density and reduces inter-rack latency, vital for peak AI training performance [1][5]. The Atlanta site will deploy NVIDIA's GB200 NVL72 rack systems. Microsoft plans to integrate hundreds of thousands of NVIDIA Blackwell GPUs across these sites, including over 100,000 Blackwell Ultra GPUs for global inference [2]. This model addresses the exponential growth in AI model complexity, with future AI requiring hundreds of trillions of parameters, beyond single-data-center capabilities. Microsoft Azure CTO Mark Russinovich stressed the necessity of multiple data centers for future AI training, enabling strategic site selection for power and climate [1][3].
Source: Benzinga
Central to the superfactory's functionality is its AI Wide Area Network (AI-WAN), guaranteeing high-speed, low-latency communication between distant facilities [3]. Microsoft is deploying state-of-the-art NVIDIA Spectrum-X Ethernet switches, making this one of the most advanced AI factory deployments globally [2]. The strong collaboration with NVIDIA, including full-stack software optimizations, has already led to over 90% cost reductions for Azure users of popular GPT models over two years [2]. This robust infrastructure will underpin demanding AI workloads for partners such as OpenAI, Mistral AI, and xAI, alongside Microsoft's internal teams and services. It represents a unified supercomputing effort to train and deploy the most sophisticated AI models [2][3].
Source: NVIDIA
Summarized by Navi