[1]
Gearing Up for the Gigawatt Data Center Age
Inside the AI factories powering the trillion-parameter era -- and why the network matters more than ever.

Across the globe, AI factories are rising -- massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.

Welcome to the age of AI factories -- where the rules are being rewritten and the wiring doesn't look anything like the old internet. These aren't typical hyperscale data centers. They're something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs -- not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It's the whole game.

This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won't cut it. What's needed is a layered design with bleeding-edge technologies -- like co-packaged optics that once seemed like science fiction. The complexity isn't a bug; it's the defining feature. AI infrastructure is diverging fast from everything that came before it, and without a rethink of how the pipes connect, scaling breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and the reward is extraordinary performance.

With that shift comes weight -- literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi-hundred-pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up -- and out.

The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables -- tightly wound and precisely routed. It moves more data per second than the entire internet: 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn't just fast. It's foundational. The AI super-highway now lives inside the rack.

The Data Center Is the Computer

Training the modern large language models (LLMs) behind AI isn't about burning cycles on a single machine. It's about orchestrating the work of tens or even hundreds of thousands of GPUs -- the heavy lifters of AI computation.

These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices -- typically massive matrices of numbers -- need to be regularly merged and updated. That merging occurs through collective operations, such as "all-reduce" (which combines data from all nodes and redistributes the result) and "all-to-all" (where each node exchanges data with every other node). These operations are highly sensitive to the speed and responsiveness of the network -- what engineers call latency (delay) and bandwidth (data capacity) -- and any shortfall stalls training.
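As an illustration of what such a collective looks like in practice -- our sketch, not code from the article -- here is a minimal all-reduce using PyTorch's torch.distributed API; the backend, tensor shape and launcher are arbitrary choices for a runnable demo:

```python
# Minimal all-reduce sketch (illustrative; assumes PyTorch is installed).
# Launch with a rendezvous-providing launcher, e.g.:
#   torchrun --nproc_per_node=4 allreduce_demo.py
import torch
import torch.distributed as dist

def main():
    # torchrun supplies MASTER_ADDR/MASTER_PORT, rank and world size.
    # "gloo" runs on CPUs; on GPUs the NCCL backend is the usual choice.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()
    world = dist.get_world_size()

    # Each rank holds its own slice of work -- here, a stand-in for a
    # local gradient, filled with the rank's id.
    grad = torch.full((4,), float(rank))

    # All-reduce: sum the tensors from every rank and hand the combined
    # result back to all of them -- the "merge and update" step above.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    print(f"rank {rank}/{world} -> {grad.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

With four ranks, every process ends up holding [6.0, 6.0, 6.0, 6.0] -- the sum 0+1+2+3 in each slot. The call is identical whether it spans four CPU processes or a hundred thousand GPUs; only the fabric underneath changes.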
For inference -- the process of running trained models to generate answers or predictions -- the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.

Traditional Ethernet was designed for single-server workloads -- not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable. Now they're a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance -- and that legacy still shapes their latest generations. Distributed computing requires a scale-out infrastructure built for zero-jitter operation -- one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise.

This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories. With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling the effective data bandwidth for reductions (see the cost-model sketch below). It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It's why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world's most powerful supercomputers -- 35% growth in just two years.

For clusters spanning dozens of racks, NVIDIA Quantum-X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co-packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.

But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum-X: a new kind of Ethernet purpose-built for distributed AI.

Spectrum-X Ethernet: Bringing AI to the Enterprise

Spectrum-X reimagines Ethernet for AI. Launched in 2023, Spectrum-X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum-4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA's congestion control to maintain 95% data throughput at scale.

Spectrum-X is fully standards-based Ethernet. In addition to supporting Cumulus Linux, it supports the open-source SONiC network operating system -- giving customers flexibility. A key ingredient is NVIDIA SuperNICs -- based on NVIDIA BlueField-3 or ConnectX-8 -- which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.

Spectrum-X brings InfiniBand's best innovations -- like telemetry-driven congestion control, adaptive load balancing and direct data placement -- to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum-X, including the world's most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation, where standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
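Why does moving the reduction into the switches double the effective bandwidth? A back-of-the-envelope model makes it visible -- this is our arithmetic using the textbook ring all-reduce cost, not an NVIDIA measurement:

```python
# Textbook cost model: bytes each GPU must transmit to all-reduce a
# buffer of `size` bytes across `n` GPUs. Illustrative only.

def ring_allreduce_sent_bytes(size: float, n: int) -> float:
    # A ring all-reduce transmits 2 * (n - 1) / n * size bytes per GPU:
    # (n-1)/n * size in the reduce-scatter phase, the same in all-gather.
    return 2 * (n - 1) / n * size

def in_network_sent_bytes(size: float) -> float:
    # With a SHARP-style in-network reduction, each GPU transmits its
    # buffer once; the switch tree combines and returns the result.
    return size

GB = 1e9
for n in (8, 72, 1024):
    ring = ring_allreduce_sent_bytes(1 * GB, n)
    tree = in_network_sent_bytes(1 * GB)
    print(f"{n:>5} GPUs: ring {ring / GB:.2f} GB/GPU vs "
          f"in-network {tree / GB:.2f} GB/GPU "
          f"({ring / tree:.2f}x less wire traffic)")
```

As the GPU count grows, the ring cost approaches twice the buffer size, so offloading the reduction to the fabric roughly halves the traffic -- equivalently, it doubles the bandwidth available for reductions.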
A Portfolio for Scale-Up and Scale-Out

No single network can serve every layer of an AI factory. NVIDIA's approach is to match the right fabric to the right tier, then tie everything together with software and silicon.

NVLink: Scale Up Inside the Rack

Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain with an aggregate bandwidth of 130 TB/s -- 9x the GPU count of a single 8-GPU server. With NVLink, the entire rack becomes one large GPU.

Photonics: The Next Leap

To reach million-GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency than traditional optics, paving the way for gigawatt-scale AI factories.

Toward Million-GPU AI Factories

AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA-powered AI infrastructure. The next horizon is gigawatt-class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.

The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
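As a closing sanity check -- our arithmetic, using only numbers quoted in the article -- the headline bandwidth figures are self-consistent:

```python
# Sanity-check arithmetic on figures quoted above. No external data.
GBPS = 1e9  # bits per second in a gigabit

# Photonics switches: 128 to 512 ports at 800 Gb/s each.
low_tbps = 128 * 800 * GBPS / 1e12    # = 102.4 Tb/s ("100 Tb/s" tier)
high_tbps = 512 * 800 * GBPS / 1e12   # = 409.6 Tb/s ("400 Tb/s" tier)
print(f"photonics switch aggregate: {low_tbps:.1f}-{high_tbps:.1f} Tb/s")

# NVL72: 130 TB/s of NVLink bandwidth shared across 72 GPUs.
per_gpu_tbs = 130 / 72                # ~1.8 TB/s per GPU
print(f"NVL72 NVLink bandwidth per GPU: ~{per_gpu_tbs:.1f} TB/s")
```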
[2]
Nvidia is betting on dedicated networks for the era of 'AI factories'.
NVIDIA announces the rise of "AI factories," new data centers designed for training and inference on very large-scale artificial intelligence models. These infrastructures are based on tens of thousands of GPUs connected by specialized network architectures. The company is highlighting its NVLink interconnect, capable of 130 TB/s of GPU-to-GPU bandwidth, and its Quantum InfiniBand technology, the benchmark in high-performance computing, which reduces latency and doubles effective bandwidth with the SHARP protocol. For companies committed to the Ethernet ecosystem, NVIDIA offers Spectrum-X, launched in 2023, which adapts Ethernet to the needs of AI with advanced congestion management and sustained throughput of 95%, versus 60% for traditional Ethernet. The strategy combines NVLink for communication inside the rack, InfiniBand for hyperscale clusters, and Spectrum-X for existing Ethernet environments, supporting the rise of multi-trillion-parameter models.
NVIDIA introduces advanced networking technologies to support the rise of 'AI factories', massive data centers designed for training and deploying large-scale AI models, emphasizing the critical role of network architecture in AI infrastructure.
NVIDIA is spearheading a new era in computing with the introduction of 'AI factories' -- massive data centers specifically designed for training and deploying large-scale artificial intelligence models. These facilities are not your typical hyperscale data centers; they represent a paradigm shift in computing infrastructure, where the entire data center functions as a single unit of computing power [1].
At the heart of these AI factories lies a critical component: the network architecture. NVIDIA emphasizes that traditional networking solutions are inadequate for the demands of modern AI workloads. The company is introducing a layered design with cutting-edge technologies to address this challenge [1].
NVIDIA's NVLink spine stands out as a marvel of engineering. Built from over 5,000 coaxial cables, it can move more data per second than the entire internet, boasting 130 TB/s of GPU-to-GPU bandwidth [1]. This internal rack communication system is crucial for the seamless operation of AI factories.
For clusters spanning multiple racks, NVIDIA presents the Quantum-X800 InfiniBand switches. These switches offer 144 ports of 800 Gbps connectivity and incorporate advanced features like hardware-based SHARPv4, adaptive routing, and telemetry-based congestion control [1]. InfiniBand's ability to scale AI communication with precision has led to its adoption in the majority of systems on the TOP500 list of the world's most powerful supercomputers [2].
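A rough sense of what a 144-port radix buys at the fabric level -- textbook folded-Clos arithmetic, our illustration rather than an NVIDIA reference design:

```python
# Endpoint counts for non-blocking fat-tree (folded Clos) topologies
# built from k-port switches. Standard formulas; illustrative only.
RADIX = 144  # ports per Quantum-X800 switch, per the article

# Two tiers (leaf/spine): each leaf splits ports evenly between hosts
# and uplinks, so k/2 hosts per leaf across k leaves = k^2 / 2.
two_tier = RADIX * RADIX // 2      # 10,368 endpoints

# Three tiers: a k-port fat tree supports k^3 / 4 hosts.
three_tier = RADIX ** 3 // 4       # 746,496 endpoints

print(f"2-tier: {two_tier:,} x 800 Gb/s endpoints")
print(f"3-tier: {three_tier:,} x 800 Gb/s endpoints")
```

Three switching tiers at this radix already reach into the hundreds of thousands of ports -- the scale that million-GPU ambitions imply.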
Recognizing the significant investments made in Ethernet infrastructure, NVIDIA offers Spectrum-X as a solution for companies committed to the Ethernet ecosystem. Launched in 2023, Spectrum-X adapts Ethernet to AI requirements, featuring advanced congestion management and sustained throughput of 95% compared to 60% for traditional Ethernet [2].
Training modern large language models (LLMs) requires orchestrating the work of tens or even hundreds of thousands of GPUs. This distributed computing approach involves splitting massive calculations across nodes and regularly merging and updating data through collective operations [1].
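Alongside the all-reduce shown earlier, here is the matching sketch for the other collective named in the article, all-to-all -- ours, not NVIDIA's, and requiring GPUs since PyTorch's gloo backend does not implement this collective:

```python
# Minimal all-to-all sketch: every rank sends a distinct chunk to every
# other rank (the pattern behind, e.g., expert-parallel layers).
# Launch with: torchrun --nproc_per_node=<num_gpus> alltoall_demo.py
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # all_to_all needs NCCL or MPI
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)  # assumes one process per local GPU

# Chunk i of this rank's input is addressed to rank i.
inp = (torch.arange(world, dtype=torch.float32) + rank * world).cuda()
out = torch.empty(world, device="cuda")

# After the exchange, out[i] holds the chunk rank i sent to this rank.
dist.all_to_all_single(out, inp)
print(f"rank {rank}: sent {inp.tolist()}, got {out.tolist()}")
dist.destroy_process_group()
```

Unlike all-reduce, this traffic is point-to-point between every pair of ranks, which is why all-to-all patterns punish fabrics with poor path diversity or congestion handling.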
The performance of these operations is highly dependent on network speed and responsiveness. Traditional Ethernet, designed for single-server workloads, falls short in meeting the demands of distributed AI, which requires zero-jitter operation and the ability to handle extreme throughput bursts [1].
For inference tasks, especially in cloud environments with multi-tenant setups, the networking challenges shift. These scenarios demand real-time lookups and responses while maintaining strict isolation between different users' workloads. NVIDIA's networking solutions aim to provide the lightning-fast, high-throughput capabilities necessary to meet these requirements [1].
As AI models continue to grow in size and complexity, with some reaching trillion-parameter scales, the importance of efficient networking solutions becomes even more pronounced. NVIDIA's strategy combines NVLink for internal rack communication, InfiniBand for hyperscale clusters, and Spectrum-X for existing Ethernet environments to support this new era of AI computing [2].
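Some rough sizing arithmetic shows why trillion-parameter models force this layered approach -- the byte counts and memory capacity below are our assumptions for illustration, not vendor figures:

```python
# How many GPUs does it take just to HOLD a trillion-parameter model?
PARAMS = 1e12            # 1 trillion parameters
BYTES_PER_PARAM = 2      # assumption: bf16/fp16 weights
HBM_PER_GPU = 192e9      # assumption: ~192 GB of HBM per GPU

weight_bytes = PARAMS * BYTES_PER_PARAM          # 2e12 bytes = 2 TB
gpus_for_weights = weight_bytes / HBM_PER_GPU    # ~10.4 GPUs

print(f"weights alone: {weight_bytes / 1e12:.1f} TB "
      f"-> at least {gpus_for_weights:.0f} GPUs just to store them")
# Training multiplies this several-fold (gradients, optimizer state,
# activations), so real runs span thousands of GPUs -- and every step
# must synchronize across all of them, putting the network on the
# critical path.
```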
By addressing the unique networking needs of AI workloads, NVIDIA is positioning itself at the forefront of the AI infrastructure revolution, enabling the development and deployment of increasingly sophisticated AI models across various industries and applications.