3 Sources
[1]
NVIDIA Spectrum-X Ethernet Switches Speed Up Networks for Meta and Oracle
OCP -- NVIDIA today announced that Meta and Oracle will boost their AI data center networks with NVIDIA Spectrum-X™ Ethernet networking switches. Meta and Oracle are utilizing Spectrum-X Ethernet switches within their ecosystem as an open, accelerated networking architecture that speeds deployment at scale, unlocks exponential gains in AI training efficiency and shortens time to insights.
"Trillion-parameter models are transforming data centers into giga-scale AI factories, and industry leaders like Meta and Oracle are standardizing on Spectrum-X Ethernet to drive this industrial revolution," said Jensen Huang, founder and CEO of NVIDIA. "Spectrum-X is not just faster Ethernet -- it's the nervous system of the AI factory, enabling hyperscalers to connect millions of GPUs into a single giant computer to train the largest models ever built."
Oracle will build giga-scale AI factories accelerated by the NVIDIA Vera Rubin architecture and interconnected by Spectrum-X Ethernet. "Oracle Cloud Infrastructure is designed from the ground up for AI workloads, and our partnership with NVIDIA extends that AI leadership," said Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure. "By adopting Spectrum-X Ethernet, we can interconnect millions of GPUs with breakthrough efficiency so our customers can more quickly train, deploy and benefit from the next wave of generative and reasoning AI."
Meta will integrate Spectrum Ethernet switches into its networking infrastructure for the Facebook Open Switching System ("FBOSS"), a software platform developed to manage and control network switches at massive scale. This integration will speed deployment at scale to unlock gains in AI training efficiency and shorten time to insights. "Meta's next-generation AI infrastructure requires open and efficient networking at a scale the industry has never seen before," said Gaya Nagarajan, vice president of networking engineering at Meta. "By integrating NVIDIA Spectrum Ethernet into the Minipack3N switch and FBOSS, we can extend our open networking approach while unlocking the efficiency and predictability needed to train ever-larger models and bring generative AI applications to billions of people."
NVIDIA Spectrum-X Ethernet Platform
Designed for the trillion-parameter model era, the NVIDIA Spectrum-X Ethernet platform, consisting of Spectrum-X Ethernet switches and Spectrum-X Ethernet SuperNICs, is the first Ethernet platform purpose-built for AI, enabling hyperscalers to interconnect millions of GPUs with unprecedented efficiency and scale. Trillion-parameter models and generative AI are redefining the scale of data centers. Spectrum-X Ethernet enables AI at scale, delivering the performance and scalability needed to build the world's most advanced AI infrastructure.
Spectrum-X Ethernet has already demonstrated record-setting efficiency, enabling the world's largest AI supercomputer to achieve 95% data throughput with its congestion-control technology. By contrast, off-the-shelf Ethernet at scale suffers from thousands of flow collisions, limiting throughput to roughly 60%. This leap in efficiency marks a breakthrough in the economics and performance of AI-scale networking. NVIDIA Spectrum-XGS Ethernet technology, part of the Spectrum-X Ethernet networking platform, enables scale-across capabilities to link data centers across cities, nations and continents into vast, giga-scale AI super-factories.
Spectrum-X builds on NVIDIA's full-stack platform -- including GPUs, CPUs, NVIDIA NVLink™ and software -- to deliver seamless performance from compute to network. Its advanced congestion control, adaptive routing and AI-driven telemetry capabilities ensure efficiency and predictability for massive AI training and inference clusters.
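To make the throughput gap described above concrete, here is a minimal, illustrative simulation of why flow-level ECMP hashing caps a generic Ethernet fabric near 60% of its capacity while per-packet adaptive routing can approach line rate. This is a toy "balls into bins" model of my own, not NVIDIA's congestion-control or adaptive-routing implementation, and the link and flow counts are arbitrary placeholders.

```python
# Toy Monte Carlo: why static ECMP flow hashing caps effective throughput.
# Illustrative only -- NOT NVIDIA's algorithm, just the standard
# "balls into bins" argument behind the ~60% figure quoted above.
import random

def ecmp_efficiency(num_links: int, num_flows: int, trials: int = 10_000) -> float:
    """Average achieved/ideal throughput when each elephant flow is pinned
    to one of `num_links` equal-cost uplinks by a static hash."""
    total = 0.0
    for _ in range(trials):
        # Hash each long-lived flow to a random uplink (flow-level ECMP).
        loads = [0] * num_links
        for _ in range(num_flows):
            loads[random.randrange(num_links)] += 1
        # Colliding flows share one uplink's line rate while unlucky empty
        # uplinks sit idle; the ideal placement is one flow per link.
        busy_links = sum(1 for load in loads if load > 0)
        total += busy_links / num_flows
    return total / trials

if __name__ == "__main__":
    # 64 flows across 64 uplinks: expected efficiency ~ 1 - 1/e ~ 63%,
    # in the same ballpark as the ~60% cited for off-the-shelf Ethernet.
    print(f"static ECMP: {ecmp_efficiency(64, 64):.0%} of ideal throughput")
    # Per-packet adaptive routing (spraying packets over all uplinks and
    # restoring order at the receiver) keeps every uplink busy -> near 100%.
```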
[2]
Meta and Oracle embrace Nvidia's Spectrum-X: Ethernet enters the mega AI factory era - SiliconANGLE
At this year's Open Compute Project Summit, Nvidia Corp. took another major step toward redefining the data center as a mega artificial intelligence factory. The company announced that Meta Platforms Inc. and Oracle Corp. will adopt its Spectrum-X Ethernet networking platform, a purpose-built system designed for AI workloads that connects millions of graphics processing units into one unified fabric. For Meta and Oracle, this isn't just a networking upgrade -- it's a bet on a new architecture for AI-scale computing, or AI factories. And for the industry, it marks a clear shift: Ethernet is no longer "good enough" for AI -- it's being reinvented for it.
Unlike traditional Ethernet solutions retrofitted for AI, Nvidia Spectrum-X was engineered from the ground up to handle the communication patterns of large-scale AI workloads -- massive all-to-all GPU synchronization, low-latency messaging and congestion-prone flows. As I've discussed on theCUBE in our AI Factory Series, Spectrum-X represents a networking stack purpose-built to accelerate generative AI by removing bottlenecks, maximizing GPU utilization and enabling both intra-data center and giga-scale, cross-data center deployments. This purpose-built approach is Nvidia's path to unlocking performance gains at scale. The company claims 1.6 times higher networking performance for AI communication versus conventional Ethernet -- a leap that directly translates to less GPU idle time and higher throughput during training and inference.
Meta's integration of Spectrum-X into its Facebook Open Switching System, or FBOSS, and Minipack3N switch marks a key moment for open networking. It extends Meta's open hardware and software philosophy into the AI infrastructure layer -- now serving as an acceleration backbone tuned for the trillion-parameter model era. As Gaya Nagarajan, vice president of networking engineering at Meta, said, "Meta's next-generation AI infrastructure requires open and efficient networking at a scale the industry has never seen before." By merging Spectrum-X Ethernet with FBOSS, Meta is pairing open, programmable control planes with AI-optimized physical infrastructure. The result is predictable, congestion-free performance while preserving the flexibility of Meta's disaggregated network model.
Oracle Cloud Infrastructure is taking a complementary approach -- scale. Oracle is using Spectrum-X to build giga-scale AI factories powered by the upcoming Nvidia Vera Rubin architecture. "By adopting Spectrum-X Ethernet, we can interconnect millions of GPUs with breakthrough efficiency," said Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure. This signals Oracle's intent to compete directly with the largest hyperscalers in AI compute, building globally distributed AI factories capable of massive training jobs. Spectrum-X's scale-across capabilities (Spectrum-XGS) allow Oracle to connect clusters across multiple data centers -- even across countries -- into one logical AI system. It's a vision of AI without physical boundaries.
From my perspective, the details of Spectrum-X's architecture explain why both Meta and Oracle are standardizing on it:
Integrated hardware stack: Spectrum-X combines the Spectrum-4 Ethernet switch (offering 51.2 terabits per second of throughput) with BlueField-3 SuperNICs and DPUs, which offload and secure network services so GPUs can focus purely on compute.
Higher networking performance: By delivering roughly 1.6 times better effective performance, Spectrum-X reduces GPU idle time -- crucial for cost efficiency when training large models.
Advanced telemetry and routing: End-to-end visibility, adaptive routing and congestion control dynamically adjust packet flows to prevent the dreaded "elephant flow" bottlenecks common in AI workloads.
High-speed RDMA and multitenancy: BlueField-3 enables 400 gigabits per second RDMA over converged Ethernet -- enabling faster GPU-to-GPU communication and secure, multitenant environments.
Giga-scale scaling (Spectrum-XGS): Spectrum-XGS extends networking beyond a single data center, enabling multi-site AI super-factories with consistent performance -- a true foundation for distributed AI systems.
Software ecosystem integration: Spectrum-X is part of Nvidia's full-stack strategy -- integrated with DOCA, Cumulus Linux, Pure SONiC, NetQ, AI Enterprise and AI Workbench for development, deployment and operational visibility.
The result is an end-to-end networking solution for AI that delivers both high performance and operational simplicity. If deployed effectively, it can reduce total cost of ownership while increasing return on investment for expensive GPU assets.
As I've argued in theCUBE's AI factory coverage, networking has quietly become the de facto operating system of the AI era. Today's AI applications are more data-hungry than ever, and GPUs are vastly more capable -- but their potential is realized only when data moves efficiently. Networking is now both the enabler and bottleneck. The industry consensus is forming around a new reality: AI's performance frontier has shifted from compute to connectivity. Networking plays the same integrative role that operating systems once did -- orchestrating, scheduling and synchronizing distributed resources across clouds, edges and data centers. In this new paradigm, the network is the control plane of AI factories -- the connective tissue that binds compute, storage and data into one intelligent system.
This also explains why inference, not just training, is driving architectural evolution. As Shekar Ayyar and others have pointed out, inference happens across distributed environments that require AI-aware networks capable of understanding latency, congestion and workload locality. This shift demands adaptive, programmable and observable network fabrics that extend from core data centers to the edge. Networking now underpins observability, security and orchestration for hybrid AI deployments, ensuring that models and agents can operate predictably across environments. In short, networking is becoming the nervous system of the AI factory -- orchestrating computation, enabling data mobility and transforming the data center into a living, distributed organism of intelligence.
What Nvidia, Meta and Oracle are collectively signaling is that we've entered the industrial phase of AI infrastructure. Nvidia Chief Executive Jensen Huang captured it best when he said: "Trillion-parameter models are transforming data centers into giga-scale AI factories.... Spectrum-X is the nervous system of the AI factory." Meta's integration shows how open networking meets AI acceleration, while Oracle's adoption underscores the rise of mega AI factories as the new hyperscale. Both point to a world where network design becomes the strategic lever for the future of AI -- the road to superintelligence determined by performance, cost and energy efficiency.
The adoption of Spectrum-X by Meta and Oracle validates a major architectural inflection point: AI is no longer constrained by compute and energy -- it's now constrained by the network. With Spectrum-X, Nvidia is redefining Ethernet as AI Ethernet -- a fully instrumented, GPU-aware, congestion-free data fabric capable of scaling across geographies. It's the missing piece that turns clusters into connected AI supercomputers and AI factories at mega scale. This move places Nvidia at the center of a new large-scale computing stack -- from silicon to systems to the global AI network. For hyperscalers like Meta and Oracle, the message is clear: AI performance now starts with the network.
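As a rough, self-contained check on the "1.6x less GPU idle time" logic in the list and commentary above, the sketch below sizes a ring all-reduce over 400 Gb/s links and then applies a 1.6x effective-bandwidth gain. The gradient volume, GPU count, per-step compute time and the no-overlap assumption are all placeholder assumptions of mine, not figures from NVIDIA, Meta or Oracle.

```python
# Back-of-envelope sketch: how a ~1.6x gain in effective network performance
# shrinks exposed communication time and GPU idle time per training step.
# All model sizes, bandwidths and overlap assumptions are illustrative.

def ring_allreduce_seconds(grad_bytes: float, gpus: int, gbps_per_gpu: float) -> float:
    """Bandwidth-limited ring all-reduce time: each GPU sends and receives
    roughly 2*(p-1)/p of the gradient volume at its effective link rate."""
    bytes_on_wire = 2 * (gpus - 1) / gpus * grad_bytes
    return bytes_on_wire / (gbps_per_gpu * 1e9 / 8)  # Gb/s -> bytes/s

# Assumed example: 10 GB of gradients per replica, 1,024 GPUs, 400 Gb/s links
# of which a congested generic fabric delivers ~60% effective bandwidth.
grads = 10e9
gpus = 1024
baseline = ring_allreduce_seconds(grads, gpus, 400 * 0.60)  # congested fabric
improved = baseline / 1.6                                   # ~1.6x effective gain
compute = 1.0  # assume 1 s of compute per step and no comm/compute overlap

for label, comm in (("baseline", baseline), ("1.6x fabric", improved)):
    step = compute + comm
    print(f"{label:12s} comm {comm:.2f}s  step {step:.2f}s  "
          f"GPU idle {comm / step:.0%}")
```

Under these assumptions the exposed communication drops from roughly 0.67 s to 0.42 s per step, cutting the GPU-idle share from about 40% to under 30% -- the kind of utilization gain the article ties to lower total cost of ownership.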
[3]
Meta, Oracle Tap NVIDIA's Spectrum-X For AI Data Centers - NVIDIA (NASDAQ:NVDA)
NVIDIA Corporation (NASDAQ:NVDA) announced on Monday that Meta Platforms, Inc. (NASDAQ:META) and Oracle Corporation (NYSE:ORCL) will enhance their AI data center networks using NVIDIA's Spectrum-X Ethernet switches. The companies are standardizing on NVIDIA's open, accelerated networking architecture to improve scalability, increase AI training efficiency, and speed up insights across large-scale workloads.
NVIDIA said that Oracle will build giga-scale AI factories powered by its Vera Rubin architecture and interconnected using Spectrum-X Ethernet technology. NVIDIA said in the press release that Meta will integrate Spectrum Ethernet switches into its Facebook Open Switching System (FBOSS) to enhance management of large-scale networks. The integration will accelerate AI deployments, boost training efficiency, and reduce time to insights.
NVIDIA's Spectrum-X Ethernet platform, which comprises Spectrum-X Ethernet switches and SuperNICs, is the first Ethernet solution purpose-built for artificial intelligence. Designed for the trillion-parameter model era, Spectrum-X enables hyperscalers to interconnect millions of GPUs with greater efficiency and scale.
Jensen Huang, founder and CEO of NVIDIA, stated, "Trillion-parameter models are transforming data centers into giga-scale AI factories, and industry leaders like Meta and Oracle are standardizing on Spectrum-X Ethernet to drive this industrial revolution." "Oracle Cloud Infrastructure is designed from the ground up for AI workloads, and our partnership with NVIDIA extends that AI leadership," said Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure. "Meta's next-generation AI infrastructure requires open and efficient networking at a scale the industry has never seen before," said Gaya Nagarajan, vice president of networking engineering at Meta.
The Spectrum-X Ethernet platform has achieved record-setting efficiency, enabling the world's largest AI supercomputer to reach 95% data throughput using its advanced congestion-control technology, the company mentioned. The company stated that conventional off-the-shelf Ethernet networks typically experience thousands of flow collisions at scale, limiting throughput to about 60%. Spectrum-XGS Ethernet technology, part of the Spectrum-X networking platform, delivers a significant leap in AI networking efficiency, marking a breakthrough in both performance and economics. The system enables scale-across capabilities, connecting data centers across cities, countries, and continents to form giga-scale AI super-factories.
Built on NVIDIA's full-stack architecture, spanning GPUs, CPUs, NVLink interconnects, and software, Spectrum-X integrates advanced congestion control, adaptive routing, and AI-driven telemetry to ensure efficient, predictable performance for large-scale AI training and inference workloads.
NVDA Price Action: Nvidia shares were up 2.73% at $188.15 at the time of publication on Monday. The stock is approaching its 52-week high of $195.62, according to Benzinga Pro data.
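The congestion-control and telemetry claims repeated across these articles come down to watching queue occupancy and reacting before buffers overflow. Below is a minimal, hypothetical sketch of that idea; the get_queue_depths() stub and the watermark value are invented placeholders for illustration only, not the real Spectrum-X, DOCA or NetQ interfaces.

```python
# Minimal sketch of telemetry-driven congestion detection (illustrative only).
# A real deployment would stream switch counters via the vendor's telemetry
# stack rather than this hypothetical stub.
from typing import Dict, List

WATERMARK_BYTES = 4 * 1024 * 1024  # flag a queue once ~4 MB is buffered (assumed)

def get_queue_depths() -> Dict[str, int]:
    """Stand-in for a real telemetry poll: egress queue -> buffered bytes."""
    return {"eth1/1:q3": 6_500_000, "eth1/2:q3": 120_000, "eth1/3:q3": 80_000}

def congested_queues(depths: Dict[str, int], watermark: int = WATERMARK_BYTES) -> List[str]:
    """Return queues whose occupancy exceeds the watermark -- candidates for
    adaptive rerouting or sender pacing (ECN/PFC-style back-pressure)."""
    return [queue for queue, depth in depths.items() if depth > watermark]

if __name__ == "__main__":
    for queue in congested_queues(get_queue_depths()):
        print(f"congestion signal on {queue}: steer new flows away / pace senders")
```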
NVIDIA announces that Meta and Oracle will adopt its Spectrum-X Ethernet networking platform to enhance their AI data center networks. This move marks a significant shift in AI infrastructure, enabling unprecedented scale and efficiency in AI computations.
NVIDIA has announced a major breakthrough in AI infrastructure with its Spectrum-X Ethernet networking platform. Industry giants Meta and Oracle are set to adopt this technology, marking a significant shift in the landscape of AI computing [1].
The Spectrum-X Ethernet platform is designed specifically for the era of trillion-parameter AI models. It enables hyperscalers to interconnect millions of GPUs with unprecedented efficiency and scale. This purpose-built system for AI workloads connects GPUs into a unified fabric, effectively transforming data centers into giga-scale AI factories [2].
Spectrum-X offers several advantages over traditional Ethernet solutions, including roughly 1.6 times higher effective networking performance, advanced congestion control and adaptive routing, end-to-end telemetry, high-speed RDMA over converged Ethernet, and scale-across capability through Spectrum-XGS [1][2].
Meta will integrate Spectrum Ethernet switches into its Facebook Open Switching System (FBOSS), enhancing its ability to manage large-scale networks efficiently. This integration aligns with Meta's open hardware and software philosophy, extending it into the AI infrastructure layer [1][2].
Oracle, on the other hand, plans to build giga-scale AI factories accelerated by NVIDIA's Vera Rubin architecture and interconnected by Spectrum-X Ethernet. This move positions Oracle to compete directly with the largest hyperscalers in AI compute capabilities [1][3].
The adoption of Spectrum-X by major players like Meta and Oracle signals a new era in AI infrastructure. It represents a shift from traditional Ethernet solutions to purpose-built systems designed to handle the unique communication patterns of large-scale AI workloads. This development is expected to significantly impact the economics and performance of AI-scale networking, potentially reshaping the landscape of AI research and applications [2].
As the AI industry continues to evolve, NVIDIA's Spectrum-X Ethernet platform stands poised to play a crucial role in enabling the next generation of AI innovations, from more efficient training of large language models to the creation of increasingly sophisticated AI applications.
Summarized by Navi