Nvidia's networking business overtakes Cisco as AI infrastructure dominance expands beyond chips

Nvidia's networking division generated $31 billion in annual revenue, surpassing Cisco to become the world's largest networking company. The business grew 267% year-over-year, driven by demand for AI data center infrastructure. This expansion signals Nvidia's push beyond chips into end-to-end AI solutions, challenging established players across multiple categories.

Nvidia Becomes World's Largest Networking Company

Nvidia has quietly built a networking business that now rivals the scale of its flagship chip operations, achieving a milestone that underscores the company's expanding grip on AI infrastructure. The networking division reported $11 billion in revenue last quarter, a 267% year-over-year increase, and brought in more than $31 billion for the full year [1]. That figure surpasses Cisco's $28 billion in networking revenue for its 2025 fiscal year, making Nvidia the world's largest networking company [3]. Kevin Cook, a senior equity strategist at Zacks Investment Research, noted that Nvidia's networking business does in one quarter what Cisco's accomplishes in a year [1].

Source: CRN

From Mellanox Acquisition to Market Leadership

The foundation of Nvidia's networking business traces back to its 2020 acquisition of Mellanox, an Israeli networking company founded in 1999, for $7 billion [1]. At the time, the strategic rationale wasn't immediately clear to everyone, including Kevin Deierling, now senior vice president of networking at Nvidia, who joined through the acquisition. Jensen Huang's vision has since proven prescient. "When Jensen bought Mellanox in 2020, he saw that was the missing piece to make GPUs a complete package," Cook explained [1]. Networking revenue is now "up more than 10 times" from when Nvidia acquired Mellanox [3].

Building Complete AI Factories Through Vertical Integration

Nvidia's data center networking portfolio now encompasses the complete technology stack needed to build AI factories: data centers designed specifically for training AI models. The division includes NVLink, which powers communication between GPUs within data center racks; InfiniBand switches for in-network computing; the Spectrum-X Ethernet platform for AI networking; and co-packaged optics switches [1]. Fourth-quarter networking revenue was driven by a "continued ramp" of the NVLink compute fabric for the Grace Blackwell GB200 and GB300 rack-scale platforms, along with growth of the Spectrum-X Ethernet and Quantum InfiniBand networking platforms [3].

End-to-End AI Solutions Strategy

At the GTC conference in San Jose, Nvidia presented a vision of complete AI data center ownership through what it calls "extreme co-design" [3]. The company showcased a lineup of 40 server racks representing different components of its vertically integrated solutions [2]. During his keynote address on March 16, Huang announced the Nvidia Rubin platform, which includes six new chips to power an "AI supercomputer," along with a new Inference Context Memory Storage platform and more efficient Spectrum-X Ethernet Photonics switches [1]. The company also introduced the LPX rack for ultra-fast inference, incorporating technology from its $20 billion licensing deal with AI startup Groq [2].

Source: TechCrunch

Expanding Competition Across Multiple Fronts

Nvidia's dominance of the AI infrastructure market now extends well beyond AI chips into product categories where it didn't participate a decade ago, putting the company in direct competition with partners such as Cisco, Intel, and AMD [3]. The company is seeing real interest in its Vera CPU as a stand-alone offering, with deals from CoreWeave and Meta to supply the upcoming CPU for their data centers [3]. Ian Buck, Nvidia's head of hyperscale and high-performance computing, emphasized the economic advantages: "From the five-layer cake of energy, chips, the infrastructure itself, the models, and the applications, this multi-layer infrastructure is driving the revenue and job creation" [2]. Huang expects hyperscalers to spend nearly $700 billion this year on AI infrastructure, a market where Nvidia is positioning itself to capture value across every layer [3]. The company's approach of selling only full-stack solutions through partners, rather than individual components, differentiates it from traditional networking vendors and reflects its data center-scale philosophy of AI compute [1].
