Cisco unveils Silicon One G300 chip to challenge Broadcom and Nvidia in AI networking race

Reviewed by Nidhi Govil


Cisco launched its Silicon One G300, a 102.4 Tbps switching chip designed to compete with Broadcom and Nvidia in the booming AI infrastructure market. The chip features advanced congestion management and can support deployments of up to 128,000 GPUs using just 750 switches. Cisco claims its implementation achieves 33% better link utilization and can cut training times by up to 28%.

Cisco Takes Aim at AI Networking with Silicon One G300

Cisco has entered the high-stakes competition for AI networking dominance with the introduction of its Silicon One G300, a 102.4 Tbps networking chip unveiled at Cisco Live EMEA in Amsterdam [4]. The chip directly challenges offerings from Broadcom and Nvidia for a share of the $600 billion AI infrastructure spending boom [2]. Expected to go on sale in the second half of the year, the G300 will power new Cisco N9000 and Cisco 8000 data center systems designed to support massive clusters running training, inference, and real-time agentic workloads [4].

Source: SiliconANGLE

The Silicon One G300 packs 512 ultra-fast 200 Gbps serializers/deserializers (SerDes) that can be aggregated to support port speeds of up to 1.6 Tbps [1]. This massive radix means Cisco can now support deployments of up to 128,000 GPUs using just 750 switches, where 2,500 were needed previously [1]. The chip will be manufactured on Taiwan Semiconductor Manufacturing Co.'s 3-nanometer process [2].
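The headline figures are internally consistent. A quick sanity check using the article's numbers (the 64-port derivation is our own arithmetic from standard SerDes-to-port aggregation, not a published Cisco spec):

```python
GBPS_PER_SERDES = 200
NUM_SERDES = 512

# 512 lanes x 200 Gbps = 102,400 Gbps, i.e. 102.4 Tbps of switching capacity
total_gbps = NUM_SERDES * GBPS_PER_SERDES
print(total_gbps)  # 102400

# a 1.6 Tbps port gangs together eight 200 Gbps SerDes lanes
lanes_per_port = 1600 // GBPS_PER_SERDES
max_1600g_ports = NUM_SERDES // lanes_per_port
print(lanes_per_port, max_1600g_ports)  # 8 64
```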

Advanced Congestion Management Drives GPU Utilization

What sets the G300 apart from competitors is its Intelligent Collective Networking engine, which features a fully shared packet buffer and path-based load balancing to mitigate network congestion [1]. According to Cisco Fellow and SVP Rakesh Chopra, the lack of packet buffer segmentation allows packets to be absorbed irrespective of the port, enabling better handling of bursts in AI workloads [1]. The load-balancing agent monitors flows and congestion points, communicating with all other G300s in the network to build a global collective map of what is happening across the entire AI cluster [1].
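Cisco has not published the algorithm, but the general idea of path-based load balancing against a shared congestion map can be sketched as follows. This is a purely illustrative toy model, not Cisco's implementation; all names and scores are hypothetical:

```python
def pick_path(paths, congestion_map):
    """Steer a flow onto the least-congested path.

    congestion_map: path name -> congestion score, imagined here as
    aggregated from reports exchanged between switches (toy model).
    Unknown paths default to 0.0 (assumed uncongested)."""
    return min(paths, key=lambda p: congestion_map.get(p, 0.0))

# hypothetical spine switches with congestion scores in [0, 1]
congestion = {"spine-1": 0.7, "spine-2": 0.2, "spine-3": 0.4}
print(pick_path(["spine-1", "spine-2", "spine-3"], congestion))  # spine-2
```

A real implementation would run in silicon at line rate and rebalance within microseconds, per the article; the sketch only conveys the decision rule.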

Cisco claims its implementation achieves 33% better link utilization and can cut training times by up to 28% compared with the packet-spraying approaches used by Broadcom and Nvidia [1]. Martin Lund, executive vice president of Cisco's common hardware group, told Reuters that the chip is expected to help some AI computing jobs finish 28% faster, in part by automatically re-routing data around network problems within microseconds [2]. Because data movement between GPUs now directly affects how fast AI jobs finish and how much compute capacity can actually be used, these efficiency gains matter significantly [4].

P4 Programmability Extends Hardware Lifespan

The Silicon One G300 incorporates P4 programmability, allowing Cisco to reprogram the device to add new functionality and deploy the same equipment in multiple roles [1]. Chopra emphasized that this is particularly valuable for extending the useful life of switches, as new features can often be added via software updates rather than requiring new hardware [1]. This approach mirrors AMD's strategy with its Pensando NICs, which allowed the chip designer to start shipping Ultra Ethernet-compatible NICs before the spec had been finalized [1].

Sameh Boujelbene, vice president at Dell'Oro Group, noted that the G300 represents more than a typical incremental upgrade: Cisco is pushing intelligence directly into switching silicon to address real pain points for large AI clusters, such as unpredictable east-west traffic and congestion hotspots [5].

Unified Management for Front-End and Back-End Networks

Cisco has emphasized operational resiliency across both front-end and back-end networks as critical for maximizing ROI on AI infrastructure [3]. Surbhi Paul, Director of Data Center Networking at Cisco, explained that the conversation has shifted from raw bandwidth to determinism, noting that jitter can stall GPUs and burn millions of dollars of capital expense even when they sit idle for only milliseconds [3]. The N9000 Series includes crucial resiliency features, including Priority-based Flow Control and Explicit Congestion Notification, which help ensure AI training and inference operations can complete without dropping jobs [3].
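Priority-based Flow Control and Explicit Congestion Notification are standard mechanisms (IEEE 802.1Qbb and RFC 3168, respectively) rather than Cisco inventions. The ECN side works roughly like this: once a queue crosses a threshold, the switch marks packets as congestion-experienced instead of dropping them, so senders can slow down before loss occurs. A toy sketch, with illustrative thresholds and field names:

```python
ECN_THRESHOLD = 100  # queue depth (packets) above which we mark instead of drop

def enqueue(queue, packet, capacity=200):
    """Toy ECN marking: set Congestion Experienced (CE) past a threshold,
    and drop only when the queue is truly full (tail drop)."""
    if len(queue) >= capacity:
        return "dropped"
    if len(queue) >= ECN_THRESHOLD:
        packet["ecn"] = "CE"  # mark Congestion Experienced (RFC 3168)
    queue.append(packet)
    return "enqueued"

q = [{"ecn": "ECT"} for _ in range(150)]  # an already-congested queue
pkt = {"ecn": "ECT"}
print(enqueue(q, pkt), pkt["ecn"])  # enqueued CE
```

Real switches use more nuanced marking (e.g. probabilistic WRED-style thresholds), but the principle of signaling congestion without discarding in-flight AI traffic is the same.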

The Nexus Dashboard management platform provides centralized management with integrated observability and real-time application and job monitoring to ensure operational resiliency across data center deployments [3]. Paul emphasized that to maximize ROI, organizations cannot treat front-end and back-end networks as islands; a unified management platform is required to keep GPU idle time to zero [3].

Source: Cisco

Energy Efficiency Through Liquid Cooling and Advanced Optics

Alongside the silicon announcement, Cisco introduced new fixed and modular switching systems in its Nexus 9000 and Cisco 8000 product lines powered by the G300, available in both air-cooled and fully liquid-cooled designs [5]. The company claimed the liquid-cooled configuration can improve energy efficiency by nearly 70% compared with prior generations, delivering the same bandwidth that previously required six systems [5].

To support these systems, Cisco is rolling out new 1.6 Tbps pluggable optics across 64 OSFP cages, each of which can be broken out into up to eight 200 Gbps connections [1]. The company also introduced 800 Gbps linear pluggable optics that drop the onboard digital signal processor and retimer to cut optics power consumption by 50%, reducing overall switch power by approximately 30% [1]. These advances become significant when considering how many pluggable optics may be deployed across a data center [1].
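The optics numbers line up with the switch capacity. A quick check using the article's figures (the arithmetic is ours, not a Cisco spec sheet):

```python
# 64 OSFP cages x 1,600 Gbps each matches the G300's 102.4 Tbps capacity
cages, gbps_per_cage = 64, 1600
print(cages * gbps_per_cage)  # 102400

# each 1.6 Tbps pluggable can break out into up to eight 200 Gbps links
print(gbps_per_cage // 200)  # 8
```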

Source: Interesting Engineering

Market Shift Toward Enterprise and Inference Workloads

Kevin Wollenweber, senior vice president and general manager of Cisco's data center and internet infrastructure business, noted that the AI infrastructure market is shifting beyond hyperscale cloud providers [5]. The last two to three years focused mainly on building massive training clusters with hyperscalers, but there is now a shift toward agentic AI workloads and broader adoption among enterprises and service providers [5]. This expansion means enterprises, neocloud providers, and sovereign cloud operators are increasingly investing in their own AI clusters and require more efficient networking to support GPU-heavy workloads [5].

Because the G300 will compete directly with Broadcom in the merchant silicon arena in addition to powering Cisco's own networking appliances, the company is positioning itself to capture share across multiple customer segments [1]. With networking becoming a critical competitive battleground in AI, as evidenced by Nvidia unveiling a networking chip as one of six key chips in its newest systems, the stakes for at-scale AI data centers continue to rise [2].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited