Cisco unveils Silicon One G300 chip to challenge Broadcom and Nvidia in AI networking race

Reviewed by Nidhi Govil


Cisco introduced its Silicon One G300, a 102.4 Tbps switch chip designed to power massive AI clusters with up to 128,000 GPUs. The chip features advanced congestion management and P4 programmability, promising 28% faster job completion times. Set to launch in the second half of 2026, it targets the booming $600 billion AI infrastructure market.

Cisco Enters High-Stakes AI Networking Battle With New Silicon

Cisco has launched its Silicon One G300, a 102.4 Tbps switch chip that positions the networking giant squarely against Broadcom's Tomahawk 6 and Nvidia's Spectrum-X offerings in the rapidly expanding AI infrastructure market.[1] Unveiled at Cisco Live EMEA in Amsterdam, the chip addresses a critical bottleneck as AI training and inference clusters scale to unprecedented sizes.[2] Expected to go on sale in the second half of 2026, the G300 will be manufactured using TSMC's 3-nanometer process and targets a piece of the $600 billion AI infrastructure spending boom.[4]

Source: SiliconANGLE

Massive Scale Meets Intelligent Traffic Management

The Silicon One G300 packs 512 ultra-fast 200 Gbps SerDes that can be aggregated to support port speeds of up to 1.6 Tbps.[1] This massive radix enables Cisco to support deployments of up to 128,000 GPUs using just 750 switches, where 2,500 were needed previously.[1] What distinguishes the G300 from competing solutions is its Intelligent Collective Networking engine, which features a fully shared packet buffer and path-based load balancing designed to mitigate network congestion.[3] According to Cisco fellow and SVP Rakesh Chopra, this architecture allows packets to be absorbed irrespective of the port, enabling better handling of the bursty traffic typical of AI workloads.[1]
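The headline figures above are internally consistent, which a quick back-of-the-envelope check makes clear. This is an illustrative sketch only; the assumption that a 1.6 Tbps port is built from eight bonded 200 Gbps SerDes lanes is inferred from the stated numbers, not taken from a Cisco datasheet.

```python
# Sanity-check of the G300 figures cited above.
# Assumption (not from Cisco's spec): one 1.6 Tbps port = 8 x 200 Gbps lanes.
SERDES_COUNT = 512      # total SerDes on the chip
SERDES_GBPS = 200       # per-lane speed
LANES_PER_PORT = 8      # assumed lanes bonded per 1.6 Tbps port

total_tbps = SERDES_COUNT * SERDES_GBPS / 1000
ports = SERDES_COUNT // LANES_PER_PORT
port_tbps = LANES_PER_PORT * SERDES_GBPS / 1000

print(total_tbps)  # 102.4 -> the chip's aggregate switching capacity
print(ports)       # 64   -> matches the 64 OSFP cages per system
print(port_tbps)   # 1.6  -> the advertised top port speed
```

The 64-port result lines up with the 64 OSFP cages Cisco quotes for the Nexus 9000 and Cisco 8000 systems built on this chip.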

Performance Claims Target GPU Utilization Efficiency

Cisco claims the G300 achieves 33% better link utilization and can cut training times by up to 28% compared to the packet-spraying-based approaches used by competitors.[1] The load-balancing agent monitors flows through the G300, tracks congestion points, and communicates with other G300s in the network to build a global collective map of activity across entire large AI clusters.[1] Martin Lund, Cisco's executive vice president of common hardware, explained that the chip includes "shock absorber" features designed to help networks of AI chips avoid bogging down when hit with large spikes of data traffic, automatically re-routing data around problems within microseconds.[2]
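The contrast Cisco is drawing, between spraying packets blindly across paths and steering whole flows using congestion state, can be sketched in miniature. The path names and the load metric below are invented for illustration and say nothing about Cisco's actual implementation.

```python
import random

# Illustrative only: congestion-aware path selection vs. blind packet
# spraying. Path names and load values are invented for this sketch.
paths = {"spine-a": 0.2, "spine-b": 0.9, "spine-c": 0.4}  # estimated load, 0..1

def spray(paths):
    """Packet spraying: pick any path, ignoring congestion state."""
    return random.choice(list(paths))

def congestion_aware(paths):
    """Path-based load balancing: steer the flow to the least-loaded path."""
    return min(paths, key=paths.get)

print(congestion_aware(paths))  # spine-a -- avoids the congested spine-b
```

In the real chip, per Cisco's description, the load estimates would come from a congestion map shared among G300s rather than a local table, but the selection principle is the same.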

P4 Programmability Extends Hardware Lifespan

The 102.4 Tbps switch chip leverages P4 programmability, allowing Cisco to reprogram devices to add new functionality and deploy the same equipment in multiple roles.[1] This capability proves particularly valuable for extending the useful life of switches, as new features can often be added via software updates rather than requiring new hardware.[1] Cisco isn't alone in this approach: AMD's Pensando NICs also use P4 programmability, which enabled the company to ship Ultra Ethernet-compatible NICs before the spec was finalized.[1]
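P4 itself is beyond the scope of this article, but the idea it enables, redefining a device's packet-handling behavior as data rather than hardware, can be sketched in miniature. The field names and actions below are invented for illustration; a real P4 pipeline compiles match-action tables to the chip, it is not Python.

```python
# Miniature illustration of P4-style reprogrammability: the pipeline is a
# data-driven match-action table, so a "new feature" is a table swap rather
# than new silicon. Fields and actions here are invented for this sketch.
def build_pipeline(table):
    def process(packet):
        for match, action in table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # default action when nothing matches
    return process

# Original program: plain forwarding.
v1 = build_pipeline([({"dst": "10.0.0.1"}, "forward:port1")])

# "Software update": same device, new role, richer table.
v2 = build_pipeline([
    ({"dst": "10.0.0.1", "ecn": 3}, "mark-congested"),
    ({"dst": "10.0.0.1"}, "forward:port1"),
])

print(v1({"dst": "10.0.0.1", "ecn": 3}))  # forward:port1
print(v2({"dst": "10.0.0.1", "ecn": 3}))  # mark-congested
```

That table swap is, conceptually, how the same hardware can be redeployed in multiple roles or taught a protocol that didn't exist when it shipped.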

New Systems Target Broader AI Infrastructure Market

The G300 will power new Nexus 9000 and Cisco 8000 fixed and modular switching platforms, both equipped with 64 1.6 Tbps OSFP cages.[1] These systems will be available in both air-cooled and fully liquid-cooled configurations, with the liquid cooling design improving energy efficiency by nearly 70% compared to prior generations.[3] Kevin Wollenweber, senior vice president and general manager of Cisco's data center and internet infrastructure business, noted that the move toward liquid cooling aligns with future GPU server platforms, as next-generation GPUs will all be liquid cooled.[3]

Source: The Register

Energy-Efficient Optics Address Power Consumption

Alongside the chip, Cisco introduced new 1.6 Tbps pluggable optics that can be broken out into up to eight 200 Gbps connections.[1] The company is also rolling out 800 Gbps Linear Pluggable Optics (LPO), which eliminate the onboard digital signal processor and retimer to reduce power consumption by 50%.[1] When combined with Nexus 9000 or Cisco 8000 systems, customers can expect a roughly 30% reduction in switch power.[1] This matters significantly given the massive number of pluggable optics deployed across data centers supporting AI workloads.[3]
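The per-module savings compound quickly at this scale. A rough tally follows; the 20 W baseline for a DSP-based 800 Gbps module is an illustrative assumption, and only the 50% module-level saving comes from Cisco's figures.

```python
# Rough tally of the LPO power claim above. The 20 W baseline per module
# is an illustrative assumption; only the 50% saving is Cisco's figure.
BASELINE_MODULE_W = 20.0     # assumed power of a DSP-based 800G module
LPO_SAVING = 0.50            # LPO drops the DSP/retimer: ~50% less power
MODULES_PER_SWITCH = 64      # one module per OSFP cage, fully populated

lpo_module_w = BASELINE_MODULE_W * (1 - LPO_SAVING)
saved_per_switch_w = (BASELINE_MODULE_W - lpo_module_w) * MODULES_PER_SWITCH

print(lpo_module_w)        # 10.0 W per module
print(saved_per_switch_w)  # 640.0 W saved per fully populated switch
```

Multiplied across the hundreds of switches in a 128,000-GPU cluster, savings of this order are why optics power draws so much attention in AI data center design.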

Enterprise Focus Signals Market Expansion

Cisco is positioning these innovations as a response to AI infrastructure demands expanding beyond hyperscale cloud providers to enterprises, neocloud providers, and sovereign cloud operators.[3] Nick Kucharewski, senior vice president and general manager of Cisco Silicon One, told CRN that while the company has been building networks with hyperscalers, the next wave of AI deployment will happen within the enterprise customer base, particularly for inference and agentic workloads.[5] Cisco also updated its Nexus One management platform to provide a unified operating model for AI networks spanning on-premises and cloud deployments.[3] The G300 will compete as merchant silicon against Broadcom while also powering Cisco's own networking appliances.[1]

Source: CRN
