6 Sources
[1]
Cisco unveils 102.4T Silicon One G300 switch chip
Switchzilla leans on P4 programmability and revamped congestion controls to differentiate its latest Silicon One ASIC.

As AI training and inference clusters grow larger, they require bigger, higher-bandwidth networks to feed them. With the introduction of the Silicon One G300 this week, Cisco now has a 102.4 Tbps monster to challenge Broadcom's Tomahawk 6 and Nvidia's Spectrum-X Ethernet Photonics.

Much like those chips, the G300 packs 512 ultra-fast 200 Gbps serializers/deserializers (SerDes). The massive radix -- that means loads of ports -- means Cisco can now support deployments of up to 128,000 GPUs using just 750 switches, where 2,500 were needed previously. Alternatively, those SerDes can be aggregated to support port speeds of up to 1.6 Tbps.

None of this is unique to Cisco, however. That's just how bandwidth scales. Those same figures apply to Broadcom's and Nvidia's 102.4 Tbps silicon just as they do to anyone else's.

According to Cisco fellow and SVP Rakesh Chopra, what really sets the G300 apart from the competition is its collective networking engine, which features a fully shared packet buffer and a path-based load balancer to mitigate congestion, improve link utilization and latency, and reduce time to completion.

"There's no sort of segmentation of packet buffers, allowing packets to come in [and] be absorbed irrespective of the port. That means that you can ride through bursts better in AI workflows or front-end workloads," he said.

The load-balancing agent "monitors the flows coming through the G300. It monitors congestion points and it communicates with all the other G300s in the network and builds sort of a global collective map of what is happening across the entire AI cluster," he added.

This kind of congestion management isn't new by any means. Both Broadcom and Nvidia have implemented similar technologies in their own switches and NICs for this reason. However, Cisco claims its implementation achieves 33 percent better link utilization, which can cut training times by up to 28 percent compared to packet-spraying-based approaches. And while Chopra doesn't say which vendor Cisco is comparing the G300 to, we'll note that both Broadcom's and Nvidia's implementations rely on packet spraying.

As usual, we recommend taking any vendor-supplied performance claim with a grain of salt. While there aren't that many 102.4 Tbps switches out there, there are countless ways to build networks using them, and some topologies may benefit from one vendor's tech more than another.

Cisco's collective networking engine is only one piece of the puzzle. The other, claims Chopra, is the chip's P4 programmability.

"It means that we can take our device, we can reprogram it to add new functionality, new capabilities, and deploy the same equipment in multiple different roles," Chopra said, adding that this is particularly valuable for extending the useful life of the switches. As new features are introduced, they can often be added via a software update rather than requiring new hardware.

If any of this sounds familiar, that's because Cisco isn't the only AI networking vendor leaning into P4. AMD's Pensando NICs, like the Polara 400, also make use of the programming language. This actually came in handy for AMD, as it allowed the chip designer to start shipping Ultra Ethernet-compatible NICs before the spec had been finalized, since any changes to the spec could be implemented later via a software update.
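For readers unfamiliar with P4, the toy Python sketch below mimics the core idea Chopra is describing: the switch's match-action pipeline is installable data rather than fixed-function logic, so a software update can change what the same silicon does. This is a conceptual illustration only, not Cisco's (or AMD's) actual pipeline, and every name in it is invented.

```python
# Conceptual Python mimic of P4's match-action idea: the pipeline is
# installable data, not fixed-function hardware. All names are invented.
from typing import Callable

Packet = dict                       # e.g. {"dst": "10.0.0.7", "queue_depth": 150}
Action = Callable[[Packet], Packet]

class Pipeline:
    def __init__(self) -> None:
        self.entries: list[tuple[Callable[[Packet], bool], Action]] = []

    def program(self, match: Callable[[Packet], bool], action: Action) -> None:
        """'Reprogram' the device by installing a new match-action entry."""
        self.entries.append((match, action))

    def process(self, pkt: Packet) -> Packet:
        for match, action in self.entries:
            if match(pkt):
                pkt = action(pkt)
        return pkt

switch = Pipeline()
# Day-one role: plain forwarding.
switch.program(lambda p: True, lambda p: {**p, "out_port": 1})
# Later software update: congestion marking added -- same silicon, new role.
switch.program(lambda p: p.get("queue_depth", 0) > 100,
               lambda p: {**p, "ecn": 1})

print(switch.process({"dst": "10.0.0.7", "queue_depth": 150}))
```

The point of the sketch is the `program` call: in a P4-programmable device, that step happens in the field, which is why features like Ultra Ethernet support could be retrofitted after shipping.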
As with past Silicon One switch chips, the G300 will compete directly with Broadcom in the merchant silicon arena in addition to powering Cisco's own networking appliances. In particular, Cisco says the part will be available as part of its N9000 and Cisco 8000 product lines, both of which will come equipped with 64 1.6 Tbps OSFP cages.

To support these appliances, Cisco is also rolling out new 1.6 Tbps pluggable optics, which can be broken out on the other end into up to eight 200 Gbps connections. Alongside the ultrafast links, Cisco is introducing 800 Gbps linear pluggable optics (LPO), which ditch the onboard digital signal processor (DSP) and retimer in order to cut power consumption. This is possible because signal processing is handled entirely in the G300.

While pluggables don't consume that much power -- usually 10-20 watts -- a 50 percent reduction is still significant when you consider just how many pluggables may be deployed across a datacenter. Cisco wouldn't tell us how much power its new LPO transceiver consumes, but did say that when combined with the N9000 or Cisco 8000 systems, customers can expect to see a roughly 30 percent reduction in switch power.

While Cisco is making gains on pluggables, it's not quite ready to follow Nvidia and Broadcom and take the plunge on copackaged optics (CPO) just yet. The technology involves moving the optics from the pluggable module into the switch package, reducing power consumption and, in theory, improving the reliability of the network.

"We're not making any public announcements about CPO at this time," Chopra said, noting that Cisco has previously demoed the tech on its G100 silicon. "We certainly have the technology, we're looking for business alignment to sort of productize it."

Alongside the new optics, Cisco also announced broader availability for its Silicon One P200 routing silicon. We took a closer look at the chip last fall, but in a nutshell, the 51.2 Tbps chip is designed to connect AI training clusters over distances of up to 1,000 kilometers. Alongside the previously announced Cisco 8223 router, Switchzilla says the part is now available in its N9000 line and as a 28.8 Tbps line card.

Cisco's G300 chips, systems, and optics are slated to begin shipping later this year, just in time for Nvidia and AMD's next-gen rack systems. ®
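To put those pluggable power figures in context, here is a rough fleet-level estimate. The per-module wattage is our assumption drawn from the 10-20 watt range above (Cisco hasn't disclosed the LPO module's actual draw), and the deployment size reuses the 128,000-GPU example:

```python
# Rough fleet-level impact of 50%-lower-power LPO modules (illustrative).
DSP_MODULE_W = 15        # assumed retimed-pluggable draw, mid of the 10-20 W range
LPO_SAVING = 0.50        # Cisco's claimed reduction from dropping the DSP/retimer

MODULES_PER_SWITCH = 64  # one optic per OSFP cage on a G300 system
SWITCHES = 750           # the 128,000-GPU deployment described earlier

saved_w = DSP_MODULE_W * LPO_SAVING * MODULES_PER_SWITCH * SWITCHES
print(f"{saved_w / 1000:.0f} kW saved across the fabric")   # ~360 kW
```

Even with these guessed inputs, the shape of the argument is clear: a per-module saving of a few watts compounds into hundreds of kilowatts at cluster scale.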
[2]
Cisco unveils new AI networking chip, taking on Broadcom and Nvidia
SAN FRANCISCO, Feb 10 (Reuters) - Cisco Systems (CSCO.O) on Tuesday launched a new chip and router designed to speed information through massive data centers that will compete against offerings from Broadcom (AVGO.O) and Nvidia (NVDA.O) for a piece of the $600 billion AI infrastructure spending boom.

Cisco said its Silicon One G300 switch chip, expected to go on sale in the second half of the year, will help the chips that train and deliver AI systems talk to each other over hundreds of thousands of links. The chip will be made with Taiwan Semiconductor Manufacturing Co's (2330.TW) 3-nanometer chipmaking technology and will have several new "shock absorber" features designed to keep networks of AI chips from bogging down when hit with large spikes of data traffic, Martin Lund, executive vice president of Cisco's common hardware group, told Reuters in an interview.

Cisco expects the chip to help some AI computing jobs get done 28% faster, in part by re-routing data around any problems in the network automatically, within microseconds. "This happens when you have tens of thousands, hundreds of thousands of connections - it happens quite regularly," Lund said. "We focus on the total end-to-end efficiency of the network."

Networking has become a key competitive field in AI. When Nvidia unveiled its newest systems last month, one of the six key chips in the system was a networking chip that competes with Cisco's offerings. Broadcom is going after the same market with its "Tomahawk" series of chips.

Reporting by Stephen Nellis in San Francisco; Editing by Jamie Freed
[3]
Cisco targets high-speed AI networking with 102.4Tbps chip, liquid-cooled switches
Cisco Systems Inc. today introduced a new generation of networking silicon, systems and optical modules aimed at supporting hyperscale-class artificial intelligence infrastructure as AI workloads shift from model training toward broader inference and agentic applications.

At Cisco Live EMEA in Amsterdam, the company unveiled its Silicon One G300, a 102.4-terabit-per-second Ethernet switching chip designed to support massive "scale-out" AI clusters. Cisco said the chip will power new Nexus 9000 and Cisco 8000 switching platforms that can be deployed in high-density, air- and liquid-cooled configurations to reduce power consumption and simplify operations for organizations building large AI networks.

Cisco framed the announcements as a response to the expanding AI ecosystem, in which the demands of AI infrastructure are no longer confined to hyperscale cloud providers. Instead, enterprises, neocloud providers and sovereign cloud operators are increasingly investing in their own AI clusters and require more efficient networking to support graphics processing unit-heavy workloads.

"The last two or three years, we've mainly been focused on building out massive training clusters with hyperscalers," said Kevin Wolterweber, senior vice president and general manager of Cisco's data center and internet infrastructure business. "What we're starting to see now is a shift toward agentic AI workloads, and more adoption within enterprise service providers and a broader customer base."

Cisco said the G300 chip is designed to improve the efficiency of AI clusters by maximizing GPU utilization and reducing delays caused by network congestion that stalls distributed training or inference jobs. The chip supports a feature Cisco calls Intelligent Collective Networking, which combines a shared packet buffer, path-based load balancing and telemetry capabilities intended to better handle the bursty traffic patterns typical of AI workloads. Cisco claimed the approach can deliver a 33% increase in network utilization and a 28% reduction in job completion time, based on simulated comparisons against non-optimized path selection.

"The G300 represents more than a typical incremental upgrade," said Sameh Boujelbene, vice president at Dell'Oro Group Inc. "Cisco is pushing intelligence directly into switching silicon, which addresses real pain points for large AI clusters like unpredictable east-west traffic, congestion hotspots and scaling limits in legacy fabrics."

Wolterweber said the differentiator is less about raw bandwidth, since other chipmakers are also moving toward 100-terabit-class devices; Cisco sees its primary value being the intelligence built into the silicon. "It's about the efficiency of being able to load balance workloads across the network efficiently, doing very granular telemetry to feeding back to the job schedulers," he said. The G300 is also programmable, meaning customers can add new networking functionality after deployment as standards evolve.

Alongside the silicon announcement, Cisco introduced new fixed and modular switching systems in its Nexus 9000 and Cisco 8000 product lines powered by the G300. Cisco said the systems deliver 102.4Tbps switching speeds and are aimed at customers building large AI fabrics. The new systems will be available in both air-cooled and fully liquid-cooled designs.
The company claimed the liquid-cooled configuration enables significantly higher bandwidth density and can improve energy efficiency by nearly 70% compared with prior generations, delivering the same bandwidth that previously required six systems. Wolterweber said the move toward liquid-cooled switching aligns with the direction of GPU server platforms. "The future generations of GPUs are all going to be liquid cooled," he said, "so we're starting to build liquid cooling into our switching devices as well."

The Nexus switches run the company's NX-OS software, a modular, Linux-based network operating system designed for Nexus-series data center switches and Multilayer Director Switch storage networking devices. The 8000 models can also support alternative network operating systems, including the open-source Sonic platform.

Cisco also introduced a new set of optical modules designed to support higher-density interconnects inside AI clusters. The 1.6-terabit octal small form factor pluggable (OSFP) optical modules are aimed at switch-to-network interface card links and switch-to-server connections. OSFP is a type of transceiver that connects a network device to fiber or copper cable. Cisco claimed new 800-gigabit linear pluggable optics can reduce optical module power consumption by 50% compared with retimed modules, reducing overall switch power by 30%.

Cisco also updated its Nexus One management platform, positioning it as a unified operating model for AI networks that span on-premises and cloud deployments. The platform is designed to simplify the deployment and management of AI fabrics while improving observability and security. New features include a unified fabric capability, application programming interface-driven automation and AI job observability that correlates network telemetry with AI workload behavior.

Cisco also added native integration with the Splunk data analytics platform it acquired in 2024, enabling network administrators to analyze network telemetry where the data resides rather than moving it into external systems. Wolterweber said the integration is designed to address the cost and complexity of ingesting large volumes of network data into Splunk. "Organizations can't ingest everything they have," he said.

The move brings together Nexus Dashboard and Nexus Hyperfabric into a single platform that can manage both on-premises and cloud-managed devices, while also supporting multiple operating systems including Sonic.

Dell'Oro's Boujelbene said Cisco is making inroads into an AI networking market that has been dominated by Nvidia Corp., Celestica Inc., Juniper Networks Inc. and Arista Networks Inc. "Few competitors can match Cisco's integrated stack of silicon, systems, optics, software and operations tooling, which matters for customers seeking end-to-end visibility and unified support," she said. "Cisco's deep penetration in enterprise data centers also gives it leverage, as IT teams often prefer incremental evolution of vendors they already trust rather than introducing entirely new suppliers."

Wolterweber said the growth of agentic AI workflows will drive major changes in how networks are secured and monitored, since enterprises will need to manage identities and permissions for large numbers of autonomous agents operating continuously. "We expect to see a lot more utilization of the network itself, because now you're going to have a multiplicative effect of agents doing things for you," he said.
That means security models will need to evolve beyond centralized firewalls, with enforcement distributed across the network. "We're embedding a lot of the technologies that we used to put into things like centralized firewalls around the network to allow us to do policy enforcement," Wolterweber said. Cisco said many of the products and features announced today are in development and will be made available as they are finalized.
[4]
Cisco unveils new AI networking chip, taking on Broadcom and Nvidia
Cisco Systems on Tuesday launched a new chip and router designed to speed information through massive data centers that will compete against offerings from Broadcom and Nvidia for a piece of the $600 billion AI infrastructure spending boom.

Cisco said its Silicon One G300 switch chip, expected to go on sale in the second half of the year, will help the chips that train and deliver AI systems talk to each other over hundreds of thousands of links. The chip will be made with Taiwan Semiconductor Manufacturing Co's 3-nanometer chipmaking technology and will have several new "shock absorber" features designed to keep networks of AI chips from bogging down when hit with large spikes of data traffic, Martin Lund, executive vice president of Cisco's common hardware group, told Reuters in an interview.

Cisco expects the chip to help some AI computing jobs get done 28% faster, in part by re-routing data around any problems in the network automatically, within microseconds. "This happens when you have tens of thousands, hundreds of thousands of connections - it happens quite regularly," Lund said. "We focus on the total end-to-end efficiency of the network."

Networking has become a key competitive field in AI. When Nvidia unveiled its newest systems last month, one of the six key chips in the system was a networking chip that competes with Cisco's offerings. Broadcom is going after the same market with its "Tomahawk" series of chips.
[5]
Cisco Reveals New Silicon One Offering, Enhanced Nexus One At Cisco Live EMEA
"We're still at the early stages of AI buildouts. ... Cisco's here to be able to help [enterprises] really get value out of these AI technologies,' Nick Kucharewski, Cisco Silicon One's senior vice president and general manager, tells CRN. Cisco Systems is making AI infrastructure buildouts accessible to more than just hyperscalers and the largest of the large service providers with the introduction of the new Silicon One G300, an advanced systems portfolio, new optics and an updated Nexus One operating model. The networking giant took to Cisco Live EMEA 2026 in Amsterdam Tuesday to unveil its latest networking innovations to help enterprises specifically get the most from their network infrastructure, extend AI networking to every site, and power and scale their AI use cases, Nick Kucharewski, senior vice president and general manager of Cisco Silicon One, told CRN ahead of the event. "We're building networks with hyperscalers [and] we're starting to build with some of the neoclouds now. But really where we see the next wave of AI deployment starting to happen is within the enterprise customer base," Kucharewski said. "We're still at the early stages of AI buildouts. We think that inference and agentic workloads are the next wave of technologies that are going to hit our customers. It's going to expand and broaden the customer base, and Cisco's here to be able to help that customer base really get value out of these AI technologies." [Related: Cisco 360 Program Launch 'Excites' Partners Ready To Tackle AI Opportunities] The new Cisco Silicon One G300 102.4-Tbps programmable switching silicon has been designed to power gigawatt-scale AI clusters for training, inference and real-time agentic workloads while maximizing GPU utilization with a 28 percent improvement in job completion time. Cisco Silicon One, first introduced in 2019, is playing critical roles in major networks around the world, according to Cisco. Powered by the G300 are the new next-generation Cisco N9000 and Cisco 8000 fixed and modular Ethernet systems, which offer 102.4-Tbps switching speeds in systems designed for hyperscalers, neoclouds, sovereign clouds, service providers and enterprises looking to scale out their AI infrastructure. The new offerings are available as a 100 percent liquid-cooled design that, along with new optics, will let end users improve energy efficiency by close to 70 percent, according to Cisco. The networking giant is also introducing new 1.6T OSFP (Octal Small Form-factor Pluggable Optics) that will offer customers high performance and reliability, 800G Linear Pluggable Optics (LPO), and an expanded portfolio of Silicon One P200-powered systems that build on the introduction of 51.2T systems for hyperscale deployments. The new P200-powered N9000 systems and expanded OS support on 8223 systems offer data center interconnect, universal spine, and core and peer routing capabilities to neoclouds, service providers and enterprises, Cisco said. Lastly on the networking front, Cisco has enhanced Nexus One, its data center networking offering that brings together Cisco's NX-OS VXLAN EVPN and Cisco ACI fabrics based on open standards and gives users a consistent experience across the two fabrics by way of the Cisco Nexus Dashboard. The updates to Nexus One will make it easier for enterprises to operate their AI networks, whether they are on-premises or in the cloud. It's this complexity that stands in the way of enterprises scaling their AI data centers, Cisco said. 
The new products will be available and shipping this calendar year, according to San Jose, Calif.-based Cisco.

Understanding the AI use cases that enterprises want to deploy, and being able to deliver them in a way that end customers can consume, will be critical for Cisco's channel partners helping their clients with AI networking, Kucharewski said.

"The adoption [by] those enterprise customers of AI is going to require that the channel ecosystem be strong," he said. "When I work with hyperscalers, some of them might just buy that silicon from me and then they go build everything else themselves. Most of the enterprise customers that I talk to are looking for, 'What's the value of these AI workloads that I'm deploying? How do I get more efficiency out of what I'm doing today?' So, I think for the channel partners, it's about understanding the use cases that are interesting and relevant to the customer base that they're trying to address and coming to them with solutions, not coming to them with the bucket full of parts."
[6]
Cisco introduces AI networking chip
Cisco Systems (CSCO) unveiled a new networking chip aimed at speeding information through large data centers that will potentially compete against products from Broadcom (AVGO) and Nvidia (NVDA). Cisco said the Silicon One G300 -- a 102.4 terabit-per-second, or Tbps, switching silicon -- targets large data centers with high-performance switching, positioning it as a direct competitor to Broadcom's and Nvidia's latest networking chips and potentially increasing Cisco's appeal to hyperscalers and enterprises focused on AI workloads. The G300 chip enables a 28% improvement in AI job completion time, supports fully liquid-cooled systems for nearly 70% better energy efficiency, and maximizes GPU utilization for demanding AI training and inference. Cisco is also introducing centralized visibility tools for AI assets, risk management features, security enhancements through advanced red teaming, and real-time guardrails to ensure agent safety and integrity in AI operations.
Cisco introduced its Silicon One G300, a 102.4Tbps switch chip designed to power massive AI clusters with up to 128,000 GPUs. The chip features advanced congestion management and P4 programmability, promising 28% faster job completion times. Set to launch in the second half of 2026, it targets the booming $600 billion AI infrastructure market.
Cisco has launched its Silicon One G300, a 102.4Tbps switch chip that positions the networking giant squarely against Broadcom's Tomahawk 6 and Nvidia's Spectrum-X offerings in the rapidly expanding AI infrastructure market [1]. Unveiled at Cisco Live EMEA in Amsterdam, the chip addresses a critical bottleneck as AI training and inference clusters scale to unprecedented sizes [2]. Expected to go on sale in the second half of 2026, the G300 will be manufactured using TSMC's 3-nanometer chipmaking technology and targets a piece of the $600 billion AI infrastructure spending boom [4].
The Silicon One G300 packs 512 ultra-fast 200 Gbps SerDes that can be aggregated to support port speeds of up to 1.6 Tbps [1]. This massive radix enables Cisco to support deployments of up to 128,000 GPUs using just 750 switches, where 2,500 were needed previously [1].
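Those figures reconcile under standard two-tier leaf/spine arithmetic; the back-of-the-envelope sketch below is our own illustration, not Cisco's published methodology:

```python
# Back-of-the-envelope check of the G300 radix figures (illustrative only).
SERDES = 512        # 200 Gbps serializers/deserializers per chip
LANE_GBPS = 200

print(SERDES * LANE_GBPS / 1000)   # 102.4 Tbps aggregate bandwidth
print(SERDES // 8)                 # 64 ports when 8 lanes form one 1.6 Tbps port

# Two-tier leaf/spine fabric with one 200 Gbps link per GPU:
GPUS = 128_000
gpus_per_leaf = SERDES // 2            # half the radix faces GPUs -> 256
leaves = GPUS // gpus_per_leaf         # 500 leaf switches
uplinks = leaves * (SERDES // 2)       # 128,000 uplinks toward the spine
spines = uplinks // SERDES             # 250 spine switches
print(leaves + spines)                 # 750 switches, matching Cisco's claim
```

The earlier 2,500-switch figure plausibly follows from the same logic: at half the radix, a two-tier fabric cannot reach 128,000 GPUs, so a third switching tier and far more boxes are required.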
What distinguishes the G300 from competing solutions is its Intelligent Collective Networking engine, which features a fully shared packet buffer and path-based load balancing designed to mitigate network congestion [3]. According to Cisco fellow and SVP Rakesh Chopra, this architecture allows packets to be absorbed irrespective of the port, enabling better handling of burst traffic in AI workloads [1].
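Chopra's shared-buffer point is easiest to see against a statically partitioned design. The toy model below is a conceptual sketch of that contrast, not Cisco's implementation, and the buffer sizes are arbitrary:

```python
# Toy model: why a fully shared buffer rides out bursts that a
# per-port partitioned buffer drops (conceptual, not Cisco's design).
BUFFER_CELLS = 1024
PORTS = 8

def partitioned_drop(burst_cells: int) -> int:
    """Each port may only use its static 1/PORTS slice of the buffer."""
    slice_cells = BUFFER_CELLS // PORTS          # 128 cells per port
    return max(0, burst_cells - slice_cells)     # overflow is dropped

def shared_drop(burst_cells: int, in_use: int) -> int:
    """Any port may claim whatever buffer is currently free."""
    free = BUFFER_CELLS - in_use
    return max(0, burst_cells - free)

# A 400-cell incast burst lands on one port while the rest of the
# switch is lightly loaded (only 100 cells occupied elsewhere):
print(partitioned_drop(400))           # 272 cells dropped
print(shared_drop(400, in_use=100))    # 0 cells dropped -- burst absorbed
```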
Cisco claims the G300 achieves 33% better link utilization and can cut training times by up to 28% compared to packet-spraying-based approaches used by competitors [1]. The load-balancing agent monitors flows through the G300, tracks congestion points, and communicates with other G300s in the network to build a global collective map of activity across the entire AI cluster [1]. Martin Lund, Cisco's executive vice president of common hardware, explained that the chip includes "shock absorber" features designed to keep networks of AI chips from bogging down when hit with large spikes of data traffic, automatically re-routing data around problems within microseconds [2].
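A minimal sketch of the contrast between packet spraying and the path-based, congestion-aware selection described here follows; it is a deliberate simplification (the real agent runs in silicon and exchanges telemetry between switches), and the path loads are invented:

```python
# Toy contrast between packet spraying and path-based load balancing.
# A simplification of the behavior described above, not Cisco's algorithm.
PATHS = ["spine-1", "spine-2", "spine-3", "spine-4"]

# A stand-in for the "global collective map": per-path congestion learned
# from telemetry exchanged between switches (these values are invented).
congestion = {"spine-1": 0.90, "spine-2": 0.20, "spine-3": 0.55, "spine-4": 0.25}

def spray(packet_id: int) -> str:
    """Packet spraying: scatter packets across every path, congested or not."""
    return PATHS[packet_id % len(PATHS)]

def path_based() -> str:
    """Steer traffic onto the least-congested path in the shared map."""
    return min(PATHS, key=lambda p: congestion[p])

print(spray(0))        # spine-1, despite it running at 90% load
print(path_based())    # spine-2, the least-loaded path right now
```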
The 102.4Tbps switch chip leverages P4 programmability, allowing Cisco to reprogram devices to add new functionality and deploy the same equipment in multiple roles [1]. This capability proves particularly valuable for extending the useful life of switches, as new features can often be added via software updates rather than requiring new hardware [1]. Cisco isn't alone in this approach: AMD's Pensando NICs also use P4 programmability, which enabled the company to ship Ultra Ethernet-compatible NICs before the spec was finalized [1].
The G300 will power new Nexus 9000 and Cisco 8000 fixed and modular switching platforms, both equipped with 64 1.6 Tbps OSFP cages [1]. These systems will be available in both air-cooled and fully liquid-cooled configurations, with the liquid-cooled design improving energy efficiency by nearly 70% compared to prior generations [3]. Kevin Wolterweber, senior vice president and general manager of Cisco's data center and internet infrastructure business, noted that the move toward liquid cooling aligns with future GPU server platforms, as next-generation GPUs will all be liquid-cooled [3].
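One plausible reading of that efficiency claim, combined with source [3]'s note that a single liquid-cooled system delivers bandwidth that previously took six, works out as follows; the wattage is a made-up placeholder purely to show the arithmetic:

```python
# One reading of the "six systems -> one" liquid-cooled efficiency claim.
# OLD_SYSTEM_W is a made-up placeholder; Cisco has not published wattages.
OLD_SYSTEM_W = 2_000                   # hypothetical prior-gen switch draw
old_total_w = 6 * OLD_SYSTEM_W         # six boxes for the same bandwidth

EFFICIENCY_GAIN = 0.70                 # "nearly 70%" better energy efficiency
new_total_w = old_total_w * (1 - EFFICIENCY_GAIN)

print(old_total_w, "W ->", round(new_total_w), "W for the same bandwidth")
# 12000 W -> 3600 W: the same capacity on roughly 30% of the power budget
```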
Alongside the chip, Cisco introduced new 1.6 Tbps pluggable optics that can be broken out into up to eight 200 Gbps connections [1]. The company is also rolling out 800 Gbps Linear Pluggable Optics (LPO), which eliminate the onboard digital signal processor and retimer to reduce power consumption by 50% [1]. When combined with N9000 or Cisco 8000 systems, customers can expect a roughly 30% reduction in switch power [1]. This matters significantly when considering the massive number of pluggable optics deployed across data centers supporting AI workloads [3].
Cisco is positioning these innovations as a response to AI infrastructure demands expanding beyond hyperscale cloud providers to enterprises, neocloud providers, and sovereign cloud operators [3]. Nick Kucharewski, senior vice president and general manager of Cisco Silicon One, told CRN that while the company has been building networks with hyperscalers, the next wave of AI deployment will happen within the enterprise customer base, particularly for inference and agentic workloads [5]. Cisco also updated its Nexus One management platform to provide a unified operating model for AI networks spanning on-premises and cloud deployments [3]. The G300 will compete as merchant silicon against Broadcom while also powering Cisco's own networking appliances [1].
Summarized by Navi