8 Sources
[1]
Cisco Seeks to Challenge Broadcom in Connecting AI Data Centers
Cisco Systems Inc. is releasing a new chip and networking system meant to connect AI data centers across hundreds of miles, a move that escalates competition with Broadcom Inc. The Silicon One P200 chip and 8223 routing systems allow for faster transfers of data across long-haul optical cables, the company announced on Wednesday. The components are also much smaller than the previous version, Martin Lund, executive vice president of the company's common hardware group, said in an interview. The new technology is meant to link up far-flung data centers and help them work together to develop artificial intelligence models. Previous versions of the Cisco product could work over similar distances but didn't transfer enough data to be useful for things like AI training -- a process that involves bombarding models with massive amounts of information. Broadcom has taken a similar approach with its products. That company unveiled its latest Jericho networking chip in August, saying it would move larger volumes of data and be ideal for handling AI work across multiple locations. Though Cisco is a less-recognized player in this market, it's trying to get the latest equipment to customers faster than Broadcom. Microsoft Corp. and Alibaba Group Holding Ltd., users of the current Silicon One, are evaluating the P200 for adoption, the San Jose, California-based company said. "It's a little-known fact that Cisco has a complete portfolio that matches Broadcom," Lund said. "Broadcom is obviously recognized as being a leader, but they are not alone." The Cisco chips are reprogrammable, so they can be updated without having to be replaced. They also have significant capacity for buffering, or storing incoming data when there's a burst of activity. That helps prevent the information from getting lost if the destination is too busy -- something especially critical given the cost of running graphics processing units, or GPUs, the chips used to train AI. "The reality is this: Every packet that doesn't get to the GPU is just like lighting money on fire," Cisco President Jeetu Patel said in an interview.
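To make the buffering idea concrete, here is a minimal toy sketch (my own illustration, not anything from Cisco; every rate and size is invented) of how a deep buffer rides out a traffic burst that a shallow buffer would drop:

```python
# Toy model of buffering under a microburst: a router drains packets at a
# fixed line rate; excess arrivals queue up to the buffer capacity and are
# dropped beyond it. All numbers are invented for illustration.

def run(arrivals, line_rate, buffer_cap):
    queued = delivered = dropped = 0
    for pkts in arrivals:
        queued += pkts
        if queued > buffer_cap:
            dropped += queued - buffer_cap  # burst overflowed the buffer
            queued = buffer_cap
        sent = min(queued, line_rate)       # drain at line rate
        queued -= sent
        delivered += sent
    return delivered, dropped, queued

burst = [10, 80, 10, 10, 10, 10]            # a microburst arrives in tick 1
for cap in (16, 1000):                      # shallow vs. deep buffer
    d, x, q = run(burst, line_rate=20, buffer_cap=cap)
    print(f"buffer={cap:4d}: delivered={d}, dropped={x}, still queued={q}")
```

Even in this toy, the tradeoff the later articles debate is visible: the deep buffer drops nothing but holds packets in queue, which is exactly where the latency concerns around deep buffering come from.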
[2]
Cisco's 51.2T Silicon One P200 chip brings scale-across to distributed AI - SiliconANGLE
Cisco Systems Inc. today announced its 8223 routing system, powered by its new Silicon One P200 chip -- a new network system designed to unlock artificial intelligence's potential through massive scale. Earlier this year, Nvidia Corp. introduced the concept of "scale-across" architectures as AI is now hitting the limits of a single data center. A "unit of compute" was once a server, then evolved into a rack, and then the entire data center. Scale-across enables multiple data centers to act as a single unit of compute, and Cisco designed the 8223 and P200 silicon for the specific rigors of this task. The new chip sets a new high-water mark for networking with a whopping 51.2 terabits per second of full-duplex throughput. There are two models of the new router. Both are 3RU and have 64 800G ports. The 8223-64EF uses OSFP optics while the 8223-64E uses QSFP. While both modules achieve the same total data rate, their key differences are in size, thermal management and backward compatibility. These distinctions influence their suitability for different network environments, such as high-density data centers versus telecom applications. Also, OSFP supports both Ethernet and InfiniBand standards, while QSFP is used primarily in Ethernet networks. Though the capacity of these routers may seem off the charts, AI is generating network traffic at an unprecedented rate. As AI models double in size every year, the infrastructure required to train them has ballooned. This has pushed hyperscalers past the point of being able to scale up and scale out, leaving scale-across as the only path forward. This scale-across migration is driving a massive increase in long-haul traffic. In a pre-briefing, Cisco Senior Vice President Rakesh Chopra noted that a scale-across network needs approximately 14 times more bandwidth than the traditional wide-area network interconnect and could require up to 16,000 ports to deliver 13 petabits per second of bandwidth for a massive AI cluster. Trying to achieve this with older technology would require thousands of modular chassis, which would be prohibitively expensive, power-hungry and complex to manage. With scale-across, this can be done with only about 2,000 ports, a fraction of the previously estimated 16,000. A key part of Cisco's strategy is its use of deep buffers -- a feature typically associated with traditional routers, not the shallow-buffered switches favored in internal AI clusters. This is arguably the most significant architectural point of divergence from competing approaches such as Nvidia Spectrum-XGS Ethernet. What's notable is that deep buffers have generally been avoided in AI infrastructure because they are perceived to slow down AI workloads. Deep buffers are thought to be detrimental to AI networking, particularly for distributed training workloads, because they can cause high latency and jitter, which severely degrade the performance of AI models. The concern stems from the notion that deep buffers repeatedly fill and drain, causing jitter in the transmission of data between GPUs. Though deep buffers prevent packet loss during congestion (microbursts), which is good for throughput, the tradeoff is a phenomenon called bufferbloat. AI workloads, especially distributed training involving multiple GPUs, are highly sensitive to latency and synchronization issues.
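Chopra's figures lend themselves to a quick back-of-the-envelope check. The sketch below uses only numbers quoted above (13 Pb/s of cluster interconnect, 800G ports, 51.2 Tb/s per system) and deliberately ignores topology, redundancy and oversubscription, so it is an illustration rather than a network design:

```python
# Sanity check of the scale-across figures quoted above. Illustrative only;
# a real design would add headroom for topology, redundancy and bursts.
cluster_bw = 13e15    # 13 Pb/s of interconnect for a massive AI cluster
port_speed = 800e9    # one 800G port
system_bw = 51.2e12   # full-duplex throughput of one P200-based 8223

print(f"800G ports for 13 Pb/s:    {cluster_bw / port_speed:,.0f}")  # ~16,250
print(f"51.2T systems for 13 Pb/s: {cluster_bw / system_bw:,.0f}")   # ~254
```

The port count lands right around Chopra's 16,000 figure, and the system count shows why a few hundred fixed 3RU boxes compare favorably with thousands of modular chassis.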
To Cisco's credit, it addressed this proactively on the analyst call and explained how it can overcome the perceived limitations of deep buffers. Cisco's argument is that it's not the existence of deep buffers that causes the problem but rather the congestion that causes them to fill in the first place. Chopra argues, "The problem is the fact that you've done a bad job of load balancing and avoiding congestion control." The other thing to realize is that even if the buffers are filling and draining, that doesn't affect job completion time, because AI workloads are synchronous in nature. "AI workloads wait for the longest path through the network to complete, which affects mean transmission time, not the maximum transmission time," Chopra explained. The introduction of deep buffers creates better reliability for long-distance, scale-across networks supporting AI workloads. Losing a single packet forces a massive rollback to a checkpoint, a process that is extremely expensive when AI training runs last for months. The P200's deep buffering capabilities are designed to absorb massive traffic surges from training, ensuring performance is maintained and power is not wasted on reprocessing. Through good congestion management, Cisco argues it can capture the benefits of deep buffering without the historical downsides. Recognizing the criticality of the data moving across data centers, security is baked deep into the 8223. The system offers line-rate encryption using post-quantum resilient algorithms for key management, a level of future-proofing essential for multiyear AI training jobs. Furthermore, a Root of Trust is embedded in the silicon, guaranteeing integrity from manufacturing to deployment and safeguarding against physical tampering. Cisco is also embracing operational flexibility. The 8223 is initially available for open-source SONiC deployments, targeting hyperscalers and large AI data center builders who often prefer open options. Support for IOS XR is coming shortly after, which will allow the platform to serve traditional data center interconnect (DCI), core, and backbone WAN use cases, expanding the total addressable market significantly beyond the core AI-cloud customers. The P200 chip will also be available in modular platforms and disaggregated chassis, and it will power the Cisco Nexus portfolio (running NX-OS) for the enterprise data center, ensuring the same foundational technology and architectural consistency across the entire AI ecosystem. This multifaceted deployment strategy positions Cisco to capture a significant portion of a total addressable market well north of $10 billion for networking equipment in the AI cloud sector. It's important to note that both Cisco and Nvidia now offer scale-across networking products, with Cisco leveraging deep buffers and Nvidia shallow buffers. Though industry watchers will want to pit one against the other, the reality is that demand for AI networking is so great that both can succeed. Cisco's approach is ideally suited for distributed AI interconnects where network resiliency is critical. Nvidia's approach is better aligned with scenarios where predictable, minimal latency is an absolute priority for fast training cycles. AI has created a rising tide, and options are good for customers.
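Chopra's synchronous-workload argument can be made concrete with a small simulation. This is my own toy sketch, not a model of any real Cisco or Nvidia system: each training step completes only when the slowest of many parallel flows finishes, so modest buffering jitter barely moves step time, while a single drop-and-recover event dominates it:

```python
# Toy model: a synchronous training step finishes when its slowest flow does,
# so per-flow jitter is largely absorbed, while one packet loss (with its
# retransmit/recovery penalty) sets the pace. Numbers are illustrative.
import random

random.seed(0)

def step_time(n_flows, base_ms, jitter_ms, loss_penalty_ms=0.0):
    times = [base_ms + random.uniform(0, jitter_ms) for _ in range(n_flows)]
    if loss_penalty_ms:
        times[0] += loss_penalty_ms      # one flow hit by a drop + recovery
    return max(times)                    # step waits for the slowest flow

jittery = [step_time(1024, 10.0, 2.0) for _ in range(1000)]
lossy = [step_time(1024, 10.0, 2.0, loss_penalty_ms=200.0) for _ in range(1000)]

print(f"jitter only:       mean step {sum(jittery) / len(jittery):.2f} ms")
print(f"one drop per step: mean step {sum(lossy) / len(lossy):.2f} ms")
```

Under these assumed numbers, buffering-induced jitter adds a couple of milliseconds to a step, while the loss penalty dwarfs it by more than an order of magnitude -- the tradeoff Cisco claims deep buffers plus good congestion management resolves.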
[3]
Cisco rolls out chip designed to connect AI data centers over vast distances
Cisco Systems launched on Wednesday a new networking chip designed to connect artificial intelligence data centers, with the cloud computing units of Microsoft and Alibaba enrolling as the chip's customers. The P200 chip, as Cisco calls it, will compete against rival offerings from Broadcom. It will sit at the heart of a new routing device that the company also rolled out on Wednesday and is designed to connect the sprawling data centers that are located over vast distances and which train AI systems. Inside those data centers, companies such as Nvidia are connecting tens of thousands and eventually hundreds of thousands of powerful computing chips together to act as one brain to handle AI tasks. The purpose of the new Cisco chip and router is to connect multiple data centers together to act as one massive computer. "Now we're saying, 'the training job is so large, I need multiple data centers to connect together,'" Martin Lund, executive vice president of Cisco's common hardware group, told Reuters in an interview. "And they can be 1,000 miles apart." The reason for those big distances is that data centers consume huge amounts of electricity, which has driven firms such as Oracle and OpenAI to Texas and Meta Platforms to Louisiana in search of gigawatts. AI firms are putting data centers "wherever you can get power," Lund said. He did not disclose Cisco's investment in building the chip and router or sales expectations from them. Cisco said the P200 chip replaces what used to take 92 separate chips with just one, and the resulting router uses 65% less power than comparable ones. One of the key challenges is keeping data in sync across multiple data centers without losing any, which requires a technology called buffering that Cisco has worked on for decades. "The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts" of data, Dave Maltz, corporate vice president of Azure Networking at Microsoft, said in a statement. "We're pleased to see the P200 providing innovation and more options in this space."
[4]
Cisco Launches New Silicon One Chip To 'Satiate' Rising AI Data Center Demands
The latest chip that's part of the Cisco Silicon One portfolio comes as AI places new and unprecedented demands on the data center, according to the tech giant. With AI workloads growing faster than the data centers that host them, Cisco Systems is unveiling its latest Silicon One chip for efficiently and securely connecting AI-ready data centers, the tech giant said Wednesday. The new Cisco 8223 routing system, powered by the Silicon One P200 chip, boasts what the company said is the industry's most optimized routing system for AI networking, enabling interconnect bandwidth to scale to more than 3 exabits per second. The latest chip comes as AI places new and unprecedented demands on the data center, Jeetu Patel, Cisco's president and chief product officer, said at the company's WebexOne event last month. Increasing AI adoption is raising demand and power constraints for data centers while introducing new security threats. In fact, AI is currently being constrained by data center limitations, Patel said. It's a constraint that Cisco is working to solve, he added. "At Cisco, we've been thinking long and hard about the key kind of impediments that might actually stall the adoption of AI. ... The first one is what we call an 'infrastructure constraint.' There's just simply not enough power in the world, or compute capacity or network bandwidth, to satiate the needs of AI," he said during his keynote at WebexOne 2025. To keep up with increasing power demands to train AI workloads, enterprises are migrating data centers to remote locations where more power is readily available, said Martin Lund, executive vice president of Cisco's Common Hardware Group, in a blog post about the new chip. "This migration requires unprecedented bandwidth to interconnect massive AI clusters over large distances. These huge AI training runs can last months and are incredibly costly, and any network downtime risks leaving a billion-dollar data center idle," he said. The Cisco 8223 has been built for these new requirements as the only 51.2-terabit-per-second Ethernet fixed router on the market that is ready to tackle the intense traffic of AI workloads between data centers, according to Cisco. At the heart of the technology is the Cisco Silicon One P200 routing silicon, the latest innovation in the Cisco Silicon One portfolio that was launched six years ago. Silicon One is a scalable, programmable unified networking architecture that has been deployed at five of the six top hyperscalers, according to Cisco. "That's why we built the Cisco 8223, powered by Cisco Silicon One P200. ... The Cisco 8223 is purpose-built for scale-across networking demands. With industry-leading power efficiency, scalability and security features, it's engineered to handle what's next," Lund said. The 8223 is now available as a fixed system, and the P200 silicon itself will be deployable in modular platforms and disaggregated chassis, according to the company. The Cisco Nexus portfolio will also support systems running NX-OS based on the P200 in the near future, Cisco said.
[5]
Cisco Launches 8223 Routing Systems To Enable Secure And Scalable AI Networking Across Data Centers
"The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts. We're pleased to see the P200 providing innovation and more options in this space. Microsoft was an early adopter of Silicon One, and the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments." - Dave Maltz, Technical Fellow and Corporate Vice President, Azure Networking, Microsoft "As Alibaba continues to invest in and expand the cloud infrastructure, DCI is a critical pillar of our strategy. We are pleased to see the launch of Cisco Silicon One P200, the industry's first 51.2T routing ASIC that delivers high bandwidth, lower power consumption, and full P4 programmability. This breakthrough chip aligns perfectly with the evolution of Alibaba's eCore architecture. We plan to leverage the P200 to build a single chip platform, serving as a foundational building block for expanding our eCore deployment. Beyond supporting our Cisco Silicon One Q200 deployment scenarios, this new routing chip will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices. This transition will significantly enhance the stability, reliability, and scalability of our DCI network while keeping the simplicity. In addition, we are developing and exploring innovative disaggregated architectures using Cisco G200 for our high-performance datacenter network. The introduction of this advanced routing chip marks a pivotal step forward, empowering Alibaba to accelerate innovation and drive infrastructure expansion in the AI era." - Dennis Cai, Vice President, Head of Network Infrastructure, Alibaba Cloud
[6]
Cisco Launches Routing System to Connect Data Centers, Power AI Workloads
Cisco is launching a new routing system built for the intense traffic of artificial-intelligence workloads between data centers. Routing systems direct and manage the flow of data across networks, choosing the paths that traffic takes between them. The networking-equipment company said Wednesday that its new routing system, Cisco 8223, is optimized to efficiently and securely connect data centers and power the next generation of AI workloads. "AI compute is outgrowing the capacity of even the largest data center, driving the need for reliable, secure connection of data centers hundreds of miles apart," said Martin Lund, executive vice president of Cisco's common hardware group. Cisco 8223 will deliver the bandwidth, scale and security needed for distributed data center architectures, he added. The system, powered by the company's P200 chip, is now shipping to initial customers, including Microsoft, Alibaba Group and Lumen Technologies, Cisco said.
[7]
Cisco rolls out chip designed to connect AI data centers over vast distances
SAN FRANCISCO (Reuters) -Cisco Systems launched on Wednesday a new networking chip designed to connect artificial intelligence data centers, with the cloud computing units of Microsoft and Alibaba enrolling as the chip's customers. The P200 chip, as Cisco calls it, will compete against rival offerings from Broadcom. It will sit at the heart of a new routing device that the company also rolled out on Wednesday and is designed to connect the sprawling data centers that are located over vast distances and which train AI systems. Inside those data centers, companies such as Nvidia are connecting tens of thousands and eventually hundreds of thousands of powerful computing chips together to act as one brain to handle AI tasks. The purpose of the new Cisco chip and router is to connect multiple data centers together to act as one massive computer. "Now we're saying, 'the training job is so large, I need multiple data centers to connect together,'" Martin Lund, executive vice president of Cisco's common hardware group, told Reuters in an interview. "And they can be 1,000 miles apart." The reason for those big distances is that data centers consume huge amounts of electricity, which has driven firms such as Oracle and OpenAI to Texas and Meta Platforms to Louisiana in search of gigawatts. AI firms are putting data centers "wherever you can get power," Lund said. He did not disclose Cisco's investment in building the chip and router or sales expectations from them. Cisco said the P200 chip replaces what used to take 92 separate chips with just one, and the resulting router uses 65% less power than comparable ones. One of the key challenges is keeping data in sync across multiple data centers without losing any, which requires a technology called buffering that Cisco has worked on for decades. "The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts" of data, Dave Maltz, corporate vice president of Azure Networking at Microsoft, said in a statement. "We're pleased to see the P200 providing innovation and more options in this space." (Reporting by Stephen Nellis in San Francisco; Editing by Muralikumar Anantharaman)
[8]
How Cisco's new chip links AI data centers thousands of miles apart
Cisco challenges Broadcom by linking AI facilities with high-speed routers

As artificial intelligence (AI) continues to scale at unprecedented speeds, the infrastructure that supports it is facing a critical challenge: connecting AI data centers over vast distances without slowing performance. Cisco, the networking giant long known for enterprise routers and switches, is now making a bold play to solve this problem with its latest innovation: the Silicon One P200 chip. The new chip, integrated into Cisco's 8223 router, is designed to make multiple AI data centers function as a single, seamless system. This is a significant leap in infrastructure design, enabling companies to train massive AI models across geographically separated facilities while maintaining ultra-low latency and high data throughput. Modern AI models, especially large language models and generative AI systems, require enormous amounts of data and computational power. Training them in a single data center can be inefficient or even impossible. Many companies now distribute workloads across multiple facilities, often hundreds or even a thousand miles apart. Historically, connecting distant data centers required dozens of networking chips, high power consumption, and complex routing protocols. Cisco claims that the Silicon One P200 replaces the equivalent of 92 older chips, reducing power usage by roughly 65%. In practice, this means AI operators can synchronize data faster and more efficiently, allowing models to scale without adding prohibitive energy costs. Cisco's push directly challenges Broadcom, which has dominated the AI networking space with its Jericho4 chip. Broadcom's solution is optimized for short- to medium-range data center connections of up to about 60 miles, and focuses on high-bandwidth memory and congestion management. Cisco, however, is targeting long-haul connections, a niche increasingly important as AI workloads expand across continents. Cloud providers and hyperscalers are taking notice. Microsoft and Alibaba are reportedly among the early companies exploring Cisco's new routers. The technology could also influence the next generation of AI supercomputers, which rely heavily on efficient interconnects to maintain performance across large clusters. Energy consumption is a growing concern in AI infrastructure. Many new data centers are located near renewable energy sources, sometimes far from traditional tech hubs. Efficient networking chips like Cisco's P200 allow operators to place data centers in optimal locations for both cost and sustainability, while still maintaining the high-speed connections needed for AI training. By reducing the number of chips required and cutting power consumption, Cisco not only improves operational efficiency but also addresses the environmental footprint of large-scale AI training, a topic increasingly in the spotlight among tech leaders and policymakers. Cisco's Silicon One P200 is designed to handle massive bursts of data with minimal latency. It supports high-speed packet switching, intelligent buffering, and advanced routing protocols that keep data flowing efficiently over long distances. Essentially, it acts as the backbone of a "global AI fabric," letting geographically distributed data centers work in concert as if they were a single, local system.
This is particularly crucial for generative AI models, which require frequent synchronization of model weights and large-scale gradient updates. Even a small delay in these updates can significantly slow training or introduce inconsistencies. Cisco's approach promises to minimize such bottlenecks. Cisco's entry into high-end AI networking marks a significant moment in the industry. While Broadcom and others will continue to innovate, the ability to connect AI data centers thousands of miles apart efficiently could become a decisive factor for cloud providers, supercomputing facilities, and enterprise AI operators. The Silicon One P200 illustrates a broader trend: the AI revolution is not just about algorithms and GPUs. It's also about the unseen infrastructure that moves data quickly, efficiently, and sustainably across the globe. As AI models grow ever larger, the race for smarter, faster, and longer-distance networking chips is only beginning, and Cisco is staking a major claim.
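To give a rough sense of scale for those synchronization steps, here is a back-of-the-envelope sketch; the parameter count, gradient precision and step rate are all assumptions picked for illustration, not figures from Cisco or this article:

```python
# Illustrative estimate of cross-site gradient-synchronization traffic for
# distributed training. All inputs are assumptions, not vendor figures.
params = 1e12          # hypothetical 1-trillion-parameter model
bytes_per_grad = 2     # 16-bit (fp16/bf16) gradients
steps_per_sec = 0.5    # assume one optimizer step every two seconds

bits_per_step = params * bytes_per_grad * 8
sustained_bw = bits_per_step * steps_per_sec

print(f"gradient payload per step: {bits_per_step / 8 / 1e12:.1f} TB")
print(f"sustained sync bandwidth:  {sustained_bw / 1e12:.1f} Tb/s")
```

Real deployments compress, shard and overlap this traffic, but the order of magnitude (terabits per second for a single job) shows why even small delays in these updates compound, and why interconnect capacity has become the bottleneck described here.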
Cisco Systems launches a new networking chip and routing system designed to connect AI data centers over long distances, challenging Broadcom in the market for high-performance networking solutions.
Cisco Systems has unveiled its latest networking innovation, the Silicon One P200 chip and 8223 routing system, designed to connect AI data centers across vast distances [1][2]. This move challenges Broadcom's market dominance in high-performance networking for AI infrastructure.

The Silicon One P200 chip delivers 51.2 terabits per second of full-duplex throughput [2]. The 8223 routing system features two 3RU models with 64 800G ports [2]. Key innovations include deep buffers, crucial for overcoming traditional AI networking limitations, and line-rate encryption with post-quantum resilient algorithms for enhanced security [2].

These systems address the escalating demands of AI workloads, which are straining existing data center infrastructure. With AI models doubling in size annually, "scale-across" architectures are essential [2]. Cisco's solution supports more than 3 exabits per second of interconnect bandwidth, allowing distributed data centers to function as a unified compute unit. This is vital as data centers are often located in remote areas for access to power [3].

Cisco's P200 chip and 8223 system compete directly with Broadcom's Jericho chip [1]. Microsoft and Alibaba are already exploring adoption [1][5]. Microsoft's Dave Maltz lauded the P200 for enabling faster networks with more buffering [5], while Alibaba Cloud's Dennis Cai noted its potential for enhancing Alibaba's DCI strategy and eCore architecture [5]. This innovation marks a crucial step in overcoming infrastructure limitations for AI adoption, enabling future AI applications and services [4].

Summarized by Navi