Curated by THEOUTPOST
On Wed, 16 Oct, 12:09 AM UTC
3 Sources
[1]
Cisco Silicon One G200 AI/ML chip powers new systems for hyperscalers and enterprises
Cisco Silicon One has stood for innovation since day one. It is the first unified architecture for routing and switching silicon, provides the most scalable solutions in the industry, and offers the most customer choice, with the ability to consume it as silicon, hardware, or full systems. It is now used in over 40 Cisco platforms across cloud, artificial intelligence/machine learning (AI/ML), service provider, enterprise campus, and data center networks.

At the Open Compute Project (OCP) Global Summit, Meta announced plans to deploy the OCP-inspired Cisco 8501, which combines the power of the Cisco Silicon One G200 with a Cisco-designed and validated hardware system. Continuing the momentum, Cisco also announced two new solutions based on Cisco Silicon One G200: the Cisco 8122-64EH/EHF and the Cisco Nexus 9364E-SG2. These are purpose-built products to support AI/ML buildouts across enterprise data centers and hyperscalers.

Large-scale, high-bandwidth AI/ML networks are evolving quickly. They demand scalable, programmable, high-radix, low-power switches with advanced load balancing and observability, all of which are foundations of Cisco's Silicon One architecture. We have more exciting news coming in the near future; in the meantime, learn all about Cisco Silicon One architecture, devices, and benefits.
[2]
Scaling Cloud Network Infrastructure for the AI Era
The world has changed dramatically since generative AI made its debut. Businesses are starting to use it to summarize online reviews. Consumers are getting problems resolved through chatbots. Employees are accomplishing their jobs faster with AI assistants. What these AI applications have in common is that they rely on generative AI models that have been trained on high-performance, back-end networks in the data center and served through AI inference clusters deployed in data center front-end networks.

Training models can use billions or even trillions of parameters to process massive data sets across artificial intelligence/machine learning (AI/ML) clusters of graphics processing unit (GPU)-based servers. Any delays, such as from network congestion or packet loss, can dramatically impact the accuracy and training time of these AI models. As AI/ML clusters grow ever larger, the platforms used to build them need to support higher port speeds as well as higher radices (that is, more ports per switch). A higher radix allows flatter topologies to be built, which reduces network layers and improves performance (a back-of-the-envelope sketch of this effect follows below). In recent years, GPU scale-out bandwidth needs have increased from 200G to 400G to 800G, accelerating connectivity requirements compared to traditional CPU-based compute solutions. The density of the data center leaf must increase accordingly, while also maximizing the number of addressable nodes with flatter topologies.

To address these needs, we are introducing the Cisco 8122-64EH/EHF with support for 64 ports of 800G. This new platform is powered by the Cisco Silicon One G200, a 5 nm 51.2T processor that uses 512 x 112G SerDes, enabling extreme scaling capabilities in just a two-rack-unit (2RU) form factor. With 64 QSFP-DD800 or OSFP interfaces, the Cisco 8122 supports options for 2x 400G and 8x 100G Ethernet connectivity.

The Cisco Silicon One architecture, with its fully shared packet buffer for congestion control and P4 programmable forwarding engine, together with the Silicon One software development kit (SDK), is proven and trusted by hyperscalers globally. Through major innovations, the Cisco Silicon One G200 delivers twice the performance and power efficiency of the previous-generation device, along with lower latency. With the introduction of Cisco Silicon One G200 last year, Cisco was first to market with a 512-wide radix, which can help cloud providers lower costs, complexity, and latency by designing networks with fewer layers, switches, and optics. Advancements in load balancing, link-failure avoidance, and congestion reaction/avoidance help improve job completion times and reliability at scale for better AI workload performance (see Cisco Silicon One Breaks the 51.2 Tbps Barrier for more details).

The Cisco 8122 supports open network operating systems (NOSs), such as Software for Open Networking in the Cloud (SONiC), and other third-party NOSs. Through broad application programming interface (API) support, cloud providers can use their own tooling for management and visibility to operate the network efficiently. With these customizable options, we are making it easier for hyperscalers, and for other cloud providers adopting the hyperscaler model, to meet their requirements. In addition to scaling out back-end networks, the Cisco 8122 can also be used for mainstream workloads in front-end networks, such as email and web servers, databases, and other traditional applications.
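To make the radix point above concrete, here is a small back-of-the-envelope sketch (not from Cisco) of how switch radix bounds the size of a non-blocking two-tier leaf-spine fabric, assuming the standard Clos sizing in which each leaf splits its ports evenly between hosts and spine uplinks.

```python
# Back-of-the-envelope: how switch radix bounds the size of a non-blocking
# two-tier leaf-spine (Clos) fabric. Assumption (not from the article):
# each leaf splits its ports evenly between host-facing and spine-facing links.

def two_tier_hosts(radix: int) -> int:
    """Maximum hosts in a non-blocking two-tier Clos built from radix-port switches."""
    leaves = radix                  # each spine port reaches a distinct leaf
    hosts_per_leaf = radix // 2     # the other half of each leaf goes to spines
    return leaves * hosts_per_leaf  # equals radix**2 / 2

for radix in (128, 256, 512):
    print(f"radix {radix:>3}: up to {two_tier_hosts(radix):,} endpoints in two tiers")

# radix 128: up to 8,192 endpoints in two tiers
# radix 256: up to 32,768 endpoints in two tiers
# radix 512: up to 131,072 endpoints in two tiers
```

Doubling the radix roughly quadruples the two-tier reach, which is why a 512-radix device lets large AI clusters stay at two network layers, with fewer switches and optics in the path.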
With these innovations, we are giving cloud providers the flexibility to meet critical cloud network infrastructure requirements for AI training and inferencing with the Cisco 8122-64EH/EHF. With this platform, cloud providers can better control costs, latency, space, power consumption, and complexity in both front-end and back-end networks. At Cisco, we are investing in silicon, systems, and optics to help cloud providers build scalable, high-performance data center networks that deliver high-quality results and insights quickly for both AI and mainstream workloads. The Open Compute Project (OCP) Global Summit takes place October 15-17, 2024, in San Jose. Come visit us in the community lounge to learn more about our exciting new innovations; customers can sign up to see a demo here.
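For the SONiC support mentioned above, one practical consequence is that the switch's running configuration lives in an on-box Redis database (CONFIG_DB) that ordinary tooling can read. The sketch below is illustrative only: the hostname is a placeholder, and the database index and key layout follow common SONiC conventions rather than anything specific to the Cisco 8122.

```python
# Illustrative sketch: SONiC keeps its running configuration in a Redis
# database (CONFIG_DB), so ordinary Redis clients can read switch state.
# The hostname is a placeholder; the DB index (4) and "PORT|<intf>" key layout
# follow common SONiC conventions and should be treated as assumptions here.
import redis

db = redis.Redis(host="sonic-switch.example", port=6379, db=4,
                 decode_responses=True)

# Each front-panel port is a Redis hash keyed "PORT|EthernetN".
for key in sorted(db.keys("PORT|Ethernet*")):
    attrs = db.hgetall(key)
    print(f"{key}: speed={attrs.get('speed')} admin={attrs.get('admin_status')}")
```

In practice this database is usually accessed locally on the switch or through SONiC's own utilities, but the point stands: an open NOS exposes its state in a form that a cloud provider's existing tooling can consume.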
[3]
Supercharge Your AI Data Center Infrastructure with New Cisco Nexus 9000 Series Switches
The exponential growth of AI is reshaping data center requirements, driving demand for scalable, secure, and programmable networks. Enterprise customers are evaluating their current infrastructure to support rapid AI deployment and scalability, often upgrading to be AI-ready and securing workload communications, whether GPU- or CPU-based. This shift requires integrating AI-ready networking with distributed security policies as users, applications, and data span public and private clouds, colocation centers, and more.

Our customers are using Cisco Nexus 9000 Series Switches to run AI/ML workloads today over 400G network infrastructure. With generative AI adding complexity, customers need a simple and secure infrastructure for performance monitoring and security across diverse environments, and many data center buildouts now include 800G-based design plans. Leveraging Cisco Silicon One G200, Cisco Nexus 9000 Series Switches are engineered to meet these demands with high-density 800G fabrics, making them ideal for next-generation leaf-and-spine network designs for cloud architectures, high-performance computing (HPC), and large-scale AI/ML workloads. For example, Cisco Silicon One G200 uses advanced load balancing and fault detection to help improve job completion times (JCTs) for AI/ML workloads.

With the Cisco Nexus 9364E-SG2 switches, we are introducing high-density 800G aggregation for data center fabrics. Supported port speeds include 400, 200, and 100 Gbps, in both OSFP and QSFP-DD form factors. When combined with tools like Cisco Nexus Dashboard for visibility and automation, Cisco Nexus 9000 Series Switches offer the efficient management, troubleshooting, and in-depth analysis required by large cloud and data center networking teams.

Architectural flexibility: Cisco Nexus 9000 Series Switches support a wide range of protocols and architectures, including VXLAN EVPN, Cisco IP Fabric for Media (IPFM), and IP-routed Ethernet-switched fabrics. This flexibility ensures that businesses can adapt their network infrastructure to meet evolving needs without significant overhauls.

Extensive programmability: The switches can drastically reduce provisioning time and enhance network observability with features like Day-0 automation through PowerOn Auto Provisioning (POAP) and industry-leading integrations with DevOps configuration management applications such as Ansible. This level of programmability allows businesses to streamline operations and improve efficiency (a minimal API sketch follows at the end of this section).

AI/ML networking: Cisco Nexus 9000 Series Switches support innovative congestion management and flow control algorithms, along with the latency and telemetry needed to meet the design requirements of AI/ML fabrics.

High availability: With features like virtual port channel (vPC) technology, Software Maintenance Upgrades (SMUs), and In-Service Software Upgrades (ISSUs), Cisco Nexus 9000 Series Switches ensure high availability and minimal downtime. This reliability is essential for businesses that require continuous network operation.

Simplified operations: By pairing Cisco Nexus Dashboard with Cisco Nexus 9000 Series Switches, data center network operations can be transformed through simplicity, automation, and AI analytics. Cisco Nexus Dashboard helps customers efficiently manage and operate data center networks, with the comprehensive visibility and control needed to optimize network infrastructure effectively.

Flexible licensing: The Cisco Nexus 9364E-SG2 switch uses Cisco's standard licensing model, which includes Premier, Advantage, and Essentials options. This flexible model allows businesses to choose the licensing that best suits their immediate needs, while still offering the ability to scale and unlock more advanced features as they grow.

Driving business outcomes with advanced features: Cisco Nexus 9000 Series Switches offer a robust, scalable, and flexible solution for modern data centers, driving significant business outcomes through enhanced performance, reliability, and efficiency. Key innovations include UEC readiness: the switches fully comply with Ultra Ethernet Consortium (UEC) fabric baseline requirements such as Priority Flow Control (PFC), Explicit Congestion Notification (ECN), and multiple traffic classes, to help ensure robust performance for AI Ethernet networks. Additionally, the programmability of the Silicon One architecture helps future-proof the switches, enabling them to adapt to evolving UEC standards while delivering consistently high performance and scalability, so businesses can seamlessly advance their AI/ML infrastructure.

Through major investments across silicon, systems, software, and optics, Cisco has the knowledge, expertise, and integration capabilities to deliver what customers need. Whether you are looking to support AI/ML workloads or modernize your network infrastructure, we can provide the tools and capabilities needed to improve customer outcomes with Cisco Nexus 9000 Series Switches. Learn more at the Open Compute Project event (October 15-17) Community Lounge. Interested customers can schedule a demo here.
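As a concrete companion to the programmability point above, the following is a minimal sketch of querying a Cisco Nexus 9000 switch over NX-API, the JSON-RPC interface NX-OS exposes when the nxapi feature is enabled. The switch address, credentials, and command are placeholders, and the snippet assumes NX-API is reachable over HTTPS; it is an illustration, not a supported automation workflow.

```python
# Minimal sketch of querying a Nexus 9000 switch through NX-API, the JSON-RPC
# interface exposed by NX-OS once "feature nxapi" is enabled. The address,
# credentials, and command below are placeholders for illustration only.
import requests

NXAPI_URL = "https://n9k-leaf-1.example/ins"   # hypothetical switch endpoint
AUTH = ("admin", "example-password")           # placeholder credentials

def run_cli(command: str) -> dict:
    """Send a single CLI command to the switch and return structured JSON."""
    payload = [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": 1,
    }]
    resp = requests.post(
        NXAPI_URL,
        json=payload,
        headers={"content-type": "application/json-rpc"},
        auth=AUTH,
        verify=False,  # lab-only: skip TLS verification for a self-signed cert
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Structured interface data instead of screen-scraped CLI text.
    print(run_cli("show interface ethernet1/1"))
```

The same structured responses can feed Ansible modules or custom tooling, which is what makes intent-driven provisioning and large-scale telemetry collection practical for data center teams.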
Cisco introduces new high-performance switches powered by the Silicon One G200 chip, designed to meet the growing demands of AI/ML workloads in enterprise and hyperscale data centers.
Cisco has unveiled new high-performance switches powered by its Silicon One G200 chip, designed to meet the growing demands of artificial intelligence and machine learning (AI/ML) workloads in enterprise and hyperscale data centers. The announcement comes as the AI industry experiences rapid growth and increasing network infrastructure requirements [1].
The newly introduced switches include the Cisco 8122-64EH/EHF and the Cisco Nexus 9364E-SG2, both powered by the Cisco Silicon One G200 chip. The 8122 delivers 64 ports of 800G in a two-rack-unit form factor, while the Nexus 9364E-SG2 provides high-density 800G aggregation for data center fabrics.
The new switches are designed to address the specific needs of AI/ML networks, which demand scalable, programmable, high-radix, low-power switching with advanced load balancing and observability.
Meta has announced plans to deploy the OCP-inspired Cisco 8501, which combines the Cisco Silicon One G200 with Cisco-designed hardware. This adoption by a major tech company underscores the potential of Cisco's new offerings in the AI/ML infrastructure space [1].
The new switches offer several advantages for cloud providers and enterprises, including tighter control over costs, latency, space, power consumption, and complexity in both front-end and back-end networks.
Cisco's new switches are Ultra Ethernet Consortium (UEC) ready, complying with fabric baseline requirements such as PFC, ECN, and multiple traffic classes. This ensures robust performance for AI Ethernet networks and allows for adaptation to evolving UEC standards [3].
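To make the ECN mechanism mentioned above more tangible, here is a simplified, DCTCP-style sketch of how a sender can react to congestion marks from the fabric by throttling in proportion to the fraction of marked packets, rather than waiting for packet loss. This is a conceptual model with made-up numbers, not the algorithm implemented in these switches.

```python
# Conceptual, DCTCP-style illustration of ECN-driven rate control: the sender
# tracks what fraction of its packets the fabric marked as congested and cuts
# its rate in proportion, instead of waiting for packet drops.
# Simplified model with made-up numbers; not the algorithm running in NX-OS.

def adjust_rate(rate_gbps: float, marked: int, acked: int,
                alpha: float, g: float = 1.0 / 16) -> tuple[float, float]:
    """Return (new_rate, new_alpha) after one round of ECN feedback."""
    frac = marked / acked if acked else 0.0
    alpha = (1 - g) * alpha + g * frac       # smoothed congestion estimate
    new_rate = rate_gbps * (1 - alpha / 2)   # throttle proportionally
    return new_rate, alpha

rate, alpha = 400.0, 0.0                     # hypothetical 400G flow
for marked in (0, 8, 64, 128):               # progressively heavier marking
    rate, alpha = adjust_rate(rate, marked, acked=128, alpha=alpha)
    print(f"{marked:>3}/128 marked -> rate {rate:6.1f} Gbps (alpha={alpha:.3f})")
```

The takeaway is that ECN lets the fabric signal congestion early and proportionally, which is why it is part of the UEC baseline for lossless, high-utilization AI Ethernet networks.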
As the AI industry continues to evolve rapidly, Cisco's latest offerings demonstrate the company's commitment to providing cutting-edge network infrastructure solutions that can meet the demands of next-generation AI/ML workloads while offering flexibility and scalability for enterprises and cloud providers alike.