Curated by THEOUTPOST
On Wed, 30 Oct, 4:02 PM UTC
4 Sources
[1]
Cisco shifts focus to AI with new infrastructure solutions
While most people think of Cisco as a company that links infrastructure elements in data centers and the cloud, it is not the first name that comes to mind in discussions of GenAI. At its recent Partner Summit event, however, the company made several announcements aimed at changing that perception. Specifically, Cisco debuted new servers equipped with Nvidia GPUs and AMD CPUs targeted at AI workloads, a new high-speed network switch optimized for interconnecting multiple AI-focused servers, and several preconfigured PODs of compute and network infrastructure designed for specific applications.

On the server side, Cisco's new UCS C885A M8 Server packages up to eight Nvidia H100 or H200 GPUs and AMD Epyc CPUs into a compact rack server capable of everything from model training to fine-tuning. Configured with both Nvidia Ethernet cards and DPUs, the system can function independently or be networked with other servers into a more powerful system. The new Nexus 9364E-SG2 switch, based on Cisco's latest G200 custom silicon, offers 800G speeds and large memory buffers to enable high-speed, low-latency connections across multiple servers.

The most interesting additions are the AI PODs: Cisco Validated Designs (CVDs) that combine CPU and GPU compute, storage, and networking with Nvidia's AI Enterprise platform software. Essentially, they are completely preconfigured infrastructure systems that provide an easier, plug-and-play way for organizations to launch their AI deployments, something many companies beginning their GenAI efforts need. Cisco is offering a range of AI PODs tailored for various industries and applications, helping organizations eliminate some of the guesswork in selecting the infrastructure they need for their specific requirements.
Additionally, because they come with Nvidia's software stack, there are several industry-specific applications and software building blocks (e.g., NIMs) that organizations can use to build on. Initially, the PODs are geared more toward AI inferencing than training, but Cisco plans to offer more powerful PODs capable of AI model training over time. Another key aspect of the new offerings is a link to Cisco's Intersight management and automation platform, providing companies with better device management capabilities and easier integration into their existing infrastructure environments. The net result is a new set of tools for Cisco and its sales partners to offer to their long-established enterprise customer base.

Realistically, Cisco's new server and compute offerings are unlikely to appeal to big cloud customers, who were early purchasers of this type of infrastructure. (Cisco's switches and routers, on the other hand, are key components for hyperscalers.) However, it's becoming increasingly clear that enterprises are interested in building their own AI-capable infrastructure as their GenAI journeys progress. While many AI application workloads will likely continue to exist in the cloud, companies are realizing the need to perform some of this work on-premises. In particular, because effective AI applications need to be trained or fine-tuned on a company's most valuable (and likely most sensitive) data, many organizations are hesitant to have that data, and models based on it, in the cloud. In that regard, even though Cisco is a bit late in bringing certain elements of its AI-focused infrastructure to market, the timing for its most likely audience could be just right. As Cisco's Jeetu Patel commented during the Day 2 keynote, "Data centers are cool again."
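The NIMs mentioned above are NVIDIA's packaged inference microservices, which expose an OpenAI-compatible HTTP API. As a minimal sketch of what "building on" one looks like (the endpoint URL and model name below are illustrative placeholders, not values from this article):

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload, the request
    shape that NIM inference microservices accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query_nim(base_url: str, payload: dict) -> dict:
    """POST the payload to a NIM's chat-completions endpoint.
    base_url is hypothetical; substitute your deployment's address."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build an example payload (no network call is made here):
payload = build_chat_request(
    "meta/llama-3.1-8b-instruct",
    "Summarize our Q3 supply chain report.",
)
```

Because the API surface matches the OpenAI chat-completions format, existing client code can typically be pointed at an on-premises NIM deployment with little more than a base-URL change.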
This point was further reinforced by the recent TECHnalysis Research survey report, The Intelligent Path Forward: GenAI in the Enterprise, which found that 80% of companies engaged in GenAI work were interested in running some of those applications on-premises. Ultimately, the projected market growth for on-site data centers presents intriguing new possibilities for Cisco and other traditional enterprise hardware suppliers. Whether due to data gravity, privacy, governance, or other issues, it now seems clear that while the move to hybrid cloud took nearly a decade, the transition to hybrid AI models that leverage cloud and on-premises resources (not to mention on-device AI applications for PCs and smartphones) will be significantly faster. How the market responds to that rapid evolution will be very interesting to observe.

Bob O'Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow Bob on Twitter @bobodtech.
[2]
Cisco debuts new Nvidia-powered data center systems for AI workloads - SiliconANGLE
Cisco Systems Inc. is expanding its hardware portfolio with two data center appliance lineups optimized to run artificial intelligence models. The systems debuted today at a partner event that the company is hosting in Los Angeles.

The first new product line, the UCS C885A M8 series, comprises servers that can each accommodate up to eight graphics processing units. Cisco offers three GPU options: the H100 and H200, which are both supplied by Nvidia Corp., as well as Advanced Micro Devices Inc.'s rival MI300X chip. Every graphics card in a UCS C885A M8 machine has its own network interface controller, or NIC, a specialized chip that acts as an intermediary between a server and the network to which it's attached. Cisco offers a choice between two Nvidia NICs: the ConnectX-7 or the BlueField-3, a so-called SuperNIC with additional components that speed up tasks such as encrypting data traffic. Cisco also ships the new servers with BlueField-3 data processing units, or DPUs, likewise made by Nvidia, which accelerate some of the tasks involved in managing the network and storage infrastructure attached to a server. A pair of AMD central processing units perform the computations not relegated to the server's more specialized chips; customers can choose between the chipmaker's latest fifth-generation CPUs or its 2022 server processor lineup.

Cisco debuted the server series alongside four so-called AI PODs. According to TechTarget, those are large data center appliances that combine up to 16 Nvidia graphics cards with networking equipment and other supporting components. Customers can optionally add more hardware, notably storage equipment from NetApp Inc. or Pure Storage Inc. On the software side, the AI PODs come with a license to Nvidia AI Enterprise, a collection of prepackaged AI models and tools that companies can use to train their own neural networks.
There are also more specialized components, such as the Nvidia Morpheus framework for building AI-powered cybersecurity software. The suite is complemented by two other software products: HPC-X and Red Hat OpenShift. The former is an Nvidia-developed toolkit that helps customers optimize the networks that power their AI clusters. OpenShift, in turn, is a platform that eases the task of building and deploying container applications.

"Enterprise customers are under pressure to deploy AI workloads, especially as we move toward agentic workflows and AI begins solving problems on its own," said Cisco Chief Product Officer Jeetu Patel. "Cisco innovations like AI PODs and the GPU server strengthen the security, compliance, and processing power of those workloads."

Cisco will make the AI PODs available for order next month. The UCS C885A M8 server series, in turn, is orderable now and will start shipping to customers by the end of the year.
[3]
Power Your GenAI Ambitions with New Cisco AI-Ready Data Center Infrastructure
Let's start with a staggering statistic: according to McKinsey, generative AI, or GenAI, will add somewhere between $2.6T and $4.4T per year to global economic output, with enterprises at the forefront. Whether you're a manufacturer looking to optimize your global supply chain, a hospital analyzing patient data to suggest personalized treatment plans, or a financial services company wanting to improve fraud detection, AI may hold the keys for your organization to unlock new levels of efficiency, insight, and value creation.

Many of the CIOs and technology leaders we talk to today recognize this. In fact, most say that their organizations are planning full GenAI adoption within the next two years. Yet according to the Cisco AI Readiness Index, only 14% of organizations report that their infrastructures are ready for AI today. What's more, a staggering 85% of AI projects stall or are disrupted once they have started. The reason? There's a high barrier to entry: adopting AI can require an organization to completely overhaul its infrastructure to meet the demands of specific AI use cases, build the skill sets needed to develop and support AI, and contend with the additional cost and complexity of securing and managing these new workloads.

We believe there's an easier path forward. That's why we're excited to introduce a strong lineup of products and solutions for data- and performance-intensive use cases like large language model training, fine-tuning, and inferencing for GenAI. Many of these new additions to Cisco's AI infrastructure portfolio are being announced at Cisco Partner Summit and can be ordered today. These announcements address the comprehensive infrastructure requirements that enterprises have across the AI lifecycle, from building and training sophisticated models to widespread use for inferencing. Let's walk through how that would work with the new products we're introducing.
A typical AI journey starts with training GenAI models on large amounts of data to build the model's intelligence. For this important stage, the new Cisco UCS C885A M8 Server is a powerhouse designed to tackle the most demanding AI training tasks. With its high-density configuration of NVIDIA H100 and H200 Tensor Core GPUs, coupled with the efficiency of the NVIDIA HGX architecture and AMD EPYC processors, the UCS C885A M8 provides the raw computational power necessary for handling massive data sets and complex algorithms. Moreover, its simplified deployment and streamlined management make it easier than ever for enterprise customers to embrace AI.

To train GenAI models, clusters of these powerful servers often work in unison, generating an immense flow of data that necessitates a network fabric capable of handling high bandwidth with minimal latency. This is where the newly released Cisco Nexus 9364E-SG2 Switch shines. Its high-density 800G aggregation ensures smooth data flow between servers, while advanced congestion management and large buffer sizes minimize packet drops, keeping latency low and training performance high. The Nexus 9364E-SG2 serves as a cornerstone for a highly scalable network infrastructure, allowing AI clusters to expand seamlessly as organizational needs grow.

Once these powerful models are trained, you need infrastructure deployed for inferencing to provide actual value, often across a distributed landscape of data centers and edge locations. We have greatly simplified this process with new Cisco AI PODs that accelerate deployment of the entire AI infrastructure stack. No matter where you fall on the spectrum of use cases mentioned at the beginning of this blog, AI PODs are designed to offer a plug-and-play experience with NVIDIA accelerated computing.
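To put "high-density 800G aggregation" in perspective, here is a rough back-of-envelope calculation. The 64-port count below is an assumption consistent with 51.2 Tbps-class switch silicon, not a figure stated in this article:

```python
# Back-of-envelope fabric bandwidth for an 800G AI aggregation switch.
# Assumed: 64 ports (an assumption, not from the article) at 800 Gb/s each.
ports = 64
port_speed_gbps = 800

aggregate_gbps = ports * port_speed_gbps        # total switching capacity
aggregate_tbps = aggregate_gbps / 1000

# Line-rate bytes per second a single 800G port can carry (ignoring
# protocol overhead): 800 Gb/s divided by 8 bits per byte.
bytes_per_sec_per_port = port_speed_gbps * 1e9 / 8

print(f"Aggregate: {aggregate_tbps} Tb/s")                    # 51.2 Tb/s
print(f"Per port: {bytes_per_sec_per_port / 1e9:.0f} GB/s")   # 100 GB/s
```

Numbers at this scale are why large buffers and congestion management matter: even a brief imbalance between senders can overwhelm a port that is already running near line rate.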
These pre-sized and pre-validated bundles of infrastructure eliminate the guesswork of deploying edge inferencing, large-scale clusters, and other AI inferencing solutions, with more use cases planned for release over the next few months. Our goal is to enable customers to confidently deploy AI PODs with predictability around performance, scalability, cost, and outcomes, while shortening the time to production-ready inferencing with a full stack of infrastructure, software, and AI toolsets. AI PODs include NVIDIA AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines AI development and deployment. Managed through Cisco Intersight, AI PODs provide centralized control and automation, simplifying everything from configuration to day-to-day operations.

To help organizations modernize their data center operations and enable AI use cases, we further simplify infrastructure deployment and management with Cisco Nexus Hyperfabric, a fabric-as-a-service solution announced earlier this year at Cisco Live. Cisco Nexus Hyperfabric features a cloud-managed controller that simplifies the design, deployment, and management of the network fabric for consistent performance and operational ease. Its hardware-accelerated performance, with inherent high bandwidth and low latency, optimizes AI inferencing, enabling fast response times and efficient resource utilization for demanding, real-time AI applications. Furthermore, its comprehensive monitoring and analytics capabilities provide real-time visibility into network performance, allowing for proactive issue identification and resolution to maintain a smooth and reliable inferencing environment.
By providing a seamless continuum of solutions, from powerful training servers and high-performance networking to simplified inference deployments, we are enabling enterprises to accelerate their AI initiatives, unlock the full potential of their data, and drive meaningful innovation.

The Cisco UCS C885A M8 Server is orderable now and is expected to ship to customers by the end of this year. The Cisco AI PODs will be orderable in November. The Cisco Nexus 9364E-SG2 Switch will be orderable in January 2025, with availability beginning in Q1 of calendar year 2025. Cisco Nexus Hyperfabric will be available for purchase in January 2025 with 30+ certified partners. Hyperfabric AI will be available in May and will include a plug-and-play AI solution comprising Cisco UCS servers (with embedded NVIDIA accelerated computing and AI software) and optional VAST storage.

For more information about these products, please visit:

If you are attending the Cisco Partner Summit this week, please visit the solution showcase to see the Cisco UCS C885A M8 Server and Cisco Nexus 9364E-SG2 Switch. You can also attend the business insights session BIS08, "Revolutionize tomorrow: Unleash innovation through the power of AI-ready infrastructure," for more details on the products and solutions announced.
[4]
Cisco Expands AI Infrastructure Offerings
While most people think of Cisco (NASDAQ:CSCO) as a company that links infrastructure elements together in data centers and the cloud, it is definitely not the first company that comes to mind when you mention GenAI. Its new announcements provide Cisco and its sales partners with a new set of tools that they can sell to their long-established base of enterprise customers.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
Cisco introduces new AI-focused servers, networking equipment, and preconfigured AI PODs to strengthen its position in the growing AI infrastructure market.
Cisco, traditionally known for networking and data center solutions, is making a significant push into the AI infrastructure market. At its recent Partner Summit, the company unveiled a range of new products designed to support the growing demand for AI workloads in enterprise environments [1][2].
At the heart of Cisco's new offerings is the UCS C885A M8 Server, a powerhouse designed for AI training and inferencing tasks. This server can accommodate up to eight NVIDIA H100 or H200 GPUs, or AMD MI300X chips, paired with AMD EPYC CPUs [1][2]. The server's architecture includes dedicated network interface controllers (NICs) for each GPU and NVIDIA BlueField-3 data processing units (DPUs) to optimize network and storage management [2].
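The per-GPU NIC design described above has a direct consequence for network planning: the fabric must terminate one GPU-facing port per accelerator. A small illustrative sketch (the dataclass and cluster sizes are hypothetical; GPU and CPU counts are taken from the article, while the DPU count is an assumption):

```python
from dataclasses import dataclass


@dataclass
class UCSC885AM8:
    """Illustrative model of the server configuration described above."""
    gpus: int = 8          # up to eight H100/H200 or MI300X accelerators
    nics_per_gpu: int = 1  # one dedicated NIC (ConnectX-7 or BlueField-3) per GPU
    dpus: int = 1          # DPU count per server is an assumption, not from the article
    cpus: int = 2          # a pair of AMD EPYC processors

    def total_nics(self) -> int:
        """GPU-facing network ports a single server presents to the fabric."""
        return self.gpus * self.nics_per_gpu


def cluster_gpu_ports(servers: int, node: UCSC885AM8) -> int:
    """Total GPU-side ports a switch fabric must terminate for a cluster."""
    return servers * node.total_nics()


# A hypothetical 4-server training cluster:
ports_needed = cluster_gpu_ports(4, UCSC885AM8())
print(ports_needed)  # 32 GPU-facing network ports
```

Even this small cluster already needs 32 high-speed ports on the GPU fabric alone, which is why the announcement pairs the servers with a high-radix 800G switch.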
Complementing the new server is the Nexus 9364E-SG2 switch, based on Cisco's latest G200 custom silicon. This switch offers 800G speeds and large memory buffers, enabling high-speed, low-latency connections crucial for AI workloads [1][3].
Cisco has also introduced AI PODs, which are preconfigured infrastructure systems designed to simplify AI deployment for organizations [1][3]. These Cisco Validated Designs (CVDs) combine CPU and GPU compute, storage, and networking components with NVIDIA's AI Enterprise platform software [1]. The AI PODs offer a plug-and-play solution for various industries and applications, initially focusing on AI inferencing with plans to support more powerful training configurations in the future [1][3].
The new hardware offerings are integrated with Cisco's Intersight management and automation platform, providing improved device management and easier integration into existing infrastructure environments [1][3]. Additionally, the AI PODs come with licenses for NVIDIA AI Enterprise, HPC-X, and Red Hat OpenShift, offering a comprehensive software stack for AI development and deployment [2][3].
While Cisco may be entering the AI infrastructure market later than some competitors, the timing could be advantageous for its target audience of enterprise customers [1]. Recent research indicates that 80% of companies engaged in GenAI work are interested in running some applications on-premises, suggesting a growing market for on-site AI infrastructure [1].
Cisco's Chief Product Officer, Jeetu Patel, emphasized the renewed importance of data centers, stating, "Data centers are cool again" [1]. This sentiment is echoed by the increasing demand for on-premises AI solutions due to data gravity, privacy, and governance concerns [1][4].
The introduction of these AI-focused products positions Cisco to capitalize on the projected growth in on-site data centers and the rapid transition to hybrid AI models [1]. As enterprises seek to balance cloud and on-premises resources for AI workloads, Cisco's comprehensive offering of servers, networking equipment, and preconfigured solutions could play a significant role in shaping the future of enterprise AI infrastructure [1][3][4].
With these new products, Cisco aims to lower the barrier to entry for AI adoption in enterprises, addressing challenges such as infrastructure overhaul, skill set development, and security concerns [4]. As the AI market continues to evolve, Cisco's strategic shift towards AI-ready infrastructure could have far-reaching implications for both the company and its enterprise customers.