2 Sources
[1]
A New ERA of AI Factories: NVIDIA Unveils Enterprise Reference Architectures
Global enterprises can now tap into new reference architectures to build high-performance, scalable and secure data centers.

As the world transitions from general-purpose to accelerated computing, finding a path to building data center infrastructure at scale is becoming more important than ever. Enterprises must navigate uncharted waters when designing and deploying infrastructure to support these new AI workloads. Constant developments in model capabilities and software frameworks, along with the novelty of these workloads, mean best practices and standardized approaches are still in their infancy. This state of flux can make it difficult for enterprises to establish long-term strategies and invest in infrastructure with confidence.

To address these challenges, NVIDIA is unveiling Enterprise Reference Architectures (Enterprise RAs). These comprehensive blueprints help NVIDIA systems partners and joint customers build their own AI factories -- high-performance, scalable and secure data centers for manufacturing intelligence.

Building AI Factories to Unlock Enterprise Growth

NVIDIA Enterprise RAs help organizations avoid pitfalls when designing AI factories by providing full-stack hardware and software recommendations, and detailed guidance on optimal server, cluster and network configurations for modern AI workloads. Enterprise RAs can reduce the time and cost of deploying AI infrastructure solutions by providing a streamlined approach for building flexible and cost-effective accelerated infrastructure, while ensuring compatibility and interoperability.

Each Enterprise RA includes recommendations for:

Businesses that deploy AI workloads on partner solutions based upon Enterprise RAs, which are informed by NVIDIA's years of expertise in designing and building large-scale computing systems, will benefit from:
[2]
Nvidia hands out blueprints for the creation of scalable 'AI factories' - SiliconANGLE
Nvidia Corp. said today it wants to help organizations plan and build advanced, future-proof "artificial intelligence factories" to support a new generation of intelligent applications that will come online in the years ahead.

When Nvidia talks about AI factories, what it really means is high-performance and secure data centers designed to "manufacture intelligence," and it recognizes that the task of building them can be daunting for any enterprise. The challenge is that organizations must navigate uncharted waters, because no one has built AI factories before. With the rapid pace of development in large language model capabilities and software frameworks, best practices and standardized approaches for building such data centers are still in their infancy. This makes it difficult to invest in data center infrastructure with any confidence.

Nvidia wants to change that, and to do so it is unveiling a series of Enterprise Reference Architectures, or blueprints that can help organizations ensure their AI factories can evolve and scale up to support the latest innovations for years to come. The blueprints are said to provide detailed recommendations on the full-stack hardware and software needed for AI factories, and guidance on aspects such as the optimal server, cluster and network configurations. According to Nvidia, by using its Enterprise RAs, companies will be able to build and deploy "cost-effective accelerated infrastructure" that's interoperable with various third-party hardware and software components, so it can easily be updated in the future.

Naturally, Nvidia believes that most organizations' AI factories will need to integrate lots of its own hardware, so the reference architectures provide recommendations for Nvidia-certified servers featuring its graphics processing units, which are the workhorse for most AI applications. The reference architectures also provide guidelines for AI-optimized networking using the Nvidia Spectrum-X AI Ethernet platform and the company's BlueField-3 data processing units to ensure peak performance and the flexibility to scale in the future. Nvidia's AI Enterprise platform, which includes microservices such as Nvidia NeMo and Nvidia NIM for building and deploying AI applications, is another component of the reference architectures. So is Nvidia Base Command Manager Essentials, which provides tools for infrastructure provisioning, workload management and resource monitoring.

The AI chip leader said its blueprints will be made available to companies through its server manufacturing partners, such as Dell Technologies Inc., Hewlett Packard Enterprise Co., Super Micro Computer Inc. and Lenovo Group Ltd. That means enterprises still have a lot of flexibility in terms of the underlying server platforms they want to use to power their AI factories.

Perhaps the biggest benefit of using Nvidia's reference architectures is being able to get up and running faster, as customers can simply follow its structured approach instead of trying to figure things out for themselves. Nvidia also professes confidence that the blueprints will ensure companies can squeeze maximum performance out of their server hardware. The other key advantage has to do with scale: the future-proofed reference architectures are designed in such a way that they can easily be upgraded as more innovations in hardware and software become available.
"Enterprise RAs reduce the time and cost of deploying AI infrastructure solutions by providing a streamlined approach for building flexible and cost-effective accelerated infrastructure," said Bob Petter, vice president and general manager of enterprise platforms at Nvidia. Although following the blueprints inevitably requires making a commitment to using Nvidia's hardware and software, it's likely that many organizations will do so, said Holger Mueller of Constellation Research Inc. According to the analyst, most enterprises simply don't have the necessary skills and experience to go about creating the infrastructure for AI projects by themselves. And they're not helped by the fast-moving nature of AI. "Nvidia plays a key role in making almost every generative AI project work, and its blueprints will make it much easier for organizations to build and upgrade their on-premises AI architectures," Mueller said. "So long as enterprises are happy using Nvidia's chips, and many are, this is a win-win scenario. The enterprise gets to go live with their AI projects sooner, while Nvidia bags another long term customer."
NVIDIA introduces Enterprise Reference Architectures to help organizations build high-performance, scalable, and secure data centers for AI workloads, addressing the challenges of designing and deploying infrastructure for modern AI applications.
NVIDIA has unveiled Enterprise Reference Architectures (Enterprise RAs), a set of comprehensive blueprints designed to help organizations build high-performance, scalable, and secure data centers for AI workloads. As the world transitions from general-purpose to accelerated computing, these reference architectures aim to address the challenges enterprises face when designing and deploying infrastructure to support new AI workloads [1].

The rapid development of AI model capabilities and software frameworks has left many organizations struggling to establish long-term strategies and invest in infrastructure with confidence. NVIDIA's Enterprise RAs provide a solution by offering full-stack hardware and software recommendations, along with detailed guidance on optimal server, cluster, and network configurations for modern AI workloads [1][2].
The Enterprise RAs include several key components: NVIDIA-certified servers built around NVIDIA GPUs, AI-optimized networking based on the NVIDIA Spectrum-X Ethernet platform and BlueField-3 data processing units, the NVIDIA AI Enterprise software platform with NeMo and NIM microservices for building and deploying AI applications, and NVIDIA Base Command Manager Essentials for infrastructure provisioning, workload management, and resource monitoring [2].

These blueprints are designed to be flexible, allowing organizations to choose underlying server platforms from NVIDIA's partners such as Dell Technologies, Hewlett Packard Enterprise, Super Micro Computer, and Lenovo [2].
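To give a concrete sense of the software layer in that stack, the sketch below shows how an application might query a model served by one of the NIM inference microservices mentioned above. NIM containers expose an OpenAI-compatible HTTP API, so a standard client library can simply be pointed at the deployment; the endpoint URL, port, and model name here are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: querying a model served by an NVIDIA NIM microservice.
# Assumes a NIM container is already running and reachable at localhost:8000;
# the address and model name are illustrative, not taken from the article.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # OpenAI-compatible endpoint exposed by the NIM container (assumed address)
    api_key="not-used",                   # local deployments typically do not validate the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # hypothetical: whichever model the container serves
    messages=[{"role": "user", "content": "Summarize what an AI factory is."}],
    max_tokens=128,
)

print(response.choices[0].message.content)
```

In a full Enterprise RA deployment the same request would typically target the cluster's service endpoint rather than localhost, with provisioning and monitoring handled by tools such as Base Command Manager Essentials.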
Organizations that deploy AI workloads based on NVIDIA's Enterprise RAs can expect several benefits: reduced time and cost of deployment through a structured, validated approach; peak performance from the underlying server hardware; interoperability with third-party hardware and software components; and the ability to scale and upgrade the infrastructure as new hardware and software innovations become available [1][2].
The introduction of Enterprise RAs is expected to significantly impact enterprise AI adoption. By providing a structured approach to building AI factories, NVIDIA aims to help organizations overcome the challenges of navigating uncharted waters in AI infrastructure development [2].
Bob Pette, vice president and general manager of enterprise platforms at NVIDIA, emphasized the efficiency gains: "Enterprise RAs reduce the time and cost of deploying AI infrastructure solutions by providing a streamlined approach for building flexible and cost-effective accelerated infrastructure" [2].

Holger Mueller, an analyst at Constellation Research Inc., believes that NVIDIA's blueprints will be crucial for many organizations lacking the necessary skills and experience to create AI infrastructure independently. He states, "Nvidia plays a key role in making almost every generative AI project work, and its blueprints will make it much easier for organizations to build and upgrade their on-premises AI architectures" [2].

As the AI landscape continues to evolve rapidly, NVIDIA's Enterprise Reference Architectures offer a promising solution for organizations looking to build robust, scalable, and future-proof AI factories. By providing comprehensive guidance and leveraging NVIDIA's expertise in designing large-scale computing systems, these blueprints are poised to accelerate the adoption and deployment of AI infrastructure across industries.
Summarized by Navi