Curated by THEOUTPOST
On Wed, 13 Nov, 8:01 AM UTC
5 Sources
[1]
Nutanix Enterprise AI: Scalable AI Solution for multicloud workloads - SiliconANGLE
Inside Nutanix's AI evolution: Simplifying AI/ML workloads with scalable solutions

As demand to support artificial intelligence and machine learning workloads at enterprise scale skyrockets, having an AI-ready platform is vital. Nutanix has addressed this need with its Enterprise AI solution, which simplifies the deployment of large language models by leveraging its hyperconverged infrastructure. This scalable solution meets the demanding needs of AI/ML workloads and provides the flexibility to operate in private, public and hybrid cloud environments, according to Luke Congdon (pictured), senior director of product management at Nutanix Inc.

"We just released what we're calling Nutanix Enterprise AI, and it's our new product line to say we are an AI company," Congdon said. "We are offering generative AI on-premises now with inference endpoints, with security, with cost control, with simplicity, which is what we've always been trying to do."

Congdon spoke with theCUBE Research's Savannah Peterson and Rob Strechay at KubeCon + CloudNativeCon NA, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed the key role that Nutanix Enterprise AI plays in simplifying infrastructure management. (* Disclosure below.)

To drive AI innovation, partnerships play a vital role, as critical issues such as comprehensive data management and analytics capabilities must be effectively addressed. As a result, Nutanix teams up with various players, such as Hugging Face Inc. and Nvidia Corp., to make solutions such as Nutanix Enterprise AI a reality, according to Congdon.

"Partnerships, I think, are the way to do it," he said. "Even on the NAI announcement, we partnered with Hugging Face for access to their hub because they've got all the models in the world. What I love about it is that these are kind of turbocharged engines and, for the most part, they're free. They're Apache 2.0 or Meta-licensed. We've also partnered with Nvidia, and we've been doing that for years, both on the GPU side for virtual desktop as well as for their AI Enterprise suite and their NIM products."

To streamline IT operations by supporting Kubernetes deployments and multicloud environments, Nutanix made a strategic acquisition of D2iQ Inc. The move was meant to simplify containerized application development, making it easier to manage complex and distributed workloads across different infrastructures, Congdon pointed out.

"We'd like to make sure that you're going to get production-level Kubernetes with security, with load balancing, with ingress, with everything else that you need, because Kubernetes is really great, but it's one key important orchestrator piece," he said. "You need so much more. What's really unique about them coming to Nutanix is they got what has traditionally really been hard for Kubernetes ... stateful storage across objects, files, volumes, anything that you need."
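Congdon's point about the Hugging Face hub is easy to picture in code. The snippet below is a minimal sketch, not part of Nutanix's product: it shows how an openly licensed model might be pulled from the hub with the huggingface_hub library. The model ID and local directory are illustrative assumptions.

```python
# Minimal sketch (assumptions: the model ID and local path are placeholders,
# not anything Nutanix ships). Pulls an Apache-2.0-licensed model from the
# Hugging Face Hub so it can be served from infrastructure the operator controls.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",   # example openly licensed model
    local_dir="/models/mistral-7b-instruct",        # assumed path on cluster storage
)
print(f"Model files downloaded to {local_dir}")
```

Once the weights sit on storage the platform controls, serving them becomes an infrastructure problem rather than a data-egress one, which is the point Congdon is making about generative AI on-premises.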
[2]
Nutanix Expands its AI Platform for Enterprise Cloud
Nutanix has officially launched Nutanix Enterprise AI, a new solution aimed at transforming enterprise AI infrastructure for large-scale generative AI applications. The platform allows organisations to manage, deploy and scale LLMs across any Kubernetes platform, enabling seamless operation at the edge, in data centres, or on public cloud services like AWS EKS, Azure AKS, and Google GKE.

Nutanix's latest offering focuses on security, scalability and cost predictability, made possible through partnerships with NVIDIA and Hugging Face. Designed with NVIDIA's AI Enterprise software stack and NVIDIA NIM microservices, Nutanix Enterprise AI enables optimised model performance, offering customers the flexibility to deploy AI models wherever needed, whether on-premises, at the edge, or across cloud platforms. This approach provides the performance and security required for enterprise applications, while Hugging Face integration ensures compatibility with popular open foundation models, further expanding the platform's adaptability.

"Our goal with Nutanix Enterprise AI is to provide a simple, secure way for customers to manage their GenAI infrastructure in the location of their choice with predictable costs," said Thomas Cornely, senior vice president of product management at Nutanix. "Thanks to our deep collaboration with partners like NVIDIA and Hugging Face, this platform empowers organisations to deploy AI solutions confidently, with enterprise-grade controls."

Nutanix Enterprise AI can help customers overcome AI skill shortages by providing tools that let IT administrators manage AI systems alongside data scientists, using NVIDIA accelerated computing. It simplifies the deployment of AI platforms, offering a user-friendly, UI-driven workflow for consistent performance across on-premises and cloud environments, including AWS, Azure, and Google Cloud, and supports model standards like Hugging Face. Nutanix also addresses data privacy and security concerns by enabling users to run AI on controlled resources with secure role-based access and monitoring features, even in highly secure environments. Finally, the platform extends Nutanix's enterprise-grade reliability, scalability, and security to GenAI workloads.

Justin Boitano, vice president of Enterprise AI at NVIDIA, highlighted the importance of the hybrid model for GenAI applications, stating, "Integrating NVIDIA NIM into Nutanix Enterprise AI provides a consistent multicloud model with secure APIs, allowing customers to deploy AI across diverse environments with the high performance and security needed for business-critical applications."
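Because NVIDIA NIM microservices expose an OpenAI-compatible HTTP API, a deployed inference endpoint can be exercised with a few lines of client code regardless of where it runs. The sketch below is illustrative only; the endpoint URL, API key, and model name are assumptions, not details published by Nutanix.

```python
# Hypothetical client call against a NIM-backed inference endpoint. The URL,
# bearer token, and model identifier below are placeholders for this example.
import requests

ENDPOINT = "https://nai.example.internal/v1/chat/completions"  # assumed endpoint URL

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": "Bearer <api-key>"},              # assumed auth scheme
    json={
        "model": "meta/llama-3.1-8b-instruct",                  # example model name
        "messages": [{"role": "user", "content": "Summarize last quarter's support tickets."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same request works whether the endpoint lives on-premises, at the edge, or on EKS, AKS, or GKE, which is what the "deploy AI models wherever needed" claim amounts to from an application's point of view.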
[3]
Nutanix Extends AI Platform To Public Cloud
Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, today announced that it extended the company's AI infrastructure platform with a new cloud native offering, Nutanix Enterprise AI (NAI), that can be deployed on any Kubernetes platform, at the edge, in core data centres, and on public cloud services like AWS EKS, Azure AKS, and Google GKE. The NAI offering delivers a consistent hybrid multicloud operating model for accelerated AI workloads, enabling organisations to leverage their models and data in a secure location of their choice while improving return on investment (ROI). Leveraging NVIDIA NIM for optimised performance of foundation models, Nutanix Enterprise AI helps organisations securely deploy, run, and scale inference endpoints for large language models (LLMs) to support the deployment of generative AI (GenAI) applications in minutes, not days or weeks.

Generative AI is an inherently hybrid workload, with new applications often built in the public cloud, fine-tuning of models using private data occurring on-premises, and inferencing deployed closest to the business logic, which could be at the edge, on-premises or in the public cloud. This distributed hybrid GenAI workflow can present challenges for organisations concerned about complexity, data privacy, security, and cost.

Nutanix Enterprise AI provides a consistent multicloud operating model and a simple way to securely deploy, scale, and run LLMs with NVIDIA NIM optimised inference microservices as well as open foundation models from Hugging Face. This enables customers to stand up enterprise GenAI infrastructure with the resiliency, day 2 operations, and security they require for business-critical applications, on-premises or on AWS Elastic Kubernetes Service (EKS), Azure Managed Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Additionally, Nutanix Enterprise AI delivers a transparent and predictable pricing model based on infrastructure resources, which is important for customers looking to maximise ROI from their GenAI investments. This is in contrast to hard-to-predict usage or token-based pricing.

Nutanix Enterprise AI is a component of Nutanix GPT-in-a-Box 2.0. GPT-in-a-Box also includes Nutanix Cloud Infrastructure, Nutanix Kubernetes Platform, and Nutanix Unified Storage, along with services to support customer configuration and sizing needs for on-premises training and inferencing. For customers looking to deploy in public cloud, Nutanix Enterprise AI can be deployed in any Kubernetes environment and is operationally consistent with on-premises deployments.

"With Nutanix Enterprise AI, we're helping our customers simply and securely run GenAI applications on-premises or in public clouds. Nutanix Enterprise AI can run on any Kubernetes platform and allows their AI applications to run in their secure location, with a predictable cost model," said Thomas Cornely, SVP, Product Management, Nutanix.

Nutanix Enterprise AI can be deployed with the NVIDIA full-stack AI platform and is validated with the NVIDIA AI Enterprise software platform, including NVIDIA NIM, a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing. Nutanix GPT-in-a-Box is also an NVIDIA-Certified System, ensuring reliable performance.
"Generative AI workloads are inherently hybrid, with training, customisation, and inference occurring across public clouds, on-premises systems, and edge locations," said Justin Boitano, vice president of enterprise AI at NVIDIA. "Integrating NVIDIA NIM into Nutanix Enterprise AI provides a consistent multicloud model with secure APIs, enabling customers to deploy AI across diverse environments with the high performance and security needed for business-critical applications." Nutanix Enterprise AI can help customers: Key use cases for customers leveraging Nutanix Enterprise AI include: enhancing customer experience with GenAI through analysis of customer feedback and documents; accelerating code and content creation by leveraging co-pilots and intelligent document processing; leveraging fine-tuning models on domain-specific data to accelerate code and content generation; strengthening security, including leveraging AI models for fraud detection, threat detection, alert enrichment, and automatic policy creation; and improving analytics by leveraging fine-tuned models on private data. Nutanix Enterprise AI, running on-premises, at the edge or in public cloud, and Nutanix GPT-in-a-Box 2.0 are currently available to customers. For more information, please visit Nutanix.com/enterprise-ai. Nutanix is a global leader in cloud software, offering organizations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively. Learn more at www.nutanix.com or follow us on social media @nutanix.
[4]
Nutanix Extends AI Platform to Public Cloud
Nutanix Enterprise AI provides an easy-to-use, unified generative AI experience on-premises, at the edge and now in public clouds.

Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, today announced that it extended the company's AI infrastructure platform with a new cloud native offering, Nutanix Enterprise AI (NAI), that can be deployed on any Kubernetes platform, at the edge, in core data centers, and on public cloud services like AWS EKS, Azure AKS, and Google GKE. The NAI offering delivers a consistent hybrid multicloud operating model for accelerated AI workloads, enabling organizations to leverage their models and data in a secure location of their choice while improving return on investment (ROI). Leveraging NVIDIA NIM for optimized performance of foundation models, Nutanix Enterprise AI helps organizations securely deploy, run, and scale inference endpoints for large language models (LLMs) to support the deployment of generative AI (GenAI) applications in minutes, not days or weeks.

Generative AI is an inherently hybrid workload, with new applications often built in the public cloud, fine-tuning of models using private data occurring on-premises, and inferencing deployed closest to the business logic, which could be at the edge, on-premises or in the public cloud. This distributed hybrid GenAI workflow can present challenges for organizations concerned about complexity, data privacy, security, and cost.

Nutanix Enterprise AI provides a consistent multicloud operating model and a simple way to securely deploy, scale, and run LLMs with NVIDIA NIM optimized inference microservices as well as open foundation models from Hugging Face. This enables customers to stand up enterprise GenAI infrastructure with the resiliency, day 2 operations, and security they require for business-critical applications, on-premises or on AWS Elastic Kubernetes Service (EKS), Azure Managed Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Additionally, Nutanix Enterprise AI delivers a transparent and predictable pricing model based on infrastructure resources, which is important for customers looking to maximize ROI from their GenAI investments. This is in contrast to hard-to-predict usage or token-based pricing.

Nutanix Enterprise AI is a component of Nutanix GPT-in-a-Box 2.0. GPT-in-a-Box also includes Nutanix Cloud Infrastructure, Nutanix Kubernetes Platform, and Nutanix Unified Storage, along with services to support customer configuration and sizing needs for on-premises training and inferencing. For customers looking to deploy in public cloud, Nutanix Enterprise AI can be deployed in any Kubernetes environment and is operationally consistent with on-premises deployments.

"With Nutanix Enterprise AI, we're helping our customers simply and securely run GenAI applications on-premises or in public clouds. Nutanix Enterprise AI can run on any Kubernetes platform and allows their AI applications to run in their secure location, with a predictable cost model," said Thomas Cornely, SVP, Product Management, Nutanix.

Nutanix Enterprise AI can be deployed with the NVIDIA full-stack AI platform and is validated with the NVIDIA AI Enterprise software platform, including NVIDIA NIM, a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing. Nutanix GPT-in-a-Box is also an NVIDIA-Certified System, ensuring reliable performance.
"Generative AI workloads are inherently hybrid, with training, customization, and inference occurring across public clouds, on-premises systems, and edge locations," said Justin Boitano, vice president of enterprise AI at NVIDIA. "Integrating NVIDIA NIM into Nutanix Enterprise AI provides a consistent multicloud model with secure APIs, enabling customers to deploy AI across diverse environments with the high performance and security needed for business-critical applications." Nutanix Enterprise AI can help customers: Address AI skill shortages. Simplicity, choice, and built-in features mean IT admins can be AI admins, accelerating AI development by data scientists and developers adapting quickly using the latest models and NVIDIA accelerated computing.Remove barriers to building an AI-ready platform. Many organizations looking to adopt GenAI struggle with building the right platform to support AI workloads, including maintaining consistency across their on-premises infrastructure and multiple public clouds. Nutanix Enterprise AI addresses this with a simple UI-driven workflow that can help customers deploy and test LLM inference endpoints in minutes, offering customer choice with support for NVIDIA NIM microservices which run anywhere, ensuring optimized model performance across cloud and on prem environments. Hugging Face and other model standards are also supported. Additionally, native integration with Nutanix Kubernetes Platform keeps alignment with the ability to leverage the entire Nutanix Cloud Platform or provide customers with the option to run on any Kubernetes runtime, including AWS EKS, Azure AKS, or Google Cloud GKE with NVIDIA accelerated computing.Mitigate data privacy and security concerns. Helping mitigate privacy and security risks is built into Nutanix Enterprise AI by enabling customers to run models and data on compute resources they control. Additionally, Nutanix Enterprise AI delivers an intuitive dashboard for troubleshooting, observability, and utilization of resources used for LLMs, as well as quick and secure role-based access controls (RBAC) to ensure LLM accessibility is controllable and understood. Organizations requiring hardened security will also be able to deploy in air-gapped or dark-site environments.Bring enterprise infrastructure to GenAI workloads. Customers running Nutanix Cloud Platform for business-critical applications can now bring the same resiliency, Day 2 operations, and security to GenAI workloads for an enterprise infrastructure experience. Key use cases for customers leveraging Nutanix Enterprise AI include: enhancing customer experience with GenAI through analysis of customer feedback and documents; accelerating code and content creation by leveraging co-pilots and intelligent document processing; leveraging fine-tuning models on domain-specific data to accelerate code and content generation; strengthening security, including leveraging AI models for fraud detection, threat detection, alert enrichment, and automatic policy creation; and improving analytics by leveraging fine-tuned models on private data. Nutanix Enterprise AI, running on-premises, at the edge or in public cloud, and Nutanix GPT-in-a-Box 2.0 are currently available to customers. For more information, please visit Nutanix.com/enterprise-ai. 
Supporting Quotes:

"Thanks to the deep collaboration between the Nutanix and Hugging Face teams, customers of Nutanix Enterprise AI are able to seamlessly deploy the most popular open models in an easy-to-use, fully tested stack - now also on public clouds," said Jeff Boudier, Head of Product at Hugging Face.

"By providing a consistent experience from the enterprise to public cloud, Nutanix Enterprise AI aims to provide a user-friendly infrastructure platform to support organizations at every step of their AI journey, from public cloud to the edge," said Dave Pearson, Infrastructure Research VP at IDC.

About Nutanix

Nutanix is a global leader in cloud software, offering organizations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively. Learn more at www.nutanix.com or follow us on social media @nutanix.
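For readers who want a concrete picture of what "deploy and test LLM inference endpoints in minutes ... on any Kubernetes runtime" implies, the sketch below creates a plain Deployment and Service with the official Kubernetes Python client. It is a generic illustration under stated assumptions (the container image, namespace, port, and GPU request are placeholders), not Nutanix's UI-driven workflow.

```python
# Generic sketch of standing up an LLM inference endpoint on any Kubernetes
# cluster (EKS, AKS, GKE, or on-premises). Image, namespace, port, and GPU
# request are illustrative assumptions, not values from Nutanix documentation.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster

NAMESPACE = "genai"                                            # assumed namespace
IMAGE = "nvcr.io/nim/meta/llama-3.1-8b-instruct:latest"        # example NIM image

container = client.V1Container(
    name="llm-inference",
    image=IMAGE,
    ports=[client.V1ContainerPort(container_port=8000)],       # assumed serving port
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1ServiceSpec(
        selector={"app": "llm-inference"},
        ports=[client.V1ServicePort(port=80, target_port=8000)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace=NAMESPACE, body=deployment)
client.CoreV1Api().create_namespaced_service(namespace=NAMESPACE, body=service)
```

What the press release positions Nutanix Enterprise AI as adding on top of raw calls like these are the pieces that are tedious to build by hand: role-based access control, observability dashboards, a model catalog, and consistent Day 2 operations.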
[5]
Nutanix offers cloud-native AI deployment platform with predictable pricing
Nutanix Inc. today is breaking out the inferencing components of its GPT-in-a-Box toolkit in the form of a cloud-native artificial intelligence infrastructure platform that runs on any Kubernetes installation. That can be at the edge, in private data centers and on public cloud services like Amazon Web Services Inc.'s Elastic Kubernetes Service, Microsoft Corp.'s Azure Kubernetes Service and Google LLC's Google Kubernetes Engine.

Nutanix Enterprise AI provides a consistent multicloud operating model that can cut the deployment of generative AI applications from days to minutes, the company said. "It gives you automation to stand up an inference endpoint, download a model and deploy the systems in a secure and controlled way," said Thomas Cornely, senior vice president of product management at Nutanix.

Introduced last summer, GPT-in-a-Box is positioned as a full-stack, software-defined platform that includes all the elements needed to build AI-ready infrastructure. It's intended for companies building models from scratch, "but a lot of people are starting in the cloud," Cornely said. "We can meet them where they are." Nutanix Enterprise AI does away with the infrastructure, Nutanix Kubernetes Platform, and Nutanix Unified Storage components of the integrated offering, leaving it to customers to provide those elements.

The release is a break from tradition for Nutanix. "It's the first time we've released something that's cloud-native on day one," Cornely said. "We more typically work on-prem and extend in the cloud."

The offering is aimed at data scientists, who often have to provision infrastructure for generative AI models. "That's not something data scientists are good at," Cornely said. "They want to focus on the application and consuming the model, not standing up the infrastructure."

Generative AI workloads are often inherently hybrid, with applications built in the public cloud, fine-tuning occurring on-premises and inferencing determined by business need. That presents issues of complexity, data privacy, security and cost that Nutanix is addressing.

Simple and consistent

Simplicity and consistency are "true to the Nutanix story," Cornely said. "It's about how to make it simple for customers who don't want the risk of things not doing what they're supposed to do. This is about deploying in a predictable, cost-effective fashion and keeping in control."

Nutanix Enterprise AI comes with built-in support for Nvidia Corp.'s NIM microservices, a set of services that help deploy AI models consistently across different platforms. It also supports open-source foundation models from Hugging Face Inc. However, customers can deploy whatever LLMs they choose.

Nutanix Enterprise AI is being offered under what the company calls a "transparent and predictable pricing model" based on the infrastructure customers use, such as the number of CPU cores and graphics processing units. Provisioning is done in a point-and-click workflow similar to the way cloud servers are provisioned.

Resource-based pricing contrasts with most cloud services, which price based on usage. "We see a lot of customers reacting to AI the same way they first reacted to the cloud; they don't know what they are getting and how much it's going to cost," Cornely said. "This is a subscription term license paid per year based on resources. That's common to the entire portfolio." Pricing specifics weren't provided.
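The resource-based versus usage-based pricing contrast the article draws can be made concrete with some back-of-the-envelope arithmetic. All figures in the sketch below are invented placeholders; the article gives no prices.

```python
# Hypothetical cost comparison; every number here is a made-up placeholder.
monthly_requests = 2_000_000
tokens_per_request = 1_500                    # prompt + completion, assumed
usd_per_million_tokens = 5.00                 # assumed usage-based (token) price

usage_based = monthly_requests * tokens_per_request / 1_000_000 * usd_per_million_tokens

gpus_reserved = 4
usd_per_gpu_per_month = 2_500.00              # assumed resource-based subscription price

resource_based = gpus_reserved * usd_per_gpu_per_month

print(f"Usage-based estimate:    ${usage_based:,.0f}/month (scales with traffic)")
print(f"Resource-based estimate: ${resource_based:,.0f}/month (fixed for the term)")
```

The point is not which number comes out lower, but that the resource-based figure is knowable before the first request is served, which is the predictability Cornely is describing.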
Nutanix launches Enterprise AI, a cloud-native solution for deploying and managing AI workloads across multicloud environments, offering simplified infrastructure management and predictable pricing.
Nutanix, a leader in hybrid multicloud computing, has unveiled Nutanix Enterprise AI (NAI), a cloud-native solution designed to simplify the deployment and management of AI workloads across various environments. This new offering extends Nutanix's AI infrastructure platform, enabling organizations to leverage their AI models and data securely and efficiently [1][2][3].
NAI can be deployed on any Kubernetes platform, whether at the edge, in core data centers, or on public cloud services such as AWS EKS, Azure AKS, and Google GKE. The solution provides a consistent hybrid multicloud operating model for accelerated AI workloads, allowing organizations to deploy, run, and scale LLM inference endpoints in a secure location of their choice while improving return on investment.
Nutanix Enterprise AI aims to tackle several key challenges faced by organizations in implementing AI solutions: AI skill shortages, the difficulty of building an AI-ready platform that stays consistent across on-premises infrastructure and multiple public clouds, data privacy and security concerns, and the need to bring enterprise-grade resiliency and Day 2 operations to GenAI workloads.
Nutanix has collaborated with key industry players to enhance the capabilities of Enterprise AI: NVIDIA, whose AI Enterprise software and NIM microservices provide optimized model inferencing, and Hugging Face, whose hub gives customers access to popular open foundation models.
Nutanix Enterprise AI offers flexibility in deployment and supports various use cases: it can run on-premises, at the edge, or in public clouds, and is aimed at scenarios such as customer-experience analysis, code and content creation with co-pilots and intelligent document processing, fine-tuning on domain-specific data, security functions like fraud and threat detection, and analytics on private data.
Thomas Cornely, SVP of Product Management at Nutanix, emphasized the solution's ability to simplify and secure GenAI applications across different environments [3][4]. Justin Boitano, VP of Enterprise AI at NVIDIA, highlighted the importance of a consistent multicloud model for diverse AI deployments [2][3].
As organizations continue to grapple with the complexities of AI implementation, Nutanix Enterprise AI represents a significant step towards simplifying infrastructure management and enabling more widespread adoption of AI technologies in enterprise environments.
Nutanix and Nvidia partner to address challenges in enterprise AI adoption, offering solutions for hybrid cloud environments and full-stack accelerated computing to meet the demands of generative and agentic AI.
3 Sources
Nutanix's Enterprise Cloud Index reveals widespread GenAI strategy implementation, with organizations facing challenges in scaling AI workloads and prioritizing security. Despite expected cost increases, most firms anticipate ROI within 2-3 years.
2 Sources
NetApp, a leader in data management and storage solutions, announces significant advancements in AI-driven data infrastructure and strategic partnerships with major tech giants. The company's latest innovations aim to transform enterprise AI capabilities and data management.
6 Sources
As AI continues to transform enterprise computing, companies are navigating new infrastructure paradigms. From cloud-based solutions to custom on-premises setups, businesses are exploring various options to gain a competitive edge in the AI-driven landscape.
4 Sources
The rise of AI is transforming data centers and enterprise computing, with new infrastructure requirements and challenges. Companies like Penguin Solutions are offering innovative solutions to help businesses navigate this complex landscape.
4 Sources