9 Sources
[1]
Nutanix expands agentic AI infrastructure for neoclouds - SiliconANGLE
Nutanix expands agentic AI infrastructure platform as token costs threaten to spiral Managing AI infrastructure across the full stack is getting more complex -- and more expensive. Now, Nutanix Inc. is tackling both problems with an expanded agentic AI infrastructure platform that gives service providers and enterprises a single control plane for accelerated computing. The expansion focuses on two additions to the company's AI stack, according to Anindo Sengupta (pictured, left), vice president of product management at Nutanix. Service Provider Central lets providers build multi-tenant GPU clouds and sell AI service catalogs, including GPU-as-a-service and Kubernetes-as-a-service, to enterprises facing long silicon wait times. Simultaneously, a new AI gateway inside Nutanix Enterprise AI governs which agents access which models and at what cost. "The AI gateway is really around cost and governance," Sengupta told theCUBE. "As agents sprawl, models and tools need to be controlled and governed. What we've announced is the capability to really drive governance around models and tools using Nutanix's agentic AI." Sengupta and Dan Ciruli (right), vice president and general manager of cloud-native at Nutanix, spoke with theCUBE's John Furrier and co-host Alison Kosik at Nutanix .NEXT, for an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how Nutanix is positioning its agentic AI infrastructure platform as the middleware layer between models and chips for the enterprise. (* Disclosure below.) Underpinning both announcements is Nutanix Kubernetes Platform Metal, which the company describes as the only dual-native platform supporting any combination of VMs, virtualized Kubernetes and bare metal Kubernetes from a single control plane, according to Ciruli. NKP also ships with CN-AOS, an enterprise-grade storage layer, and an AI platform-as-a-service catalog of open-source AI projects announced at Nvidia GTC. 
This will give developers a prepackaged environment for building agentic applications. "When you walked into the .NEXT keynote, the large font letters said 'Run anything, anywhere,'" Ciruli said. "That's our mission. We do want to enable that. I think that customers -- enterprises -- are going to find many reasons to run in service providers." But where enterprises choose to run their workloads will increasingly come down to one thing: cost. The economics of agentic AI will push enterprises to rethink where they run inference, Ciruli noted. A single user action in an agentic workflow can trigger hundreds of downstream agent calls, each consuming tokens at scale and driving up costs. Navigating those tradeoffs will give rise to an entirely new discipline: AI FinOps. "Right now it's very, very easy to get access to a model -- it's just an API call to get access to a model, but they will charge you per token," Ciruli said. "I think customers will very quickly have to start thinking about, 'Do we call an API where we're going to pay per token? Do we use some infrastructure at a service provider where we're paying for time, but then we get to generate all the tokens? Or does it make economic sense to buy some hardware, run it on-prem and now we're just buying electricity?' Absolutely, there'll be AI FinOps to help you optimize that." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Nutanix .NEXT:
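Ciruli's three-way tradeoff (pay per token via an API, pay for time at a service provider, or buy hardware and pay for electricity) is ultimately back-of-envelope arithmetic. A minimal AI FinOps sketch, with all prices, throughput figures and workload sizes invented for illustration rather than taken from any vendor:

```python
# Illustrative AI FinOps break-even sketch. Every price and throughput
# figure below is a hypothetical placeholder, not a vendor quote.

def api_cost(tokens: int, price_per_million: float) -> float:
    """Pay-per-token API pricing: cost scales linearly with usage."""
    return tokens / 1_000_000 * price_per_million

def rented_gpu_cost(tokens: int, tokens_per_sec: float,
                    dollars_per_hour: float) -> float:
    """Service-provider model: pay for GPU hours, generate your own tokens."""
    hours = tokens / tokens_per_sec / 3600
    return hours * dollars_per_hour

def owned_hw_cost(tokens: int, tokens_per_sec: float, capex: float,
                  amortize_months: int, power_kw: float,
                  dollars_per_kwh: float) -> float:
    """On-prem model: amortized hardware plus electricity for hours used."""
    hours = tokens / tokens_per_sec / 3600
    monthly_capex = capex / amortize_months  # paid regardless of utilization
    return monthly_capex + hours * power_kw * dollars_per_kwh

monthly_tokens = 5_000_000_000  # hypothetical 5B-token/month agentic workload
print(f"API:    ${api_cost(monthly_tokens, price_per_million=2.00):,.0f}")
print(f"Rented: ${rented_gpu_cost(monthly_tokens, 2500, 4.00):,.0f}")
print(f"Owned:  ${owned_hw_cost(monthly_tokens, 2500, 250_000, 36, 10.0, 0.12):,.0f}")
```

Under these invented numbers, renting or owning beats per-token pricing well before the workload reaches agentic scale; the crossover point moves with every parameter, which is exactly the optimization problem an AI FinOps discipline would own.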
[2]
.NEXT 2026 - why Nutanix CEO Rajiv Ramaswami is betting on agentic AI being a hybrid enterprise application
Nutanix kicked off its annual .NEXT 2026 user conference in Chicago with a series of announcements extending its platform for the agentic AI era. As per the corporate blah blah, these included: At an analyst side event at the gig, CEO Rajiv Ramaswami drilled down into the company's strategy in helping users navigate the shift towards enterprise-scale agentic adoption. His pitch: Nutanix truly delivers a unified modern platform, powering the apps of tomorrow, the AI apps of tomorrow, but also the mission-critical apps of today, enabling our customers to use us across a wide variety of fronts, running their existing business, modernizing everything and innovating in the AI future. There's a basic customer proposition in play here, he argues: As we talk to customers today, there's a lot that's on their minds. On the one side, every CIO that I talk to, every customer that I talk to, is thinking about how can they operationalize AI in their enterprise while dealing with the complexity that it brings to the table. There's AI in the public cloud, AI on-prem, AI everywhere, and they struggle to figure out how to deal with it and how to operationalize it in their companies and get tangible ROIs on it. The AI factories are here, but then again, putting it all together to make this thing work for them is no easy task. At the same time, we've got the geo-political situation that we're all sitting in. What that means for us as a company is that there is a lot more focus on sovereignty. So far, so pretty familiar pitch. What does Nutanix bring to the party that others don't? Ramaswami argues: The value proposition that [customers] see from Nutanix - we deliver them the simplicity of experience while giving them great total cost of ownership on the other side. 
We give them the flexibility to use us for many different use cases and give them at the same time, the control, the security they need and the performance they need for their business critical applications, all the while making sure we support them fantastically, in a fantastic way. Customers buy into Nutanix for the long haul, he adds: They start with us, and they continue to grow with us. What Nutanix does uniquely, I would say more so than anybody else in the industry, is that we deliver that single unified platform for today and for tomorrow...If customers want to modernize their infrastructure, we have a solution for them. They want to re-use their existing hardware while modernizing, we have a solution for them. They want to run in the public cloud, we have a solution for them. They want to modernize their applications, go to a cloud native framework, we have a solution. And now with our agentic AI platform, we're enabling them to run their agentic AI applications. Agentic AI is going to be a "true hybrid application" for most enterprises, he predicts: There's going to be applications that run in the public cloud, there's going to be applications that run in the private cloud and the edges, and there's going to be many applications that run in these so-called neo-clouds, which are a whole host of new service providers that provide AI services. We are focusing on capturing the opportunity across all of these. The reason that AI will be hybrid is, again, if you look at the private cloud and the edge, you've got sovereignty being a big push. You've got regulation being another big push. You've got the proximity to data being another reason. You've got the need to do real-time inferencing for a lot of these new use cases that are being -- coming up at the edge and in these manufacturing sites and other places. And then there's customers who are going to be consuming this, of course, in the public cloud and in neo-clouds as well. 
They can get there for a good subset of applications, that will also be an option. We expect that the slew of AI applications will continue to be hybrid, just like today's world is hybrid. The rise of AI factories helps to some extent here, he argues, but end user needs are moving on: When we started with AI factories, they were specialized elements. They were serving the needs of a subset of users, kind of a small portion of the enterprise. But this is exploding. It's exploding because the scale at which people are now building and deploying these applications is just tremendous. You're servicing more and more business users. You're servicing more and more developers, AI engineers and the number of agents is exploding. This leads to a problem: You've now got these AI factories that deliver critical infrastructure for all these needs. They need to be operated by the infrastructure admins and the platform engineers...It's a constrained resource. We need to optimize it. We need to provide security, governance, all of these things. All of these things Nutanix already does for the compute-centric world, and now it aims to do the same for the AI inferencing and agentic world, boasts Ramaswami: What we do in very simple terms is to make this a cloud operating model for these AI factories. We provide a turnkey experience so that customers don't have to do the work of integrating everything, and they can start being consumers of infrastructure for their AI use cases rather than trying to go put it all together and run it all themselves. The Nutanix stack underpinning this has a number of core elements - a set of AI services with an underlying Kubernetes platform to run them; a data foundation to stream the data that AI applications need with low latency and high performance; and the ability to manage all the shared infrastructure across multiple tenants and multiple users. 
Ramaswami argues: Essentially, what we deliver with this cloud operating model is a turnkey platform that allows companies to go build and run their AI apps with all the stuff that they need to do so, the security, the control, the governance, being able to manage all of this. It's all the best performance and drive to the lowest cost per token, which is a unit of intelligence. He cited an unnamed EMEA-based sovereign digital services provider as an end user exemplar of how all this works in practice: Sovereignty is very important for them. They are a digital services provider to many people, and they have been a customer for several years now. They started out with a standard use case with us. Modernizing their HCI (Hyper-Converged Infrastructure), they ran all their databases on our platform. And then over time, they consolidated the vast majority of their enterprise applications onto our platform. That was the second stage. And then in the third stage, they're now deploying Nutanix to create a shared AI infrastructure for their multiple tenants. So that again, they can provide a shared service to all their tenants, maximize the utilization of their shared infrastructure and deliver this in a secure way using our agentic AI stack. Truly, we are the platform of the future. Well, time will tell on that bold prediction, but it was a compelling sales pitch and few would question the long-term loyalty displayed by the Nutanix customer faithful.
[3]
AI infrastructure modernization drives storage rethink - SiliconANGLE
NetApp and Nutanix say storage has become the last line of defense in the AI era Companies are rethinking their technology foundations as AI infrastructure modernization and security demands grow. The result is surging demand for flexible platforms that can run legacy and modern applications simultaneously while keeping data secure and AI-ready. NetApp Inc. and Nutanix Inc. are now working together in an effort to help customers modernize their infrastructure, according to Ketan Shah (pictured, left), vice president of products at Nutanix. The partnership addresses a gap in the virtualization market as organizations look for more flexibility in how they run and protect workloads across hybrid environments. "NetApp and Nutanix are combining forces to help customers modernize the infrastructure. Not just today for virtualization -- [taking the] simplicity and agility of Nutanix with the agility and resilience of NetApp -- but also taking them on a journey to modernize as the apps evolve with containers and AI," Shah said. "That's really what this is about." Shah and Sandeep Singh (right), senior vice president and general manager of enterprise storage at NetApp, spoke with theCUBE's John Furrier and co-host Alison Kosik at Nutanix .NEXT, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed AI infrastructure modernization and the growing importance of secure, AI-ready data platforms. (* Disclosure below.) The two companies announced a collaboration at .NEXT 2026 to integrate NetApp Intelligent Data Infrastructure with the Nutanix Cloud Platform, giving customers greater choice to optimize their virtualization and data strategies across on-premises, cloud and containerized environments. 
The integration combines NetApp ONTAP's data management capabilities with Nutanix's unified hybrid multicloud operations -- with storage emerging as the layer where AI readiness, cyber resilience and infrastructure modernization all converge, according to Singh. "When you think about storage, it very quickly elevates from not only just storing data, but being able to analyze it, protect it and secure it, and then enable AI to have access to it. When you think about the cybersecurity angle ... storage becomes the last line of defense for customers. It's not a matter of if, it's a matter of when," Singh said. "When you think about the AI perspective, it's critical for AI to be able to have the context of your enterprise data." As the partnership roadmap extends into agentic AI, the infrastructure challenge shifts from simply running AI workloads to governing them at scale. Unmanaged AI deployments -- shadow AI -- represent a growing operational risk as agent proliferation outpaces visibility into how those workloads consume compute, storage and network resources, Shah noted. The answer is not a separate AI infrastructure layer but deeper integration of AI governance into the core platform itself. "We don't think AI will be just another silo of infrastructure," Shah said. "We talk about shadow AI as an emerging thing. We think that should be integrated into the core infrastructure for efficiency, scale and economics." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Nutanix .NEXT:
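The shadow-AI governance Shah describes, deciding which agents may reach which models and under what limits, reduces to a policy check at a gateway. A toy sketch, with agent names, model names and the policy shape all invented for illustration (this is not the actual Nutanix AI gateway API):

```python
# Toy model-access governance: an allow-list policy deciding which agents
# may call which models. All names and policy fields are hypothetical.

POLICY = {
    "support-bot": {
        "allowed_models": {"llama-3-8b"},
        "max_tokens_per_call": 2_000,
    },
    "research-agent": {
        "allowed_models": {"llama-3-8b", "llama-3-70b"},
        "max_tokens_per_call": 8_000,
    },
}

def authorize(agent: str, model: str, requested_tokens: int) -> tuple[bool, str]:
    """Return (allowed, reason). Unknown agents are shadow AI: denied by default."""
    rule = POLICY.get(agent)
    if rule is None:
        return False, "unregistered agent (shadow AI)"
    if model not in rule["allowed_models"]:
        return False, f"model {model} not on allow-list"
    if requested_tokens > rule["max_tokens_per_call"]:
        return False, "per-call token cap exceeded"
    return True, "ok"

print(authorize("support-bot", "llama-3-70b", 500))
print(authorize("rogue-script", "llama-3-8b", 100))
print(authorize("research-agent", "llama-3-70b", 4_000))
```

The deny-by-default branch is the operative point: an agent nobody registered is, by definition, shadow AI, and integrating the check into the core platform rather than a side system is what keeps it from being bypassed.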
[4]
Agentic infrastructure is the new model for AI enterprise - SiliconANGLE
Today's applications, tomorrow's AI workloads -- Nutanix is building the platform for both, says CEO The enterprise computing stack is undergoing its most consequential transformation in decades, as agentic infrastructure shifts from an experimental workload to the organizing logic of every new application. That shift is forcing a fundamental rethink of what infrastructure must actually do -- not just host workloads, but orchestrate intelligent agents, govern data pipelines and optimize the economics of inference at scale. The question for every platform company is whether its architecture was designed for this moment or merely adapted to it, according to Rajiv Ramaswami (pictured), president and chief executive officer of Nutanix Inc. Now, the answer defines everything the company is building toward. "We truly want to be the platform company where all applications run," Ramaswami told theCUBE. "Today's applications, tomorrow's applications, in this new AI world. We want to become the platform of choice for our customers all around the world." Ramaswami spoke with theCUBE's John Furrier and co-host Alison Kosik at Nutanix .NEXT, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed the company's evolution from a hyperconverged infrastructure provider to a full agentic infrastructure platform, including its expanding ecosystem, sovereign cloud opportunity and vision for the next five years. (* Disclosure below.) Nutanix's platform evolution carries a clear technical mandate: making GPU resources work harder. As Nutanix has expanded its platform with new agentic infrastructure capabilities announced at .NEXT, the underlying engineering work centers on eliminating the idle compute that inflates cost per token. The same optimization logic Nutanix applied to CPU virtualization a decade ago now applies directly to GPU workloads, Ramaswami explained. "You want to make maximum use of the GPU that you buy," he said. 
"GPUs sitting idle is bad, because think about it -- on the one hand you're spending more and more tokens, and if you're going to need to buy more and more GPUs to go and use that, it's not efficient. Same thing as we saw in compute-centric workloads back before virtualization came on -- utilization was very low. With virtualization [utilization] became much higher. The same thing is happening now with GPUs." The ecosystem story at .NEXT proved equally central. More than 100 partners sponsored the event -- spanning major cloud, server, storage and chip providers -- a signal that reflects the network effect of a genuine platform rather than a product line, according to Ramaswami. A recent strategic partnership with Advanced Micro Devices Inc. -- in which AMD committed up to $250 million in investment and joint engineering to co-develop an open agentic AI platform -- reinforced that the ecosystem is hardening into something structural. Cost per token has become the defining unit of economics across every customer conversation, he added. "The value of a platform is directly tied to the ecosystem around it. All these partners are seeing the value of the platform that we bring today to the market and realize the value of being integrated together with us," Ramaswami said. "Now we've got a whole new ecosystem that we are just starting to build around AI." Sovereignty is emerging as a second major growth vector. Governments worldwide are building sovereign AI clouds to keep data and economic value within national borders, and Nutanix is positioning itself as the platform of choice for those deployments. Government initiatives to finance and populate these buildouts are creating a direct pipeline of anchor customers for Nutanix's hybrid platform, Ramaswami noted. "The whole move towards sovereignty is here to stay," he said. "You want to have your own infrastructure. You want to be in control of it yourself. 
You want your citizens to run it and manage it and not be dependent on outside parties. That represents a huge opportunity for Nutanix, because we enable sovereign clouds to be built that meet these needs." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Nutanix .NEXT 2026:
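Ramaswami's point about idle GPUs can be made concrete with simple arithmetic: at a fixed hourly GPU cost, cost per token scales inversely with utilization. A sketch with hypothetical numbers (no figure here comes from Nutanix):

```python
# Back-of-envelope: utilization drives cost per token at fixed hourly GPU cost.
# All inputs are invented for illustration.

def cost_per_million_tokens(gpu_dollars_per_hour: float,
                            peak_tokens_per_sec: float,
                            utilization: float) -> float:
    """Dollars per million tokens actually produced at a given utilization."""
    effective_tokens_per_hour = peak_tokens_per_sec * utilization * 3600
    return gpu_dollars_per_hour / effective_tokens_per_hour * 1_000_000

# Same GPU, same hourly cost; only the share of busy time changes.
for util in (0.15, 0.50, 0.80):
    cost = cost_per_million_tokens(4.00, 2500, util)
    print(f"utilization {util:.0%}: ${cost:.2f} per 1M tokens")
```

Going from 15% to 80% busy time cuts the cost per token by more than 5x with zero new hardware, which is the virtualization-era argument restated for GPUs.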
[5]
Nutanix CEO On Treating AI As Core Infrastructure: 'This Is Now About Your Competitive Edge'
As agentic AI moves from pilots into production, technology executives face mounting pressure to govern cost, data, and risk -- forcing a rethink of platforms, not just workloads. At the Nutanix .NEXT 2026 conference, the message was clear: Agentic AI is no longer a future concept; it is now an operational reality. Yet as AI moves from pilot projects into production, it is exposing hard questions surrounding governance, cost control, data sovereignty and infrastructure flexibility. [RELATED: 5 Rules To Getting Started With AI Governance] In his keynote officially kicking off the event, Nutanix CEO Rajiv Ramaswami and several of Nutanix's enterprise customers spoke on why AI adoption is forcing IT leaders to rethink their operational platforms, not just their workloads. Here are some key takeaways from Nutanix's CEO and its customers: Ramaswami described a shift away from simple AI prompting toward autonomous agents that operate continuously across the enterprise. "We are rapidly moving from an era of prompting to the era of delegating and empowering autonomy with agents," Ramaswami said. "This is now about your competitive edge." He emphasized that organizations are now dealing with more users, more agents and more data -- often with limited infrastructure resources. As a result, AI must be treated like core infrastructure, requiring standardized platforms and operational controls rather than fragmented tools. The keynote and customer stories made it clear that regulated industries can't let AI run amok. Clear rules on where data lives, who can access what, and how AI is allowed to operate are mandatory as AI spreads. That concern is already shaping real-world decisions. Dan Regalado, CIO of Wynn North America, who joined Ramaswami on stage at the NEXT event, emphasized that regulatory constraints and data residency both define how AI can be deployed. "Our gaming data cannot leave the state," Regalado said. 
"Data security and data residency are non-negotiable for every one of our resorts." Even organizations experimenting with AI in the public cloud are reassessing where production workloads belong. "We've done it in the cloud," Regalado added, "but we're actively researching how we might do it better on‑prem or in a hybrid model -- because governance and control matter." As AI use scales, tracking usage and managing costs are no longer just financial concerns. Nutanix highlighted the growing importance of usage metering and "cost per token" visibility to prevent AI initiatives from becoming budget liabilities. [RELATED: Analysis: How The Midmarket Can Deliver ROI With AI] "As you use more and more models, you run into challenges around tracking usage and managing cost," Ramaswami said. For Wynn, cost efficiency is already influencing platform decisions. "Cost efficiency is one of the major factors we're evaluating," Regalado said, "as we decide whether to stay purely in the cloud or invest on‑prem." AI without metering also becomes a budget liability. CIOs need infrastructure that surfaces usage clearly and supports predictable budgeting. Customer examples showed that many IT teams often lack the staff to manage a growing mix of AI tools. The customers conveyed a very specific message: Reducing operational complexity through unified platforms is critical to sustaining innovation without expanding head count. For many organizations, AI ambition can clash with limited staffing, said Josh Hostetler, lead platform engineer at Tire Rack. "I'm on a platform engineering team of three people," Hostetler said. "Our goal was to reduce administrative burden without adding another tech stack or more engineers." Tire Rack modernized incrementally -- starting with foundational workloads and evolving toward containers -- without overwhelming the team. "We started simple," Hostetler said. "Stability mattered as much as scale." 
AI platforms that increase operational complexity can halt adoption. From supply chain constraints to cloud cost concerns, CIOs are evaluating where workloads should run. Hybrid and on‑premises options are gaining renewed attention as AI demands more platform adaptability. [RELATED: What Midmarket CIOs Must Prove By EOY 2026: Fewer Platforms, Faster Security, Measurable Outcomes] Stephen Hall, vice president of infrastructure and operations at BlueCross BlueShield of Tennessee, underscored the importance of adaptability as AI reshapes infrastructure planning. "Infrastructure leaders need adaptability," Hall said, "because the industry will keep evolving."
[6]
Ecosystem partnerships emerge as AI infrastructure moat - SiliconANGLE
The single-vendor world is collapsing -- and Dell and Nutanix say AI factories are finishing the job Enterprise AI is accelerating demand for tightly integrated ecosystem partnerships as organizations confront a rapidly expanding landscape of platforms, hardware choices and agentic workloads. The shift from simple hyperconverged infrastructure to multi-layered AI factory deployments has made ecosystem diversity a strategic imperative for platform companies and their hardware partners alike. But as the number of integration points multiplies, the companies that streamline choice and interoperability for customers will capture the next wave of enterprise spending, according to Gregory Lehrer (pictured, right), vice president of business development and ecosystems at Nutanix Inc. "If you don't have a strong ecosystem, if you don't have integration, you cannot scale because the customer needs are very diverse," Lehrer told theCUBE. "I grew up in a world where you have one stack Microsoft Corp. or one stack Dell Technologies Inc. or one stack whatever. [That] world is done, it doesn't exist anymore. All of us need to be very interchangeable." Lehrer and Todd Lieb (left), vice president of cloud partnerships at Dell, spoke with theCUBE's John Furrier and co-host Alison Kosik at Nutanix .NEXT, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed how ecosystem partnerships are reshaping enterprise AI infrastructure delivery across AI factories, Kubernetes and hybrid cloud environments. (* Disclosure below.) The Dell-Nutanix relationship illustrates how ecosystem partnerships have evolved from simple hardware certification into strategic, multi-layer platform co-engineering. With more than 4,000 customers deploying the Dell AI Factory, the demand for integrated software platforms on top of that infrastructure is surging, Lieb explained. 
"In my mind, the Dell AI Factory is a platform, and then at the top there's models and use cases," Lieb said. "Nutanix is the platform that sits in the middle, and that's very powerful. For enterprises to bring AI to life, you got to have the AI factory platform from Dell, the Nutanix platform in the middle with all the controls and tools, and then you run on top." Nutanix added more than 1,000 new customers in its most recent quarter -- the highest number in eight years -- and is racing to keep up with certification demand, Lehrer noted. The backlog of Independent Software Vendors seeking certification on the Nutanix Cloud Platform reflects how rapidly the AI ecosystem is expanding. "My number one priority is to resolve a backlog of ISVs [that want] to be certified on the Nutanix platform," Lehrer said. "This is a condition of success because it's not about the number of logos, it's about the quality -- what the customer wants." Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of Nutanix .NEXT 2026:
[7]
Nutanix Goes Big On Agentic AI, Adds Multi-Tenant Cloud Capabilities
Nutanix is using this week's Nutanix .NEXT conference to introduce a complete agentic AI platform, solidify new partnerships with leading storage vendors Everpure and NetApp, and show its determination to migrate customers away from arch-rival VMware with a big push to help its channel partners work in multi-tenant environments. Hybrid multi-cloud computing and hyperconverged infrastructure technology developer Nutanix Tuesday opened its Nutanix .NEXT conference with multiple changes to its product offerings aimed at helping channel partners and customers explore new ways to not only advance their cloud and AI agendas but also more easily migrate off the competitive VMware platform. Nutanix .NEXT, held this week in Chicago, is expected to draw about 5,000 attendees, said Lee Caswell, senior vice president of product and solutions marketing for the San Jose, Calif.-based company. The flagship news from Nutanix .NEXT is a major update to the Nutanix Cloud Platform aimed at helping customers manage their supply chain risks and de-risk their Broadcom VMware estates, Caswell (pictured above) said during a pre-show press conference. [Related: AMD Commits $250M To Nutanix To Accelerate Enterprise Agentic AI Infrastructure] Nutanix is a leading competitor to VMware, and is on a big push to win over VMware customers who may be dissatisfied with that company and the sales and licensing changes made since it was acquired by Broadcom in 2023. The latest version of Nutanix Cloud Platform includes Service Provider Central, or SP Central, to give customers granular control over GPU-enabled AI resources, now extended to cloud providers, Caswell said. Also highlighted with the Nutanix Cloud Platform is Nutanix Agentic AI, which was originally introduced at last month's Nvidia GTC conference, he said. "Nutanix Agentic AI is a full-stack AI solution to go and offer full access to large language models that are curated and certified, running on certified GPUs, including an AI gateway," he said. 
"Partners are coming to us as a path into the enterprise user taking full advantage of our v4 APIs, as well as our open catalog, to make sure that they can get their solution, their software, brought into our complete solution, if you will." Nutanix also showed its Kubernetes orchestrated container solutions, including a new offering for running Kubernetes on bare metal, Caswell said. "One of the reasons why, by the way, the Nutanix approach to containers is working so well right now, is that we're the only company that offers what we call a dual native approach to running containers," he said. "You can either run containers as most do in on-prem environments on an enterprise hypervisor -- that's our AHV offering running Kubernetes, our NKP on that. Or the other mode is to run directly on bare metal." That second approach is a brand-new offering, Nutanix Kubernetes Platform Metal, Caswell said. "NKP Metal now offers the same security and networking support for Kubernetes running on bare metal that we have in Kubernetes running on VMs," he said. "In addition, we've got the same data services -- snapshots, replication, DR (disaster recovery) -- running in a bare metal environment with our container NKP Metal platform as we do in NKP running in virtual machines. This really gives us an interesting opportunity to give customers full choice of running Kubernetes either in virtual environments or on bare metal." Caswell said Nutanix expects that, in virtual private data centers, customers will prefer to run Kubernetes on virtual machines, with Kubernetes extended to edge environments via NKP Metal. 
"Our view is that the ability to have containers that can be portable across the full hybrid cloud, all the way from the edge into the data center and then to the public cloud with the same operating model, the same security method, and the same data services, is a unique offering from Nutanix, allowing customers to go and bring containers into their environments at their pace," he said. Nutanix also introduced Nutanix Unified Storage 5.3 to its cloud platform aimed at driving the transformation of object storage into a performance storage tier that AI Factories require, Caswell said. This new release uses smart tiering to help seamlessly move data to Google Cloud and OVHcloud S3 while adding multi-tenant object scaling and quotas, he said. Later this year, NUS will add RDMA (remote direct memory access) acceleration for S3-compatible object storage. Nutanix is also expanding its Nutanix Elevate service provider program in a number of ways, including giving channel partners the ability to work in multi-tenant environments for the first time, Caswell said. "We have service providers coming to Nutanix in part because many of them lost their ability to be a service provider within the traditional VMware environment because of licensing changes, and so we've got a huge uptick here." The change gives channel partners an enterprise-grade multi-tenant infrastructure as a service or IaaS offering based on the Nutanix Cloud Platform. It is slated to be available in the second half of the year. John White, chief operating officer at US Signal, a Grand Rapids, Mich.-based solution provider and -- thanks to its 2024 acquisition of OneNeck -- a long-time Nutanix channel partner, told CRN that Nutanix's new focus on multi-tenant environments is key to helping move customers from what he called the "debacle" of VMware after that company was acquired by Broadcom. It really shows that Nutanix is listening to its channel partners, White said. 
"They have a huge opportunity in the space, and they are trying to help their partners exceed and excel," he said. "For the whole multi-tenant side of things, we have been looking to provide an option to VMware." White said his team has been working with Nutanix on the multi-tenancy issue for some time. He said he worked with another channel partner as early as 2005 to help bring multi-tenant virtualization to VMware. "When US Signal first got into Nutanix, that was our biggest ask," he said. "Our customers, they sometimes have 10 virtual servers, and they don't need to buy three or four hosts. That's overkill for them. We need something a little bit more right sized. So we've been pushing Nutanix to do multi-tenancy since early '24, and we've been a huge design partner with them, working with their teams, helping them work out the kinks, telling them what features are missing to get us parallel with what we had with VMware so we can start to migrate customers to Nutanix." US Signal also provides cloud services to the MSP community where multi-tenancy is critical to take advantage of US Signal's shared VMware infrastructure, White said. "MSPs don't want to go and buy hosts," he said. "So I have hundreds of customers with thousands of VMs that are locked up in VMware right now until we can find them alternatives. And so that's what we see as a big upside with this." Nutanix .NEXT is also seeing a lot of new strategic partnerships with Nutanix, Caswell said. Everpure, which until early this year was known as Pure Storage, is at the event for the first time, he said. "Who would have thought maybe three years ago that Everpure would be on our list of sponsors?" he said. "Nutanix has moved beyond just the HCI model to now support external storage systems. We started with Dell PowerFlex, of course, but you can see Everpure is now a platinum sponsor. 
Very interesting, because this means for Everpure customers and resellers, the opportunity to go and work together, including working with Cisco on our FlashStack with Nutanix." NetApp will also be at .NEXT, showing with Nutanix how the two companies are working to bring all the Nutanix capabilities to NetApp customers, Caswell said. Nutanix is also planning to add support for NetApp Ontap and will expand support for NetApp AFF all-flash A-series and some FAS hybrid flash systems. "And for partners, the endorsement from Cisco that FlexPod will also support Nutanix plus NetApp plus Cisco jointly together," he said. "And as we know, once Cisco gets Cisco-validated designs together around FlexPod, we'll see a great appetite from the channel and resellers and customers for taking that jointly certified solution." Nutanix has added synchronous disaster recovery support for Dell PowerFlex storage, and is planning to support Dell PowerStore, Dell Private Cloud automation, and Dell PowerFlex Ultra5 environments. Nutanix, meanwhile, expanded the list of AMD CPU-based servers it supports, and said it will add support for AMD GPU-accelerated compute servers targeting AI workloads. Nutanix is expanding support for Lenovo ThinkSystem storage, Lenovo ThinkSystem servers, and XC One automation. Also new from Nutanix is the extension of the Nutanix database-as-a-service offering to now include MongoDB. Previously, it supported SQL Server, Oracle, and Postgres, Caswell said.
[8]
Nutanix expands platforms for agentic AI and hybrid multicloud operations - SiliconANGLE
Nutanix expands platforms for agentic AI and hybrid multicloud operations Nutanix Inc. used its .NEXT conference this week to outline a broad expansion of its cloud platform, positioning the company to support what it describes as the next phase of enterprise artificial intelligence, with agent-driven applications running across hybrid and multicloud environments. Central to today's announcements are new capabilities in the Nutanix Cloud Platform, including infrastructure for agentic AI, expanded Kubernetes support, deeper ecosystem integrations and enhanced management tools. Nutanix said the updates reflect growing enterprise demand for platforms that can handle increasingly complex workloads while navigating hardware supply constraints and regulatory requirements. "The theme of the show is one platform and one experience only," said Lee Caswell, senior vice president of product and solutions marketing. "The idea is that platforms are expanding to include workloads of all types that can run anywhere." Nutanix is also extending a previously announced agentic AI stack with new multitenant and service provider capabilities designed to support "neoclouds," or providers offering on-demand access to AI infrastructure and services. The company said the platform will enable these providers to deliver services such as GPU-as-a-service and Kubernetes-as-a-service, while also supporting enterprise use cases that require governance, cost control and data sovereignty. A new management layer allows providers to allocate shared AI infrastructure while maintaining isolation between tenants. Caswell emphasized that managing cost and performance will be critical as AI workloads scale. He stressed the importance of monitoring token usage in large language model environments to avoid unexpected expenses, describing it as "important as customers look at how they deploy AI without getting surprised about the costs."
Another element of the announcement is NKP Metal, an extension of the Nutanix Kubernetes Platform that allows Kubernetes workloads to run directly on bare-metal infrastructure. The offering is aimed at performance-sensitive use cases such as AI training and edge deployments that rely on dense GPU configurations. Running Kubernetes on bare metal usually introduces operational complexity in provisioning physical servers and managing firmware. Nutanix is attempting to address that by applying its existing automation, lifecycle management and data services to such environments. The company is also promoting what it calls a "dual-native" architecture, enabling organizations to run containers either on virtual machines or directly on bare metal under a unified management and security model. Caswell said Nutanix is the only company that offers such an approach. It's intended to offer organizations flexibility as they balance performance, cost and operational consistency across environments. Nutanix is also expanding its ecosystem of hardware and cloud partners to include support for additional storage platforms, server vendors and cloud environments. The company highlighted new and planned collaborations with leading vendors such as Cisco Systems Inc., Dell Technologies Inc., Lenovo Group Ltd. and NetApp Inc., as well as expanded support for Advanced Micro Devices Inc.-based systems. The integrations are intended to let organizations reuse existing infrastructure and mitigate supply chain constraints that have affected hardware availability. The platform also adds zero-copy migration capabilities from VMware Inc. vSphere environments to Nutanix AHV virtual disks, enabling in-place workload conversion without duplicating data. For enterprises expanding across multiple clouds and geographic regions, Nutanix is also emphasizing data sovereignty and operational control. 
Updates to Nutanix Cloud Clusters expand support to additional hyperscaler cloud regions, including sovereign environments. This allows organizations to meet regulatory requirements while maintaining workload portability, Nutanix said. The company is also enhancing Nutanix Cloud Manager with multisite and multidomain capabilities. Version 2.0 provides a single control plane for managing distributed infrastructure, including air-gapped and highly regulated environments. New features include integrated cost governance, AIOps and self-service capabilities, all delivered with a unified interface. By bringing cost management on-premises, Nutanix said it eliminates the need for customers to rely on cloud-based tools in sensitive environments. The company is also expanding its Data Lens ransomware protection offering to provide analytics and governance capabilities in on-premises and air-gapped deployments. Taken together, the announcements reflect Nutanix's effort to reposition its platform beyond its roots in hyperconverged infrastructure toward a broader role in AI and multicloud operations. The company is betting that enterprises will increasingly seek integrated platforms that can support AI workloads alongside traditional applications, while providing consistent management, governance and cost control across environments. Caswell framed the shift as a continuation of Nutanix's longstanding "run anything anywhere" philosophy, extended to include AI and modern application architectures. "We're bringing the simplicity that Nutanix has been known for into bare metal," he said. "This allows us to consolidate management and free up time for other, more high-value capabilities." Many of the new capabilities are available now. Nutanix Agentic AI, Remote Direct Memory Access for Nutanix Unified Storage, enhanced service provider features and the NKP Metal deployment option will be available in the second half of 2026.
[9]
Nutanix unveils new tools aimed at simplifying AI, cloud and enterprise IT: All you need to know
Nutanix also announced the deepening of its partnership with NetApp to improve how enterprises manage data across cloud and on-premise environments. It's been a few years since artificial intelligence, aka AI, broke onto the scene, and it's notable that in 2026, it's still a major buzzword. Whether you're a teacher, a student, a working professional, or even running a small business, there's a lot of dependence on AI among the general public. But when we talk about enterprise tech, the situation often looks more complicated, because not every company has the resources to run large AI models with ease. This is where companies like Nutanix step in. Nutanix is hosting the .NEXT 2026 event in Chicago, and on day 1, multiple announcements showed how the brand is trying to position itself as a simpler alternative in an increasingly complex cloud and AI landscape. The intent of the announcements echoes a push to make enterprise tech easier to run, more flexible to scale, and less dependent on expensive hardware upgrades. Nutanix is building toward a future where businesses can run traditional apps and advanced AI systems side by side without needing completely different setups. Read on to know all that was just announced. Companies should not need entirely new infrastructure just to run AI; they should be able to run modern AI workloads using the infrastructure they already have. And this is more complicated than it looks. With its cloud platform, that's exactly what Nutanix is enabling businesses to do. The Nutanix Cloud Platform is being called a 'full-stack solution' that can handle virtual machines, containers, and AI workloads together. This includes the upcoming Nutanix Agentic AI platform that integrates compute, storage, networking, and Kubernetes into one system.
Alongside this, NKP Metal will allow businesses to run Kubernetes directly on physical servers, improving performance for AI training and edge workloads. The company is also expanding its partner ecosystem with brands like Dell Technologies, Cisco, and AMD. This gives enterprises more flexibility when choosing hardware, especially at a time when GPU supply remains tight. The broader goal here is to reduce dependency on specialised hardware while still enabling high-performance AI deployments. Nutanix also announced the deepening of its partnership with NetApp to improve how enterprises manage data across cloud and on-premise environments. The collaboration will bring NetApp's Intelligent Data Infrastructure, built on its ONTAP storage systems, into the Nutanix Cloud Platform later this year, alongside the Nutanix AHV hypervisor. By combining NetApp ONTAP's data management capabilities with Nutanix's hybrid multicloud platform, enterprises can modernise their virtualisation stack without completely overhauling existing systems. In other words, it gives organisations more flexibility to modernise their infrastructure without having to rebuild everything from scratch. This is particularly relevant for companies juggling traditional workloads alongside newer cloud and AI applications. This flexibility allows organisations to run workloads where it makes the most sense, whether that is on-premise or in the cloud, without needing to redesign their entire infrastructure. In a market where hardware constraints and rising costs are still major concerns, this approach gives businesses more control over how they scale. One of the biggest advantages here is faster migration. With NFS-based integration between the two platforms, companies will be able to move virtual machines more quickly using tools like NetApp Shift and Nutanix Move. This promises data-in-place conversions in minutes, reducing downtime and speeding up deployment. 
The integration also promises to simplify day-to-day operations. Data management can be handled directly through ONTAP, while compute and storage can scale independently. This reduces operational overhead and makes it easier to manage large environments. At the same time, administrators get more granular control at the virtual machine level, allowing them to fine-tune performance, storage, and recovery settings from a unified interface. In addition to this, it also brings added security, including AI-powered ransomware protection. Looking ahead, both companies are also planning deeper integration with Nutanix's Agentic AI platform, which could further strengthen AI-driven workloads and data pipelines. Nutanix is also betting on a new category of companies called neoclouds. These are providers that offer AI services rather than just raw computing power. With updates to its Agentic AI platform, Nutanix is making it easier for these providers to deliver complete AI solutions, not just GPU access. At the centre of this is a new multitenant system that allows multiple customers to share infrastructure while keeping their data separate and secure. It also introduces usage-based billing, so businesses only pay for what they use. This shift is important because it moves AI from being an expensive experiment to something companies can actually deploy at scale. Another key announcement was the Service Provider Central, a new platform designed for companies that manage IT services for others. It allows providers to handle multiple customers from a single interface while still promising to offer each one a private and secure environment. The platform also supports usage tracking and budgeting within the same interface, helping organisations better understand and control their cloud spending. Alongside this, the brand also introduced a verification programme to help customers identify trusted service providers built on its platform. 
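The usage-based billing model described above is straightforward to sketch: meter each tenant's consumption separately and bill only for what was actually used. The code below is a generic illustration; the rate card and names (`UsageMeter`, `gpu_hours`, `storage_gb_month`) are assumptions for the example, not Nutanix's Service Provider Central API.

```python
# Generic sketch of multi-tenant, usage-based metering: each tenant's
# consumption is recorded in its own bucket (isolation), and invoices
# are computed only from recorded usage. Rates and names are assumed.
from collections import defaultdict

RATES = {"gpu_hours": 3.50, "storage_gb_month": 0.02}  # assumed price card

class UsageMeter:
    def __init__(self):
        # tenant -> resource -> accumulated quantity; tenants never share a record
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, quantity):
        self.usage[tenant][resource] += quantity

    def invoice(self, tenant):
        """Bill only what this tenant actually consumed."""
        return sum(RATES[r] * q for r, q in self.usage[tenant].items())

meter = UsageMeter()
meter.record("acme", "gpu_hours", 120)
meter.record("acme", "storage_gb_month", 500)
meter.record("globex", "gpu_hours", 10)
print(f"acme owes ${meter.invoice('acme'):.2f}")    # 120*3.50 + 500*0.02
print(f"globex owes ${meter.invoice('globex'):.2f}")
```

The design choice worth noting is that billing derives entirely from the per-tenant meter, so one tenant's workload can never leak into another's invoice, which is the isolation property the announcement emphasizes.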
The overall goal is to make cloud services easier to manage while also expanding Nutanix's partner ecosystem. With NKP Metal, Nutanix is addressing another long-standing challenge. Running applications directly on physical servers can offer better performance, especially for AI and edge workloads, but it is usually harder to manage. And NKP Metal aims to bring cloud-like simplicity to these environments. It allows businesses to run modern, container-based applications directly on physical hardware while still using familiar tools for automation and management. Nutanix is also combining support for both virtual machines and containers in one system. This unified approach reduces complexity and makes it easier for organisations to handle different types of workloads without separate setups. Database operations are another area getting attention. Nutanix has integrated its database service with MongoDB to simplify how enterprises manage large-scale databases. The biggest improvement here is automation. Tasks that once took days, such as setting up database clusters, can now be done in minutes. Recovery is also faster and more precise, with the ability to restore data down to specific points in time. For businesses dealing with critical data, this can significantly reduce downtime and operational risk. In addition to this, Nutanix is enhancing its storage and data capabilities with updates like Unified Storage and Data Lens 2.0. These bring features such as ransomware protection, data governance, and improved visibility across distributed systems. So for enterprises dealing with sensitive or large volumes of data, this adds that extra layer of security and control. Taken together, these announcements show a clear strategy. Nutanix is not just adding features. Instead, it is trying to reduce the overall complexity of running modern IT systems. 
Whether it is AI, cloud, or data management, the company is focusing on unifying everything under one platform while giving businesses more control and flexibility. Dave Pearson, group vice president and global lead for core infrastructure at IDC, said in a press release, 'The market is facing multiple pressures as organizations grapple with the uncertainty and potential cost increases from AI transformation and modernization initiatives, virtualization market changes, and hardware supply chain disruptions in both memory and media which are going to take several quarters if not years to resolve. By expanding its ecosystem and providing alternative deployment options, including on-ramps to public cloud, Nutanix is providing a path for customers to make the changes they need to make, ensure long-term platform choice, and deploy critical AI and modern workloads without being held hostage by a constrained infrastructure supply.' Matt Kimball, VP and principal analyst at Moor Insights and Strategy, commenting on the brand's partnership with NetApp, says, 'This collaboration reflects a broader industry shift toward solutions that combine infrastructure modernisation with intelligent data services. Nutanix and NetApp are giving customers a long-term, stable foundation that supports traditional virtualised workloads today and positions them for the cloud-native, AI-driven environments of tomorrow that will require data to be managed and used more effectively.' So, in a market that often pushes companies toward more specialised and expensive solutions, Nutanix seems to be making a different pitch: keep things simple, make better use of what you already have, and scale when needed without unnecessary overhead.
Nutanix unveiled major expansions to its agentic AI infrastructure platform at .NEXT 2026, introducing an AI gateway for governance and Service Provider Central for multi-tenant GPU clouds. As agent sprawl drives token costs skyward, CEO Rajiv Ramaswami positions the company as the unified platform bridging legacy applications and tomorrow's AI workloads across hybrid environments.
Nutanix rolled out significant expansions to its agentic AI infrastructure platform at its annual .NEXT 2026 conference in Chicago, addressing the mounting challenges enterprises face as AI workloads transition from experimental pilots to production-scale deployments [1]. The company introduced two critical additions: an AI gateway within Nutanix Enterprise AI that governs which agents access which models and at what cost, and Service Provider Central, which enables providers to build multi-tenant GPU clouds and sell AI service catalogs, including GPU-as-a-service and Kubernetes-as-a-service, to enterprises facing silicon shortages [1].
CEO Rajiv Ramaswami emphasized that the enterprise computing stack is undergoing its most consequential transformation in decades, with Nutanix positioning itself as the platform where both today's applications and tomorrow's AI workloads run [4]. "We truly want to be the platform company where all applications run," Ramaswami told theCUBE, describing how agentic infrastructure has shifted from an experimental workload to the organizing logic of every new application [4].
The economics of agentic AI are forcing enterprises to fundamentally rethink where they run inferencing workloads. A single user action in an agentic workflow can trigger hundreds of downstream agent calls, each consuming tokens at scale and driving up costs [1]. Dan Ciruli, vice president and general manager of cloud-native at Nutanix, explained that customers now face complex tradeoffs: "Do we call an API where we're going to pay per token? Do we use some infrastructure at a service provider where we're paying for time, but then we get to generate all the tokens? Or does it make economic sense to buy some hardware, run it on-prem and now we're just buying electricity?" [1]
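The tradeoff Ciruli describes can be made concrete with back-of-the-envelope arithmetic. The sketch below compares monthly cost under the three sourcing models; every price in it (per-token API rate, hourly GPU rental, hardware amortization, power draw) is an illustrative assumption, not a figure from Nutanix or any provider.

```python
# Rough monthly-cost comparison for the three sourcing models Ciruli
# describes: pay-per-token API, rented GPU time, and owned hardware.
# Every number below is an illustrative assumption.

def api_cost(tokens_per_month, usd_per_million_tokens=2.00):
    """Pay-per-token: cost scales linearly with token volume."""
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

def rented_cost(hours_per_month, usd_per_gpu_hour=3.00, gpus=8):
    """Service provider: pay for time, generate as many tokens as fit."""
    return hours_per_month * usd_per_gpu_hour * gpus

def owned_cost(hw_price=250_000, amortize_months=36,
               kw_draw=10.0, hours_per_month=730, usd_per_kwh=0.12):
    """On-prem: amortized hardware purchase plus electricity."""
    return hw_price / amortize_months + kw_draw * hours_per_month * usd_per_kwh

for tokens in (1e9, 10e9, 100e9):  # monthly token volume
    print(f"{tokens:>15,.0f} tok/mo  API: ${api_cost(tokens):>9,.0f}  "
          f"rented: ${rented_cost(730):>9,.0f}  owned: ${owned_cost():>9,.0f}")
```

Under these made-up rates, per-token pricing wins at low volume, while rented or owned capacity becomes cheaper somewhere around the ten-billion-tokens-per-month mark, which is the article's point: agent sprawl changes the calculus.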
This cost control challenge is giving rise to an entirely new discipline: AI FinOps. Nutanix highlighted the growing importance of usage metering and cost-per-token visibility to prevent AI initiatives from becoming budget liabilities [5]. "As you use more and more models, you run into challenges around tracking usage and managing cost," Ramaswami noted [5]. For Wynn North America, cost efficiency has become a major factor in evaluating whether to maintain cloud-based AI deployments or invest in on-premises infrastructure [5].

As agent proliferation outpaces visibility into how AI workloads consume resources, unmanaged AI deployments, what Nutanix calls shadow AI, represent a growing operational risk [3]. "As agents sprawl, models and tools need to be controlled and governed," explained Anindo Sengupta, vice president of product management at Nutanix [1]. The company's AI gateway addresses this by integrating AI governance directly into the core platform rather than treating it as a separate infrastructure layer [3].
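The articles don't describe the gateway's internals, but the governance pattern they name (deciding which agents may call which models, and metering what each agent spends) can be sketched generically. Everything below, including the names `AIGateway`, `allow`, and the policy shape, is a hypothetical illustration, not Nutanix's actual API.

```python
# Hypothetical sketch of an AI-gateway control point: an allowlist maps
# each agent to the models it may call, and a per-agent token budget is
# checked before the request is forwarded. Not Nutanix's actual API.
from collections import defaultdict

class AIGateway:
    def __init__(self, policy, monthly_token_budget):
        self.policy = policy                  # agent -> set of allowed models
        self.budget = monthly_token_budget    # agent -> tokens allowed per month
        self.used = defaultdict(int)          # agent -> tokens consumed so far

    def allow(self, agent, model, requested_tokens):
        """Gate a call: check the model allowlist, then the remaining budget."""
        if model not in self.policy.get(agent, set()):
            return False, "model not permitted for this agent"
        if self.used[agent] + requested_tokens > self.budget.get(agent, 0):
            return False, "monthly token budget exceeded"
        self.used[agent] += requested_tokens  # meter the spend on success
        return True, "ok"

gw = AIGateway(
    policy={"support-bot": {"small-llm"}, "analyst": {"small-llm", "large-llm"}},
    monthly_token_budget={"support-bot": 1_000_000, "analyst": 50_000_000},
)
print(gw.allow("support-bot", "large-llm", 500))  # blocked: model not allowed
print(gw.allow("support-bot", "small-llm", 500))  # allowed and metered
```

The point of the sketch is that governance and metering live at one choke point in front of the models, which is what distinguishes a gateway from bolting cost tracking onto each application separately.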
Regulated industries face particularly stringent requirements. Dan Regalado, CIO of Wynn North America, emphasized that "our gaming data cannot leave the state. Data security and data residency are non-negotiable for every one of our resorts" [5]. This regulatory reality is shaping deployment decisions, with organizations reassessing whether production workloads belong in public cloud or hybrid environments where governance and control can be maintained [5].
Data sovereignty has emerged as a major growth vector for Nutanix, with governments worldwide building sovereign AI clouds to keep data and economic value within national borders [4]. "The whole move towards sovereignty is here to stay," Ramaswami said, noting that government initiatives to finance these buildouts are creating a direct pipeline of anchor customers [4]. Ramaswami predicts that agentic AI will be a "true hybrid application" for most enterprises, with workloads distributed across public cloud, private cloud, edge locations, and neoclouds, a new class of service providers offering specialized AI services [2].
The flexibility to run workloads across multiple environments matters because proximity to data, real-time inferencing requirements, and regulatory constraints all influence where AI applications can execute [2]. Stephen Hall, vice president of infrastructure and operations at BlueCross BlueShield of Tennessee, underscored this reality: "Infrastructure leaders need adaptability because the industry will keep evolving" [5].
Nutanix's platform evolution centers on making GPU resources work harder by eliminating idle compute that inflates cost per token. "GPUs sitting idle is bad, because think about it—on the one hand you're spending more and more tokens, and if you're going to need to buy more and more GPUs to go and use that, it's not efficient," Ramaswami explained [4]. The same virtualization logic that improved CPU utilization a decade ago now applies directly to GPU workloads [4].
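Ramaswami's utilization point reduces to simple arithmetic: effective cost per token is the fleet's fixed cost divided by the tokens it actually produces, so idle time inflates it directly. The fleet cost and throughput figures below are illustrative assumptions only.

```python
# Effective cost per token as a function of GPU utilization.
# The fixed fleet cost is paid whether or not the GPUs are busy, so
# halving utilization doubles the cost of every token produced.
# All numbers are illustrative assumptions.

def cost_per_million_tokens(monthly_fleet_cost_usd, peak_tokens_per_month,
                            utilization):
    """Fixed cost spread over the tokens actually generated."""
    produced = peak_tokens_per_month * utilization
    return monthly_fleet_cost_usd / produced * 1_000_000

fleet_cost = 100_000           # USD/month for the GPU fleet (assumed)
peak = 50_000_000_000          # tokens/month at 100% utilization (assumed)

for util in (0.25, 0.5, 0.9):
    rate = cost_per_million_tokens(fleet_cost, peak, util)
    print(f"{util:.0%} utilized -> ${rate:.2f} per 1M tokens")
```

This is the same argument virtualization made for CPUs: the hardware bill is fixed, so raising utilization is the only lever that lowers the unit cost of the work it does.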
Underpinning these capabilities is Nutanix Kubernetes Platform Metal, described as the only dual-native platform supporting any combination of VMs, virtualized Kubernetes, and bare metal Kubernetes from a single control plane [1]. The company also announced a partnership with NetApp to integrate NetApp Intelligent Data Infrastructure with Nutanix Cloud Platform, addressing the reality that storage has become the last line of defense in the AI era [3]. "When you think about the cybersecurity angle, storage becomes the last line of defense for customers. It's not a matter of if, it's a matter of when," said Sandeep Singh, senior vice president and general manager of enterprise storage at NetApp [3].
More than 100 partners sponsored .NEXT 2026, spanning major cloud, server, storage, and chip providers, a signal that reflects the network effect of a genuine platform rather than a product line [4]. A strategic partnership with AMD, in which AMD committed up to $250 million in investment and joint engineering to co-develop an open agentic AI platform, reinforced that the ecosystem is hardening into something structural [4]. "The value of a platform is directly tied to the ecosystem around it," Ramaswami said [4].
For organizations with limited IT staff, reducing operational complexity through unified platforms has become critical. Josh Hostetler, lead platform engineer at Tire Rack, described his team's reality: "I'm on a platform engineering team of three people. Our goal was to reduce administrative burden without adding another tech stack or more engineers" [5]. As Ramaswami framed it, the shift from prompting to delegating autonomous agents means AI must now be treated like core infrastructure: "This is now about your competitive edge" [5].

Summarized by Navi
[2]