2 Sources
[1]
Vast Data expands AI Operating System with global control plane, zero-trust agent framework and deeper Nvidia integration - SiliconANGLE
Vast Data Inc. continues its evolution from a storage provider to an artificial intelligence infrastructure platform today with a series of announcements at its Forward 2026 conference, highlighted by the broadest expansion yet of what it calls its AI Operating System. The announcements include the introduction of a global control plane for hybrid and multicloud deployments, a zero-trust framework for agentic AI systems, deeper integration with Nvidia Corp.'s accelerated computing stack, and new ecosystem partnerships spanning video intelligence, cybersecurity and cloud services.

The company is betting that enterprises building mission-critical AI systems will prioritize tightly integrated data, compute, governance and orchestration under a single operating model. "Data in particular is critical to AI strategies," said Vast co-founder Jeff Denworth, "especially as enterprises expand inference pipelines and bring regulated and enterprise data into AI-driven workflows."

At the center of the infrastructure announcements is Polaris, a Kubernetes-based global control plane designed to orchestrate Vast clusters across public cloud, neocloud and on-premises environments. As AI training, inference and data collection increasingly take place in different geographies and under varying compliance regimes, enterprises are wrestling with operational sprawl, Vast said. Polaris introduces a centralized management layer that provisions, upgrades and governs distributed Vast environments while maintaining local data paths. Polaris evolved from a cloud lifecycle manager into a broader orchestration framework capable of connecting hybrid deployments through lightweight agents rather than full-stack installations everywhere, said Jonsi Stefansson, Vast's general manager of cloud.
The architecture centralizes intelligence while preserving distributed execution, enabling global policy management, fleet visibility and nondisruptive upgrades without forcing data to be centralized. Polaris integrates with major hyperscalers, including Microsoft Corp., Amazon Web Services Inc., Google LLC and Oracle Corp. It's positioned as complementary to Vast's DataSpace global namespace, which abstracts data location; Polaris abstracts infrastructure location, allowing AI pipelines to operate against what appears to be a single logical environment.

Vast is also introducing two new services, PolicyEngine and TuningEngine, to address what executives described as the trust barrier to large-scale enterprise AI adoption. PolicyEngine acts as an inline policy enforcement point across the AI Operating System. It governs agent access to shared memory, tools, knowledge bases and other agents using fine-grained permissions and AI-derived context. Enforcement occurs before actions are executed, and the system generates tamper-proof audit logs to support replay, explainability and regulatory compliance. Denworth described the approach as mediating every type of input and output within the system, enabling redaction or transformation of sensitive data before exposure to models or agents. The goal is to maintain a zero-trust posture across AI workflows while preserving operational flexibility.

TuningEngine complements the control plane by managing model evolution. It collects telemetry and feedback from agentic workflows, processes data through extract-transform-load pipelines, and feeds curated outputs into fine-tuning frameworks such as LoRA, supervised fine-tuning and reinforcement learning. The result is a closed-loop system in which candidate models are trained, benchmarked and redeployed within the same platform.
By embedding fine-tuning inside the enterprise boundary, Vast aims to support customers that cannot rely on hyperscaler-hosted AI labs but still require continuous model improvement. "If we don't handle fine-tuning, then that's going to be a security gap," Denworth said. Both engines will roll out over the course of the year. Denworth said the announcements are being made in advance so customer input can be incorporated.

Vast also deepened its collaboration with Nvidia, introducing CNode-X, a new graphics processing unit-accelerated server configuration that runs the Vast AI Operating System directly on Nvidia-powered infrastructure. The servers will be offered through partners including Cisco Systems Inc. and Supermicro Inc. The architecture embeds Nvidia Compute Unified Device Architecture libraries into core Vast services, accelerating real-time SQL analytics, vector search, retrieval-augmented generation pipelines and inference workloads. The system integrates the Nvidia cuDF DataFrame library for GPU-accelerated SQL execution via the Sirius open-source query engine, Nvidia cuVS for vector search acceleration and Nvidia Inference Microservices for scalable inference pipelines. Vast said early benchmarks of the Sirius integration showed up to a 44% reduction in query time and an 80% reduction in query cost. The platform also supports Nvidia Context Memory Storage and BlueField-4 data processing units to accelerate shared key-value cache access in long-context inference scenarios. Denworth characterized the work with Nvidia as extensive, noting that the companies are collaborating on more than two dozen joint engineering initiatives.

In the application ecosystem, Vast announced a partnership with TwelveLabs Inc., a developer of multimodal video foundation models including Marengo for embeddings and Pegasus for deep video understanding. The companies will provide a customer-managed deployment path for TwelveLabs' models on the Vast AI Operating System.
Historically delivered primarily through public cloud services, the models will now be deployable in on-premises and sovereign environments where data residency, governance and cost constraints limit cloud-only architectures.

Vast also announced a strategic partnership with CrowdStrike Holdings Inc. to integrate enterprise threat detection and response into the AI lifecycle. The integration connects telemetry from the Vast AI Operating System to the CrowdStrike Falcon platform, enabling coordinated detection across data ingestion, model training and runtime inference environments. The companies said the joint solution helps mitigate risks such as data poisoning, unauthorized access and malware injection into AI workflows. Executives described the partnership as extending across infrastructure, workloads and data layers, and as complementary to both companies' existing collaborations with Nvidia.

Finally, Vast expanded its Cosmos Community for third-party partners into a unified global program encompassing channel, cloud, technology alliance and systems integration partners. The new framework formalizes routes to market and provides centralized training, enablement and governance resources through a partner portal. Executives said the goal is to make the AI Operating System extensible in both "northbound" and "southbound" directions, meaning that it covers the gamut from hardware components to AI frameworks, MLOps platforms, analytics engines and industry-specific applications.

Vast is scaling up rapidly. Denworth said the 10-year-old firm has now surpassed $4 billion in cumulative software bookings and exceeded $500 million in contracted annual recurring revenue. The company said it tripled total sales in its most recent fiscal year and reached operating profitability while remaining free-cash-flow-positive.
[2]
VAST Data Unveils A Platform For Secure, Trusted, And Self-Learning Agentic AI S...
Today at VAST Forward 2026, VAST Data, the AI Operating System company, announced the VAST Data PolicyEngine and VAST Data TuningEngine, two new computing services that will allow the next generation of the VAST AI Operating System to deliver key requirements for organisations looking to scale their mission-critical AI initiatives. Specifically, PolicyEngine and TuningEngine work in tandem within the VAST DataEngine to create AI systems and interactions that are trusted, explainable, and continuously learning. PolicyEngine governs agentic activity and TuningEngine manages model tuning, working in conjunction to power automatic learning loops that remain aligned with organisational expectations.

"Just as people are always learning, so should tomorrow's applications," said Jeff Denworth, Co-Founder at VAST Data. "With the introduction of PolicyEngine and TuningEngine, the VAST AI Operating System has become a thinking machine that customers can deploy wherever they compute - a machine that safeguards every interaction and learns from every outcome, bringing the power of AI within reach of every organisation."

AI workflows and agents are increasingly accessing organisational data, using it to produce more information in the form of generated responses, agent-to-agent communications, event logs, and more. Without fine-grained controls on what agents can access and how they communicate with other agents, tools, and remote data products, the chance for data spillage and leakage rises greatly. Without strict controls on how data is accessed and how services communicate, and without tools to log every aspect of an agentic workflow, AI cannot be fully trusted. The VAST PolicyEngine resolves these concerns via an inline policy enforcement engine to safeguard every aspect of agentic interaction and communication.
PolicyEngine governs agents' access to shared memory, external tools, knowledge bases, or other agents by permitting access, actions, and communications according to fine-grained, explicit permissions, as well as AI-derived context. Because enforcement occurs before actions execute, and because the system maintains extensive, tamper-proof traces and logs, it sustains a zero-trust operating posture in which decisions and actions remain observable, explainable, and auditable.

VAST AgentEngine is the agentic runtime of the AI OS. This serverless computing environment is simple to program and coordinates multi-agent workflows, model invocation, and agentic tool usage within the VAST AI OS. While AgentEngine has been suitable for the deployment of static models, the completeness of the AI OS stack allows the platform to also support 'learning loops' that use all of the system's telemetry, as well as agent and model feedback, to support fine-tuning and reinforcement learning pipelines.

The VAST TuningEngine captures outcomes from agentic pipelines and utilises curated feedback to enhance model performance over time. Using popular methods such as LoRA fine-tuning, supervised fine-tuning, and reinforcement learning, TuningEngine pipelines automatically ingest that data, process it, and suggest new candidate models. Each new candidate can be evaluated and benchmarked within the VAST AI OS, and then manually or automatically deployed into the platform. This kicks off a new learning loop that uses future interactions to improve on the newly deployed, updated model.

These new capabilities represent a massive step toward building systems that automatically evolve as they interact with data from the natural world. VAST Data has been working on building such a system since 2016, and unveiled the full extent of its vision in 2023.
With today's announcement, VAST AI OS finally creates a closed operational computing loop that observes, reasons, acts, evaluates, and improves - all while fortifying security and explainability by unifying and safeguarding all activities in one unified system. The VAST PolicyEngine and TuningEngine are slated for release by the end of 2026.
VAST Data announced major expansions to its AI Operating System at Forward 2026, introducing PolicyEngine and TuningEngine to enable secure, explainable, and continuously learning agentic AI systems. The platform now includes Polaris, a global control plane for hybrid and multi-cloud deployments, alongside deeper Nvidia integration through CNode-X servers that accelerate inference pipelines and vector search.

VAST Data announced sweeping updates to its AI Operating System at the Forward 2026 conference, positioning the platform as a comprehensive solution for enterprises deploying mission-critical agentic AI systems. The centerpiece of the announcement is PolicyEngine and TuningEngine, two new computing services designed to address the trust, explainability, and continuous-learning challenges that have hindered large-scale AI adoption [1][2].

PolicyEngine functions as an inline policy enforcement point that governs agent access to shared memory, tools, knowledge bases, and other agents using fine-grained permissions and AI-derived context. The zero-trust framework enforces policies before actions execute and maintains tamper-proof audit logs to support replay, explainability, and regulatory compliance. "Without fine-grained controls on what agents can access and how they communicate with other agents, tools, and remote data products, the chance for data spillage and leakage rises greatly," according to the company's announcement [2]. VAST co-founder Jeff Denworth described the approach as mediating every type of input and output within the system, enabling redaction or transformation of sensitive data before exposure to models or agents [1].
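The enforcement pattern described here, permission checks before an action executes plus a tamper-evident audit trail, can be illustrated with a small conceptual sketch. This is not VAST's implementation: the `PolicyEnforcementPoint` class, its permission scheme, and the hash-chained log are hypothetical stand-ins for the general pattern the article describes.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class PolicyEnforcementPoint:
    """Sketch of an inline enforcement point: every proposed agent
    action is checked against explicit permissions before it runs,
    and every decision is appended to a hash-chained audit log."""
    # permissions: agent name -> set of allowed (action, resource) pairs
    permissions: dict
    audit_log: list = field(default_factory=list)

    def _append_audit(self, record: dict) -> None:
        # Chain each record to the previous one so later tampering
        # with any entry invalidates every subsequent hash.
        prev_hash = self.audit_log[-1]["hash"] if self.audit_log else "0" * 64
        payload = json.dumps(record, sort_keys=True) + prev_hash
        record = dict(record, prev=prev_hash,
                      hash=hashlib.sha256(payload.encode()).hexdigest())
        self.audit_log.append(record)

    def authorize(self, agent: str, action: str, resource: str) -> bool:
        # Enforcement happens before execution: the caller may only
        # perform the action if this returns True.
        allowed = (action, resource) in self.permissions.get(agent, set())
        self._append_audit({"agent": agent, "action": action,
                            "resource": resource, "allowed": allowed})
        return allowed

pep = PolicyEnforcementPoint(permissions={
    "rag-agent": {("read", "knowledge-base"), ("call", "search-tool")},
})
print(pep.authorize("rag-agent", "read", "knowledge-base"))   # True
print(pep.authorize("rag-agent", "write", "shared-memory"))   # False
```

Hash-chaining is one common software approximation of "tamper-proof" logging: because each entry commits to its predecessor's hash, an auditor can replay the chain and detect any after-the-fact modification.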
TuningEngine complements PolicyEngine by managing model evolution and creating closed-loop learning systems. The service collects telemetry and feedback from agentic workflows, processes data through extract-transform-load pipelines, and feeds curated outputs into fine-tuning frameworks including LoRA fine-tuning, supervised fine-tuning, and reinforcement learning [1]. TuningEngine pipelines automatically ingest data, process it, and suggest new candidate models that can be evaluated and benchmarked within the VAST AI Operating System before deployment [2].

By embedding fine-tuning inside the enterprise boundary, VAST Data aims to support customers that cannot rely on hyperscaler-hosted AI labs but still require continuous model improvement. "If we don't handle fine-tuning, then that's going to be a security gap," Denworth said [1]. Both engines work in tandem within the VAST DataEngine to create AI systems that are trusted, explainable, and continuously learning, with PolicyEngine governing agentic activity while TuningEngine manages model tuning to power automatic learning loops aligned with organizational expectations [2].
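The shape of such a closed learning loop, collect feedback, curate it through an ETL stage, score a candidate, and promote it only if it beats the deployed baseline, can be sketched in miniature. Every function name and the rating-based scoring below are illustrative assumptions, not VAST APIs.

```python
import statistics

def collect_feedback(interactions):
    """ETL extract stage: keep only interactions that carry
    explicit feedback (here, a user rating)."""
    return [i for i in interactions if i.get("rating") is not None]

def curate(records, min_rating=4):
    """Curation stage: keep only high-quality examples as
    candidate fine-tuning data."""
    return [r for r in records if r["rating"] >= min_rating]

def benchmark(candidate_examples, baseline_score):
    """Stand-in for evaluation: score the curated set and decide
    whether a candidate trained on it should replace the baseline."""
    score = statistics.mean(r["rating"] for r in candidate_examples)
    return score, score > baseline_score

# Toy telemetry from an agentic workflow.
interactions = [
    {"prompt": "q1", "response": "a1", "rating": 5},
    {"prompt": "q2", "response": "a2", "rating": 2},
    {"prompt": "q3", "response": "a3", "rating": None},
    {"prompt": "q4", "response": "a4", "rating": 4},
]

rated = collect_feedback(interactions)          # 3 records survive ETL
examples = curate(rated)                        # ratings 5 and 4 remain
score, deploy = benchmark(examples, baseline_score=4.0)
print(len(examples), score, deploy)             # 2 4.5 True
```

In a real pipeline the "benchmark" step would train and evaluate an actual candidate model (for example via LoRA), but the control flow, ingest, curate, evaluate, conditionally deploy, then loop, is the same.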
VAST Data introduced Polaris, a Kubernetes-based global control plane designed to orchestrate VAST clusters across public cloud, neocloud, and on-premises environments. As AI training, inference pipelines, and data collection increasingly occur across different geographies under varying compliance regimes, enterprises face operational sprawl [1]. Polaris provides a centralized management layer that provisions, upgrades, and governs distributed VAST environments while maintaining local data paths.

Jonsi Stefansson, VAST's general manager of cloud, explained that Polaris evolved from a cloud lifecycle manager into a broader orchestration framework capable of connecting hybrid deployments through lightweight agents rather than full-stack installations. The architecture centralizes intelligence while preserving distributed execution, enabling global policy management, fleet visibility, and nondisruptive upgrades without forcing data centralization [1]. Polaris integrates with Microsoft, Amazon Web Services, Google, and Oracle, positioning itself as complementary to VAST's DataSpace global namespace.
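The hub-and-spoke design Stefansson describes, a central store of desired state with lightweight per-cluster agents that converge on it while data paths stay local, is a standard control-plane pattern. A toy sketch follows; all class, field, and version names are invented for illustration and do not reflect Polaris internals.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterAgent:
    """Lightweight agent running alongside one remote cluster."""
    name: str
    region: str
    version: str = "5.0"

    def apply(self, desired: dict) -> dict:
        # Only metadata (target versions, policies) flows through the
        # control plane; the cluster's data path stays in its region.
        self.version = desired["version"]
        return {"cluster": self.name, "region": self.region,
                "version": self.version, "status": "healthy"}

@dataclass
class ControlPlane:
    """Central desired state; agents pull config and report status."""
    desired: dict
    fleet: dict = field(default_factory=dict)

    def register(self, agent: ClusterAgent) -> None:
        self.fleet[agent.name] = agent

    def reconcile(self) -> list:
        # Fleet-wide rollout: every registered agent converges on the
        # desired version and returns one status report.
        return [a.apply(self.desired) for a in self.fleet.values()]

cp = ControlPlane(desired={"version": "5.1"})
cp.register(ClusterAgent("aws-us-east", "us"))
cp.register(ClusterAgent("onprem-frankfurt", "eu"))
reports = cp.reconcile()
print(all(r["version"] == "5.1" for r in reports))  # True
```

This mirrors the declarative reconcile loop Kubernetes itself uses, which is presumably why Polaris is built on it: operators state the desired fleet state once, and each site converges independently.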
VAST Data deepened its collaboration with Nvidia through CNode-X, a new GPU-accelerated server configuration that runs the VAST AI Operating System directly on Nvidia-powered infrastructure. The servers will be offered through partners including Cisco Systems and Supermicro [1]. The architecture embeds Nvidia CUDA libraries into core VAST services, accelerating real-time SQL analytics, vector search, retrieval-augmented generation pipelines, and inference workloads. The system integrates the Nvidia cuDF DataFrame library for GPU-accelerated SQL execution via the Sirius open-source query engine, Nvidia cuVS for vector search acceleration, and Nvidia Inference Microservices for scalable inference pipelines [1].
The Nvidia integration reflects VAST's bet that enterprises building mission-critical AI systems will prioritize tightly integrated data, compute, governance, and orchestration under a single operating model. Executives framed the PolicyEngine and TuningEngine announcements as addressing the trust barrier to large-scale enterprise AI adoption: without strict controls on how data is accessed and how services communicate, and without tools to log every aspect of an agentic workflow, AI cannot be fully trusted [2]. The system maintains extensive, tamper-proof traces and logs so that decisions and actions remain observable, explainable, and auditable.

Both PolicyEngine and TuningEngine are slated for release by the end of 2026 [2]. Denworth said the announcements are being made in advance so customer input can be incorporated [1]. These capabilities represent a step toward building systems that automatically evolve as they interact with data, creating a closed operational computing loop that observes, reasons, acts, evaluates, and improves while fortifying security and explainability [2].