2 Sources
[1]
The case for a new operating system purpose-built for AI
AI workloads behave nothing like the systems that have run enterprise applications over the past few decades. We're rapidly moving into a world of millions of GPUs -- deployed everywhere from cloud AI factories to edge devices -- participating in continuous loops of inference, decision-making, and model refinement. These environments aren't driven by traditional enterprise software. They require systems capable of ingesting and processing enormous volumes of unstructured, real-world data -- imagery, video, telemetry, and text -- in real time, at a scale measured in exabytes. Legacy infrastructure, built for transactional and analytical workloads, simply can't keep up. That's why a new kind of operating system is needed: one designed for AI's unique demands on data, compute, and infrastructure.

What's holding AI infrastructure back?

The biggest infrastructure challenges facing AI today aren't hardware limitations; they're rooted in systems design. Most of today's infrastructure still follows a "shared-nothing" model, popularized by internet pioneers like Google in the early 2000s. In this approach, data is split into partitions and distributed across servers to enable horizontal scaling. It was ideal for the problems of that era, but it doesn't scale cleanly for AI environments, where millions of processors need concurrent access to shared data. As these traditional clusters grow, so does the coordination overhead, creating performance bottlenecks for real-time, high-concurrency workloads like AI agent inference and continuous feedback loops. VAST Data saw this challenge early and created an alternative: a disaggregated, global data platform purpose-built for AI.

A shift to disaggregated, parallel architectures

The solution requires a different approach. Rather than partitioning data across servers, what if every processor could access every byte of data in parallel, without the need for east-west traffic between nodes?
And what if this were possible using standard networks and commodity hardware? This concept led to a new architecture known as DASE (Disaggregated and Shared-Everything). VAST Data pioneered this approach, separating compute from storage while making all data globally accessible at high speed. It enables CPUs and GPUs to read and write data directly without coordination delays: no partitions, no dependency chains, and no cascading slowdowns as systems grow to tens of thousands of processors.

It also delivers significant improvements in resilience, cost efficiency, and real-time access. VAST's disaggregated infrastructure supports advanced data protection schemes and global erasure coding, lowering storage costs while increasing reliability. It can ingest, process, and serve massive data volumes without sacrificing consistency or availability -- two essential requirements for modern AI platforms.

From storage platform to AI operating system

Early adopters initially saw VAST as a next-generation, high-performance storage platform. But its creators envisioned something more ambitious: a data platform operating system for AI infrastructure. Just as operating systems in the past managed CPUs, memory, and storage for conventional applications, the AI OS must orchestrate data, compute, and AI agents across vast, distributed environments while maintaining governance, security, and real-time responsiveness.

This isn't a theoretical idea. It's already taking shape in the VAST AI Operating System, where a scalable, data-centric foundation supports not just storage but compute and AI runtime services. The VAST DataEngine provides a containerized environment for deploying distributed Python functions and microservices at massive scale. The VAST InsightEngine turns unstructured data into AI-ready context by generating vector embeddings in real time. And the new VAST AgentEngine delivers the runtime and tooling to deploy and manage AI agents in enterprise environments.
These aren't isolated services. They're integrated components designed to operate on a disaggregated, parallel infrastructure built for AI's growth. The result is a platform capable of powering real-time decisioning, vector search, AI agent orchestration, and secure, multi-tenant data services -- all from a unified foundation built by VAST Data.

The new standard for AI infrastructure

As AI adoption accelerates, enterprises face a choice: continue retrofitting legacy systems built for older workloads, or move to new infrastructure built specifically for agent-based, real-time AI computing at massive scale. The AI operating system is no longer a concept waiting to be defined; VAST is delivering it, and it's quickly becoming a requirement for organizations building intelligent, autonomous systems. For those placing AI at the core of their future, the VAST AI Operating System will serve as the foundational layer for model training, inference, intelligent applications, and autonomous decision-making. In the same way Linux and Windows once defined the operating systems of their eras, AI needs a platform designed for intelligent, real-time, high-scale operation. VAST Data's vision for this new era is already here -- and it's setting the new standard for AI infrastructure.
[2]
Vast Data says it has built an operating system for large-scale agent deployment - SiliconANGLE
Consolidating a series of recent enhancements around a single theme, Vast Data Inc. today launched what it calls an AI Operating System intended to support the coming age of "agentic computing," where trillions of artificial intelligence-equipped agents operate across a global network.

The launch caps nearly a decade of engineering, starting with Vast's Disaggregated and Shared-Everything architecture, which separates the storage media from the processors that manage it. That enables the platform to store a nearly unlimited amount of data that can be accessed independently of the customer's computing resources. Vast Data said its platform is now capable of federating massive AI clusters for real-time data analytics and agentic workflows operating at unprecedented scale.

"We're now at a point where we're seeing a real push toward building agentic businesses," said Vast Data co-founder Jeff Denworth. "We work with some of the world's largest investment banks, and they're looking to get a 4x employee efficiency boost by using agents to augment their employees."

Denworth noted that historically, each major platform shift -- from PCs to mobile to cloud -- has been marked by the rise of a new operating system. He said AI demands a similar reinvention.

The Vast AI OS combines core distributed computing services -- compute, storage, messaging and reasoning -- into a unified layer that spans cloud, edge and data center environments. It includes a kernel for managing services, a runtime for deploying agents, real-time messaging and event-handling infrastructure, and a distributed file and database system that handles structured and unstructured data. The system is designed to enable the massive scale demanded by AI, with Vast supporting more than 1 million graphics processing units globally.

Vast Data has been building its operating system incrementally.
The original DASE architecture has evolved to include analytics and database services. Since the beginning of this year, the company has added block storage, event streaming, vector search and serverless functions, and merged file, object, block, table and streaming data into a single software stack. Denworth said it now has a "full application stack" capable of powering entire agentic workflows.

Central to the effort is AgentEngine, a runtime and development environment where AI agents can be deployed, monitored and scaled. It features native support for Python with a low-code development option and a system for chaining together reasoning models, data tools and observability pipelines. Pre-built, open-source agents will ship monthly, starting with general-purpose assistants such as a reasoning chatbot, data engineering helpers and compliance bots, as well as domain-specific tools such as bioinformatics researchers or media editors. Denworth likened the prebuilt tools to the "Minesweeper" game on early versions of Windows, which was meant to introduce users to a graphical user interface. "Think of them as first-party agents that help companies bootstrap their AI efforts," he said.

Vast said it's committed to open standards, so a cornerstone of the platform is support for the Model Context Protocol, the interoperability standard recently introduced by Anthropic PBC that enables agents and tools to interact with each other. The company also plans to support the Agent2Agent protocol, designed by Google LLC and released to open source.

Vast said it won't compete in the model space. "We're not training our own models," Denworth said. "There's so much out there that we probably can't contribute too much in adding more intelligence. We can make it a lot easier for this stuff to be deployed and self-managing."

Founded in 2016, Vast recently surpassed $2 billion in total software sales and has seen growth accelerate over the past two years, Denworth said.
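The tool-chaining pattern AgentEngine describes -- a reasoning model invoking data tools in a loop until it can answer -- can be sketched generically. This is a minimal illustration, not AgentEngine's actual API; the tool registry, the stub model, and all names here are hypothetical.

```python
from typing import Callable

# Hypothetical tool registry: names and behavior are illustrative only.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup": lambda q: f"result for {q!r}",
}

def stub_model(prompt: str) -> str:
    """Stand-in for a reasoning model: requests a tool call first,
    then produces a final answer once it sees a tool observation."""
    return "DONE: answer" if "[tool" in prompt else "CALL lookup: latest revenue"

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop: ask the model, execute any tool it requests, feed the
    observation back, and stop when the model signals completion."""
    prompt = task
    for _ in range(max_steps):
        reply = stub_model(prompt)
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        tool, _, arg = reply.removeprefix("CALL ").partition(":")
        observation = TOOLS[tool.strip()](arg.strip())
        prompt += f"\n[tool {tool.strip()}] {observation}"
    return "max steps reached"

print(run_agent("summarize Q2 revenue"))
```

A production runtime would add the observability and monitoring the article mentions around each loop iteration; the control flow itself stays this simple.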
The company was valued at more than $9 billion after a late 2023 funding round of $118 million -- money its founders said the company didn't need because it was already cash flow-positive. To jumpstart adoption, Vast will be rolling out training workshops around the world, both in-person and virtually.
VAST Data introduces an AI Operating System designed to support large-scale AI agent deployment, featuring a new architecture and tools for managing AI workloads across distributed environments.
VAST Data, a company at the forefront of AI infrastructure, has unveiled its AI Operating System, a platform designed to support the emerging era of "agentic computing" 1. This new operating system aims to address the unique challenges posed by AI workloads, which differ significantly from traditional enterprise applications 2.
Traditional infrastructure, built on a "shared-nothing" model, has proven inadequate for AI environments that require millions of processors to access shared data concurrently. VAST Data recognized this challenge early and developed an alternative approach: a disaggregated, global data platform specifically designed for AI 2.
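The shared-nothing model described above can be sketched in a few lines: each key is hashed to a single owning node, so every access must route to that node, and cross-partition work generates the east-west traffic that becomes the bottleneck at AI scale. A minimal, illustrative sketch (the node names are hypothetical):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]

def owner(key: str) -> str:
    """Hash a key to the single node that owns its partition."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# Every read or write for this key must travel to its owning node;
# operations spanning many keys require coordination across nodes.
print(owner("model-checkpoint-0042"))
```

In a shared-everything design, by contrast, there is no `owner()` step: any processor can reach any byte directly over the network.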
At the core of VAST's AI Operating System is the Disaggregated and Shared-Everything (DASE) architecture. This innovative approach separates compute from storage while making all data globally accessible at high speed. The DASE architecture enables CPUs and GPUs to read and write data directly without coordination delays, eliminating partitions and dependency chains that can slow down large-scale systems 2.
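Alongside shared access, the sources credit the platform's resilience to global erasure coding: data blocks are encoded with parity so that a lost block can be rebuilt rather than fully replicated. A minimal single-parity (XOR) sketch of the idea; production systems use much wider codes in the Reed-Solomon family:

```python
def encode(data_blocks: list[bytes]) -> bytes:
    """Compute one XOR parity block over equal-sized data blocks."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild a single missing block: XOR of survivors and parity."""
    return encode(surviving + [parity])

blocks = [b"aaaa", b"bbbb", b"cccc"]
parity = encode(blocks)
# Lose the middle block, then rebuild it from the rest.
assert recover([blocks[0], blocks[2]], parity) == b"bbbb"
```

The cost is one extra block of storage per stripe, far cheaper than keeping full replicas, which is why erasure coding lowers cost while increasing reliability.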
Source: SiliconANGLE
VAST's AI Operating System incorporates several key components:
- The VAST DataEngine, a containerized environment for deploying distributed Python functions and microservices at massive scale 2
- The VAST InsightEngine, which turns unstructured data into AI-ready context by generating vector embeddings in real time 2
- The VAST AgentEngine, a runtime and development environment where AI agents can be deployed, monitored and scaled, with native Python support and a low-code option 1
VAST Data is committed to open standards, supporting the Model Context Protocol introduced by Anthropic for agent and tool interoperability. The company also plans to support Google's Agent2Agent protocol 1. To accelerate adoption, VAST will provide pre-built, open-source agents, including general-purpose assistants and domain-specific tools 1.
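The Model Context Protocol mentioned above carries its agent-tool traffic as JSON-RPC 2.0 messages; a tool invocation names the tool and passes its arguments as a params object. A sketch of that message shape -- the tool name and arguments here are made up for illustration:

```python
import json

# Illustrative MCP-style tool invocation (JSON-RPC 2.0 envelope).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",          # hypothetical tool name
        "arguments": {"query": "Q2 revenue"},  # tool-specific arguments
    },
}
print(json.dumps(request, indent=2))
```

Because the envelope is plain JSON-RPC, any agent that speaks the protocol can discover and invoke tools exposed by any compliant server, which is the interoperability the article describes.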
The VAST AI Operating System is designed to handle massive scale, supporting more than 1 million GPUs globally 1. This scalability is crucial for the future of AI, where trillions of AI-equipped agents are expected to operate across a global network 1.
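The vector search that InsightEngine's embeddings feed, per the sources above, reduces at its simplest to nearest-neighbor lookup by cosine similarity. A brute-force sketch with toy two-dimensional vectors (real embeddings have hundreds of dimensions and use approximate indexes at scale):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids whose embeddings are closest to the query."""
    return sorted(index, key=lambda doc: cosine(query, index[doc]), reverse=True)[:k]

index = {
    "doc-a": [1.0, 0.0],
    "doc-b": [0.9, 0.1],
    "doc-c": [0.0, 1.0],
}
print(top_k([1.0, 0.05], index))  # nearest two documents to the query vector
```

Generating the embeddings in real time, as the ingest happens, is what keeps this index fresh enough for the agentic workflows described above.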
VAST Data's approach is already gaining traction in various industries. Jeff Denworth, co-founder of VAST Data, noted that some of the world's largest investment banks are looking to achieve a 4x employee efficiency boost by using agents to augment their workforce 1.
As AI adoption accelerates, enterprises face a choice between retrofitting legacy systems or adopting new infrastructure built specifically for agent-based, real-time AI computing at massive scale. VAST Data's AI Operating System is positioning itself as the new standard for AI infrastructure, potentially defining the AI era much like Linux and Windows did for their respective eras 2.
To facilitate adoption, VAST Data will be conducting training workshops worldwide, both in-person and virtually 1. With its recent valuation exceeding $9 billion and surpassing $2 billion in total software sales, VAST Data is well-positioned to lead the transition to this new paradigm in AI infrastructure 1.