NVIDIA launches Physical AI Data Factory Blueprint to accelerate robotics and autonomous vehicles


NVIDIA introduced its Physical AI Data Factory Blueprint, an open reference architecture designed to automate training data generation for robotics, vision AI agents and autonomous vehicles. Cloud providers Microsoft Azure and Nebius are integrating the blueprint, while companies like Uber, Skild AI and Teradyne Robotics use it to scale development. NVIDIA also partnered with T-Mobile and Nokia to deploy AI-RAN infrastructure for distributed edge AI computing.

NVIDIA Unveils Open Blueprint to Transform Physical AI Development

NVIDIA announced the Physical AI Data Factory Blueprint, an open reference architecture that automates how training data is generated, augmented and evaluated for AI systems that interact with the real world [1]. The blueprint addresses a critical bottleneck in robotics and autonomous vehicle development: the massive-scale data processing required to train models capable of navigating real-world environments. By unifying data workflows, the architecture reduces the cost, time and complexity of building physical AI at scale [1].

Rev Lebaredian, vice president of Omniverse and simulation technologies at NVIDIA, emphasized the importance of this shift: "Physical AI is the next frontier of the AI revolution, where success depends on the ability to generate massive amounts of data" [2]. The blueprint enables developers to transform limited training data into large, diverse datasets that capture edge cases and long-tail scenarios: situations that are expensive and impractical to record in real-world conditions [1].

Cloud Partners and Early Adopters Drive Adoption

Microsoft Azure and Nebius are integrating the Data Factory Blueprint into their cloud infrastructure, enabling developers to convert accelerated computing power into high-volume training data generation [1]. Microsoft Azure has made the blueprint available on GitHub as part of an open physical AI toolchain, offering integration with Azure IoT Operations, Microsoft Fabric, Real-Time Intelligence, Microsoft Foundry and GitHub Copilot [1].

Leading physical AI developers are already applying the blueprint. Uber is using it to accelerate autonomous vehicle development, while Skild AI applies the architecture to advance general-purpose robot foundation models [1]. Companies including FieldAI, Hexagon Robotics, Linker Vision, Milestone Systems, RoboForce, Teradyne Robotics and others are testing the Azure physical AI toolchain to scale data generation across perception, mobility and reinforcement learning pipelines [1][2].

Source: NVIDIA

NVIDIA Cosmos Powers Data Workflows

At the core of the blueprint sits NVIDIA Cosmos, a suite of open world foundation models that curate, augment and evaluate training data [1]. Cosmos Curator processes and annotates large-scale real-world and synthetic datasets, while Cosmos Transfer expands curated data by multiplying real and simulated inputs across different environments and lighting conditions [1]. Cosmos Evaluator, now available on GitHub, automatically scores and filters generated data to ensure physical accuracy and training readiness [1].
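The curate, augment and evaluate stages described above form a data-multiplication loop that can be illustrated with a minimal sketch. Every function and name below is a hypothetical stand-in for illustration only, not the Cosmos API:

```python
# Illustrative curate -> augment -> evaluate pipeline.
# All names are hypothetical stand-ins, NOT the NVIDIA Cosmos API.

def curate(raw_samples):
    """Annotate and keep only well-formed samples (curation stage)."""
    return [{"scene": s, "label": s["kind"]} for s in raw_samples if "kind" in s]

def augment(curated, environments, lightings):
    """Multiply each curated sample across environments and lighting
    conditions (augmentation stage)."""
    return [
        {**sample, "environment": env, "lighting": light}
        for sample in curated
        for env in environments
        for light in lightings
    ]

def evaluate(augmented, is_plausible):
    """Score and filter generated data for training readiness
    (evaluation stage)."""
    return [s for s in augmented if is_plausible(s)]

raw = [{"kind": "pedestrian"}, {"kind": "cyclist"}, {"noise": True}]
curated = curate(raw)
augmented = augment(curated, ["urban", "highway"], ["day", "night", "fog"])
# Toy plausibility rule: reject fog scenes as too ambiguous to label.
dataset = evaluate(augmented, lambda s: s["lighting"] != "fog")

print(len(curated), len(augmented), len(dataset))  # -> 2 12 8
```

The point of the sketch is the multiplicative effect: two curated samples become twelve candidate variants across environment and lighting combinations, with an automated evaluator gating which ones reach training.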

NVIDIA OSMO, an open-source orchestration framework, manages these workflows across compute environments and now integrates with coding agents such as Claude Code, OpenAI Codex and Cursor [1][2]. This integration enables AI-native operations in which agents proactively manage resources and resolve bottlenecks, allowing developers to focus on model development rather than infrastructure management [1].

AI-RAN Infrastructure Extends Physical AI Reach

NVIDIA also partnered with T-Mobile and Nokia to bring vision AI agents into next-generation AI-RAN infrastructure, transforming wireless networks into platforms that distribute edge AI computing across broad regions [2]. T-Mobile became the first U.S. company to pilot NVIDIA's AI-RAN infrastructure with Nokia's anyRAN software, working with select NVIDIA physical AI partners to demonstrate how cellular sites can support distributed edge AI workloads on 5G networks [2].

Jensen Huang, founder and CEO of NVIDIA, stated: "Telecommunication networks are evolving into the AI infrastructure enabling billions of devices -- from vision AI agents to robots and autonomous vehicles -- to see, hear and act in real time" [2]. The AI-RAN infrastructure addresses a critical gap by providing low-latency, secure connectivity that offloads heavy computation from devices to nearby edge locations, enabling use cases such as smart city operations, automated utility inspection and real-time industrial safety [2].

This infrastructure approach addresses a key tradeoff: models small enough to run on-device lack the capacity for demanding workloads, while very large models in distant data centers introduce too much latency. By running appropriately sized models on nearby edge servers reached over 5G, developers can balance computational power against response time for robotics applications that demand real-time decision-making [2].
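That tradeoff can be made concrete with a toy placement model. All latency and capability numbers below are illustrative assumptions, not measured figures from any network or NVIDIA system:

```python
# Toy model of the device / edge / cloud placement tradeoff.
# Capability scores and latencies are illustrative assumptions only.

PLACEMENTS = {
    # name: (model capability score, network round-trip ms, inference ms)
    "on-device small model":  (0.3, 0.0, 10.0),
    "5G edge mid-size model": (0.8, 15.0, 30.0),
    "cloud large model":      (1.0, 120.0, 60.0),
}

def best_placement(deadline_ms):
    """Pick the most capable model whose total latency (network round trip
    plus inference) still meets a real-time deadline."""
    feasible = {
        name: capability
        for name, (capability, rtt, infer) in PLACEMENTS.items()
        if rtt + infer <= deadline_ms
    }
    return max(feasible, key=feasible.get) if feasible else None

# A robot needing a 50 ms reaction cannot wait on the distant cloud,
# but the edge model is both feasible and more capable than on-device.
print(best_placement(50))   # -> 5G edge mid-size model
print(best_placement(12))   # -> on-device small model
print(best_placement(500))  # -> cloud large model
```

Under these assumed numbers, tight real-time deadlines rule out the cloud, ultra-tight ones force on-device inference, and the 5G edge tier wins the middle ground the article describes.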

TheOutpost.ai