NVIDIA is upleveling its Omniverse platform with NVIDIA Apollo, a new family of open models that combine AI surrogate models with traditional physical simulation methods. In the short run, this will speed simulations by tens to thousands of times, enabling developers to integrate real-time capabilities into their simulation software across a broad range of industries.
This is a big deal because the gold standards in simulation often require days or weeks to run. The AI techniques are faster but make mistakes in unintuitive ways. Apollo will make it easier for engineers to test ideas quickly and then ground-truth the best candidates, or combinations of them, with more rigorous validation.
The NVIDIA Omniverse is at the center of the action. It builds on the OpenUSD specification to translate data between engineering tools and formats from different vendors, as well as open-source representations. NVIDIA laid the foundation for rendering compelling 3D visualizations that model basic physics, suitable for making movies and training autonomous cars and robots.
Apollo essentially uplevels this foundation to support more rigorous mechanical and electrical physics, improving scalability, performance, and accuracy across a variety of fields. Surrogate AI models are becoming a big deal in the engineering community since they can run faster and on simpler hardware. But building useful surrogate AI models has traditionally required a combination of subject matter, AI, and data engineering expertise.
Apollo helps automate many of these processes, reducing the expertise required to build a new model, and lets teams move faster with pretrained starting points, called checkpoints, and reference workflows. It also makes it easier to develop workflows that go beyond large language models (LLMs), such as neural operators, diffusion models, and transformers. For example, while the basic transformer is the workhorse in LLMs, in physical AI, transformers help AI systems learn relationships within a single sensor type at different time scales or across different sensing modalities.
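To make the surrogate idea concrete, here is a minimal sketch, not Apollo's actual API: the "expensive" solver and the polynomial fit below are illustrative stand-ins (a real pipeline would use a neural operator or similar). A slow physics solve is sampled a few dozen times, a cheap model is fit to those samples, and the surrogate then answers design queries nearly instantly; the best candidates can be re-validated against the full solver.

```python
import numpy as np

def expensive_simulation(x):
    # Stand-in for a slow physics solver: peak response of a damped
    # oscillator as a function of a design parameter x (illustrative only).
    t = np.linspace(0.0, 10.0, 2000)
    response = np.exp(-0.3 * t) * np.sin(x * t)
    return response.max()

# "Training data": a handful of expensive solver runs.
xs = np.linspace(0.5, 5.0, 40)
ys = np.array([expensive_simulation(x) for x in xs])

# Surrogate: a cheap polynomial fit standing in for a learned model.
coeffs = np.polyfit(xs, ys, deg=6)
surrogate = np.poly1d(coeffs)

# The surrogate sweeps thousands of candidates almost instantly; the
# top candidates would then be ground-truthed with the full solver.
candidates = np.linspace(0.5, 5.0, 10_000)
best = candidates[np.argmax(surrogate(candidates))]
```

The division of labor is the point: the expensive solver is called only during training and final validation, while the broad design-space exploration runs against the cheap surrogate.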
This initiative could also make it easier for startups to develop and test world foundation models for new domains. I recently covered how Applied Computing was developing one such model to guide the development of self-learning systems in the energy sector. In these systems, LLMs play a supporting role alongside other AI techniques, such as surrogate AI models, stream processing, and context engineering.
Apollo will make it easier for more companies to build these kinds of systems across a range of domains.
Early Apollo partners include Applied Materials, Cadence, Lam Research, Luminary Cloud, KLA, PhysicsX, Rescale, Siemens, and Synopsys. These partnerships allow companies like Northrop Grumman to explore thousands of design tweaks, accelerating the development of spacecraft thrusters and improving chip designs.
Multiphysics simulations have been particularly challenging because they involve bouncing back and forth across different categories of physics models that use different file formats. For example, an engineer might find a way to decrease electrical interference on a chip that inadvertently increases overheating. Applied Materials used the new libraries to speed up these kinds of simulations thirty-five-fold.
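A minimal sketch shows why this back-and-forth is unavoidable, using a hypothetical electrical/thermal pair (the models and constants below are invented for illustration, not drawn from Apollo or any partner's tools): each single-physics solve changes the input to the other, so a fixed-point loop must iterate until the two fields agree.

```python
# Toy two-field coupling: chip power depends on temperature (leakage),
# and temperature depends on power. A simple fixed-point iteration
# alternates the two single-physics "solves" until they agree.

def power_watts(temp_c):
    # Hypothetical electrical model: leakage grows with temperature.
    return 10.0 + 0.05 * (temp_c - 25.0)

def temperature_c(power_w):
    # Hypothetical thermal model: ambient plus thermal resistance.
    return 25.0 + 4.0 * power_w

temp = 25.0
for _ in range(100):
    p = power_watts(temp)          # electrical solve at current temperature
    new_temp = temperature_c(p)    # thermal solve at resulting power
    if abs(new_temp - temp) < 1e-9:
        break                      # the two fields are self-consistent
    temp = new_temp
```

In production tools, each of those two function calls is itself a long-running solver, often from a different vendor with a different file format, which is why a faster surrogate for either side compounds across the whole loop.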
There are multiple aspects to the speedup. The surrogate models themselves are faster. Also, the new NVIDIA tools make it easier to start with pre-configured templates, reducing setup time. For example, Synopsys is finding that it can run fluid simulation workloads about 50 times faster, and that gain compounds with another factor of ten thanks to the higher accuracy of the initial solution. As a result, it can run a complex simulation in forty minutes that previously took several weeks.
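As a back-of-the-envelope check on those figures (the breakdown below is my own, not Synopsys's, and it assumes "several weeks" means roughly two weeks):

```python
# The two gains compound multiplicatively: a faster solver kernel times
# fewer iterations from a more accurate starting solution.
kernel_speedup = 50
iteration_speedup = 10
combined = kernel_speedup * iteration_speedup  # 500x overall

weeks_to_minutes = 7 * 24 * 60
baseline_minutes = 2 * weeks_to_minutes   # assume ~two weeks of runtime
accelerated_minutes = baseline_minutes / combined  # about forty minutes
```

A 500x combined speedup turns roughly 20,000 minutes of runtime into about forty, consistent with the reported result.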
Michael Mara, Founding AI/ML Engineer at Luminary Cloud, said that Apollo models provide a baseline to generate customer-specific data to improve accuracy on the design space individual customers care about:
When you have a trained Physics AI model, you can get engineering estimates in seconds instead of hours, and many of the complicated details of computational physics are baked into the model instead of requiring expertise at evaluation time. At Luminary, we're seeing customers use this to bring physics knowledge up in the design process, allowing physics constraints and optimization to be done much earlier, short-circuiting weeks or months of iteration between design and engineering. Ultimately, this helps our customers be more competitive by getting to market faster, creating new, better products, and reducing development risks.
Down the road, Apollo will also make it easier to support new workflows across the enterprise. NVIDIA is simultaneously expanding the breadth of Omniverse to support IoT use cases, autonomous driving, and robot simulators for training.
Too much ink has been spilled pondering whether the current AI mania, which is driving overinvestment in new infrastructure, will end when the bubble bursts. To the extent the boom relies on simply scaling LLMs that are becoming increasingly commoditized, this seems likely. It is akin to how early railroad pioneers built competing lines between city pairs that only made economic sense after consolidation and liquidation.
LLMs are good at synthesizing what people have written about the world. Apollo demonstrates one path toward grounding LLMs with real-world data, physical AI, and, more broadly, statistical models that can help objectively explain what is happening in the world. In the short run, this shows tremendous promise in applying AI to engineering problems with real-world impact. The harsh reality is that much of this kind of data lies buried across disparate file formats, and the NVIDIA Omniverse is increasingly serving as a sort of universal translator among these tools. NVIDIA Apollo essentially extends this translator to span disparate AI algorithms as well.
It's also important to appreciate that NVIDIA Apollo is billed as open, which it is at the software level, though for the time being it runs only on NVIDIA hardware. It remains to be seen whether other hardware vendors can either port it to run efficiently on their own architectures or collaborate to build an open-source alternative using NVIDIA's approach as a template.
In the long run, Apollo shows a path to combining some of these techniques across the enterprise, such as optimizing supply chains, responding to customer questions more accurately, and identifying and mitigating risks. But this is going to take time. It took a few years for enterprise vendors to adopt NVIDIA Omniverse to streamline their processes and workflows across tools. Apollo's support for physical AI algorithms and, more broadly, for other mathematical techniques will likely need a similar runway to achieve broad adoption.