7 Sources
[1]
Nvidia wants to be the Android of generalist robotics | TechCrunch
Nvidia released a new stack of robot foundation models, simulation tools, and edge hardware at CES 2026, moves that signal the company's ambition to become the default platform for generalist robotics, much as Android became the operating system for smartphones. Nvidia's move into robotics reflects a broader industry shift as AI moves off the cloud and into machines that can learn how to think in the physical world, enabled by cheaper sensors, advanced simulation, and AI models that increasingly can generalize across tasks. Nvidia revealed details on Monday about its full-stack ecosystem for physical AI, including new open foundation models that allow robots to reason, plan, and adapt across many tasks and diverse environments, moving beyond narrow task-specific bots, all of which are available on Hugging Face. Those models include: Cosmos Transfer 2.5 and Cosmos Predict 2.5, two world models for synthetic data generation and robot policy evaluation in simulation; Cosmos Reason 2, a reasoning vision language model (VLM) that allows AI systems to see, understand, and act in the physical world; and Isaac GR00T N1.6, its next-gen vision language action (VLA) model purpose-built for humanoid robots. GR00T relies on Cosmos Reason as its brain, and it unlocks whole-body control for humanoids so they can move and handle objects simultaneously. Nvidia also introduced Isaac Lab-Arena at CES, an open-source simulation framework hosted on GitHub that serves as another component of the company's physical AI platform, enabling safe virtual testing of robotic capabilities. The platform promises to address a critical industry challenge: as robots learn increasingly complex tasks, from precise object handling to cable installation, validating these abilities in physical environments can be costly, slow, and risky. Isaac Lab-Arena tackles this by consolidating resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one. Supporting the ecosystem is Nvidia OSMO, an open-source command center that serves as connective infrastructure that integrates the entire workflow from data generation through training across both desktop and cloud environments. And to help power it all, there's the new Blackwell-powered Jetson T4000 graphics card, the newest member of the Thor family. Nvidia is pitching it as a cost-effective on-device compute upgrade that delivers 1200 teraflops of AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts. Nvidia is also deepening its partnership with Hugging Face to let more people experiment with robot training without needing expensive hardware or specialized knowledge. The collaboration integrates Nvidia's Isaac and GR00T technologies into Hugging Face's LeRobot framework, connecting Nvidia's 2 million robotics developers with Hugging Face's 13 million AI builders. The developer platform's open-source Reachy 2 humanoid now works directly with Nvidia's Jetson Thor chip, letting developers experiment with different AI models without being locked into proprietary systems. The bigger picture here is that Nvidia is trying to make robotics development more accessible, and it wants to be the underlying hardware and software vendor powering it, much like Android is the default for smartphone makers. There are early signs that Nvidia's strategy is working. Robotics is the fastest growing category on Hugging Face, with Nvidia's models leading downloads. 
Meanwhile, robotics companies from Boston Dynamics and Caterpillar to Franka Robots and NEURA Robotics are already using Nvidia's tech.
[2]
Nvidia's physical AI models clear the way for next-gen robots - here's what's new
Nvidia releases new physical AI models at CES. Partners unveil next-generation robots. The robots span a wide range of use cases and industries. As AI models continue to gain popularity, there is an increased focus on developing hardware that bridges the gap between a device's screen and the world around us. As a result, physical AI is an emerging theme at CES, and Nvidia unveiled models to accelerate the development of these robots. "The ChatGPT moment for robotics is here. Breakthroughs in physical AI -- models that understand the real world, reason and plan actions -- are unlocking entirely new applications," said Jensen Huang, founder and CEO of Nvidia. To drive that momentum forward, Nvidia unveiled new open Nvidia Cosmos and GR00T models during its Las Vegas keynote event on Monday. The company stated that these models are designed to enable developers to allocate less time and resources to pretraining and more to building next-generation robots. In particular, the releases include Nvidia Cosmos Transfer 2.5 and Nvidia Cosmos Predict 2.5, open, fully customizable world models that also understand the real world around you, including its physics and spatial properties. This is useful for creating synthetic data and simulations that emulate realistic life scenarios for evaluating robots' performance, necessary because testing these physical AI developments, such as autonomous vehicles, is often too risky to conduct in real life. Nvidia Cosmos Reason 2 is an open reasoning vision language model (VLM) that allows intelligent machines "to see, understand, and act in the physical world like humans," according to Nvidia. Moreover, using Nvidia Cosmos Reason 2, physical AI can make decisions as humans do, using reason, prior knowledge, understanding of physics, and more. Lastly, Nvidia Isaac GR00T N1.6 is an open reasoning vision language action (VLA) model specifically designed for humanoid robots, enabling full-body control and leveraging Nvidia Cosmos Reason for the additional benefits discussed above. The new models are all available on Hugging Face. Benchmarking and simulations are essential for ensuring the safe development of autonomous systems, but they are often one of the most challenging components of robotics due to the difficulty in creating these simulations. To help bridge this gap, Nvidia released new open-source frameworks on GitHub, including the Nvidia Isaac Lab-Arena and Nvidia OSMO. The Nvidia Isaac Lab-Arena is an open-source framework designed for large-scale robot policy evaluation and benchmarking in simulation, according to the blog post. It was designed in close collaboration with Lightwheel, an embodied AI infrastructure company, and connects to industry-leading benchmarks. Nvidia OSMO is designed to help developers with the robot training workflow. In particular, it can help speed up the process by allowing developers to run workflows, such as model training, across different compute environments from one central command center. Nvidia said it is working with Hugging Face to integrate open-source Isaac and GR00T technologies into the LeRobot open-source robotics framework, making it easier for developers of all experience levels to access Nvidia technologies in robot development. GR00T N1.6 and Isaac Lab-Arena are now available in the LeRobot library.
Part of the collaboration also makes Hugging Face's open-source Reachy 2 humanoid robot work seamlessly with Nvidia's Jetson Thor hardware. Similarly, Hugging Face's open-source Reachy Mini tabletop robot is fully interoperable with Nvidia DGX Spark. Leading robotics companies, including Boston Dynamics, Richtech, Humanoid, LG Electronics, and Neura Robotics, have all debuted new robots and autonomous machines built using Nvidia technologies, integrating the company's Jetson Thor robotics platform. These robots all assist with different tasks. For example, Richtech Robotics is launching Dex, a humanoid robot for industrial environments, while LG Electronics unveiled a new home robot for indoor household tasks. The CES announcements include a new Nvidia Blackwell-powered Jetson T4000 module, which the company claimed delivers four times the performance of the previous generation.
[3]
NVIDIA Releases New Physical AI Models as Global Partners Unveil Next-Generation Robots
* From mobile manipulators to humanoids, Boston Dynamics, Caterpillar, Franka Robots, Humanoid, LG Electronics and NEURA Robotics debut new robots and autonomous machines built on NVIDIA technologies. * NVIDIA releases new NVIDIA Cosmos and GR00T open models and data for robot learning and reasoning, Isaac Lab-Arena for robot evaluation and the OSMO edge-to-cloud compute framework to simplify robot training workflows. * NVIDIA and Hugging Face integrate NVIDIA Isaac open models and libraries into LeRobot to accelerate the open-source robotics community. * The NVIDIA Blackwell architecture-powered Jetson T4000 module is now available, delivering 4x greater energy efficiency and AI compute. CES -- NVIDIA today announced new open models, frameworks and AI infrastructure for physical AI, and unveiled robots for every industry from global partners. The new NVIDIA technologies speed workflows across the entire robot development lifecycle to accelerate the next wave of robotics, including building generalist-specialist robots that can quickly learn many tasks. Global industry leaders including Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics and NEURA Robotics are using the NVIDIA robotics stack to debut new AI-driven robots. "The ChatGPT moment for robotics is here. Breakthroughs in physical AI -- models that understand the real world, reason and plan actions -- are unlocking entirely new applications," said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA's full stack of Jetson robotics processors, CUDA, Omniverse and open physical AI models empowers our global ecosystem of partners to transform industries with AI-driven robotics." New Open Models Advance Robot Learning and Reasoning Turning today's costly, single-task and hard-to-program machines into reasoning generalist-specialist robots requires enormous capital and expertise to build foundation models. NVIDIA is building open models that allow developers to bypass resource-intensive pretraining and focus on creating the next generation of AI robots and autonomous machines. These new models, all available on Hugging Face, include: * NVIDIA Cosmos™ Transfer 2.5 and NVIDIA Cosmos Predict 2.5 -- open, fully customizable world models that enable physically based synthetic data generation and robot policy evaluation in simulation for physical AI. * NVIDIA Cosmos Reason 2, an open reasoning vision language model (VLM) that enables intelligent machines to see, understand and act in the physical world like humans. * NVIDIA Isaac™ GR00T N1.6, an open reasoning vision language action (VLA) model, purpose-built for humanoid robots, that unlocks full body control and uses NVIDIA Cosmos Reason for better reasoning and contextual understanding. Franka Robotics, NEURA Robotics and Humanoid are using GR00T-enabled workflows to simulate, train and validate new behaviors for robots. Salesforce is using Agentforce, Cosmos Reason and the NVIDIA Blueprint for video search and summarization to analyze video footage captured by its robots and reduce incident resolution times by 2x. LEM Surgical is using NVIDIA Isaac for Healthcare and Cosmos Transfer to train the autonomous arms of its Dynamis surgical robot, powered by NVIDIA Jetson AGX Thor™ and Holoscan. XRlabs is using Thor and Isaac for Healthcare to enable surgical scopes, starting with exoscopes, to guide surgeons with real-time AI analysis.
New Open-Source Simulation and Compute Frameworks for Robotics Development Scalable simulation is essential for training and evaluating robots, but current workflows remain fragmented and difficult to manage. Benchmarking is often manual and hard to scale, while end-to-end pipelines require complex orchestration across disparate compute resources. NVIDIA today released new open-source frameworks on GitHub that simplify these complex pipelines and accelerate the transition from research to real-world use cases. NVIDIA Isaac Lab-Arena is an open-source framework, available on GitHub, that provides a collaborative system for large-scale robot policy evaluation and benchmarking in simulation, with the evaluation and task layers designed in close collaboration with Lightwheel. Isaac Lab-Arena connects to industry-leading benchmarks like Libero and Robocasa, standardizing testing and ensuring robot skills are robust and reliable before deployment to physical hardware. NVIDIA OSMO is a cloud-native orchestration framework that unifies robotic development into a single, easy-to-use command center. OSMO lets developers define and run workflows such as synthetic data generation, model training and software-in-the-loop testing across different compute environments -- from workstations to mixed cloud instances -- speeding up development cycles. OSMO is now available and used by robot developers such as Hexagon Robotics, and integrated into the Microsoft Azure Robotics Accelerator toolchain. NVIDIA and Hugging Face Accelerate Open-Source Physical AI Development Robotics is now the fastest-growing category on Hugging Face, where NVIDIA's open models and datasets lead downloads among a surging open-source community. To bolster this community, NVIDIA is working with Hugging Face to integrate open-source Isaac and GR00T technologies into the leading LeRobot open-source robotics framework, providing streamlined access to integrated software and hardware tools that accelerate end-to-end development. This collaboration unites NVIDIA's 2 million robotics developers with Hugging Face's global community of 13 million AI builders. GR00T N models and Isaac Lab-Arena are now available in the LeRobot library for easy fine-tuning and evaluation. Hugging Face's open-source Reachy 2 humanoid will be fully interoperable with the NVIDIA Jetson Thor™ robotics computer, letting developers run any VLA, including GR00T N1.6. Hugging Face's open-source Reachy Mini tabletop robot is also fully interoperable with NVIDIA DGX Spark™ to build custom experiences with NVIDIA large language models, and voice and computer vision open models that run locally. Humanoid Robot Developers Adopt NVIDIA Jetson Thor NVIDIA Jetson Thor meets the massive computing requirements for humanoid robots with reasoning. At CES, humanoid developers are showcasing new state-of-the-art robots now integrated with Jetson Thor. NEURA Robotics is launching a Porsche-designed Gen 3 humanoid, as well as a smaller-sized humanoid optimized for dexterous control. Richtech Robotics is launching Dex, a mobile humanoid for sophisticated manipulation and navigation across complex industrial environments. AGIBOT is introducing humanoids for both industrial and consumer sectors, and Genie Sim 3.0, a robot simulation platform integrated with Isaac Sim. LG Electronics unveiled a new home robot built to perform a wide range of indoor household tasks.
Boston Dynamics, Humanoid and RLWRLD have all integrated Jetson Thor into their existing humanoids to enhance their navigation and manipulation capabilities. Bringing Physical AI to the Industrial Edge Providing a cost-effective, high-performance upgrade path for NVIDIA Jetson Orin™ customers, the new NVIDIA Jetson™ T4000 module brings the NVIDIA Blackwell architecture to autonomous machines and general robotics for $1,999 at 1,000-unit volume. It delivers 4x the performance of the previous generation with 1,200 FP4 TFLOPS and 64GB of memory, all within a configurable 70-watt envelope ideal for energy-constrained autonomy. NVIDIA IGX Thor, which will be available later this month, extends robotics to the industrial edge, offering high-performance AI computing with enterprise software support and functional safety. Archer is using IGX Thor to bring AI to aviation, advancing critical capabilities in aircraft safety, airspace integration and autonomy-ready systems. Partners including AAEON, Advantech, ADLINK, Aetina, AVerMedia, Connect Tech, EverFocus, ForeCR, Lanner, RealTimes, Syslogic, Vecow and YUAN offer Thor-powered systems equipped for edge AI, robotics and embedded applications. In addition, Caterpillar is expanding its collaboration with NVIDIA to bring advanced AI and autonomy to equipment and job sites in construction and mining. Caterpillar CEO Joe Creed will share details alongside NVIDIA Vice President of Robotics and Edge AI Deepu Talla during a CES keynote on Wednesday, Jan. 7. Learn more by watching NVIDIA Live at CES. Featured image courtesy of Caterpillar (top left), LEM Surgical (top right), AGIBOT (bottom left) and Franka Robotics (bottom right).
[4]
A year ago, Nvidia's Jensen Huang said the 'ChatGPT moment' for robotics was around the corner. Now he says it's 'nearly here.' But is it? | Fortune
Nvidia-watchers had plenty to celebrate at CES this week, with news that the company's latest GPU, Vera Rubin, is now fully in production. Those powerful AI chips -- the picks and shovels of the AI boom -- are, after all, what helped make Nvidia the world's most valuable company. But in his keynote address, CEO Jensen Huang once again made clear that Nvidia does not see itself as simply a chip company. It is also a software company, with its reach extending across nearly every layer of the AI stack -- and with a major bet on physical AI: AI systems that operate in the real world, including robotics and self-driving cars. In a press release touting Nvidia's CES announcements, a quote attributed to Huang declared that "the ChatGPT moment for robotics is here." Breakthroughs in physical AI -- models that understand the real world, reason, and plan actions -- "are unlocking entirely new applications," he said. In the keynote itself, however, Huang was more measured, saying the ChatGPT moment for physical AI is "nearly here." It might sound like splitting hairs, but the distinction matters -- especially given what Huang said at last year's CES, when he introduced Nvidia's Cosmos world platform and described robotics' "ChatGPT moment" as merely "around the corner." So has that moment really arrived, or is it still stubbornly out of reach? Huang himself seemed to acknowledge the gap. "The challenge is clear," he said in yesterday's keynote. "The physical world is diverse and unpredictable." Nvidia is also no flash in the pan when it comes to physical AI. Over the past decade, the company has laid the groundwork by developing an ecosystem of AI software, hardware, and simulation systems for robots and autonomous vehicles. But it has never been about building its own robots or AVs. As Rev Lebaredian, Nvidia's vice president of simulation technology, told Fortune last year, the strategy is still about supplying the picks and shovels. There's no doubt that Nvidia has progressed in that regard over the past year. On the self-driving front, today it unveiled the Alpamayo family of open AI models, simulation tools and datasets meant to help AVs safely operate across a range of rare, complex driving scenarios, which are considered some of the toughest challenges for autonomous systems to safely master. Nvidia also released new Cosmos and GR00T open models and data for robot learning and reasoning, and touted companies including Boston Dynamics, Caterpillar, Franka Robots, Humanoid, LG Electronics and NEURA Robotics, which are debuting new robots and autonomous machines built on Nvidia technologies. Even with increasingly capable models, simulation tools, and computing platforms, Nvidia is not building the self-driving cars or the robots themselves. Automakers still have to turn those tools into systems that can safely operate on public roads -- navigating regulatory scrutiny, real-world driving conditions, and public acceptance. Robotics companies, meanwhile, must translate AI into machines that can reliably manipulate the physical world, at scale, and at a cost that makes commercial sense. That work -- integrating hardware, software, sensors, safety systems, and real-world constraints -- remains enormously difficult, slow, and capital-intensive. And it's far from clear that faster progress in AI alone is enough to overcome those hurdles. After all, the ChatGPT moment wasn't just about the model under the hood. Such models had existed for several years.
It was about the user experience and a company that was able to capture lightning in a bottle. Nvidia has captured lightning in a bottle before -- GPUs turned out to be the unlikely but perfect engine for modern AI. Whether that kind of luck can be repeated in physical AI, a far messier and less standardized domain, is still an open question.
[5]
Nvidia introduces open-source AI models for humanoid robots, autonomous vehicles - SiliconANGLE
Nvidia introduces open-source AI models for humanoid robots, autonomous vehicles Nvidia Corp. has released more than a half-dozen artificial intelligence models designed for autonomous systems such as self-driving cars. The algorithms, which are all available under an open-source license, made their debut today at the CES electronics show in Las Vegas. They're rolling out alongside several development tools and a computing module for robots called the Jetson T4000. Nvidia's new lineup of open-source AI models is headlined by Alpamayo 1 (pictured), a so-called VLA, or vision-language-action, algorithm with 10 billion parameters. It can use footage from an autonomous vehicle's cameras to generate driving trajectories. Alpamayo 1 has a chain-of-thought mechanism, which means that it breaks down the navigation tasks it receives into smaller steps. According to Nvidia, that approach has two benefits. One is that Alpamayo 1 can explain each step of its reasoning workflow, which makes it easier to evaluate the soundness of navigation decisions. The chain-of-reasoning mechanism also helps the model tackle tricky driving situations. It's not designed to run in autonomous vehicles. Instead, Nvidia sees developers using it to train such vehicles' navigation models. According to the company, the algorithm lends itself to tasks such as evaluating the reliability of autonomous driving software. In the future, Nvidia plans to release larger Alpamayo models that will support a broader range of reasoning use cases. "Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions -- it's the foundation for safe, scalable autonomy," said Nvidia Chief Executive Officer Jensen Huang. Alpamayo 1 is available alongside three additions to Nvidia's existing Cosmos series of world foundation models. Like Alpamayo 1, the new models can be used to develop software for self-driving cars. They can also power other types of autonomous systems including industrial robots. The first two models, Cosmos Transfer 2.5 and Cosmos Predict 2.5, are designed to generate training data for robots' AI software. That training data takes the form of synthetic video footage. Cosmos Transfer 2.5 can, for example, generate a clip that depicts an industrial robot in a car factory. Cosmos Predict 2.5 offers similar features along with the ability to simulate how an object might behave in the future. A user could upload a photo of a bus and ask the model to simulate where the bus will be five seconds into the future. The third new addition to the Cosmos model series is called Cosmos Reason 2.0. According to Nvidia, it can equip a robot with the ability to analyze footage of its environment and automatically carry out actions. Cosmos Reason powers Isaac GR00T N1.6, another new model that Nvidia debuted today. Isaac GR00T N1.6 is a VLA model like Alpamayo 1, but it's optimized to power humanoid robots rather than autonomous vehicles. Nvidia's researchers trained the algorithm on a dataset comprised of sensory measurements from bimanual, semi-humanoid and humanoid robots. "Salesforce, Milestone, Hitachi, Uber, VAST Data and Encord are using Cosmos Reason for traffic and workplace productivity AI agents," Kari Briski, the vice president of generative AI software at Nvidia, wrote in a blog post. 
"Franka Robotics, Humanoid and NEURA Robotics are using Isaac GR00T to simulate, train and validate new behaviors for robots before scaling to production." Nvidia's robotics-focused algorithms are rolling out alongside a pair of more general-purpose model families called Nemotron Speech and Nemotron RAG. The former series is headlined by a speech recognition model that the company says can provide 10 times the performance of comparable alternatives. Nemotron RAG includes embedding and reranking models. Embedding models turn data into mathematical representations that AI applications understand. Reranking is one of the steps involved in the RAG, or retrieval-augmented generation, workflow. After an AI application uses RAG to retrieve the files needed to answer a prompt, a reranking model highlights the most relevant files. Nvidia's AI models are joined by a trio of development tools that are likewise available under an open-source license. The first tool, AlpaSim, enables developers to create simulated environments in which autonomous driving models can be trained. The software makes it possible to customize details such as traffic conditions and a simulated vehicle's sensor array. For added measure, developers can inject sensor noise to evaluate how well their AI models filter erroneous data. Nvidia is also rolling out a second simulation framework called Isaac Lab-Arena. It's designed to ease the task of training AI models for robots. According to the company, Isaac Lab-Arena enables developers to measure AI models' performance using popular third-party benchmarks such as Robocasa, which is mainly used to evaluate household robots. Software teams can use Nvidia's third new tool, OSMO, to manage their simulation workloads. It's an orchestrator that also lends itself to managing other AI development workflows such as synthetic data generation pipelines and model training jobs. Nvidia says that OSMO can orchestrate workloads across public clouds and developer workstations. Manufacturers can use a new Nvidia computing module called the Jetson Jetson T4000 to power their robots. It's based on the company's Blackwell graphics processing unit architecture. An industrial robot maker, for example, could use the module to run its systems' AI-powered factory floor navigation software.
[6]
Nvidia's Jensen Huang Says Robots Have 1 Key Weakness, but There Might Be a Solution
Nvidia has released a slew of new AI models and technology, all designed to help businesses create the next generation of robots. During his annual keynote presentation at the Consumer Electronics Show in Las Vegas, Nvidia CEO Jensen Huang explained that the roadblock keeping AI-powered robots from being truly helpful is that they don't understand some of the fundamental things about existing as a physical being on Earth. Children instinctively learn to understand phenomena like gravity, object permanence, inertia, friction, and cause and effect, said Huang, but a typical AI model has no way of grokking "the common sense of the physical world." Sporting a shimmering crocodile-patterned leather jacket, Huang described Nvidia's solution: simulations. Developers can drop digital twins of their robots into Omniverse, Nvidia's platform for generating physically realistic 3D sandboxes, and use the company's Cosmos family of AI models to run thousands upon thousands of simulations, creating the training data that will allow the robot to function in the real world. Imagine you wanted to create a robot arm to help out in the kitchen. You could generate a virtual kitchen in Omniverse, drop a digital twin of your robot into the simulation, and direct it to pick up and put down different types of fruit. The "synthetic data" generated through the simulation is then used to train Nvidia's family of robotics models, enabling the actual robot to replicate its virtual counterpart's actions.
[7]
CES 2026: Industry giants double down on physical AI with foundation models, processor for robots
Nvidia CEO Jensen Huang kickstarted CES 2026 with a keynote address where he announced new open foundation models that will allow robots to reason, plan and adapt to new tasks and environments. The foundation models are part of a full-stack ecosystem Nvidia has developed for physical AI and has made available to physical AI developers via GitHub and Hugging Face. "The ChatGPT moment for physical AI is here -- when machines begin to understand, reason and act in the real world," said Huang. The new foundation models include two world models, Cosmos Transfer 2.5 and Cosmos Predict 2.5, that can generate high-fidelity synthetic data and facilitate simulation testing. It also includes Cosmos Reason 2, a vision language model (VLM) capable of real-world situational awareness and reasoning. Then there is Isaac GR00T N1.6, a vision language action (VLA) model that uses Cosmos Reason to enable whole-body coordination, allowing humanoid robots to handle objects. VLA models allow robots to analyze their environment and decide the most appropriate action. Nvidia also announced Isaac Lab-Arena, an open-source simulation framework hosted on GitHub for developers looking for a virtual testing platform for robots. Huang also showed a video of how robots are being trained in photorealistic and simulated worlds on Isaac Sim and Isaac Lab. Nvidia wasn't the only company going all-in on physical AI, as the demand for AI grows beyond chatbots and productivity apps into the physical world. Qualcomm also announced a full-stack architecture for physical AI, including the new Dragonwing IQ10 processor for humanoid robots. The architecture supports advanced perception and motion planning with end-to-end AI models such as VLAs and VLMs. Advanced perception allows the robot to understand depth, texture, and the relation between objects, while motion planning with end-to-end AI models allows the robot to learn the entire sequence of actions at once. Qualcomm claims that the new Dragonwing IQ10 processor will drive more real-world deployment of physical AI. The chipmaker is in talks with Kuka Robotics to build robotics solutions based on it. Further, Qualcomm claims that its architecture will enable robots to reason and adapt to complex environments in real time. It will also allow seamless scaling across diverse robotic platforms. What is physical AI and how is it changing robotics These announcements at CES signal a shift in expectations for robots. While fixed-function robots excel at mass production, they can't iterate instantly or adapt to new orders. For enterprises looking for operational agility, it makes more sense to invest in physical AI-driven robots that can perceive, adapt in minutes and teach themselves to handle complex tasks. Physical AI refers to systems that use AI to understand, interact with and adapt to the physical world. It can be put into anything from humanoid robots, drones, and self-driving cars to smart sensor systems. With physical AI, robots can learn by watching human movements and navigate complex environments more naturally. For instance, when a robot with physical AI sees a new part that it has never seen before, it can still figure out how to pick it up immediately. Earlier, robots had to be programmed with new instructions to handle new parts. They could only do fixed, repetitive tasks based on set instructions in a controlled environment. The robotics industry has given several demonstrations of this new generation of AI-powered robots in the last few months.
They were put on fashion runways alongside humans, where they showed the spatial awareness needed to navigate a slippery and crowded runway. In November, Chinese EV firm Xpeng unveiled a humanoid robot named Iron that walks naturally like a human using artificial muscles and a flexible spine. Iron also has an AI-powered brain that allows it to interact like a human. Many firms are already building and deploying these AI-driven intelligent robots in factories. US-based pharmaceutical company PharmAgri is planning to deploy up to 10,000 Tesla Optimus humanoid robots in its plants. BMW is working with Figure AI to deploy humanoid robots in its South Carolina factory. Google DeepMind has also partnered with Apptronik to train robots on Gemini Robotics VLA models, so they can understand human commands and act on them without being pre-programmed. What is driving the need for physical AI Experts believe that the need for physical AI is strategic as many countries face a shortage of skilled workers. According to Goldman Sachs, the humanoid robot market is expected to reach $38 billion by 2035. The manufacturing cost of humanoid robots has also dropped from between $50,000 (low-end models) and $250,000 (state-of-the-art models) to between $30,000 and $150,000. According to EY, physical AI will become an integral part of daily life and work in the next five years. Adoption of intelligent robots will grow to handle service-related tasks in assembly lines, office cleaning, and waste management. In healthcare, they can sort laundry and handle maintenance and cleaning.
Nvidia released a comprehensive stack of robot foundation models, simulation tools, and edge hardware at CES 2026, signaling its ambition to become the default platform for generalist robotics. The company introduced the Cosmos and Isaac GR00T AI models, the Isaac Lab-Arena simulation framework, and the Blackwell-powered Jetson T4000 module, while deepening its partnership with Hugging Face to make robot development more accessible.
Nvidia released a comprehensive robotics ecosystem at CES 2026, revealing its strategy to become the default platform for physical AI development. The announcement includes new open-source AI models, simulation frameworks, and hardware that together form what the company describes as a full-stack solution for building robots that can learn, reason, and adapt across diverse tasks and environments [1]. "The ChatGPT moment for robotics is here," declared Jensen Huang, founder and CEO of Nvidia, though he struck a more measured tone in his keynote, suggesting the moment is "nearly here" [4].
Source: TechCrunch
The move reflects a broader industry shift as AI transitions from cloud-based systems to machines operating in the physical world. Nvidia is betting that robotics will follow the smartphone trajectory, where a single platform, Android, became the default operating system for manufacturers [1]. Early indicators suggest traction: robotics has become the fastest-growing category on Hugging Face, with Nvidia's models leading downloads, while industry giants from Boston Dynamics and Caterpillar to Franka Robots and NEURA Robotics are already deploying Nvidia's technology [1].
Nvidia introduced multiple AI models designed to accelerate robot development, all available on Hugging Face [3]. Cosmos Transfer 2.5 and Cosmos Predict 2.5 are world models that enable synthetic data generation and robot policy evaluation in simulation, addressing the costly and risky nature of physical testing [2]. These models understand real-world physics and spatial properties, creating realistic scenarios for evaluating autonomous systems like self-driving cars [2].
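The sources emphasize that all of these models are published openly on Hugging Face [1][3]. As a minimal sketch of what that means in practice, the snippet below pulls a checkpoint with the standard huggingface_hub client; the repository name is an assumed placeholder, not one confirmed by the sources.

```python
# Minimal sketch: fetching an openly published checkpoint from Hugging Face.
# The repo_id below is an assumed placeholder, not a confirmed repository name;
# check NVIDIA's organization page on Hugging Face for the actual Cosmos/GR00T repos.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/Cosmos-Predict-2.5",            # hypothetical repository name
    allow_patterns=["*.json", "*.safetensors"],     # skip unrelated large files
)
print(f"Model files downloaded to: {local_dir}")
```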
Cosmos Reason 2 is an open reasoning vision language model (VLM) that allows intelligent machines to see, understand, and act in the physical world like humans [3]. The model enables physical AI to make decisions the way humans do, drawing on reasoning, prior knowledge, and an understanding of physics [2].
Isaac GR00T N1.6, the next-generation vision-language-action model, is purpose-built for humanoid robots and unlocks full-body control [1]. By leveraging Cosmos Reason as its brain, GR00T enables humanoids to move and handle objects simultaneously [1]. Companies including Franka Robotics, NEURA Robotics, and Humanoid are using GR00T-enabled workflows to simulate, train, and validate new robot behaviors [3].
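To make the vision-language-action idea concrete, here is a purely illustrative sketch of the contract such a policy exposes: camera frames plus a natural-language instruction go in, a vector of joint targets comes out. The class and its interface are hypothetical stand-ins, not the GR00T N1.6 API.

```python
# Illustrative only: a stand-in for the input/output contract of a
# vision-language-action (VLA) policy. DummyVLAPolicy is hypothetical and
# does not reflect the actual GR00T N1.6 interface.
import numpy as np

class DummyVLAPolicy:
    def __init__(self, num_joints: int = 24):
        self.num_joints = num_joints

    def act(self, rgb_frames: np.ndarray, instruction: str) -> np.ndarray:
        """Map camera observations plus a language command to joint targets."""
        # A real VLA would run a vision-language backbone here; we return
        # zeros only to show the shape of the output.
        return np.zeros(self.num_joints, dtype=np.float32)

policy = DummyVLAPolicy()
frames = np.zeros((2, 224, 224, 3), dtype=np.uint8)   # two camera views
action = policy.act(frames, "pick up the cup and place it on the tray")
print(action.shape)   # (24,) -- one target per controlled joint
```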
Source: CXOToday
Nvidia introduced Isaac Lab-Arena, an open-source simulation framework hosted on GitHub that consolidates resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin [1]. The platform addresses a critical challenge: validating increasingly complex robot capabilities in physical environments can be costly, slow, and risky [1]. Isaac Lab-Arena was designed in collaboration with Lightwheel, an embodied AI infrastructure company, to provide large-scale robot policy evaluation and benchmarking [3].
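Conceptually, the large-scale evaluation that Isaac Lab-Arena standardizes reduces to rolling a policy through many randomized simulated episodes and aggregating success rates. The sketch below illustrates that loop with toy stand-in dynamics; it does not use the Isaac Lab-Arena API.

```python
# Generic illustration of large-scale policy evaluation in simulation.
# The "environment" dynamics and the policy below are toy stand-ins,
# not the Isaac Lab-Arena API.
import random

def run_episode(policy, seed: int, max_steps: int = 200) -> bool:
    """Roll out one randomized simulated episode and report task success."""
    rng = random.Random(seed)
    state = rng.random()                   # randomized initial condition
    for _ in range(max_steps):
        action = policy(state)
        state = min(1.0, state + action)   # toy dynamics
        if state >= 0.99:                  # toy success criterion
            return True
    return False

def evaluate(policy, num_episodes: int = 1000) -> float:
    successes = sum(run_episode(policy, seed=i) for i in range(num_episodes))
    return successes / num_episodes

# A trivial stand-in policy: always nudge the state toward the goal.
print(f"success rate: {evaluate(lambda state: 0.01):.1%}")
```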
Nvidia OSMO serves as an open-source command center that integrates the entire workflow from data generation through robot training across both desktop and cloud environments [1]. This cloud-native orchestration framework lets developers define and run workflows such as synthetic data generation, model training, and software-in-the-loop testing across different compute environments, speeding up development cycles [3]. OSMO is already used by robot developers like Hexagon Robotics and is integrated into the Microsoft Azure Robotics Accelerator toolchain [3].
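The OSMO description boils down to declaring pipeline stages (synthetic data generation, training, software-in-the-loop testing) and mapping each to a compute target from a single command center. The following is a hypothetical illustration of that idea, not OSMO's actual workflow schema.

```python
# Hypothetical illustration of an orchestrated robot-development workflow:
# each stage declares where it should run. This is NOT OSMO's real schema.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    target: str      # e.g. "workstation" or "cloud-gpu-cluster"
    command: str

pipeline = [
    Stage("synthetic_data", "cloud-gpu-cluster", "generate_scenes --count 100000"),
    Stage("train_policy",   "cloud-gpu-cluster", "train --epochs 50"),
    Stage("sil_testing",    "workstation",       "run_sim_tests --suite smoke"),
]

for stage in pipeline:
    # A real orchestrator would submit each stage to the named environment;
    # here we only print the execution plan.
    print(f"[{stage.target}] {stage.name}: {stage.command}")
```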
Powering this ecosystem is the Blackwell-powered Jetson T4000 module, the newest member of the Thor family [1]. Nvidia positions it as a cost-effective on-device compute upgrade that delivers 1,200 teraflops of FP4 AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts [1]. The module delivers four times the performance and energy efficiency of the previous generation [2].
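As a back-of-the-envelope check on the published figures, 1,200 FP4 teraflops inside a power envelope of up to roughly 70 watts works out to about 17 teraflops per watt at peak; the short calculation below simply restates that arithmetic.

```python
# Back-of-the-envelope efficiency from the published figures:
# 1,200 FP4 TFLOPS within a power envelope of up to roughly 70 W.
peak_tflops = 1200
power_envelope_w = 70
print(f"~{peak_tflops / power_envelope_w:.0f} TFLOPS per watt at the 70 W envelope")
# -> ~17 TFLOPS per watt (peak figures; sustained workloads will differ)
```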
Nvidia deepened its partnership with Hugging Face to make robot training accessible to developers without expensive hardware or specialized knowledge [1]. The collaboration integrates Nvidia's Isaac and GR00T technologies into Hugging Face's LeRobot framework, connecting Nvidia's 2 million robotics developers with Hugging Face's 13 million AI builders [1]. GR00T N1.6 and Isaac Lab-Arena are now available in the LeRobot library [2].
The collaboration extends to hardware compatibility: Hugging Face's open-source Reachy 2 humanoid now works directly with Nvidia's Jetson Thor chip, letting developers experiment with different AI models without being locked into proprietary systems [1]. Hugging Face's Reachy Mini tabletop robot is fully interoperable with Nvidia DGX Spark [2].
Leading robotics companies unveiled new robots and autonomous machines built using Nvidia technologies at CES [2]. These robots assist with diverse tasks across industries: Richtech Robotics launched Dex, a humanoid robot for industrial environments, while LG Electronics unveiled a new home robot for indoor household tasks [2].
Salesforce is using Agentforce, Cosmos Reason, and the Nvidia Blueprint for video search and summarization to analyze footage captured by its robots, reducing incident resolution times by 2x [3]. In healthcare, LEM Surgical is using Nvidia Isaac for Healthcare and Cosmos Transfer to train the autonomous arms of its Dynamis surgical robot, powered by Nvidia Jetson AGX Thor and Holoscan [3].
Source: NVIDIA
While Nvidia has laid groundwork over the past decade developing an ecosystem of AI software, hardware, and simulation systems for robots and autonomous vehicles, significant hurdles remain [4]. Jensen Huang acknowledged the gap in his keynote: "The challenge is clear. The physical world is diverse and unpredictable" [4].
Nvidia is not building robots or autonomous vehicles itself; its strategy remains focused on supplying the picks and shovels [4]. This means automakers and robotics companies must still translate these tools into systems that can safely operate in real-world conditions while navigating regulatory scrutiny, public acceptance, and commercial viability [4]. That work, which spans hardware, software, sensors, safety systems, and real-world constraints, remains enormously difficult, slow, and capital-intensive [4]. Whether faster progress in AI models alone can overcome those hurdles remains an open question: the ChatGPT moment wasn't just about the model but about the user experience and capturing lightning in a bottle [4].