10 Sources
[1]
Nvidia wants to be the Android of generalist robotics | TechCrunch
Nvidia released a new stack of robot foundation models, simulation tools, and edge hardware at CES 2026, moves that signal the company's ambition to become the default platform for generalist robotics, much as Android became the operating system for smartphones. Nvidia's move into robotics reflects a broader industry shift as AI moves off the cloud and into machines that can learn how to think in the physical world, enabled by cheaper sensors, advanced simulation, and AI models that increasingly can generalize across tasks. Nvidia revealed details on Monday about its full-stack ecosystem for physical AI, including new open foundation models that allow robots to reason, plan, and adapt across many tasks and diverse environments, moving beyond narrow task-specific bots, all of which are available on Hugging Face. Those models include: Cosmos Transfer 2.5 and Cosmos Predict 2.5, two world models for synthetic data generation and robot policy evaluation in simulation; Cosmos Reason 2, a reasoning vision language model (VLM) that allows AI systems to see, understand, and act in the physical world; and Isaac GR00T N1.6, its next-gen vision language action (VLA) model purpose-built for humanoid robots. GR00T relies on Cosmos Reason as its brain, and it unlocks whole-body control for humanoids so they can move and handle objects simultaneously. Nvidia also introduced Isaac Lab-Arena at CES, an open-source simulation framework hosted on GitHub that serves as another component of the company's physical AI platform, enabling safe virtual testing of robotic capabilities. The platform promises to address a critical industry challenge: as robots learn increasingly complex tasks, from precise object handling to cable installation, validating these abilities in physical environments can be costly, slow, and risky. Isaac Lab-Arena tackles this by consolidating resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one. Supporting the ecosystem is Nvidia OSMO, an open-source command center that serves as connective infrastructure that integrates the entire workflow from data generation through training across both desktop and cloud environments. And to help power it all, there's the new Blackwell-powered Jetson T4000 graphics card, the newest member of the Thor family. Nvidia is pitching it as a cost-effective on-device compute upgrade that delivers 1200 teraflops of AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts. Nvidia is also deepening its partnership with Hugging Face to let more people experiment with robot training without needing expensive hardware or specialized knowledge. The collaboration integrates Nvidia's Isaac and GR00T technologies into Hugging Face's LeRobot framework, connecting Nvidia's 2 million robotics developers with Hugging Face's 13 million AI builders. The developer platform's open-source Reachy 2 humanoid now works directly with Nvidia's Jetson Thor chip, letting developers experiment with different AI models without being locked into proprietary systems. The bigger picture here is that Nvidia is trying to make robotics development more accessible, and it wants to be the underlying hardware and software vendor powering it, much like Android is the default for smartphone makers. There are early signs that Nvidia's strategy is working. Robotics is the fastest growing category on Hugging Face, with Nvidia's models leading downloads. 
Meanwhile robotics companies from Boston Dynamics and Caterpillar to Franka Robots and NEURA Robotics are already using Nvidia's tech.
[2]
Nvidia's physical AI models clear the way for next-gen robots - here's what's new
Nvidia releases new physical AI models at CES. Partners unveil next-generation robots. The robots span a wide range of use cases and industries. As AI models continue to gain popularity, there is an increased focus on developing hardware that bridges the gap between a device's screen and the world around us. As a result, physical AI is an emerging theme at CES, and Nvidia unveiled models to accelerate the development of these robots. "The ChatGPT moment for robotics is here. Breakthroughs in physical AI -- models that understand the real world, reason and plan actions -- are unlocking entirely new applications," said Jensen Huang, founder and CEO of Nvidia. Also: CES 2026 live updates To drive that momentum forward, Nvidia unveiled new open Nvidia Cosmos and GR00T models during its Las Vegas keynote event on Monday. The company stated that these models are designed to enable developers to allocate less time and resources to pretraining and more to building next-generation robots. In particular, the releases include Nvidia Cosmos Transfer 2.5 and Nvidia Cosmos Predict 2.5, open, fully customizable world models that also understand the real world around you, including its physics and spatial properties. This is useful for creating synthetic data and simulations that emulate realistic life scenarios for evaluating robots' performance, necessary because testing these physical AI developments, such as autonomous vehicles, is often too risky to conduct in real life. Also: How DeepSeek's new way to train advanced AI models could disrupt everything - again Nvidia Cosmos Reason 2 is an open-reasoning vision language model (VLM) that allows intelligent machines "to see, understand, and act in the physical world like humans," according to Nvidia. Moreover, using Nvidia Cosmos Reason 2, physical AI can make decisions as humans do, using reason, prior knowledge, understanding of physics, and more. Lastly, Nvidia Isaac GR00T N1.6 is an open-reasoning vision language action (VLA) model specifically designed for humanoid robots, enabling full-body control and leveraging Nvidia Cosmos Reason for the additional benefits discussed above. The new models are all available on Hugging Face. Benchmarking and simulations are essential for ensuring the safe development of autonomous systems, but they are often one of the most challenging components of robotics due to the difficulty in creating these simulations. To help bridge this gap, Nvidia released new open-source frameworks on GitHub, including the Nvidia Isaac Lab-Arena and Nvidia OSMO. The Nvidia Isaac Lab-Arena is an open-source framework designed for large-scale robot policy evaluation and benchmarking in simulation, according to the blog post. It was designed in close collaboration with Lightwheel, an embodied AI infrastructure company, and connects to industry-leading benchmarks. Also: Are our homes ready for a real-life Rosie the Robot? SwitchBot thinks so Nvidia Osmo is designed to help developers with the robot training workflow. In particular, it can help speed up the process by allowing developers to run workflows, such as model training, across different compute environments from one central command center. Nvidia said it is working with Hugging Face to integrate open-source Isaac and GR00T technologies into the LeRobot open-source robotics framework, making it easier for developers of all experience levels to access Nvidia technologies in robot development. GR00T N1.6 and Isaac Lab-Arena are now available in the LeRobot library. 
Part of the collaboration also makes Hugging Face's open-source Reachy 2 humanoid robot work seamlessly with Nvidia's Jetson Thor hardware. Similarly, Hugging Face's open-source Reachy Mini tabletop robot is fully interoperable with Nvidia DGX Spark. Leading robotics companies, including Boston Dynamics, Richtech, Humanoid, LG Electronics, and Neura Robotics, have all debuted new robots and autonomous machines built using Nvidia technologies, integrating the company's Jetson Thor robotics platform. Also: As Meta fades in open-source AI, Nvidia senses its chance to lead These robots all assist with different tasks. For example, Richtech Robotics is launching Dex, a humanoid robot for industrial environments, while LG Electronics unveiled a new home robot for indoor household tasks. The CES announcements include a new Nvidia Blackwell-powered Jetson T4000 module, which the company claimed delivers four times the performance of the previous generation.
[3]
Humanoid robots take over CES in Las Vegas as tech industry touts future of AI
Nvidia, which last year became the world's most valuable company, announced a new version of its vision language models called Gr00t for humanoid robots that can turn sensor inputs into robot body control, as well as a version of its Cosmos model for robot reasoning and planning. Huang said he expects to see robots with some human-level capabilities this year. "I know how fast the technology is moving," he said. His company highlighted partnerships with the likes of Boston Dynamics, Caterpillar and LG. Science fiction writers have dreamed of this moment for decades. "The Jetsons" had Rosey, a robot maid. In "Star Wars," C-3PO helped Luke Skywalker save the galaxy. However, in real life, humanoids have so far been unable to demonstrate the intelligence or flexibility that would make them truly useful, a problem that's long eluded engineers. Then came generative AI with the launch of OpenAI's ChatGPT in late 2022. The same deep learning technology that underpins ChatGPT can be used to teach the robots how to walk, use a hand or fold laundry. Many in the industry see self-driving cars as the first major commercial manifestation of physical AI.
[4]
NVIDIA Releases New Physical AI Models as Global Partners Unveil Next-Generation Robots
* From mobile manipulators to humanoids, Boston Dynamics, Caterpillar, Franka Robots, Humanoid, LG Electronics and NEURA Robotics debut new robots and autonomous machines built on NVIDIA technologies. * NVIDIA releases new NVIDIA Cosmos and GR00T open models and data for robot learning and reasoning, Isaac Lab-Arena for robot evaluation and the OSMO edge-to-cloud compute framework to simplify robot training workflows. * NVIDIA and Hugging Face integrate NVIDIA Isaac open models and libraries into LeRobot to accelerate the open-source robotics community. * The NVIDIA Blackwell architecture-powered Jetson T4000 module is now available, delivering 4x greater energy efficiency and AI compute. CES -- NVIDIA today announced new open models, frameworks and AI infrastructure for physical AI, and unveiled robots for every industry from global partners. The new NVIDIA technologies speed workflows across the entire robot development lifecycle to accelerate the next wave of robotics, including building generalist-specialist robots that can quickly learn many tasks. Global industry leaders including Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics and NEURA Robotics are using the NVIDIA robotics stack to debut new AI-driven robots. "The ChatGPT moment for robotics is here. Breakthroughs in physical AI -- models that understand the real world, reason and plan actions -- are unlocking entirely new applications," said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA's full stack of Jetson robotics processors, CUDA, Omniverse and open physical AI models empowers our global ecosystem of partners to transform industries with AI-driven robotics." New Open Models Advance Robot Learning and Reasoning Turning today's costly, single-task and hard-to-program machines into reasoning generalist-specialist robots requires enormous capital and expertise to build foundation models. NVIDIA is building open models that allow developers to bypass resource-intensive pretraining and focus on creating the next generation of AI robots and autonomous machines. These new models, all available on Hugging Face, include: * NVIDIA Cosmos™ Transfer 2.5 and NVIDIA Cosmos Predict 2.5 -- open, fully customizable world models that enable physically based synthetic data generation and robot policy evaluation in simulation for physical AI. * NVIDIA Cosmos Reason 2, an open reasoning vision language model (VLM) that enables intelligent machines to see, understand and act in the physical world like humans. * NVIDIA Isaac™ GR00T N1.6, an open reasoning vision language action (VLA) model, purpose-built for humanoid robots, that unlocks full body control and uses NVIDIA Cosmos Reason for better reasoning and contextual understanding. Franka Robotics, NEURA Robotics and Humanoid are using GR00T-enabled workflows to simulate, train and validate new behaviors for robots. Salesforce is using Agentforce, Cosmos Reason and the NVIDIA Blueprint for video search and summarization to analyze video footage captured by its robots and reduce incident resolution times by 2x. LEM Surgical is using NVIDIA Isaac for Healthcare and Cosmos Transfer to train the autonomous arms of its Dynamis surgical robot, powered by NVIDIA Jetson AGX Thor™ and Holoscan. XRlabs is using Thor and Isaac for Healthcare to enable surgical scopes, starting with exoscopes, to guide surgeons with real-time AI analysis. 
New Open-Source Simulation and Compute Frameworks for Robotics Development Scalable simulation is essential for training and evaluating robots, but current workflows remain fragmented and difficult to manage. Benchmarking is often manual and hard to scale, while end-to-end pipelines require complex orchestration across disparate compute resources. NVIDIA today released new open-source frameworks on GitHub that simplify these complex pipelines and accelerate the transition from research to real-world use cases. NVIDIA Isaac Lab-Arena is an open-source framework, available on GitHub, that provides a collaborative system for large-scale robot policy evaluation and benchmarking in simulation, with the evaluation and task layers designed in close collaboration with Lightwheel. Isaac Lab-Arena connects to industry-leading benchmarks like Libero and Robocasa, standardizing testing and ensuring robot skills are robust and reliable before deployment to physical hardware. NVIDIA OSMO is a cloud-native orchestration framework that unifies robotic development into a single, easy-to-use command center. OSMO lets developers define and run workflows such as synthetic data generation, model training and software-in-the-loop testing across different compute environments -- from workstations to mixed cloud instances -- speeding up development cycles. OSMO is now available and used by robot developers such as Hexagon Robotics, and integrated into the Microsoft Azure Robotics Accelerator toolchain. NVIDIA and Hugging Face Accelerate Open-Source Physical AI Development Robotics is now the fastest-growing category on Hugging Face, where NVIDIA's open models and datasets lead downloads among a surging open-source community. To bolster this community, NVIDIA is working with Hugging Face to integrate open-source Isaac and GR00T technologies into the leading LeRobot open-source robotics framework, providing streamlined access to integrated software and hardware tools that accelerate end-to-end development. This collaboration unites NVIDIA's 2 million robotics developers with Hugging Face's global community of 13 million AI builders. GR00T N models and Isaac Lab-Arena are now available in the LeRobot library for easy fine-tuning and evaluation. Hugging Face's open-source Reachy 2 humanoid will be fully interoperable with the NVIDIA Jetson Thor™ robotics computer, letting developers run any VLA, including GR00T N1.6. Hugging Face's open-source Reachy Mini tabletop robot is also fully interoperable with NVIDIA DGX Spark™ to build custom experiences with NVIDIA large language models, and voice and computer vision open models that run locally. Humanoid Robot Developers Adopt NVIDIA Jetson Thor NVIDIA Jetson Thor meets the massive computing requirements for humanoid robots with reasoning. At CES, humanoid developers are showcasing new state-of-the-art robots now integrated with Jetson Thor. NEURA Robotics is launching a Porsche-designed Gen 3 humanoid, as well as a smaller-sized humanoid optimized for dexterous control. Richtech Robotics is launching Dex, a mobile humanoid for sophisticated manipulation and navigation across complex industrial environments. AGIBOT is introducing humanoids for both industrial and consumer sectors, and Genie Sim 3.0, a robot simulation platform integrated with Isaac Sim. LG Electronics unveiled a new home robot built to perform a wide range of indoor household tasks. 
Boston Dynamics, Humanoid and RLWRLD have all integrated Jetson Thor into their existing humanoids to enhance their navigation and manipulation capabilities. Bringing Physical AI to the Industrial Edge Providing a cost-effective, high-performance upgrade path for NVIDIA Jetson Orin™ customers, the new NVIDIA Jetson™ T4000 module brings the NVIDIA Blackwell architecture to autonomous machines and general robotics for $1,999 at 1,000-unit volume. It delivers 4x the performance of the previous generation with 1,200 FP4 TFLOPS and 64GB of memory, all within a configurable 70-watt envelope ideal for energy-constrained autonomy. NVIDIA IGX Thor, which will be available later this month, extends robotics to the industrial edge, offering high-performance AI computing with enterprise software support and functional safety. Archer is using IGX Thor to bring AI to aviation, advancing critical capabilities in aircraft safety, airspace integration and autonomy-ready systems. Partners including AAEON, Advantech, ADLINK, Aetina, AVerMedia, Connect Tech, EverFocus, ForeCR, Lanner, RealTimes, Syslogic, Vecow and YUAN offer Thor-powered systems equipped for edge AI, robotics and embedded applications. In addition, Caterpillar is expanding its collaboration with NVIDIA to bring advanced AI and autonomy to equipment and job sites in construction and mining. Caterpillar CEO Joe Creed will share details alongside NVIDIA Vice President of Robotics and Edge AI Deepu Talla during a CES keynote on Wednesday, Jan. 7. Learn more by watching NVIDIA Live at CES. Featured image courtesy of Caterpillar (top left), LEM Surgical (top right), AGIBOT (bottom left) and Franka Robotics (bottom right).
[5]
A year ago, Nvidia's Jensen Huang said the 'ChatGPT moment' for robotics was around the corner. Now he says it's 'nearly here.' But is it? | Fortune
Nvidia-watchers had plenty to celebrate at CES this week, with news that the company's latest GPU, Vera Rubin, is now fully in production. Those powerful AI chips -- the picks and shovels of the AI boom -- are, after all, what helped make Nvidia the world's most valuable company. But in his keynote address, CEO Jensen Huang once again made clear that Nvidia does not see itself as simply a chip company. It is also a software company, with its reach extending across nearly every layer of the AI stack -- and with a major bet on physical AI: AI systems that operate in the real world, including robotics and self-driving cars. In a press release touting Nvidia's CES announcements, a quote attributed to Huang declared that "the ChatGPT moment for robotics is here." Breakthroughs in physical AI -- models that understand the real world, reason, and plan actions -- "are unlocking entirely new applications," he said. In the keynote itself, however, Huang was more measured, saying the ChatGPT moment for physical AI is "nearly here." It might sound like splitting hairs, but the distinction matters -- especially given what Huang said at last year's CES, when he introduced Nvidia's Cosmos world platform and described robotics' "ChatGPT moment" as merely "around the corner." So has that moment really arrived, or is it still stubbornly out of reach? Huang himself seemed to acknowledge the gap. "The challenge is clear," he said in yesterday's keynote. "The physical world is diverse and unpredictable." Nvidia is also no flash in the pan when it comes to physical AI. Over the past decade, the company has laid the groundwork by developing an ecosystem of AI software, hardware, and simulation systems for robots and autonomous vehicles. But it has never been about building its own robots or AVs. As Rev Lebaredian, Nvidia's vice president of simulation technology, told Fortune last year, the strategy is still about supplying the picks and shovels. There's no doubt that Nvidia has progressed in that regard over the past year. On the self-driving front, today it unveiled the Alpamayo family of open AI models, simulation tools and datasets meant to help AVs safely operate across a range of rare, complex driving scenarios, which are considered the some of the toughest challenges for autonomous systems to safely master. Nvidia also released new Cosmos and GR00T open models and data for robot learning and reasoning, and touted companies including Boston Dynamics, Caterpillar, Franka Robots, Humanoid, LG Electronics and NEURA Robotics, which are debuting new robots and autonomous machines built on Nvidia technologies. Even with increasingly capable models, simulation tools, and computing platforms, Nvidia is not building the self-driving cars or the robots themselves. Automakers still have to turn those tools into systems that can safely operate on public roads -- navigating regulatory scrutiny, real-world driving conditions, and public acceptance. Robotics companies, meanwhile, must translate AI into machines that can reliably manipulate the physical world, at scale, and at a cost that makes commercial sense. That work -- integrating hardware, software, sensors, safety systems, and real-world constraints -- remains enormously difficult, slow, and capital-intensive. And it's far from clear that faster progress in AI alone is enough to overcome those hurdles. After all, the ChatGPT moment wasn't just about the model under the hood. Those had existed for several years. 
It was about the user experience and a company that was able to capture lightning in a bottle. Nvidia has captured lightning in a bottle before -- GPUs turned out to be the unlikely but perfect engine for modern AI. Whether that kind of luck can be repeated in physical AI, a far messier and less standardized domain, is still an open question.
[6]
Nvidia introduces open-source AI models for humanoid robots, autonomous vehicles - SiliconANGLE
Nvidia introduces open-source AI models for humanoid robots, autonomous vehicles Nvidia Corp. has released more than a half-dozen artificial intelligence models designed for autonomous systems such as self-driving cars. The algorithms, which are all available under an open-source license, made their debut today at the CES electronics show in Las Vegas. They're rolling out alongside several development tools and a computing module for robots called the Jetson T4000. Nvidia's new lineup of open-source AI models is headlined by Alpamayo 1 (pictured), a so-called VLA, or vision-language-action, algorithm with 10 billion parameters. It can use footage from an autonomous vehicle's cameras to generate driving trajectories. Alpamayo 1 has a chain-of-thought mechanism, which means that it breaks down the navigation tasks it receives into smaller steps. According to Nvidia, that approach has two benefits. One is that Alpamayo 1 can explain each step of its reasoning workflow, which makes it easier to evaluate the soundness of navigation decisions. The chain-of-reasoning mechanism also helps the model tackle tricky driving situations. It's not designed to run in autonomous vehicles. Instead, Nvidia sees developers using it to train such vehicles' navigation models. According to the company, the algorithm lends itself to tasks such as evaluating the reliability of autonomous driving software. In the future, Nvidia plans to release larger Alpamayo models that will support a broader range of reasoning use cases. "Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions -- it's the foundation for safe, scalable autonomy," said Nvidia Chief Executive Officer Jensen Huang. Alpamayo 1 is available alongside three additions to Nvidia's existing Cosmos series of world foundation models. Like Alpamayo 1, the new models can be used to develop software for self-driving cars. They can also power other types of autonomous systems including industrial robots. The first two models, Cosmos Transfer 2.5 and Cosmos Predict 2.5, are designed to generate training data for robots' AI software. That training data takes the form of synthetic video footage. Cosmos Transfer 2.5 can, for example, generate a clip that depicts an industrial robot in a car factory. Cosmos Predict 2.5 offers similar features along with the ability to simulate how an object might behave in the future. A user could upload a photo of a bus and ask the model to simulate where the bus will be five seconds into the future. The third new addition to the Cosmos model series is called Cosmos Reason 2.0. According to Nvidia, it can equip a robot with the ability to analyze footage of its environment and automatically carry out actions. Cosmos Reason powers Isaac GR00T N1.6, another new model that Nvidia debuted today. Isaac GR00T N1.6 is a VLA model like Alpamayo 1, but it's optimized to power humanoid robots rather than autonomous vehicles. Nvidia's researchers trained the algorithm on a dataset comprised of sensory measurements from bimanual, semi-humanoid and humanoid robots. "Salesforce, Milestone, Hitachi, Uber, VAST Data and Encord are using Cosmos Reason for traffic and workplace productivity AI agents," Kari Briski, the vice president of generative AI software at Nvidia, wrote in a blog post. 
"Franka Robotics, Humanoid and NEURA Robotics are using Isaac GR00T to simulate, train and validate new behaviors for robots before scaling to production." Nvidia's robotics-focused algorithms are rolling out alongside a pair of more general-purpose model families called Nemotron Speech and Nemotron RAG. The former series is headlined by a speech recognition model that the company says can provide 10 times the performance of comparable alternatives. Nemotron RAG includes embedding and reranking models. Embedding models turn data into mathematical representations that AI applications understand. Reranking is one of the steps involved in the RAG, or retrieval-augmented generation, workflow. After an AI application uses RAG to retrieve the files needed to answer a prompt, a reranking model highlights the most relevant files. Nvidia's AI models are joined by a trio of development tools that are likewise available under an open-source license. The first tool, AlpaSim, enables developers to create simulated environments in which autonomous driving models can be trained. The software makes it possible to customize details such as traffic conditions and a simulated vehicle's sensor array. For added measure, developers can inject sensor noise to evaluate how well their AI models filter erroneous data. Nvidia is also rolling out a second simulation framework called Isaac Lab-Arena. It's designed to ease the task of training AI models for robots. According to the company, Isaac Lab-Arena enables developers to measure AI models' performance using popular third-party benchmarks such as Robocasa, which is mainly used to evaluate household robots. Software teams can use Nvidia's third new tool, OSMO, to manage their simulation workloads. It's an orchestrator that also lends itself to managing other AI development workflows such as synthetic data generation pipelines and model training jobs. Nvidia says that OSMO can orchestrate workloads across public clouds and developer workstations. Manufacturers can use a new Nvidia computing module called the Jetson Jetson T4000 to power their robots. It's based on the company's Blackwell graphics processing unit architecture. An industrial robot maker, for example, could use the module to run its systems' AI-powered factory floor navigation software.
[7]
Robots with human-type capabilities are coming this year, says Nvidia CEO
"This year," he replied, when I asked him when robots were going to have human-level capabilities. "You're going to see some pretty amazing things," he told me. The CEO of Nvidia - possibly the most powerful figure in global technology - is not alone. There's a widespread feeling in tech that artificial intelligence is ready to escape the screen and enter the physical world. At the Consumer Electronics Show (CES), the world's biggest tech conference, there are robots everywhere you turn. Vacuum cleaner robots. Lawnmower robots. Farming robots. And lots and lots of humanoid robots - creepy or cool, depending on how you see these things (sometimes - often - both). Being the big trend of CES can be a curse. Many technologies hyped to the sky at the annual tech fair have failed to live up to their potential. And it is hard to match the experience of interacting with these robots to the unbridled optimism of figures like Huang. Many of the robots at CES are so basic they're not really robots at all, at least in the sense of being autonomous - as soon as anything gets even a little complicated, someone with a remote control rushes across to take over. That makes them little better than an extremely expensive toy. But the excitement about robotics isn't just hype because there is already a powerful, real example of a robot that has passed what you might call the physical Turing Test - that is, the moment when you can't tell the difference between human and machine. Self-driving cars are roaming the streets of several cities in the US, including the ones around CES, and will be brought to London in 2026. Roads might be a relatively controlled environment, but they are still extremely chaotic, and the risk of getting something wrong is enormous. If robots can handle that situation, the thinking goes, they should be ready for pretty much anything. A lot of the excitement in tech about robotics stems from that example, and from the growth in the capacities of generative AI. 'We have the brain to put inside the robots' If you can get the AI model to run on the device, they reckon you can effectively give the robot a generative AI brain. "We finally have the core ingredient to build the missing piece of robots, which was the robot brain," says Rev Lebaredian, vice president of Omniverse and Simulation Technology at Nvidia. "Once we could do that, then it started making sense to build the robot bodies, because we have the brain to put inside them." The movement of those robot bodies is improving very quickly too, as the techniques which enabled AI to read, write and talk seem to transfer over to physical interaction. It seems to many as if the core technological challenges of robotics have at last been cracked. "I think in general, everybody who is in this industry, who is on the frontier of this research, believes we now have the basic ingredients to build everything we need for the kinds of robots we've been imagining," says Lebaredian. 'Robots will create jobs' There's a common consensus that the home is a step too far for robots at the moment. It's too messy, too potentially dangerous and consumers are too price-conscious. Instead, the focus for companies like Boston Dynamics - arguably the world's leading robotics company - is industry. "We think you need to start on industry first," says Robert Playter, CEO of Boston Dynamics. "We think it's going to be 2028, 2030 when we have robots deployed in factories and probably be five years after that before they're really affordable and in the home." 
Factories are a bit like roads - mostly controlled environments where the goals are very clear. So you can see the appeal for tech companies. Of course, the people who work in those factories will worry about their jobs. I asked Huang if that concerned him. "Having robots will create jobs," he replied. "We have a labour shortage in the world, not by one or 2,000 people, by tens of millions of people. And it's going to get worse because of population decline. "And so, we need to have more, if you will, AI immigrants to help us in the manufacturing floors and do the type of work that maybe we decided not to do anymore." "What we've seen when we actually deploy robots with our customers - for example, we have a robot that unloads trailers. "People are happy to get out of that job, and they'll go do some other job in the warehouse. They'll operate the robot. The people who were unloading the trailer now operate the robot. "I think there's a huge opportunity to really let the robots do the truly dirty and dangerous stuff." Of course, no one can say for sure. But if robots improve in the way the tech industry expects, we might get to find out fairly soon.
[8]
Jensen Huang Just Made a Bold Prediction About Humanoid Robots -- and Says It Will Happen 'This Year'
Walk the floor of CES and you won't have to search too hard to find a humanoid robot. Walk a bit further and you're likely to see one that's not functioning exactly as its creators intended. But that's not dampening Nvidia CEO Jensen Huang's enthusiasm one bit. In a Q&A session on Tuesday at this year's Consumer Electronics Show, Huang was asked when he expected these robots would have human-level capabilities. His response was quick and confident: "This year." It's the sort of answer you might expect from someone in Huang's position. As with artificial intelligence, the success of the robotics industry is a success for Nvidia. But with demonstrations going awry, even in the next room over, it's a bold assertion. Not only does Huang believe humanoid robots will boast human-level capabilities this year, he says they'll also create jobs, rather than reduce them. "Having robots will create jobs," Huang said. In fact, Huang says, due to population decline, "We will need more 'AI immigrants' to ... do the kind of work we decided not to do anymore." There are some hurdles to clear first, he admits. The technology is moving forward at an extremely fast pace, but there are some areas that still need substantial improvement. Leading that list? Fine motor skills. "Fine motor skills [are] extremely hard -- and the reason for that is building a hand is hard," he says. "The motor technology is hard. We don't just use our eyes. We also use touch. And the robot only has eyes, so it needs to have touch." Locomotion is a challenge as well. Today's robots, for the most part, move like ... well, robots. Huang, though, said there has been incredible progress in that area and he believes it will be the first significant challenge to be solved. After that, he says, gross motor articulation and grasping will be conquered, then fine motor skills. Once those skills are mastered, robots could be able to begin taking some positions that they're unable to fill today. That, understandably, has some people worried about their livelihood. Huang says he thinks the jobs robots will take, however, will be the ones that people don't want now. "The robotic revolution will replace the labor loss and therefore is going to drive up the economy -- and when the economy increases, we hire more people," he says. "There are a lot of jobs that won't be replaced by AI for a very long time. We just need the economy to do well." Not everyone is quite as bullish on robots -- a physical extension of AI -- entering the workforce as Huang. A report from Challenger, Grey and Christmas released last October found that AI was responsible for over 17,000 lost jobs in the first 10 months of 2025. AI and robotics, the firm said, are "not only costing jobs, but also making it difficult to land positions, particularly for entry-level engineers." Nobel laureate and the so-called "Godfather of AI" Geoffrey Hinton, meanwhile, said in September that AI will indeed drive a "huge rise in profits," but that will come at the cost of creating "massive unemployment." Go inside one interesting founder-led company each day to find out how its strategy works, and what risk factors it faces. Sign up for 1 Smart Business Story from Inc. on Beehiiv.
[9]
Nvidia's Jensen Huang Says Robots Have 1 Key Weakness, but There Might Be a Solution
Nvidia has released a slew of new AI models and technology, all designed to help businesses create the next generation of robots. During his annual keynote presentation at the Consumer Electronics Show in Las Vegas, Nvidia CEO Jensen Huang explained that the roadblock keeping AI-powered robots from being truly helpful is that they don't understand some of the fundamental things about existing as a physical being on Earth. Children instinctively learn to understand phenomena like gravity, object permanence, inertia, friction, and cause and effect, said Huang, but a typical AI model has no way of grokking "the common sense of the physical world." Sporting a shimmering crocodile-patterned leather jacket, Huang described Nvidia's solution: simulations. Developers can drop digital twins of their robots into Omniverse, Nvidia's platform for generating physically realistic 3D sandboxes, and use the company's Cosmos family of AI models to run thousands upon thousands of simulations, creating the training data that will allow the robot to function in the real world. Imagine you wanted to create a robot arm to help out in the kitchen. You could generate a virtual kitchen in Omniverse, drop a digital twin of your robot into the simulation, and direct it to pick up and put down different types of fruit. The "synthetic data" generated through the simulation is then used to train Nvidia's family of robotics models, enabling the actual robot to replicate its virtual counterpart's actions.
[10]
CES 2026: Industry giants double down on physical AI with foundation models, processor for robots
Nvidia CEO Jensen Huang kickstarted CES 2026 with a keynote address where he announced new open foundation models that will allow robots to reason, plan and adapt to new tasks and environments. The foundation models are part of a full-stack ecosystem Nvidia has developed for physical AI and has made available to physical AI developers via GitHub and Hugging Face. "The ChatGPT moment for physical AI is here -- when machines begin to understand, reason and act in the real world," said Huang. The new foundation models include two world models, Cosmos Transfer 2.5 and Cosmos Predict 2.5, that can generate high-fidelity synthetic data and facilitate simulation testing. They also include Cosmos Reason 2, a vision language model (VLM) capable of real-world situational awareness and reasoning. Then there is Isaac GR00T N1.6, a vision language action (VLA) model that uses Cosmos Reason to enable whole-body coordination, allowing humanoid robots to handle objects. VLA models allow robots to analyze their environment and decide the most appropriate action. Nvidia also announced Isaac Lab-Arena, an open-source simulation framework hosted on GitHub for developers looking for a virtual testing platform for robots. Huang also showed a video of how robots are being trained in photorealistic and simulated worlds on Isaac Sim and Isaac Lab. Nvidia wasn't the only company going all-in on physical AI, as the demand for AI grows beyond chatbots and productivity apps into the physical world. Qualcomm also announced a full-stack architecture for physical AI, including the new Dragonwing IQ10 processor for humanoid robots. The architecture supports advanced perception and motion planning with end-to-end AI models such as VLAs and VLMs. Advanced perception allows the robot to understand depth, texture, and the relation between objects, while motion planning with end-to-end AI models allows the robot to learn the entire sequence of actions at once. Qualcomm claims that the new Dragonwing IQ10 processor will drive more real-world deployment of physical AI. The chipmaker is in talks with Kuka Robotics to build robotics solutions based on it. Further, Qualcomm claims that its architecture will enable robots to reason and adapt to complex environments in real time. It will also allow seamless scaling across diverse robotic platforms.
What is physical AI and how is it changing robotics
These announcements at CES signal a shift in expectations for robots. While fixed-function robots excel at mass production, they can't iterate instantly or adapt to new orders. For enterprises looking for operational agility, it makes more sense to invest in physical-AI-driven robots that can perceive, adapt in minutes and teach themselves to handle complex tasks. Physical AI refers to systems that use AI to understand, interact with and adapt to the physical world. It can be put into anything from humanoid robots, drones and self-driving cars to smart sensor systems. With physical AI, robots can learn by watching human movements and navigate more complex environments more naturally. For instance, when a robot with physical AI sees a part it has never encountered before, it can still figure out how to pick it up immediately. Earlier, robots had to be programmed with new instructions to handle new parts. They could only do fixed, repetitive tasks based on set instructions in a controlled environment. The robotics industry has given several demonstrations of this new generation of AI-powered robots in the last few months.
They were put on fashion runways alongside humans, where they showed the spatial awareness to navigate a slippery and crowded runway. In November, Chinese EV firm Xpeng unveiled a humanoid robot named Iron that walks naturally like a human using artificial muscles and a flexible spine. Iron also has an AI-powered brain that allows it to interact like a human. Many firms are already building and deploying these AI-driven intelligent robots in factories. US-based pharmaceutical company PharmAgri is planning to deploy up to 10,000 Tesla Optimus humanoid robots in its plants. BMW is working with Figure AI to deploy humanoid robots in its South Carolina factory. Google DeepMind has also partnered with Apptronik to train robots on Gemini Robotics VLA models, so they can understand human commands and act on them without being pre-programmed.
What is driving the need for physical AI
Experts believe that the need for physical AI is strategic as many countries face a shortage of skilled workers. According to Goldman Sachs, the humanoid robot market is expected to reach $38 billion by 2035. The manufacturing cost of humanoid robots has also dropped from a range of $50,000 (low-end models) to $250,000 (state-of-the-art models) down to a range of $30,000 to $150,000. According to EY, physical AI will become an integral part of daily life and work in the next five years. Adoption of intelligent robots will grow to handle service-related tasks in assembly lines, office cleaning, and waste management. In healthcare, they can sort laundry and help with maintenance and cleaning.
Nvidia announced a comprehensive robotics ecosystem at CES 2026, including new foundation models for robot learning and reasoning, open-source simulation frameworks, and Blackwell-powered edge hardware. The company aims to become the default platform for generalist robotics as major partners including Boston Dynamics, Caterpillar, and LG Electronics debut robots built on Nvidia technologies.
Nvidia unveiled an ambitious full-stack ecosystem for physical AI at CES 2026, signaling its intent to dominate the robotics industry much as Android dominates smartphones. The company released a suite of open foundation models designed to enable robots to reason, plan, and adapt across diverse tasks and environments, moving beyond narrow, single-purpose machines [1]. "The ChatGPT moment for robotics is here," declared Jensen Huang, founder and CEO of Nvidia, during the company's Las Vegas keynote [4]. All of the models are now available on Hugging Face, making them accessible to the company's 2 million robotics developers [1].
The new releases include Cosmos Transfer 2.5 and Cosmos Predict 2.5, world models that understand real-world physics and spatial properties, enabling synthetic data generation and robot policy evaluation in simulation [2]. Cosmos Reason 2, a reasoning vision language model (VLM), allows intelligent machines to see, understand, and act in the physical world like humans, making decisions using reason, prior knowledge, and an understanding of physics [2]. The centerpiece is Isaac GR00T N1.6, a next-generation vision language action (VLA) model purpose-built for humanoid robots that unlocks whole-body control, enabling them to move and handle objects simultaneously while leveraging Cosmos Reason as its brain [1].
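As an illustration of what a VLA policy does conceptually, the minimal Python sketch below shows a generic observation-to-action control loop; the class names, observation fields, and action dimensions are hypothetical stand-ins and do not represent the actual GR00T N1.6 or Cosmos Reason interfaces.

```python
# Illustrative sketch only: a generic vision-language-action (VLA) control loop.
# DummyVLAPolicy, the Observation fields, and the action shapes are hypothetical;
# they are not the GR00T N1.6 or Cosmos Reason APIs.
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    rgb: np.ndarray              # camera frame, e.g. (H, W, 3)
    instruction: str             # natural-language task description
    joint_positions: np.ndarray  # proprioceptive state

class DummyVLAPolicy:
    """Stand-in for a VLA model: maps (image, language, state) -> action chunk."""
    def __init__(self, action_dim: int = 29, chunk_len: int = 16):
        self.action_dim = action_dim
        self.chunk_len = chunk_len

    def act(self, obs: Observation) -> np.ndarray:
        # A real model would run vision-language reasoning here; we return zeros.
        return np.zeros((self.chunk_len, self.action_dim), dtype=np.float32)

def control_loop(policy: DummyVLAPolicy, steps: int = 3) -> None:
    obs = Observation(
        rgb=np.zeros((224, 224, 3), dtype=np.uint8),
        instruction="pick up the red cup and place it on the tray",
        joint_positions=np.zeros(29, dtype=np.float32),
    )
    for step in range(steps):
        action_chunk = policy.act(obs)  # plan a short horizon of actions
        for action in action_chunk:
            pass                        # send each action to the robot controller
        print(f"step {step}: executed chunk of shape {action_chunk.shape}")

if __name__ == "__main__":
    control_loop(DummyVLAPolicy())
```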
Addressing critical industry challenges around validation and testing, Nvidia introduced Isaac Lab-Arena, an open-source framework hosted on GitHub that provides a collaborative system for large-scale robot policy evaluation and benchmarking in simulation [4]. The platform consolidates resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one [1]. This matters because testing physical AI developments in real-world environments can be costly, slow, and risky, particularly for applications like autonomous vehicles [2].
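The sketch below illustrates the general idea of policy benchmarking in simulation: roll a policy out over many simulated episodes and report a success rate. The SimEnv and evaluate helpers are hypothetical stand-ins, not the Isaac Lab-Arena API.

```python
# Illustrative sketch only: benchmarking a robot policy across simulated episodes.
# SimEnv is a toy stand-in for a physics simulator and task scorer.
import random

class SimEnv:
    """Toy simulated task: an episode 'succeeds' with a fixed probability."""
    def __init__(self, success_prob: float = 0.7, seed: int = 0):
        self.success_prob = success_prob
        self.rng = random.Random(seed)

    def rollout(self, policy) -> bool:
        # A real framework would step physics and score task completion here.
        return self.rng.random() < self.success_prob

def evaluate(policy, env: SimEnv, episodes: int = 100) -> float:
    successes = sum(env.rollout(policy) for _ in range(episodes))
    return successes / episodes

if __name__ == "__main__":
    rate = evaluate(policy=None, env=SimEnv(), episodes=200)
    print(f"success rate over 200 simulated episodes: {rate:.2%}")
```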
Supporting the entire workflow is Nvidia OSMO, a cloud-native orchestration framework that serves as connective infrastructure, integrating development from data generation through training across desktop and cloud environments [1]. OSMO lets developers define and run workflows such as synthetic data generation, model training, and software-in-the-loop testing across different compute environments, from workstations to mixed cloud instances, significantly speeding up development cycles [4]. The framework is already used by robot developers including Hexagon Robotics and has been integrated into the Microsoft Azure Robotics Accelerator toolchain [4].
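Conceptually, an orchestrated workflow is an ordered set of stages dispatched to different compute targets. The minimal Python sketch below illustrates that idea only; the stage names and the runner are assumptions for illustration and do not reflect OSMO's actual workflow format or API.

```python
# Conceptual sketch only: what an orchestrated robot-training workflow looks like.
# This is NOT OSMO's workflow format or API; it illustrates the orchestration idea.
from typing import Callable, List, Tuple

def generate_synthetic_data() -> None:
    print("generating synthetic episodes (would run on a cloud GPU pool)")

def train_policy() -> None:
    print("fine-tuning the policy (would run on a multi-GPU cluster)")

def software_in_the_loop_test() -> None:
    print("replaying the trained policy in simulation (workstation or cloud)")

WORKFLOW: List[Tuple[str, Callable[[], None]]] = [
    ("synthetic_data_generation", generate_synthetic_data),
    ("model_training", train_policy),
    ("sil_testing", software_in_the_loop_test),
]

def run(workflow: List[Tuple[str, Callable[[], None]]]) -> None:
    for name, stage in workflow:
        print(f"--- stage: {name} ---")
        stage()  # an orchestrator would dispatch this to the right compute target

if __name__ == "__main__":
    run(WORKFLOW)
```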
To power its robotics ecosystem, Nvidia introduced the Blackwell-powered Jetson T4000 module, the newest member of the Thor family [1]. The module delivers 1,200 teraflops of AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts, providing four times greater energy efficiency and AI compute than the previous generation [2]. This cost-effective on-device compute upgrade addresses a critical need for robotics developers who require powerful processing without excessive power consumption [1].
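A rough back-of-envelope calculation, assuming the quoted peak compute applies at the top of the power envelope (an assumption, not an official Nvidia efficiency figure), puts the module at roughly 17 teraflops per watt:

```python
# Back-of-envelope arithmetic (assumption: peak compute at the top of the
# 40-70 W envelope); these are not official NVIDIA efficiency figures.
peak_tflops = 1200  # FP4 TFLOPS claimed for Jetson T4000
power_watts = 70    # upper end of the configurable power envelope

print(f"~{peak_tflops / power_watts:.0f} TFLOPS per watt at the 70 W setting")
# -> ~17 TFLOPS per watt
```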
Nvidia is also deepening its partnership with Hugging Face to democratize robot training, integrating Isaac and GR00T technologies into Hugging Face's LeRobot framework [1]. The collaboration connects Nvidia's 2 million robotics developers with Hugging Face's 13 million AI builders, allowing more people to experiment with robot training without expensive hardware or specialized knowledge [1]. Hugging Face's open-source Reachy 2 humanoid now works directly with Nvidia's Jetson Thor computer, while the Reachy Mini tabletop robot is fully interoperable with Nvidia DGX Spark, letting developers experiment with different AI models without being locked into proprietary systems [2].
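For developers who want to try the open checkpoints, a minimal sketch using the huggingface_hub client might look like the following; the repo ID shown is a hypothetical placeholder, so the actual model card should be checked for the real name and license terms.

```python
# Minimal sketch: pulling an open model checkpoint from the Hugging Face Hub.
# The repo_id below is a hypothetical placeholder, not a confirmed model name.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="nvidia/GR00T-N1.6",  # hypothetical ID, shown for illustration only
    local_dir="./groot-n1.6",     # download target on the developer workstation
)
print(f"model files downloaded to {local_path}")
```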
Major robotics companies are already adopting Nvidia's platform for generalist robotics. Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics, and NEURA Robotics unveiled next-generation robots and autonomous machines built on Nvidia technologies at CES [4]. These robots span diverse use cases across industries, from Richtech Robotics' Dex, a humanoid for industrial environments, to LG Electronics' new home robot for indoor household tasks [2]. Franka Robotics, NEURA Robotics, and Humanoid are using GR00T-enabled workflows to simulate, train, and validate new robot behaviors [4].
Early indicators suggest Nvidia's strategy is gaining traction: robotics has become the fastest-growing category on Hugging Face, with Nvidia's models leading downloads [1]. Jensen Huang expressed confidence that robots with some human-level capabilities will arrive this year, noting, "I know how fast the technology is moving" [3]. Questions remain, however, about whether the ChatGPT moment for robotics has truly arrived: while Nvidia's press release declared the moment is "here," Huang's keynote was more measured, calling it "nearly here" [5]. The distinction matters because translating AI capabilities into machines that can reliably manipulate the physical world, at scale and at commercially viable cost, remains enormously difficult, slow, and capital-intensive [5].