3 Sources
[1]
Robots are using game theory to keep humans safe - Earth.com
Picture an auto factory humming with energy. A robot rapidly assembles car doors while a human checks each one for quality. Their partnership looks seamless - with machines handling speed and strength, and people applying judgment and dexterity. These scenes symbolize the growing integration of humans and robots across industries. But as the collaboration deepens, so do the risks. Human error, unpredictability, and miscommunication can create moments robots are not prepared to handle - moments that can have serious consequences.

Researchers are now tackling this challenge. At CU Boulder, Professor Morteza Lahijanian and his team are building processes that help robots make safe yet effective decisions around people. In a new study, Lahijanian and graduate students Karan Muvvala and Qi Heng Ho have introduced algorithms that allow robots to manage risk while completing tasks. "How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?" Lahijanian asked.

Robots, like humans, rely on mental models to make decisions. When working with a person, a robot predicts possible actions and adjusts accordingly. "If you're a robot, you have to be able to interact with others," said Lahijanian. "You have to put yourself out there and take a risk and see what happens. But how do you make that decision, and how much risk do you want to tolerate?"

The team drew inspiration from game theory, a mathematical framework that originated in economics. In robotics, game theory treats each robot as a player in a game. Winning means completing a task, but with humans involved, the game gets unpredictable. Instead of ensuring robots always win, the researchers proposed "admissible strategies." With this approach, a robot aims to finish its job while minimizing harm. Safety remains the top priority.
"In choosing a strategy, you don't want the robot to seem very adversarial," Lahijanian said. "In order to give that softness to the robot, we look at the notion of regret. Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won't regret."

Back at the auto factory, imagine the human making repeated mistakes. Using the new algorithms, the robot might fix those errors safely. If that fails, it could relocate the task to a safer space, protecting both the product and the worker. Much like a chess master predicting several moves ahead, robots can anticipate human choices. Perfect prediction is impossible, but proactive strategies prioritize safety.

"If you want to have collaboration between a human and a robot, the robot has to adjust itself to the human. We don't want humans to adjust themselves to the robot," Lahijanian said. This flexibility allows robots to work with novices and experts alike. Regardless of the human partner's skill level, the robot must respond safely and intelligently.

The auto factory is only one example. Hospitals could also benefit. Imagine a nurse and a robot sharing patient care tasks. The robot might deliver medications or carry equipment, leaving the nurse free to focus on judgment-driven decisions. If errors occur, the robot could adjust without creating new risks. Construction is another field where collaboration could shine. Robots could take on heavy lifting while humans manage fine-detail tasks like alignment or inspection. Agriculture also stands to gain. Machines could harvest crops at scale while farmers concentrate on resource management and sustainable practices.

When robots work safely with humans, they can provide clear benefits. Industries facing labor shortages, such as elder care, could see relief. Physically demanding jobs might also become safer for human workers.
Lahijanian emphasized that robots are not meant to replace human talent but to expand it. "Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability," he said. "Together, they can achieve more than either could alone, safely and efficiently." The research was presented at the International Joint Conference on Artificial Intelligence in August 2025.
[2]
Robot regret: New research helps robots make safer decisions around humans
Imagine for a moment that you're in an auto factory. A robot and a human are working next to each other on the production line. The robot is busy rapidly assembling car doors while the human runs quality control, inspecting the doors for damage and making sure they come together as they should.

Robots and humans can make formidable teams in manufacturing, health care and numerous other industries. While the robot might be quicker and more effective at monotonous, repetitive tasks like assembling large auto parts, the person can excel at certain tasks that are more complex or require more dexterity.

But there can be a dark side to these robot-human interactions. People are prone to making mistakes and acting unpredictably, which can create unexpected situations that robots aren't prepared to handle. The results can be tragic.

New and emerging research could change the way robots handle the uncertainty that comes hand-in-hand with human interactions. Morteza Lahijanian, an associate professor in CU Boulder's Ann and H.J. Smead Department of Aerospace Engineering Sciences, develops processes that let robots make safer decisions around humans while still trying to complete their tasks efficiently.

In a new study presented at the International Joint Conference on Artificial Intelligence in August 2025, Lahijanian and graduate students Karan Muvvala and Qi Heng Ho devised new algorithms that help robots create the best possible outcomes from their actions in situations that carry some uncertainty and risk.

"How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?" Lahijanian asked. "If you're a robot, you have to be able to interact with others. You have to put yourself out there and take a risk and see what happens. But how do you make that decision, and how much risk do you want to tolerate?"

Similar to humans, robots have mental models that they use to make decisions. When working with a human, a robot will try to predict the person's actions and respond accordingly. The robot is optimized for completing a task -- assembling an auto part, for example -- but ideally, it will also take other factors into consideration.

In the new study, the research team drew upon game theory, a mathematical concept that originated in economics, to develop the new algorithms for robots. Game theory analyzes how companies, governments and individuals make decisions in a system where other "players" are also making choices that affect the ultimate outcome.

In robotics, game theory conceptualizes a robot as being one of numerous players in a game that it's trying to win. For a robot, "winning" is completing a task successfully -- but winning is never guaranteed when there's a human in the mix, and keeping the human safe is also a top priority. So instead of trying to guarantee a robot will always win, the researchers proposed the concept of a robot finding an "admissible strategy." Using such a strategy, a robot will accomplish as much of its task as possible while also minimizing any harm, including to a human.

"In choosing a strategy, you don't want the robot to seem very adversarial," said Lahijanian. "In order to give that softness to the robot, we look at the notion of regret. Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won't regret."

Let's go back to the auto factory where the robot and human are working side-by-side. If the person makes mistakes or is not cooperative, using the researchers' algorithms, a robot could take matters into its own hands. If the person is making mistakes, the robot will try to fix these without endangering the person. But if that doesn't work, the robot could, for example, pick up what it's working on and take it to a safer area to finish its task.

Much like a chess champion who thinks several turns ahead about an opponent's possible moves, a robot will try to anticipate what a person will do and stay several steps ahead of them, Lahijanian said. But the goal is not to attempt the impossible and perfectly predict a person's actions. Instead, the goal is to create robots that put people's safety first.

"If you want to have collaboration between a human and a robot, the robot has to adjust itself to the human. We don't want humans to adjust themselves to the robot," he said. "You can have a human who is a novice and doesn't know what they're doing, or you can have a human who is an expert. But as a robot, you don't know which kind of human you're going to get. So you need to have a strategy for all possible cases."

And when robots can work safely alongside humans, they can enhance people's lives and provide real and tangible benefits to society. As more industries embrace robots and artificial intelligence, there are many lingering questions about what AI will ultimately be capable of doing, whether it will be able to take over the jobs that people have historically done, and what that could mean for humanity. But there are upsides to robots being able to take on certain types of jobs. They could work in fields with labor shortages, such as health care for older populations, and physically challenging jobs that may take a toll on workers' health.

Lahijanian also believes that, when they're used correctly, robots and AI can enhance human talents and expand what we're capable of doing. "Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability," he said. "Together, they can achieve more than either could alone, safely and efficiently."
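The chess analogy above can be sketched in a few lines of code. The tiny game tree, action names, and outcome scores below are invented for illustration; this is a generic worst-case (minimax-style) lookahead over possible human responses, not the algorithm from the study.

```python
# Minimal two-ply lookahead sketch: the robot picks a move, the human responds.
# All actions, responses, and scores here are hypothetical illustrations.

# Each robot action maps possible human responses to an outcome score
# (higher = more task progress with the human kept safe).
game_tree = {
    "hand_part_to_human":  {"accepts": 8, "drops_it": 1},
    "place_part_on_table": {"picks_up": 6, "ignores": 5},
}

def worst_case_value(action):
    # Assume nothing about the human: evaluate each robot action
    # against its least favorable human response.
    return min(game_tree[action].values())

# Choose the action whose worst-case outcome is best.
plan = max(game_tree, key=worst_case_value)
print(plan)  # place_part_on_table: its worst case (5) beats handing over (1)
```

The point of the sketch is that the robot never needs a perfect prediction of the human; it only needs to know that every branch it allows remains acceptable.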
[3]
Robot Regret: New Research Helps Robots Make Safer Decisions Around Humans | Newswise
Newswise -- Imagine for a moment that you're in an auto factory. A robot and a human are working next to each other on the production line. The robot is busy rapidly assembling car doors while the human runs quality control, inspecting the doors for damage and making sure they come together as they should.

Robots and humans can make formidable teams in manufacturing, health care and numerous other industries. While the robot might be quicker and more effective at monotonous, repetitive tasks like assembling large auto parts, the person can excel at certain tasks that are more complex or require more dexterity.

But there can be a dark side to these robot-human interactions. People are prone to making mistakes and acting unpredictably, which can create unexpected situations that robots aren't prepared to handle. The results can be tragic.

New and emerging research could change the way robots handle the uncertainty that comes hand-in-hand with human interactions. Morteza Lahijanian, an associate professor in CU Boulder's Ann and H.J. Smead Department of Aerospace Engineering Sciences, develops processes that let robots make safer decisions around humans while still trying to complete their tasks efficiently.

[Photo: From left, engineering professor Morteza Lahijanian and graduate student Karan Muvvala watch as a robotic arm completes a task using wooden blocks. Credit: Casey Cass]

In a new study presented at the International Joint Conference on Artificial Intelligence in August 2025, Lahijanian and graduate students Karan Muvvala and Qi Heng Ho devised new algorithms that help robots create the best possible outcomes from their actions in situations that carry some uncertainty and risk.

"How do we go from very structured environments where there is no human, where the robots are doing everything by themselves, to unstructured environments where there are a lot of uncertainties and other agents?" Lahijanian asked. "If you're a robot, you have to be able to interact with others. You have to put yourself out there and take a risk and see what happens. But how do you make that decision, and how much risk do you want to tolerate?"

Similar to humans, robots have mental models that they use to make decisions. When working with a human, a robot will try to predict the person's actions and respond accordingly. The robot is optimized for completing a task -- assembling an auto part, for example -- but ideally, it will also take other factors into consideration.

In the new study, the research team drew upon game theory, a mathematical concept that originated in economics, to develop the new algorithms for robots. Game theory analyzes how companies, governments and individuals make decisions in a system where other "players" are also making choices that affect the ultimate outcome.

In robotics, game theory conceptualizes a robot as being one of numerous players in a game that it's trying to win. For a robot, "winning" is completing a task successfully -- but winning is never guaranteed when there's a human in the mix, and keeping the human safe is also a top priority. So instead of trying to guarantee a robot will always win, the researchers proposed the concept of a robot finding an "admissible strategy." Using such a strategy, a robot will accomplish as much of its task as possible while also minimizing any harm, including to a human.

"In choosing a strategy, you don't want the robot to seem very adversarial," said Lahijanian. "In order to give that softness to the robot, we look at the notion of regret. Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won't regret."

Let's go back to the auto factory where the robot and human are working side-by-side. If the person makes mistakes or is not cooperative, using the researchers' algorithms, a robot could take matters into its own hands. If the person is making mistakes, the robot will try to fix these without endangering the person. But if that doesn't work, the robot could, for example, pick up what it's working on and take it to a safer area to finish its task.

[Photo: Karan Muvvala watches the robotic arm pick up a blue block. Credit: Casey Cass]

Much like a chess champion who thinks several turns ahead about an opponent's possible moves, a robot will try to anticipate what a person will do and stay several steps ahead of them, Lahijanian said. But the goal is not to attempt the impossible and perfectly predict a person's actions. Instead, the goal is to create robots that put people's safety first.

"If you want to have collaboration between a human and a robot, the robot has to adjust itself to the human. We don't want humans to adjust themselves to the robot," he said. "You can have a human who is a novice and doesn't know what they're doing, or you can have a human who is an expert. But as a robot, you don't know which kind of human you're going to get. So you need to have a strategy for all possible cases."

And when robots can work safely alongside humans, they can enhance people's lives and provide real and tangible benefits to society. As more industries embrace robots and artificial intelligence, there are many lingering questions about what AI will ultimately be capable of doing, whether it will be able to take over the jobs that people have historically done, and what that could mean for humanity. But there are upsides to robots being able to take on certain types of jobs. They could work in fields with labor shortages, such as health care for older populations, and physically challenging jobs that may take a toll on workers' health.

Lahijanian also believes that, when they're used correctly, robots and AI can enhance human talents and expand what we're capable of doing.
"Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability," he said. "Together, they can achieve more than either could alone, safely and efficiently."
Researchers at CU Boulder have developed new algorithms using game theory to help robots make safer decisions when working alongside humans, prioritizing safety while maintaining efficiency in various industries.
Researchers at CU Boulder have developed algorithms that enable robots to make safer decisions when working alongside humans. The study, presented at the International Joint Conference on Artificial Intelligence in August 2025, introduces a new approach to human-robot interaction based on game theory [1].

As robots become increasingly integrated into various industries, from manufacturing to health care, the risks associated with human-robot collaboration have become more apparent. Professor Morteza Lahijanian and his team at CU Boulder recognized that human unpredictability and potential errors could lead to dangerous situations that robots might not be prepared to handle [2].
The researchers drew inspiration from game theory, a mathematical framework originally used in economics, to develop new algorithms for robots. In this context, each robot is treated as a player in a game where winning means completing a task successfully. However, with humans involved, the game becomes unpredictable [1].

Instead of ensuring robots always win, the team proposed the concept of "admissible strategies." This approach allows robots to accomplish as much of their task as possible while minimizing potential harm, with safety remaining the top priority [3].

A key aspect of the new algorithms is the concept of robot regret. As Lahijanian explains, "Is the robot going to regret its action in the future? And in optimizing for the best action at the moment, you try to take an action that you won't regret" [1]. This approach allows robots to make decisions that balance task completion with safety considerations.
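The regret idea can be illustrated with a small, self-contained sketch. The actions, payoffs, risk numbers, and safety threshold below are all invented for illustration; this is a generic regret-minimization toy over a one-shot matrix game, not the published algorithms.

```python
# Toy sketch: pick the action the robot is least likely to regret,
# after filtering out actions that could be unsafe. Every number and
# name here is a hypothetical illustration.

# Rows: robot actions. Columns: possible human behaviors.
# payoffs[a][h] = task value if the robot picks a and the human does h.
payoffs = {
    "continue_assembly": {"cooperates": 10, "makes_mistake": 2},
    "fix_human_error":   {"cooperates": 6,  "makes_mistake": 7},
    "relocate_task":     {"cooperates": 4,  "makes_mistake": 5},
}

# risk[a][h] = assumed chance of an unsafe interaction for that pairing.
risk = {
    "continue_assembly": {"cooperates": 0.01, "makes_mistake": 0.30},
    "fix_human_error":   {"cooperates": 0.02, "makes_mistake": 0.05},
    "relocate_task":     {"cooperates": 0.02, "makes_mistake": 0.02},
}

SAFETY_THRESHOLD = 0.10  # assumed hard cap on acceptable risk

def max_regret(action):
    """Worst-case regret: for each possible human behavior, compare this
    action's payoff to the best payoff achievable had the robot known
    that behavior in advance."""
    regrets = []
    for h in payoffs[action]:
        best = max(payoffs[a][h] for a in payoffs)
        regrets.append(best - payoffs[action][h])
    return max(regrets)

# Safety first: keep only actions safe under *every* human behavior,
# then pick the one the robot is least likely to regret.
safe = [a for a in payoffs if max(risk[a].values()) <= SAFETY_THRESHOLD]
choice = min(safe, key=max_regret)
print(choice)  # fix_human_error
```

Note the two-stage structure: safety acts as a hard filter before regret is ever compared, which mirrors the article's point that minimizing harm takes priority over finishing the task.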
The researchers envision numerous applications for this technology across various industries:

Manufacturing: In auto factories, robots could safely assemble parts while humans perform quality control tasks [2].
Health care: Robots could assist nurses by delivering medications or carrying equipment, allowing health care professionals to focus on complex, judgment-driven decisions [1].
Construction: Robots could handle heavy lifting while humans manage fine-detail tasks like alignment or inspection [1].
Agriculture: Machines could harvest crops at scale while farmers concentrate on resource management and sustainable practices [1].

One of the key strengths of this approach is its ability to adapt to different human skill levels. "You can have a human who is a novice and doesn't know what they're doing, or you can have a human who is an expert. But as a robot, you don't know which kind of human you're going to get. So you need to have a strategy for all possible cases," Lahijanian explains [3].
As industries increasingly embrace robotics and artificial intelligence, questions arise about the future of human employment. However, Lahijanian emphasizes that robots are not meant to replace human talent but to expand it. "Human-robot collaboration is about combining complementary strengths: humans contribute intelligence, judgment, and flexibility, while robots offer precision, strength, and reliability," he states [2].

This research represents a significant step toward safer, more efficient human-robot collaboration across sectors, potentially addressing labor shortages and improving worker safety in physically demanding jobs.