Curated by THEOUTPOST
On Fri, 28 Feb, 8:04 AM UTC
2 Sources
[1]
A robot nearly headbutted a festival spectator in China - here are four urgent steps to make the tech safer
Humanoid robots are supposed to be our loyal assistants, but we saw another side to them recently. Chinese robot manufacturer Unitree was demonstrating its latest H1 robots at a lantern festival in the city of Taishan, Guangdong province, when one walked up to the crowd barrier and seemed to lunge at an elderly woman, nearly headbutting her.

The incident quickly went viral and sparked a fierce debate about whether the robot actually attacked the woman or had simply tripped. It is mostly being overlooked that we are a long way from having robots that could intentionally attack someone - machines like these are often remote controlled - but the danger to the public is clearly real enough. With sales of humanoid robots set to skyrocket over the next decade, the public will increasingly be at risk from these kinds of incidents. In our view as robotics researchers, governments have put very little thought into the risks. Here are some urgent steps that they should take to make humanoid robots as safe as possible.

1. Increase owner requirements

The first important issue is the extent to which humanoid robots will be controlled by users. Whereas Tesla's Optimus can be remotely operated by people in a control centre, others such as the Unitree H1s are controlled by the user with a handheld joystick. Currently on sale for around £90,000, they come with a software development kit on which you can develop your own artificial intelligence (AI) system, though only to a limited extent: the robot could say a sentence or recognise a face, but not take your kids to school.

Who is to blame if someone gets hurt or even killed by a human-controlled robot? It is hard to know for sure - any discussion about liability would first involve proving whether the harm was caused by human error or a mechanical malfunction. This came up in a Florida case in which a widower sued medical robot-maker Intuitive Surgical Inc over his wife's death in 2022.
Her death was linked to injuries from a heat burn to her intestine, sustained during an operation and caused by a fault in one of the company's machines. The case was dropped in 2024 after being partially dismissed by a district judge. But the fact that the widower sued the manufacturer rather than the medics demonstrates that the robotics industry needs a legal framework for preventing such situations as much as the public does.

While drones are governed by aviation laws and other restrictions on their use in public areas, there are no specific laws for walking robots. So far, the only place to have put forward governance guidelines is China's Shanghai municipality. Published in summer 2024, these stipulate that robots must not threaten human security, and that manufacturers must train users on how to use these machines ethically.

For robots controlled by owners, there is currently nothing in the UK preventing someone from taking a robot dog out for a stroll in a busy park, or a humanoid robot to the pub for a pint. As a starting point, we could ban people from controlling robots while under the influence of alcohol or drugs, or when they are otherwise distracted, such as by using their phones. Their use could also be restricted in risky environments such as confined spaces with many members of the public, places with fire or chemical hazards, and the roofs of buildings.

2. Improve design

Robots that look sleek and can dance and flip are fun to watch, but how safe are their audiences? Safe designs would consider everything from reducing cavities where fingers could get caught to waterproofing internal components. Protective barriers or exoskeletons could further reduce unintended contact, while cushioning mechanisms could soften the effect of an impact.

Robots should also be designed to signal their intent through lights, sounds and gestures. For example, they should arguably make a noise when entering a room so as not to surprise anyone.
Even drones can alert their user when they lose signal or run low on battery and need to return home, and such mechanisms should also be built into walking robots. There are no legal requirements for any such features at present.

It is not that manufacturers are entirely ignoring these issues. Unitree's quadruped Go2, for instance, blinks and beeps when its battery is low or it is overheating. It also has automatic emergency cut-offs for these situations, although they must be triggered by a remote operator when the robot is in "telemetric mode". Crucially, however, there are no clear regulations to ensure that all manufacturers meet a certain safety standard.

3. Train operators

Clearly there will be dangers with robots using AI features, but remote-operated models could be even more dangerous. Mistakes could result from operators' lack of training and real-world experience. There appears to be a major skills gap in operator training, and robotics companies will need to prioritise this to ensure operators can control machines efficiently and safely. In addition, humans can have delayed reaction times and limited concentration, so we also need systems that monitor the attention of robot operators and alert them in order to prevent accidents. This would be similar to the HGV-driver distraction-detection systems installed in vehicles in London in 2024.

4. Educate the public

The incident in China has highlighted current misconceptions about humanoid robots: the media is once again blaming AI, even though AI was not the issue. This risks causing widespread mistrust and confusion among the public. If people understand to what extent walking robots are owner-operated or remote-operated, it will change their expectations about what a robot might do, and make everyone safer as a result.
Understanding the owner's level of control is also vital for managing buyers' expectations, and for forewarning them about how much they will need to learn about operating and programming a robot before they buy one.
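The failsafe behaviour described above - automatic emergency cut-offs, low-battery alerts and a response to lost operator input - can be sketched in a few lines of code. The following is a minimal, purely hypothetical illustration; the names, thresholds and structure are our own and do not reflect any real robot manufacturer's SDK:

```python
import time

# Hypothetical thresholds -- illustrative values only, not from any real robot SDK.
LOW_BATTERY = 0.15          # stop below 15% charge
OPERATOR_TIMEOUT_S = 1.0    # max gap between operator inputs before assuming signal loss

class SafetyWatchdog:
    """Minimal failsafe loop: stop the robot if the battery runs low
    or the operator stops sending inputs (signal loss or distraction)."""

    def __init__(self):
        self.last_input_time = time.monotonic()
        self.stopped = False
        self.reason = None

    def on_operator_input(self):
        # Called whenever a joystick command arrives from the operator.
        self.last_input_time = time.monotonic()

    def check(self, battery_level, now=None):
        # Run periodically; returns True once an emergency stop has been triggered.
        now = time.monotonic() if now is None else now
        if battery_level < LOW_BATTERY:
            self.emergency_stop("low battery")
        elif now - self.last_input_time > OPERATOR_TIMEOUT_S:
            self.emergency_stop("operator input lost")
        return self.stopped

    def emergency_stop(self, reason):
        # A real robot would also cut motor torque and sound an audible alert here.
        self.stopped = True
        self.reason = reason
```

The point of the sketch is that such checks are simple to implement; the open question raised in this article is that no regulation currently requires manufacturers to include them.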
[2]
A robot nearly headbutted a festival spectator in China. Here are four urgent steps to make the tech safer
A recent incident involving a Unitree H1 robot at a Chinese lantern festival has sparked discussions about the safety of humanoid robots and the urgent need for regulations and safety measures.
A recent incident at a lantern festival in Taishan, Guangdong province, China, has raised significant concerns about the safety of humanoid robots. During a demonstration by Chinese robot manufacturer Unitree, one of its latest H1 robots appeared to lunge at an elderly woman, nearly headbutting her [1]. The event quickly went viral, sparking debates about whether the robot attacked intentionally or simply tripped.
While the incident has caused alarm, it's important to note that we are still far from having robots capable of intentional attacks. Many of these machines, including the Unitree H1, are remote-controlled or operated by users with handheld joysticks [2]. However, with sales of humanoid robots expected to surge in the coming decade, the potential risks to public safety are becoming increasingly apparent.
Currently, there is a significant lack of specific laws and regulations governing the use of walking robots in public spaces. While drones are subject to aviation laws and other restrictions, humanoid robots operate in a regulatory vacuum. China's Shanghai municipality is the only place to have proposed governance guidelines, which were published in summer 2024 [1].
Robotics researchers have identified four critical areas that need immediate attention to enhance the safety of humanoid robots:
There is a pressing need to establish clear guidelines for robot owners and operators. Suggestions include:
- Banning people from controlling robots while under the influence of alcohol or drugs, or while otherwise distracted, such as by using their phones
- Restricting robot use in risky environments such as crowded confined spaces, places with fire or chemical hazards, and the roofs of buildings
Safety considerations should be paramount in robot design, including:
- Reducing cavities where fingers could get caught, and waterproofing internal components
- Adding protective barriers or exoskeletons to limit unintended contact, and cushioning to soften impacts
- Having robots signal their intent through lights, sounds and gestures, and alert users to signal loss or low battery
The skills gap in robot operation is a significant concern. Key recommendations include:
- Prioritising operator training so that machines can be controlled efficiently and safely
- Deploying systems that monitor operator attention and issue alerts, similar to the HGV-driver distraction-detection systems installed in London vehicles in 2024
Public understanding of humanoid robots is crucial for safety. This involves:
- Explaining to what extent walking robots are owner-operated or remote-operated, so that people have realistic expectations of what a robot might do
- Forewarning buyers about how much they will need to learn about operating and programming a robot before purchase
As the incident in China demonstrates, the rapid advancement of humanoid robot technology necessitates an equally swift development of safety measures and regulations. Without proper guidelines and public awareness, the increasing presence of these robots in our daily lives could lead to more dangerous situations.