Curated by THEOUTPOST
On Wed, 5 Mar, 8:08 AM UTC
4 Sources
[1]
UC Irvine study shines headlights on consumer driverless vehicle safety deficiencies | Newswise
Irvine, Calif., March 4, 2025 -- For the first time, researchers at the University of California, Irvine have demonstrated that multicolored stickers applied to stop or speed limit signs on the roadside can confuse self-driving vehicles, causing unpredictable and possibly hazardous operations.

In a presentation at the recent Network and Distributed System Security Symposium in San Diego, researchers from UC Irvine's Donald Bren School of Information & Computer Sciences described the real-world implications of what previously was only theorized: that low-cost and highly deployable malicious attacks can make traffic signs undetectable to artificial intelligence algorithms in some autonomous vehicles while making nonexistent signs appear out of nowhere to others. Both types of assaults can result in cars ignoring road commands, triggering unintended emergency braking, speeding and other violations.

The scientists said that their study, which involved the three most representative AI attack designs, was the first large-scale evaluation of traffic sign recognition systems in top-selling consumer vehicle brands.

"Waymo has been delivering more than 150,000 autonomous rides per week, and there are millions of Autopilot-equipped Tesla vehicles on the road, which demonstrates that autonomous vehicle technology is becoming an integral part of daily life in America and around the world," said co-author Alfred Chen, UC Irvine assistant professor of computer science. "This fact spotlights the importance of security, since vulnerabilities in these systems, once exploited, can lead to safety hazards that become a matter of life and death."

The lead author of the study, Ningfei Wang, a research scientist at Meta who performed this work as a Ph.D. student in computer science at UC Irvine, said that his team's attack vectors of choice were stickers with swirling, multicolored designs that confuse the AI algorithms used for traffic sign recognition in driverless vehicles. "These stickers can be cheaply and easily produced by anyone with access to an open-source programming language such as Python and image processing libraries," Wang said. "Those tools combined with a computer with a graphics card and a color printer are all someone would need to foil TSR systems in autonomous vehicles."

He added that an interesting discovery made during the project relates to the spatial memorization design common to many of today's commercial TSR systems. While this feature makes a disappearing attack (seeming to remove a sign from the vehicle's view) more difficult, Wang said, it makes spoofing a fake stop sign "much easier than we expected."

Chen noted that the research was the first of its type on this security threat in real-world scenarios with commercially available vehicles. "Academics have studied driverless vehicle security for years and have discovered various practical security vulnerabilities in the latest autonomous driving technology," he said. "But these studies have been limited mostly to academic setups, leaving our understanding of such vulnerabilities in commercial autonomous vehicle systems highly limited. Our study fills this critical gap."

Chen said that by focusing on a small subset of existing research in this area, his group was able to uncover various broken assumptions, inaccuracies and false claims. For example, no prior academic studies realized the common existence of spatial memorization design in commercial TSR systems.
When Chen's team members modeled such a design in previously devised academic study setups, they uncovered results that directly challenge earlier observations and claims made in the state-of-the-art research community. "We believe this work should only be the beginning, and we hope that it inspires more researchers in both academia and industry to systematically revisit the actual impacts and meaningfulness of such types of security threats against real-world autonomous vehicles," Chen said. "This would be the necessary first step before we can actually know if, at the society level, action is needed to ensure safety on our streets and highways."

Joining Chen and Wang on this project were former UC Irvine graduate students Takami Sato and Yunpeng Luo; current UC Irvine graduate student Shaoyuan Xie; and Kaidi Xu, assistant professor of computer science at Drexel University. The work was supported by the National Science Foundation and the U.S. Department of Transportation's CARMEN+ University Transportation Center, of which UC Irvine is a member.

About the University of California, Irvine: Founded in 1965, UC Irvine is a member of the prestigious Association of American Universities and is ranked among the nation's top 10 public universities by U.S. News & World Report. The campus has produced five Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UC Irvine has more than 36,000 students and offers 224 degree programs. It's located in one of the world's safest and most economically vibrant communities and is Orange County's second-largest employer, contributing $7 billion annually to the local economy and $8 billion statewide. For more on UC Irvine, visit www.uci.edu.

Media access: Radio programs/stations may, for a fee, use an on-campus studio with a Comrex IP audio codec to interview UC Irvine faculty and experts, subject to availability and university approval. For more UC Irvine news, visit news.uci.edu. Additional resources for journalists may be found at https://news.uci.edu/media-resources.
[2]
Study shines headlights on consumer driverless vehicle safety deficiencies
[3]
Study shines headlights on consumer driverless vehicle safety deficiencies
[4]
Self-driving cars can still be fooled by tampered-with signs
Eggheads have taken a look at previously developed techniques that can be used to trick self-driving cars into doing the wrong thing - and found cheap stickers stuck on stop and speed limit signs, at least, are pretty effective.

They also identified a specific phenomenon in which the systems "memorized" signs: once a sign was detected, the vehicle continued to assume in its calculations that the sign was present at the position where it was first spotted, even after it was hidden from view, leading to "lower-than-expected attack success rates" in the wild.

The scientists - from the University of California, Irvine, and Drexel University, in the USA - are following in the footsteps of earlier studies that have shown patterns of light directed at signs can confuse autonomous vehicles, as can carefully placed tinfoil and paint, electrical tape, and stickers on the road, as well as manipulative audio, and no doubt other approaches.

The tech behind the AI algos, object identifiers, image classifiers, and the other sensor information used by self-driving vehicles to squire humans around has developed in leaps and bounds, but computer vision still misses some of the trillions of context clues that a human eye attached to a meatbag's brain gathers from years of observed human life, perfected over millennia of evolution.

The human brain might reason, for example, when it sees a sticker on a stop sign: "Oh look, that piece of street furniture is the right size and shape to be a stop sign, but there is a sticker on it. Yep, some idiot has defaced it. I'll stop the car at the sign anyway." But the object detectors and image classifiers used in self-driving systems don't work the same way.

For this latest research, the team focused on traffic sign recognition (TSR) systems used by self-driving cars, and devised a way to measure the effectiveness of past adversarial attacks. Lead author Ningfei Wang said his team's attack vectors were made up of stickers with swirling, multicolored designs that confuse the AI algorithms used by the image classifiers and object detectors you might find in a self-driving car's TSR system. Some of these can cause a Stop sign to "disappear" according to the detector - either by covering the sign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign.

In the researchers' example photos, for instance, two of the attacked stop signs use the RP2 [PDF] algo, or the Robust Physical Perturbations algorithm, for the attack. RP2 can generate a perturbation that maximizes the probability that an object detector mis-classifies the object. As the position of the stop sign changes relative to the car driving towards it, the grid cells the object is contained in (and the corresponding network weights) change as well, so the perturbation has to be applicable to multiple grid cells simultaneously.

Crucially, while they are large enough to be visible to the human eye, these malicious stickers can look like graffiti or "subtle lighting artifacts that could be considered benign" - in the example images they don't even obscure the letters STOP. To us they look a bit like a tiny cut-and-stick poster for an underground techno club.
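To make the attack recipe concrete, here is a minimal sketch of an RP2 / expectation-over-transformation style sticker optimization in PyTorch. Everything in it - the stand-in classifier, the sticker mask, the viewpoint sampling, and the step count - is an illustrative assumption rather than the UC Irvine team's actual code; it only shows the general shape of the loop a Python-plus-GPU setup would run.

```python
# Minimal sketch of an RP2 / expectation-over-transformation style sticker attack.
# The classifier, sticker mask, viewpoint sampling, and step count are stand-ins,
# not the researchers' actual pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in traffic-sign classifier (in a real attack: the TSR model being targeted).
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
classifier.eval()

stop_sign = torch.rand(1, 3, 64, 64)       # placeholder photo of a stop sign
mask = torch.zeros_like(stop_sign)         # sticker region: two horizontal strips
mask[..., 10:22, 8:56] = 1.0
mask[..., 42:54, 8:56] = 1.0
delta = torch.zeros_like(stop_sign, requires_grad=True)  # sticker pattern to learn
stop_label = torch.tensor([0])             # class index standing in for "stop sign"

def random_view(img):
    """Rescale the sign to mimic different distances from the approaching car, so the
    perturbation keeps working as the sign moves across detector grid cells."""
    size = int(torch.randint(32, 64, (1,)).item())
    small = F.interpolate(img, size=(size, size), mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(64, 64), mode="bilinear", align_corners=False)

opt = torch.optim.Adam([delta], lr=0.05)
for step in range(200):
    opt.zero_grad()
    patched = torch.clamp(stop_sign + mask * delta, 0.0, 1.0)
    # Average the objective over several simulated viewpoints (expectation over
    # transformations); minimizing -CE pushes the prediction away from "stop sign".
    loss = sum(-F.cross_entropy(classifier(random_view(patched)), stop_label)
               for _ in range(8)) / 8
    loss.backward()
    opt.step()

sticker = (mask * delta).detach().clamp(-1, 1)  # pattern one would print and apply
```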
"These stickers can be cheaply and easily produced by anyone with access to an open-source programming language such as Python and image processing libraries. Those tools combined with a computer with a graphics card and a color printer are all someone would need to foil TSR systems in autonomous vehicles," says the study's lead author Ningfei Wang, a research scientist at Meta who performed this work as a PhD student in computer science at UC Irvine.

Specifically, the paper took those existing academic attacks and evaluated them against commercial systems used in cars currently on the road. The researchers divide the attacks into "hiding" and "appearing" attacks. As the names suggest, the first method tricks the system so it cannot detect a legitimate traffic sign, and the second adds information in places where it shouldn't be, triggering false detections.

They found that low-cost methods -- primarily, specially designed stickers adhered to stop and speed-limit signs -- can indeed (as previously shown) make such signs undetectable to TSRs in some vehicles, while making nonexistent signs appear out of nowhere to others. Such attacks could result in cars ignoring road commands, triggering unintended emergency braking, speeding, and other rule violations.

In a statement this week to mark the team's work being presented at the Network and Distributed System Security Symposium in California, Alfred Chen, a UC Irvine assistant professor of computer science, acknowledged academics have studied driverless vehicle security for years and have discovered various practical security vulnerabilities in the latest autonomous driving technology. "But these studies have been limited mostly to academic setups, leaving our understanding of such vulnerabilities in commercial autonomous vehicle systems highly limited. Our study fills this critical gap," he argued.

The team's conference presentation [arXiv paper], "Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective," claims this study was the first large-scale measurement of "physical-world adversarial attacks against commercial TSR systems." In other words, if you wondered how well the aforementioned techniques actually work, this paper tries to figure that out.

The write-up states that while the results reveal it is possible for previous trickery to have a 100 percent success rate against certain commercial TSR systems, these results cannot be generalized across all models. Also, somewhat surprisingly, the study observed "much lower-than-expected attack success rates overall." This is due to a behavior the researchers observed across all the cars they tested that hadn't previously been accounted for or known about in prior research: "spatial memorization design."

In one test, the boffins show a sign to the vehicle for a short time (sign display time), later hide the sign and wait for a certain duration (sign disappearing time), and then drive the vehicle past the original sign-display position to measure whether the sign detection result is spatially memorized. They did the same for speed limit signs, finding that all the vehicle models they tested that supported speed limit signs (not all did) only displayed detection results after the vehicle passed the sign (thus preventing the vehicles from speeding before they reached the sign's location if the new speed limit was higher). The cars were all tested at 5 mph (8 kph) on a "rooftop parking structure," for anyone worried about the testers' safety.
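To see how the hiding/appearing split interacts with that memorization behavior, here is a toy, pure-Python model of a TSR pipeline that remembers a sign once seen until the car passes its position. It is a hypothetical illustration of the behavior the paper describes, not vendor code, and the frame-by-frame detections are simulated.

```python
# Toy model of "spatial memorization": once a sign is detected, its position is
# remembered until the vehicle drives past it, even if later frames miss it.
# Frame-level detections are simulated; this is not how any vendor implements TSR.
from dataclasses import dataclass, field

@dataclass
class SpatialMemoTSR:
    memorized: dict = field(default_factory=dict)  # sign type -> remembered position (m)

    def update(self, vehicle_pos, frame_detections):
        """frame_detections: list of (sign_type, sign_pos) seen in the current frame."""
        for sign_type, sign_pos in frame_detections:
            self.memorized.setdefault(sign_type, sign_pos)  # remember the first sighting
        # Forget signs the vehicle has already passed.
        self.memorized = {t: p for t, p in self.memorized.items() if p >= vehicle_pos}
        return sorted(self.memorized.items())               # what the car acts on

def drive(per_frame_detections):
    """Advance the car 1 m per frame and report what the TSR believes at each frame."""
    tsr = SpatialMemoTSR()
    return [tsr.update(pos, dets) for pos, dets in enumerate(per_frame_detections)]

# Hiding attack: a real stop sign at 30 m is detected once early, then successfully
# hidden in every later frame. Memorization keeps it in effect anyway.
hiding = [[("stop", 30)]] + [[] for _ in range(35)]
# Appearing attack: a spoofed stop sign is detected in a single frame only.
# That one detection is enough to be memorized until the car passes 30 m.
appearing = [[] for _ in range(10)] + [[("stop", 30)]] + [[] for _ in range(25)]

print("hiding attack, frame 20:", drive(hiding)[20])        # sign still remembered
print("appearing attack, frame 20:", drive(appearing)[20])  # spoofed sign remembered
```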
The researchers tested five models of publicly available cars with some level of automated driving capability (namely, the Tesla Model 3 2023, Toyota Camry 2023, Nissan Sentra 2023, Mazda CX-30 2023, and Hyundai Tucson 2024) and, for ethical reasons, did not identify which models were vulnerable to which attack.

As for the spatial memorization findings, the paper says the design is common among today's commercial TSR systems. The boffins found this means, among other things, that "hiding attacks" (making the sign "disappear" for the TSR system) are "theoretically harder (if not equally hard) than appearing attacks" when it comes to fooling the systems. The reason is that when a TSR system initially detects a sign and its location, it appears to "remember" it until the car has passed the spot where the sign is supposed to be, even if the sign is hidden during parts of that journey. Conversely, so-called "appearing" attacks are much more successful - spatial memorization means spoofing a fake stop sign is "much easier than we expected," says Wang.

This is all useful information on the road to hardening self-driving vehicles against these kinds of attacks. Not only did the study allow the team to responsibly disclose the successful attacks to the self-driving car vendors so they can make tweaks where necessary, it also meant the boffins could mathematically model the altered risk profile of the attacks in light of the new information they gleaned. This hopefully means more small improvements on the journey towards making these systems safer overall.
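One back-of-the-envelope way to see why memorization tilts the odds toward appearing attacks: a hiding attack has to keep the real sign undetected in every frame of the approach, because a single successful detection is memorized and the sign takes effect anyway, while an appearing attack only needs the spoofed sign to land in one frame. The per-frame probabilities and frame count below are made-up numbers, not figures from the paper.

```python
# Back-of-the-envelope model of why spatial memorization makes hiding attacks harder
# than appearing attacks. Per-frame probabilities and frame counts are made up.
def hiding_success(p_fool_per_frame, n_frames):
    # The real sign must stay undetected in EVERY frame of the approach: a single
    # successful detection is memorized and the sign takes effect anyway.
    return p_fool_per_frame ** n_frames

def appearing_success(p_spoof_per_frame, n_frames):
    # The fake sign only needs to be detected in AT LEAST ONE frame: that one
    # detection is remembered until the car passes the spot.
    return 1 - (1 - p_spoof_per_frame) ** n_frames

n = 50  # e.g. roughly 50 camera frames while approaching the sign
for p in (0.5, 0.9, 0.99):
    print(f"per-frame rate {p:.2f}: hiding {hiding_success(p, n):.3f}, "
          f"appearing {appearing_success(p, n):.3f}")
# Even a 99%-per-frame hiding sticker hides the sign end to end only about 60% of the
# time over 50 frames, while a 50%-per-frame spoof is all but guaranteed to register.
```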
Researchers at UC Irvine have demonstrated that low-cost stickers can confuse AI algorithms in autonomous vehicles, potentially causing safety hazards. The study highlights the need for improved security in commercial self-driving systems.
A groundbreaking study conducted by researchers at the University of California, Irvine has revealed significant security vulnerabilities in the traffic sign recognition (TSR) systems of self-driving vehicles. The study, presented at the Network and Distributed System Security Symposium in San Diego, demonstrates that low-cost, easily deployable attacks can potentially compromise the safety of autonomous vehicles [1].
The research team, led by Ningfei Wang and Alfred Chen, focused on using multicolored stickers applied to stop or speed limit signs. These stickers, featuring swirling designs, were found to confuse AI algorithms used in TSR systems. The attacks could either make traffic signs undetectable to some autonomous vehicles or cause nonexistent signs to appear to others [2].
Wang explained, "These stickers can be cheaply and easily produced by anyone with access to an open-source programming language such as Python and image processing libraries" [3].
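As a hedged illustration of the "image processing libraries plus a color printer" step Wang describes, the sketch below uses Pillow to turn an already-optimized perturbation into a printable sticker sheet; the perturbation array, sticker size, and printer DPI are placeholder assumptions, not details from the study.

```python
# Sketch of turning an already-optimized perturbation into a printable sticker sheet
# with Pillow, the kind of off-the-shelf image tooling Wang refers to. The random
# perturbation array, sticker size, and printer DPI below are placeholder assumptions.
import numpy as np
from PIL import Image

perturbation = np.random.rand(64, 64, 3)  # stand-in for an optimized sticker pattern
STICKER_SIDE_INCHES = 12                  # assumed physical size of the printed patch
PRINT_DPI = 150                           # assumed color-printer resolution

# Map the unit-range pattern to 8-bit RGB and upscale it to the physical print size.
sticker = Image.fromarray((perturbation * 255).astype(np.uint8), mode="RGB")
pixels = STICKER_SIDE_INCHES * PRINT_DPI
sticker = sticker.resize((pixels, pixels))

# Save with DPI metadata so the file prints at the intended physical dimensions.
sticker.save("sticker_sheet.png", dpi=(PRINT_DPI, PRINT_DPI))
print(f"wrote sticker_sheet.png: {pixels}x{pixels} px "
      f"({STICKER_SIDE_INCHES} in at {PRINT_DPI} dpi)")
```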
The potential consequences of these attacks are severe, including:
- Vehicles ignoring road commands such as stop and speed limit signs
- Unintended emergency braking
- Speeding and other traffic-rule violations
With Waymo delivering over 150,000 autonomous rides per week and millions of Autopilot-equipped Tesla vehicles on the road, the security of these systems is becoming increasingly critical [1].
The study uncovered several interesting phenomena:
- Spatial memorization design: Many commercial TSR systems use this feature, which makes disappearing attacks more difficult but fake sign spoofing easier than expected [2].
- Lower-than-expected attack success rates: The researchers observed that while some attacks had a 100% success rate against certain TSR systems, the results varied across different models [4].
- Real-world implications: This study is the first to evaluate these security threats in real-world scenarios with commercially available vehicles, filling a critical gap in understanding vulnerabilities in commercial autonomous vehicle systems [3].
The researchers emphasize that this work should be just the beginning. Chen stated, "We hope that it inspires more researchers in both academia and industry to systematically revisit the actual impacts and meaningfulness of such types of security threats against real-world autonomous vehicles" [1].
This study challenges previous assumptions and claims in the field, highlighting the need for continued research and development to ensure the safety and security of autonomous vehicles on our streets and highways.
References
[1] UC Irvine study shines headlights on consumer driverless vehicle safety deficiencies, Newswise
[2] Study shines headlights on consumer driverless vehicle safety deficiencies
[3] Study shines headlights on consumer driverless vehicle safety deficiencies
[4] Self-driving cars can still be fooled by tampered-with signs