Curated by THEOUTPOST
On Wed, 25 Sept, 12:04 AM UTC
5 Sources
[1]
Researchers Develop AI Model That Can Fool CAPTCHA With 100% Accuracy
There's no doubt that CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) puzzles can get super annoying, especially when you're trying to book a last-minute plane ticket or simply log into a website. Yes, the ones where you have to mark stairs, bikes, buses, and crosswalks in a grid of images. Well, that may soon not be a problem anymore, as a group of researchers at ETH Zurich in Switzerland has developed an AI model capable of solving Google's reCAPTCHAv2 security puzzles with ease. While that may be less annoying for you, it's not exactly good news for web security.

An AI Model That Solves 100% of Captchas

The research paper, titled (PDF warning) "Breaking reCAPTCHAv2" and published on September 13, sees three Swiss researchers (Andreas Plesner, Tobias Vontobel, and Roger Wattenhofer) examine "the efficacy of employing advanced machine learning methods to solve captchas from Google's reCAPTCHAv2 system." They ended up developing an AI model based on YOLO (You Only Look Once), a picture-processing model, which "can solve 100% of the captchas, while previous work only solved 68-71%."

They basically trained the AI model to recognize the objects that appear in reCAPTCHAv2 tests. There are currently 13 common classes of objects in these security challenges, including bicycles, bridges, cars, buses, chimneys, crosswalks, fire hydrants, motorcycles, mountains, stairs, palm trees, and traffic lights. The paper notes that several open-source projects have worked on cracking Google's reCAPTCHAv2 with machine-learning techniques, but the accuracy has never been this impressive. The researchers also put the AI model through a range of testing conditions.
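The solver's core loop, as described above, amounts to classifying each grid tile and clicking the tiles that match the prompted category. The sketch below is purely illustrative and not the researchers' code: `classify_tile` is a hypothetical stand-in for a fine-tuned YOLO classifier, and the class set lists the categories named in the article (the paper counts 13 in total).

```python
# Illustrative sketch of an image-CAPTCHA tile-selection loop.
# `classify_tile` stands in for real model inference (e.g. a fine-tuned
# YOLO classifier); here it just reads a pre-assigned label so the
# control flow can be shown end to end.

# Object categories named in the article as appearing in reCAPTCHAv2.
RECAPTCHA_CLASSES = {
    "bicycle", "bridge", "bus", "car", "chimney", "crosswalk",
    "fire hydrant", "motorcycle", "mountain", "palm tree",
    "stairs", "traffic light",
}

def classify_tile(tile: dict) -> str:
    """Stand-in for model inference: return the predicted label of one tile."""
    return tile["label"]

def solve_challenge(tiles: list[dict], target: str) -> list[int]:
    """Return the indices of tiles the solver would click for `target`."""
    if target not in RECAPTCHA_CLASSES:
        raise ValueError(f"unknown challenge category: {target}")
    return [i for i, tile in enumerate(tiles) if classify_tile(tile) == target]

grid = [{"label": "car"}, {"label": "crosswalk"},
        {"label": "car"}, {"label": "tree"}]
print(solve_challenge(grid, "car"))  # -> [0, 2]
```

In the real system the hard part is, of course, the classifier itself; the selection logic around it is this simple, which is part of why a narrow, fixed set of categories is so attackable.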
From running the AI model with and without a VPN, to mimicking human mouse movement, to testing with and without browser history or cookies, they covered quite a comprehensive set of conditions. In all of them, although the AI model required some human intervention, it hit 100% accuracy. The obvious next step is to develop the model to work without any human intervention. However...

Good for AI, Bad for Us

While those security challenges may seem pointless and frustrating, there's a reason they exist. The internet's malicious bots and crawlers can do serious damage if a threat actor uses them to gain access to sensitive information. CAPTCHAs help protect the security and integrity of online systems. The most prominent example is the role of CAPTCHA in protecting your bank account: CAPTCHA verification keeps bots attempting unauthorized access at bay. Likewise, when you create a social media account, such security puzzles prevent bots, to a certain degree, from creating fake accounts. Not having such measures in place opens the floodgates and leaves users severely exposed. Check Point Research reported a 30% year-on-year rise in global cyber attacks, hitting 1,636 attacks per organization every week. Meanwhile, an eSentire report estimated that global cybercrime will cost the world a staggering $9.5 trillion.

High Time for Captchas to Evolve

So, while AI models like these may seem alarming, such advancements are necessary for the industry: they force organizations to buckle up and solidify their security measures. Most importantly, since Google launched reCAPTCHAv3 back in 2018, there has been no real progress in further evolving the security algorithm.
As the research paper rightfully notes, "Continuous progress in AI requires a simultaneous development of digital security measures. Subsequent investigations should prioritize the development of captcha systems capable of adjusting to the complexity of artificial intelligence or explore alternative methods of human verification that can withstand the progress of technology." Besides, with AI gradually reaching a point where it can interact and even talk like a human (ChatGPT's Advanced Voice Mode and Gemini Live, for example), it's not incredibly hard for it to replicate a few human-like attempts at reCAPTCHAs and fool the system altogether. Ultimately, it all comes down to using AI responsibly: instead of looking at these advancements as a threat, we should try to use them to our advantage. What do you think about the new AI model that can solve CAPTCHAs for you? Drop your thoughts in the comments below!
[2]
Robots can now convince robots that they aren't robots
As AI technology progresses, the digital environment continues to astonish us all... Arguably the clearest recent demonstration came when researchers from ETH Zurich in Switzerland managed to train an AI model to solve Google's reCAPTCHAv2 image identification tests. These reCAPTCHA tests, familiar to many web users, typically present images of traffic lights, vehicles, and other everyday objects as a security check to weed out bots and automated systems.

"Our work examines the efficacy of employing advanced machine learning methods to solve captchas from Google's reCAPTCHAv2 system. We evaluate the effectiveness of automated systems in solving captchas by utilizing advanced YOLO models for image segmentation and classification. Our main result is that we can solve 100% of the captchas, while previous work only solved 68-71%. Furthermore, our findings suggest that there is no significant difference in the number of challenges humans and bots must solve to pass the captchas in reCAPTCHAv2. This implies that current AI technologies can exploit advanced image-based captchas. We also look under the hood of reCAPTCHAv2, and find evidence that reCAPTCHAv2 is heavily based on cookie and browser history data when evaluating whether a user is human or not. The code is provided alongside this paper," the researchers stated.

The model is based on YOLO, short for "You Only Look Once," an established image-recognition architecture, and was trained specifically on images used in reCAPTCHA challenges. The results were spectacular: the model passed these tests with 100% accuracy, a huge leap from earlier attempts, which could solve only about 68-71% of the challenges. In other words, this AI can mimic human behavior well enough to consistently pass for a real person in these security tests. The finding has broad consequences for internet security as a whole.
CAPTCHA, short for "Completely Automated Public Turing test to tell Computers and Humans Apart," was devised in the first place to block bots, scrapers, and other unwanted actors. Over time, reCAPTCHA shifted from distorted-text challenges to image recognition tasks, and then to "invisible" checks that track user behavior such as mouse movement. The goal has always been the same: to differentiate humans from machines. But this new AI capability undercuts that goal. If AI can solve reCAPTCHAs perfectly, the very security system many websites rely on is undermined. The implications go beyond a basic layer of user authentication, touching spam prevention, automated content scraping, and various other forms of cyberattack.

As AI continues to outsmart CAPTCHA systems, experts suggest the solution might be to make the tests harder. Yet this creates its own challenges: tougher CAPTCHAs could exacerbate accessibility issues, especially for people with visual impairments who already struggle with the visual nature of the tests.

At least the big names in tech are not panicking yet. Google, for example, has already moved past image-based CAPTCHAs for much of its protection: in 2018 it introduced reCAPTCHAv3, which works more invisibly, analyzing user behavior such as cursor movement. Apple, meanwhile, developed Private Access Tokens as part of iOS 16, which skip the need for a CAPTCHA whenever this device-based authentication method is employed. While CAPTCHA still serves as a proverbial bar for many websites, this development shows how quickly machines adapt to security systems designed to keep them out. As device fingerprinting, behavioral analysis, and other techniques take the stage as potential successors, the ways of separating bots from humans are bound to grow increasingly sophisticated.
As AI capabilities grow, the line between human and machine activity online grows increasingly blurred, and the internet is forced to adapt to an automated reality.
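As a toy illustration of the behavioral analysis mentioned above, one crude signal is how regular a user's input timing is: scripted events tend to arrive with metronome-like spacing, while human input jitters. The threshold and the scoring logic below are invented for illustration and bear no relation to any real reCAPTCHA internals.

```python
import statistics

def timing_jitter(event_times_ms: list[float]) -> float:
    """Standard deviation of gaps between successive input events;
    values near zero suggest machine-generated, uniform timing."""
    gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
    return statistics.pstdev(gaps)

def looks_scripted(event_times_ms: list[float], threshold_ms: float = 1.0) -> bool:
    # Purely illustrative heuristic: flag suspiciously uniform timing.
    return timing_jitter(event_times_ms) < threshold_ms

bot = [0, 10, 20, 30, 40, 50]    # metronome-like gaps of exactly 10 ms
human = [0, 13, 21, 38, 44, 61]  # irregular, human-like gaps
print(looks_scripted(bot), looks_scripted(human))  # -> True False
```

Real systems combine many such signals (cursor paths, scroll dynamics, cookies, device data) precisely because any single heuristic like this is trivial for a bot author to defeat with added randomness.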
[3]
AI can now solve reCAPTCHA tests as accurately as you can
The time has come: artificial intelligence (AI) can now solve reCAPTCHAv2 tests -- those image identification quizzes that pop up as checkpoints during your browsing journey to verify you're not a bot -- and it can solve them as accurately as you can. Researchers from ETH Zurich in Switzerland have trained an AI model to solve Google's reCAPTCHAv2 image challenge. The researchers trained the model -- named YOLO, for "You Only Look Once" -- on images of the usual reCAPTCHA fodder: mostly road vehicles, traffic lights, and related environmental objects.

The specific nature of the dataset allowed YOLO to catch on easily and ultimately pass the tests 100% of the time. For context, the researchers noted that previous attempts solved only 68-71% of CAPTCHAs. That score doesn't mean the AI got every individual test right, but rather that it performed at a rate of accuracy that looks convincingly human every time. "Our findings suggest that there is no significant difference in the number of challenges humans and bots must solve to pass the captchas in reCAPTCHAv2," the report concludes.

While CAPTCHA -- which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart" -- asks users to identify altered or disrupted letters and words, reCAPTCHA often asks users to identify and categorize images. Other types of reCAPTCHA tests use real-world photos of text, which are harder for computers to decipher than rendered text; single checkbox prompts asking the user to confirm they aren't a robot; and invisible behavioral trackers that can determine personhood through dynamic data like click speed and cursor movement. So what does this new AI research mean?
This is primarily a security concern for any site that relies on CAPTCHA and reCAPTCHA, which were created to stop spam, content scrapers, and other malicious actors. Although they were already fallible before YOLO's benchmarks, CAPTCHAs are generally getting easier to crack given the sophistication of current AI models. Some think CAPTCHAs will simply have to get harder for people, which may exacerbate the tests' existing accessibility concerns for the visually impaired.

There are still other methods of distinguishing bot from human activity, though. Google is thought to use device fingerprinting -- which captures software and hardware data to tag devices with unique identifiers -- alongside tools like CAPTCHA. Apple's Private Access Tokens, released with iOS 16, were also launched as a CAPTCHA alternative. But those behind the security checks don't seem too rattled by the development. "We have a very large focus on helping our customers protect their users without showing visual challenges, which is why we launched reCAPTCHA v3 in 2018," a Google Cloud spokesperson told New Scientist. Referring to behavioral tracking methods like cursor movement, they added, "Today, the majority of reCAPTCHA's protections across 7 [million] sites globally are now completely invisible."
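For sites that do deploy reCAPTCHA, the server-side half of the check is a POST to Google's documented `siteverify` endpoint, sending the site secret and the token the widget returned and reading a JSON verdict. The sketch below only builds the request (nothing is sent here); the field names follow Google's published API, but the surrounding code is a minimal assumption-laden sketch, not production verification logic.

```python
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def build_verify_request(secret: str, token: str) -> urllib.request.Request:
    """Build (but do not send) the siteverify POST for a reCAPTCHA token."""
    body = urllib.parse.urlencode({"secret": secret, "response": token})
    return urllib.request.Request(
        VERIFY_URL, data=body.encode("utf-8"), method="POST"
    )

def is_human(secret: str, token: str) -> bool:
    """Send the request and read the JSON verdict (performs a network call)."""
    with urllib.request.urlopen(build_verify_request(secret, token)) as resp:
        result = json.load(resp)
    # v2 returns {"success": bool, ...}; v3 additionally returns a
    # 0.0-1.0 "score" that the site compares against its own threshold.
    return bool(result.get("success"))

req = build_verify_request("your-site-secret", "token-from-widget")
print(req.get_full_url())  # -> https://www.google.com/recaptcha/api/siteverify
```

The v3 "score" field is what makes the check invisible to users: instead of a pass/fail puzzle, the site gets a risk estimate and decides for itself what to do with borderline traffic.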
[4]
AI Can Best Google's Bot Detection System, Swiss Researchers Find - Decrypt
Researchers using artificial intelligence have cracked one of the most widely used CAPTCHA security systems, which are designed to keep bots off websites by determining whether a user is human. Using advanced machine learning methods, researchers from the Switzerland-based university ETH Zurich solved 100% of captchas created by Google's popular reCAPTCHAv2 product using a similar number of attempts as human users. The results, published on Sept. 13, indicate that "current AI technologies can exploit image-based captchas," the authors wrote.

"This has been coming for a while," said Matthew Green, an associate professor of computer science at the Johns Hopkins Information Security Institute. "The entire idea of captchas was that humans are better at solving these puzzles than computers. We're learning that's not true."

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart." The system used in the new study, Google's reCAPTCHAv2, tests users by asking them to select images containing objects like traffic lights and crosswalks. While the process the Swiss researchers used to defeat reCAPTCHAv2 was not fully automated and required human intervention, a fully automated process to bypass CAPTCHA systems could be right around the corner. "I would not be surprised if that comes up in the near term," Phillip Mak, a security operations center lead for a large government organization and an adjunct professor at New York University, told Decrypt.

In response to bots' improved ability to solve captchas, companies like Google, which released a third-generation reCAPTCHA product in 2018, are continually increasing the sophistication of their products. "The bots are continually getting smarter," said Forrester Principal Analyst Sandy Carielli. "What worked a few weeks ago might not work today." "The best players are continually evolving because they have to," she said.
"The evolution is in the detection models and putting forth the right responses in order to not just block bots but also make it so expensive for bots that they go elsewhere." Yet introducing challenges that are trickier for bots to solve risks adding another layer of complexity to the puzzles, making them more inconvenient for humans. Average users may "need to spend more and more time solving captchas and eventually might just give up," Mak said.

While the future of CAPTCHA as a security technology remains uncertain, some, including Gene Tsudik, professor of computer science at the University of California, Irvine, are outright pessimistic. "reCAPTCHA and its descendants should just go away," Tsudik said. "There are some other techniques that are still okay, or at least better, but not significantly. So it's still going to be an arms race." If CAPTCHA does fade, there could be serious consequences for a broad range of internet stakeholders unless cybersecurity firms come up with novel solutions, Green said.
[5]
AI just made a mockery of CAPTCHA and that's bad news for real people
Filling out CAPTCHA puzzles is tedious, but using them as (imperfect) shields against malicious bots made sense, at least until now. Artificial intelligence can now defeat those puzzles every time, according to new research from ETH Zurich. CAPTCHA, an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart," is employed across an enormous range of websites. However, the tool may need renaming given how well the AI model created by the Swiss researchers solved the security measure's object identification puzzles.

The AI puzzle solver is built on a widely used picture-processing model called You Only Look Once (YOLO). The scientists adjusted YOLO to take on Google's popular reCAPTCHAv2 version of CAPTCHA. You'll immediately recognize reCAPTCHAv2 from every time you've had to click on a car, bicycle, bridge, or traffic light to prove your humanity. With 14,000 labeled photos of streets as training data and a little time, the scientists taught YOLO to recognize those objects as well as any human. Exactly as well as a human, in fact, since the AI didn't solve every puzzle perfectly on the first try. But, as you may recall, you get more than one chance, assuming you don't totally mess up a puzzle. YOLO performed well enough that even when it made an error on one puzzle, it would make up for it and succeed on another.

Narrowing the scope of objects users need to identify - often just 13 categories, such as traffic lights, buses, and bicycles - made the system easier to integrate across websites. However, this same focus on a narrow set of object types is what made it easier for the YOLO-based model to defeat the system. According to the ETH Zurich team, the system's simplicity worked to the AI's advantage, allowing it to master the image-based challenges without much difficulty.
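The retry behavior described above has simple arithmetic behind it: if a solver clears any single challenge with probability p, and a failed challenge is simply replaced by a fresh one, the chance of getting through within k challenges is 1 - (1 - p)^k, so even an imperfect per-challenge model approaches a perfect overall pass rate quickly. The numbers below are illustrative, not the paper's measurements.

```python
def pass_within(k: int, p: float) -> float:
    """Probability of clearing at least one of k independent challenges,
    each solved with per-challenge probability p."""
    return 1.0 - (1.0 - p) ** k

# Illustrative only: a solver that clears 70% of individual challenges
# still gets through about 97.3% of sessions when allowed three tries.
for k in (1, 2, 3):
    print(k, round(pass_within(k, 0.7), 4))
```

This is why per-challenge accuracy and end-to-end pass rate are different claims: a model well short of perfect on single puzzles can still "solve 100%" of sessions once retries are allowed.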
Despite attempts to make CAPTCHA more sophisticated by incorporating factors like mouse movement and browser history (known as device fingerprinting), the AI's success rate remained intact. The fact that an AI system can now bypass CAPTCHA systems with a perfect success rate is a wake-up call for the cybersecurity community. CAPTCHA systems are a critical component of web security, designed to prevent bots from engaging in activities like spamming, creating fake accounts, or launching distributed denial-of-service (DDoS) attacks. If these systems are compromised, websites could become more vulnerable to automated attacks and other malicious activities.

The success of the YOLO model in cracking CAPTCHA systems is not an isolated case. In recent years, AI models have demonstrated increasing proficiency in tasks once thought to be exclusive to humans. Solving CAPTCHA puzzles is just the latest milestone in AI advancements that have reshaped expectations around machine learning and automated systems.

For the average person, CAPTCHA puzzles are an everyday encounter, whether logging into an online account, submitting a form, or making an online purchase. The security of these interactions hinges on CAPTCHA's ability to keep bots out. With this latest AI breakthrough, there's a real risk that CAPTCHA may no longer serve its intended purpose as an effective gatekeeper.

One immediate concern is that if CAPTCHA systems become obsolete or easy for bots to bypass, it could result in an uptick in automated activities such as spam or malicious bot-driven campaigns. For instance, CAPTCHA systems are often employed to prevent bots from creating thousands of fake accounts or automatically posting spammy content across social media platforms. If bots can easily bypass CAPTCHA, it could lead to increased fraudulent activity across websites. Additionally, as CAPTCHA technology is defeated, websites and service providers will be forced to explore more robust security mechanisms.
Some alternatives being discussed include more sophisticated behavioral analysis techniques, such as tracking user interaction patterns, and biometric-based verification systems that rely on fingerprints or facial recognition. Proving that you're not a robot isn't as easy as it used to be, but that doesn't mean you have to panic about being replaced any time soon. It's simply evidence that cybersecurity needs to account for the rapidly evolving capabilities of AI models. CAPTCHA might end up phased out in favor of different puzzles to prove your humanity. It would have to be more intensive than simply picking the right image. A security setup might have to monitor your behavior in solving a puzzle, like how fast and well you type and scroll. Or it might take a combination of multiple tests and verifications. In other words, cybersecurity will need to be stricter, though hopefully without slowing down web browsing too much. If things get really tough, perhaps we'll all have to submit a tear after watching Mufasa die in The Lion King.
Swiss researchers have created an AI system capable of solving Google's reCAPTCHA with human-like accuracy, raising concerns about the effectiveness of current bot detection methods and the potential implications for online security.
In a groundbreaking development, researchers from ETH Zurich (the Swiss Federal Institute of Technology in Zurich) have created an artificial intelligence system capable of solving Google's reCAPTCHA tests with an accuracy rate comparable to humans. This achievement has sent shockwaves through the cybersecurity community, raising questions about the effectiveness of current bot detection methods [1].
The Swiss team's AI system demonstrated remarkable proficiency in solving reCAPTCHAv2's image recognition challenges, ultimately passing 100% of the tests, with no significant difference in the number of challenges humans and bots needed to solve [2]. This level of performance is unprecedented for an AI system and highlights the rapid advancements in machine learning and computer vision technologies.
The ability of AI to bypass reCAPTCHA tests poses significant challenges for online security measures. CAPTCHAs have long been used as a primary defense against automated bots, protecting websites from spam, credential stuffing attacks, and other malicious activities [3]. With AI now capable of mimicking human behavior in solving these tests, websites and online services may need to reconsider their approach to distinguishing between human users and automated systems.
Google, the company behind reCAPTCHA, has acknowledged the research and stated that they are continuously working to improve their security measures. However, this breakthrough demonstrates the ongoing cat-and-mouse game between security developers and those seeking to bypass these protections [4].
While the AI's ability to solve CAPTCHAs is concerning from a security standpoint, it also highlights the frustration many users experience with these tests. As CAPTCHAs become increasingly complex to combat sophisticated bots, they often create accessibility issues and inconvenience for legitimate users [5].
This development underscores the urgent need for innovative approaches to online security. As AI continues to advance, traditional methods of distinguishing between humans and machines may become obsolete. Cybersecurity experts are now calling for the development of more robust, AI-resistant verification systems to ensure the integrity of online interactions and protect against potential misuse of this technology.
A team of AI researchers from ETH Zurich has developed an AI model capable of solving Google's reCAPTCHAv2 with 100% accuracy, potentially rendering this widely-used anti-spam defense obsolete and raising significant cybersecurity concerns.
4 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved