Curated by THEOUTPOST
On Thu, 21 Nov, 12:06 AM UTC
4 Sources
[1]
Google's AI-powered bug hunting tool finds a host of concerning open source security flaws
Google sees this as a major milestone in automated bug discovery

Google has found 26 vulnerabilities in different open source code repositories, including a medium-severity flaw in "the critical OpenSSL library that underpins much of internet infrastructure." This wouldn't be much of a news story (Google has helped find thousands of bugs over the years) if not for the method: the flaws were uncovered by its AI-powered fuzzing tool, OSS-Fuzz.

"These particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets," Google explained in a blog post.

Among these 26 flaws is an OpenSSL bug tracked as CVE-2024-9143. It has a severity score of 4.3 and is described as an out-of-bounds memory write bug that can crash an app or, in rare cases, allow remote code execution (RCE). OpenSSL has since been updated to versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl to address the flaw.

To make matters even more interesting, Google said the vulnerability had most likely been present for two decades "and wouldn't have been discoverable with existing fuzz targets written by humans."

The bug discovery came as a result of two major improvements, the company explained. The first is the ability to automatically generate more relevant context in the prompts, which makes the LLM "less likely to hallucinate the missing details in its response." The second is the LLM's ability to emulate a typical developer's entire workflow, including writing, testing, and iterating on the fuzz target, as well as triaging the crashes found. "Thanks to this, it was possible to further automate more parts of the fuzzing workflow. This additional iterative feedback in turn also resulted in higher quality and greater number of correct fuzz targets."
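For readers unfamiliar with the term, a fuzz target is simply a function that accepts arbitrary bytes and exercises the API under test with them. A minimal sketch in Python, using the standard-library JSON parser as a stand-in for the library being fuzzed (the real OSS-Fuzz targets for projects like OpenSSL are C/C++ libFuzzer harnesses, not Python):

```python
import json

def fuzz_target(data: bytes) -> None:
    """Feed arbitrary bytes to the API under test (here: json.loads).

    A clean parse failure is expected for malformed input; any other
    uncaught exception, hang, or crash is what the fuzzer reports as a bug.
    """
    try:
        json.loads(data.decode("utf-8", errors="replace"))
    except (json.JSONDecodeError, RecursionError):
        pass  # rejected input is fine; anything else would be a finding
```

A fuzzing engine such as libFuzzer would call a function like this millions of times with mutated inputs, saving any input that makes it misbehave.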
[2]
Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects
Google has revealed that its AI-powered fuzzing tool, OSS-Fuzz, has been used to help identify 26 vulnerabilities in various open-source code repositories, including a medium-severity flaw in the OpenSSL cryptographic library.

"These particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets," Google's open-source security team said in a blog post shared with The Hacker News.

The OpenSSL vulnerability in question is CVE-2024-9143 (CVSS score: 4.3), an out-of-bounds memory write bug that can result in an application crash or remote code execution. The issue has been addressed in OpenSSL versions 3.3.3, 3.2.4, 3.1.8, 3.0.16, 1.1.1zb, and 1.0.2zl.

Google, which added the ability to leverage large language models (LLMs) to improve fuzzing coverage in OSS-Fuzz in August 2023, said the vulnerability has likely been present in the codebase for two decades and that it "wouldn't have been discoverable with existing fuzz targets written by humans."

Furthermore, the tech giant noted that the use of AI to generate fuzz targets has improved code coverage across 272 C/C++ projects, adding more than 370,000 lines of newly covered code.

"One reason that such bugs could remain undiscovered for so long is that line coverage is not a guarantee that a function is free of bugs," Google said. "Code coverage as a metric isn't able to measure all possible code paths and states -- different flags and configurations may trigger different behaviors, unearthing different bugs."

These AI-assisted vulnerability discoveries are also made possible by the fact that LLMs are proving adept at emulating a developer's fuzzing workflow, thereby allowing for more automation. The development comes as the company revealed earlier this month that its LLM-based framework, Big Sleep, facilitated the detection of a zero-day vulnerability in the SQLite open-source database engine.
In tandem, Google has been working to transition its own codebases to memory-safe languages such as Rust, while also retrofitting mechanisms to address spatial memory safety vulnerabilities - which occur when a piece of code can access memory outside its intended bounds - within existing C++ projects, including Chrome.

This includes migrating to Safe Buffers and enabling hardened libc++, which adds bounds checking to standard C++ data structures in order to eliminate a significant class of spatial safety bugs. Google further noted that the overhead incurred by the change is minimal (an average 0.30% performance impact).

"Hardened libc++, recently added by open source contributors, introduces a set of security checks designed to catch vulnerabilities such as out-of-bounds accesses in production," Google said. "While C++ will not become fully memory-safe, these improvements reduce risk [...], leading to more reliable and secure software."
[3]
Google's AI bug hunters sniff out two dozen-plus code flaws
OSS-Fuzz is making a strong argument for LLMs in security research

Google's OSS-Fuzz project, which uses large language models (LLMs) to help find bugs in code repositories, has now helped identify 26 vulnerabilities, including a medium-severity flaw in the widely used OpenSSL library.

The OpenSSL bug (CVE-2024-9143) was reported in mid-September and fixed a month later. Some, but not all, of the other vulnerabilities have also been addressed.

Google believes its AI-driven fuzzing tool - which injects unexpected or random data into software to catch errors - found something that's unlikely to have ever been caught by human-driven fuzzing. "As far as we can tell, this vulnerability has likely been present for two decades and wouldn't have been discoverable with existing fuzz targets written by humans," said Oliver Chang, Dongge Liu, and Jonathan Metzman of Google's open source security team in a blog post.

If that's correct, security research henceforth really ought to involve AI for fear that threat actors have already done so - and found flaws that would be invisible to the AI-deprived. Another example cited by Google's security team, a bug in the cJSON project, is similarly said to have been spotted by AI and missed by a human-written fuzzing test. So the value of AI assistance appears to be substantial for security professionals.

The Chocolate Factory earlier this month announced that, for the first time, a separate LLM-based bug hunting tool called Big Sleep had identified a previously unknown exploitable memory-safety flaw in real software. And in October, Seattle-based Protect AI released an open source tool called Vulnhuntr that uses Anthropic's Claude LLM to find zero-day vulnerabilities in Python-based projects.

The OSS-Fuzz team introduced AI-based fuzzing in August 2023 in an effort to fuzz a greater portion of codebases - that is, to improve fuzzing coverage, the amount of code tested.
The process of fuzzing involves drafting a fuzz target - "a function that accepts an array of bytes and does something interesting with these bytes using the API under test" - then dealing with potential compilation issues, running the fuzz target to see how it performs, making corrections, and repeating the process to see whether crashes can be traced to specific vulnerabilities.

Initially, OSS-Fuzz's LLM handled the first two steps:

1) Drafting an initial fuzz target; and
2) Fixing any compilation issues that arise.

Then, at the beginning of 2024, Google released the framework as an open source project and has been working to improve how the software handles the subsequent steps:

3) Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues;
4) Running the corrected fuzz target for a longer period of time, and triaging crashes to determine their root causes; and
5) Fixing vulnerabilities.

According to Google, its LLM can now handle the first four steps of the developer's fuzzing process, and the plan is to tackle the fifth shortly. "The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability," said Chang, Liu, and Metzman. "We don't have anything we can share here today, but we're collaborating with various researchers to make this a reality and look forward to sharing results soon." ®
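The five-step workflow can be sketched as a simple driver loop. Everything here is illustrative: the four callables stand in for the LLM calls and the build/run infrastructure, which in the real system are far more involved.

```python
def fuzzing_workflow(draft_target, fix_compile, run_and_collect, triage,
                     max_build_attempts=5):
    """Illustrative sketch of steps 1-4 of the LLM fuzzing workflow.

    draft_target()      -> str        step 1: draft an initial fuzz target
    fix_compile(target) -> (ok, str)  step 2: repair compilation issues
    run_and_collect(t)  -> list       step 3: run the target, gather crashes
    triage(crash)       -> str        step 4: determine a crash's root cause
    (Step 5, generating a suggested patch, is the stated future goal.)
    """
    target = draft_target()
    for _ in range(max_build_attempts):      # iterate until the target builds
        ok, target = fix_compile(target)
        if ok:
            break
    else:
        return None                          # never compiled; give up on it
    # Run the working target and root-cause every crash it produced.
    return [triage(crash) for crash in run_and_collect(target)]
```

With toy stand-ins for the four callables, the loop wires together cleanly; in production each stage involves prompting the model, building, and long fuzzing runs.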
[4]
Google Uses AI to Discover 20-Year-Old Software Bug
Google recently used an AI program to help it discover a software bug that had persisted in an open-source software project for the past two decades.

The software bug is among 26 vulnerabilities Google recently identified with the help of a ChatGPT-like AI tool, the company said in a blog post on Wednesday. Google discovered the vulnerabilities through an approach called "fuzz testing," which involves feeding a software program random data to see if it'll crash, and then diagnosing the problem. Last year, the company started an effort to use large language models to write the fuzz testing code, offloading work from humans who previously had to write the fuzz tests by hand.

"Our approach was to use the coding abilities of an LLM to generate more fuzz targets," Google's Open Source Security Team wrote in Wednesday's blog post. "LLMs turned out to be highly effective at emulating a typical developer's entire workflow of writing, testing, and iterating on the fuzz target, as well as triaging the crashes found."

Since then, Google has applied the AI tool to fuzz testing across 272 software projects, which led it to discover the 26 vulnerabilities, including a 20-year-old bug found in OpenSSL, which is widely used to provide encryption and server authentication for internet connections. "We reported this vulnerability on September 16 and a fix was published on October 16. As far as we can tell, this vulnerability has likely been present for two decades and wouldn't have been discoverable with existing fuzz targets written by humans," researchers added.

The 20-year-old bug, tracked as CVE-2024-9143, involves the software triggering an "out-of-bounds memory access," which can cause the program to crash or, in rare cases, execute rogue code. Fortunately, the bug is rated low severity due to the minimal risk of the out-of-bounds access executing a dangerous process.
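The "feed it random data and watch for crashes" idea can be shown with a toy harness. `buggy_parse` below is invented for illustration: it hides a bug that only inputs with a specific first byte trigger, loosely mimicking a flaw that surfaces only under particular inputs or configurations.

```python
import random

def buggy_parse(data: bytes) -> int:
    """Toy stand-in for code under test, hiding a lurking bug:
    inputs starting with byte 0xFF index past the end of the buffer."""
    if data and data[0] == 0xFF:
        return data[len(data)]  # IndexError: the hidden bug
    return len(data)

def fuzz(parse, trials=5000, seed=0):
    """Feed random byte strings to `parse`; return the inputs that crash it."""
    rng = random.Random(seed)   # seeded for reproducible runs
    crashers = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 9)))
        try:
            parse(data)
        except Exception:
            crashers.append(data)  # record the crashing input for triage
    return crashers
```

Real fuzzers such as libFuzzer are coverage-guided rather than purely random, mutating inputs that reach new code paths, but the principle is the same.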
Still, Google theorizes the bug went undiscovered because the specific code was presumed to be thoroughly tested and vetted. "Code coverage as a metric isn't able to measure all possible code paths and states -- different flags and configurations may trigger different behaviors, unearthing different bugs," researchers said. "These examples underscore the need to continue to generate new varieties of fuzz targets even for code that is already fuzzed."

Going forward, Google's Open Source Security Team is working to make the LLMs suggest a patch for any bugs found during the fuzzing process. Another goal is "to get to a point where we're confident about not requiring human review," the team said. "This will help automatically report new vulnerabilities to project maintainers."

The effort joins another Google AI project, dubbed "Big Sleep," which also involves finding security vulnerabilities by using LLMs to mimic the workflow of a human security researcher. Earlier this month, the company said Big Sleep was smart enough to discover a previously unknown and exploitable bug in SQLite, an open-source database engine.
Google's AI-enhanced fuzzing tool, OSS-Fuzz, has discovered 26 vulnerabilities in open-source projects, including a long-standing flaw in OpenSSL. This breakthrough demonstrates the potential of AI in automated bug discovery and software security.
Google has announced a significant milestone in automated vulnerability detection, with its AI-powered fuzzing tool OSS-Fuzz uncovering 26 vulnerabilities in various open-source code repositories [1]. This achievement highlights the growing potential of artificial intelligence in enhancing software security and bug discovery processes.
Among the discovered vulnerabilities, a particularly noteworthy find is a medium-severity flaw in the widely-used OpenSSL cryptographic library. Tracked as CVE-2024-9143, this vulnerability is an out-of-bounds memory write bug with a CVSS score of 4.3 [2]. The flaw could potentially lead to application crashes or, in rare cases, allow attackers to execute remote code.
What makes this discovery remarkable is that the vulnerability had likely been present in the OpenSSL codebase for two decades, eluding detection by traditional human-written fuzz targets [1]. This underscores the power of AI-driven tools in identifying long-standing, hidden flaws in critical software infrastructure.
Google's OSS-Fuzz project, which introduced AI capabilities in August 2023, has demonstrated significant improvements in code coverage across 272 C/C++ projects, adding more than 370,000 lines of newly covered code to the fuzzing process [2]. The AI-powered tool excels at emulating a developer's entire fuzzing workflow, including writing, testing, and iterating on fuzz targets, as well as triaging crashes [3].
The success of OSS-Fuzz can be attributed to two key improvements in its AI capabilities: first, the ability to automatically generate more relevant context in prompts, making the LLM less likely to hallucinate missing details in its responses; and second, the LLM's ability to emulate a typical developer's entire workflow, from writing and testing fuzz targets to triaging the crashes found.
These advancements have enabled the tool to discover vulnerabilities that might have remained hidden using traditional methods.
The discovery of these vulnerabilities, especially the long-standing OpenSSL flaw, raises important questions about the future of software security. It suggests that AI-assisted vulnerability detection may become essential for thorough code analysis, as it can uncover issues that human-written tests might miss [4].
Google's open-source security team is now working towards fully automating the entire fuzzing workflow, including the generation of suggested patches for discovered vulnerabilities [3]. This development could significantly accelerate the process of identifying and addressing security flaws in open-source projects.
Google's success with OSS-Fuzz is part of a larger trend in applying AI to security research. The company recently announced that another LLM-based tool, Big Sleep, had identified a previously unknown exploitable flaw in the SQLite database engine [2]. Additionally, other organizations are exploring similar approaches, such as Protect AI's open-source tool Vulnhuntr, which uses Anthropic's Claude LLM to find zero-day vulnerabilities in Python projects [3].
As AI continues to demonstrate its effectiveness in uncovering software vulnerabilities, it is likely to become an indispensable tool for security professionals. The ability of AI-powered tools to discover flaws that have evaded human detection for years suggests that integrating these technologies into security practices may soon be crucial for maintaining robust software security in an increasingly complex digital landscape.