4 Sources
[1]
Microsoft, OpenAI & Others Pony Up $12.5M To Strengthen Open-Source Security
"As the security landscape grows more complex, advances in AI are dramatically increasing the speed and scale of vulnerability discovery in open source software. Maintainers are now facing an unprecedented influx of security findings, many of which are generated by automated systems, without the resources or tooling needed to triage and remediate them effectively. Through this investment, Alpha-Omega and OpenSSF will work directly with maintainers and their communities to make emerging security capabilities accessible, practical, and aligned with existing project workflows. The effort will support sustainable strategies that help maintainers manage growing security demands while improving the overall resilience of the open source ecosystem."
[2]
Linux Foundation wants to shield FOSS devs from AI bug slop
Half a dozen Big Tech players have together delivered $12.5 million in grants towards a project that aims to help maintainers of open source projects to cope with AI slop bug reports. "As the security landscape grows more complex, advances in AI are dramatically increasing the speed and scale of vulnerability discovery in open source software," explains a Linux Foundation announcement about the initiative. "Maintainers are now facing an unprecedented influx of security findings, many of which are generated by automated systems, without the resources or tooling needed to triage and remediate them effectively." Anthropic, AWS, GitHub, Google, Microsoft, and OpenAI have decided they want to help, by collectively chipping in $12.5 million to the project. Alpha-Omega, the Linux Foundation project that works to improve the security of open source supply chains, will run the new effort alongside the Open Source Security Foundation (OpenSSF). We're told the two organizations "work directly with maintainers and their communities to make emerging security capabilities accessible, practical, and aligned with existing project workflows." Further: "The effort will support sustainable strategies that help maintainers manage growing security demands while improving the overall resilience of the open source ecosystem." The Linux Foundation's announcement includes a canned quote from Greg Kroah-Hartman of the Linux kernel project, which opens "Grant funding alone is not going to help solve the problem that AI tools are causing today on open source security teams." Fear not, gentle reader, GKH didn't dump on this idea. The quote continues: "OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving." There's no word on exactly what this project will do, or when it will happen. 
The problem of AI-generated bug reports overwhelming FOSS maintainers is not new. The Python Software Foundation complained about it in late 2024. More recently, the maintainer of a popular open-source data transfer tool ended the project's bug bounty program due to difficulties caused by a flood of AI-generated contributions. Even Microsoft's GitHub has pondered doing something about a torrent of low-quality, AI-generated contributions to FOSS projects. ®
[3]
Our latest investment in open source security for the AI era
Billions of people rely on an Internet built on open source software -- which is software anyone can use -- but that reliance only works if the software beneath it is secure. That's why for over 20 years, Google has championed open source by supporting the developers who secure it -- fueling initiatives like Google Summer of Code and bug-hunting programs that discover and fix more vulnerabilities. Today, as a founding member of the Linux Foundation's Alpha-Omega Project, we're pledging $12.5 million collectively with Amazon, Anthropic, Microsoft/GitHub and OpenAI to further invest in the stability and security of the open source community. The funding, managed by Alpha-Omega and OpenSSF, will help maintainers stay ahead of a new generation of AI-driven threats, move security beyond vulnerability discovery to actually deploying fixes, and put advanced security tools directly into maintainers' hands, to turn a flood of AI-generated findings into fast action. In addition to its industry-wide commitments, Google is dedicated to helping the open source community to outpace evolving threats and tip the scales in favor of the defenders by providing advanced AI tools for wider use. Internally, Big Sleep and CodeMender, both AI-powered tools from Google DeepMind, have already shown incredible success in helping us protect our own systems, demonstrating that AI can autonomously find and fix deep, exploitable vulnerabilities in systems as complex as the Chrome browser. We're also extending research initiatives like Sec-Gemini to open source projects (interest form). These breakthroughs show the transformational potential of AI to secure the wider open source ecosystem. Open source is the backbone of the modern web, and we're proud to support the maintainers who secure it to move faster, stay safer and continue building the future.
[4]
Open source maintainers are drowning in AI-generated security noise - $12.5 million is being deployed to throw them a lifeline
There's an image of a unicorn, galloping through a pastel sky with rainbows streaming behind it, carrying a wooden crate on its back labelled "Software Supply Chain." Michael Winser, co-founder of Alpha-Omega, shared it with me during a group call with Steve Fernandez, General Manager of OpenSSF this week - from a LinkedIn post he wrote back in February last year. It is, as images go, extremely funny - and it skewers something genuinely serious. The idea that the code your entire operation runs on just sort of arrives, pristine and trustworthy, as if conjured from cloud and magic, is one of enterprise software's most dangerous collective delusions. The Linux Foundation announced this week that $12.5 million in grant funding has been committed by Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to address open source software security - specifically through Alpha-Omega and the Open Source Security Foundation (OpenSSF). The numbers are significant, as is the coalition of funders. But if you walk away from this announcement thinking it's primarily about money, you've missed the point. The organizing principle here isn't capital - it's maintainers, and everything Winser and Fernandez describe flows from that. The easy version of this story is that AI tools are finding more vulnerabilities in open source code, and now the industry is funding efforts to fix them. That's true, but it flattens the fundamental issue. The real problem is what happens in the middle - specifically, what happens to the maintainers who receive the output of these AI-powered vulnerability searches. Winser is blunt about it. The friction of discovering and reporting a potential vulnerability has dropped to near zero. Security researchers who previously needed domain expertise can now prompt their way to a list of findings. 
What they can't do - and what AI can't do - is understand the tribal knowledge that sits inside a project: Maintainers are seeing an influx of vulnerability reports that often lack context or awareness of the project. Even if they come with a PR fix, it's isolated to the specific thing and doesn't take into account the broader tribal knowledge that that project has culturally maintained over long periods of time about the right way to do things - the knowledge that ensures the code keeps working across a variety of things. Fernandez explains the human reality: A lot of the world is overwhelmed. We use the term AI slop - some of it's good, some of it's not. It's just a lot coming at people. Winser describes the coping mechanism that's emerging as a "tortoise shell defense strategy" - heads down, ignore everything, try to survive. This posture makes things worse, because the signal gets lost along with the noise. And the trajectory is only heading one way: When the next version of an AI hits the market, attackers are now equipped with a zero-day machine. Attackers have only to find one thing that works and they win - whereas maintainers have to ignore all the noise and get down to the things that matter. The objectives Winser and Fernandez are working toward - still being refined, as Winser openly notes - operate at three levels. The first is getting AI tooling, frameworks, and curated security prompts into the hands of critical maintainers and ecosystems so they can find and fix vulnerabilities on their own terms. The word "maintainer-centric" comes up repeatedly, and it's the thread that runs through everything both men say: Everything we're doing has to be maintainer-centric. There have been a lot of industry initiatives over the past few years that have focused on 'we use this stuff in industry, and it's not secure - make it secure so we don't have to worry about it.' That's essentially writing checks on other people's time.
We are trying to put maintainers in a place where they feel empowered and supported in doing the things that they want to do. Fernandez describes how OpenSSF's community infrastructure complements this: We're really trying to do both things at once: immediate help, and building the medium and long-term solutions, processes, and communities so that these maintainers will have a place to go if there is a big issue. The second objective is building enough trust in the tooling itself that maintainers can start to accept automated contributions from known, vetted sources - rather than assessing everything from a thousand strangers, any one of whom might be a wolf in sheep's clothing. The third, and most ambitious, is scale: 100,000-plus maintainers across the open source community. Package registries feature heavily in this thinking as leverage points. Winser points to Seth Larson, hired as a security engineer in residence at the Python Software Foundation through Alpha-Omega funding, whose influence has rippled across the entire Python ecosystem and back into OpenSSF norms. That multiplier effect is the model. He's also honest about the limits of what's currently known: This is like the Y2K problem, but without the same clarity of problem, solution, or date. We're still building the solutions while running the train at full tilt. Winser's summary of what this initiative is really about cuts through the funding headline: The most important currencies moving forward are trust and attention. Maintainers have limited heartbeats. The work of security triage is fundamentally a problem of deciding what deserves attention - and right now, the signal-to-noise ratio is breaking that decision-making process. 
The goal of the tooling and the OpenSSF working groups is to create trusted networks where a maintainer knows what they're hearing has been filtered through people who understand their context: When you can start to trust a smaller set of people - when you go to a working group at the OpenSSF - you can have conversations that really help you feel safe about the risks. And that network reaches out when there is a crisis: 'I just had this happen to me. How do I handle that?' There's a network of people who have dealt with this before. Fernandez, who came to this role after 30 years as CIO and CTO at organizations including Coca-Cola, L'Oreal, and AIG, is equally direct about what this means for enterprises: Open source isn't something off to the side anymore. It's the engine of how your operations are running. If we don't address this together, vulnerabilities don't care how they get into the code. It's about addressing vulnerabilities holistically and working together. The ask is grounded in Winser's Three Fs framework - Fix, Fork, or Forgo - which starts with a complete inventory of your dependencies and active decisions about each. As he puts it: Open source projects and corporations were all just as bad as each other in treating their upstream supply as if it came down on the back of a unicorn. If you have vendors providing significant parts of your business, you'd have vendor relationships. Why aren't you doing that with your upstream? Everything in your supply chain has access to your build and your runtime. Get engaged. Control your future. The overlap between open source security, AI governance, maintainer sustainability, and supply chain risk sits directly in the middle of almost everything I think matters in enterprise technology right now. What gives me more confidence than the funding figure is the honesty of it. 
These are people who understand the problem deeply enough to resist the urge to oversell the answer - and that, in a space drowning in AI hype, is rarer than it should be. The unicorn has left the building. It's time to deal with what's actually in the crate.
The Linux Foundation has secured $12.5 million from Microsoft, OpenAI, Google, AWS, Anthropic, and GitHub to address a growing crisis: open-source software maintainers are drowning in AI-generated security bug reports. While AI tools accelerate vulnerability discovery, they've created an overwhelming flood of findings that maintainers lack resources to properly triage, threatening the resilience of the open-source ecosystem that billions depend on.
The Linux Foundation announced a $12.5 million grant from Microsoft, OpenAI, Google, AWS, Anthropic, and GitHub to tackle an escalating problem threatening open-source security: maintainers overwhelmed by AI-generated vulnerability reports [1]. As AI tools dramatically accelerate vulnerability discovery, open-source software maintainers now face an unprecedented influx of security findings generated by automated systems, often without the resources or tooling needed to triage and remediate them effectively [2].
Alpha-Omega and OpenSSF will manage the funding, working directly with maintainers and their communities to make emerging security capabilities accessible, practical, and aligned with existing project workflows [3]. The initiative aims to support sustainable strategies that help maintainers manage growing security demands while improving the overall resilience of the open-source ecosystem.

Michael Winser, co-founder of Alpha-Omega, describes the crisis in stark terms: the friction of discovering and reporting a potential vulnerability has dropped to near zero, but maintainers receive AI-generated security bug reports that lack context or awareness of the project's tribal knowledge [4]. Steve Fernandez, General Manager of OpenSSF, calls it "AI slop" - some good, some not, but overwhelming in volume [4].
This isn't theoretical. The Python Software Foundation complained about AI-generated bug reports in late 2024, and the maintainer of a popular open-source data transfer tool recently ended its bug bounty program due to difficulties caused by a flood of AI-generated contributions [2]. Winser warns that maintainers are adopting a "tortoise shell defense strategy" - heads down, ignoring everything to survive - which means legitimate security findings get lost in the noise.

The funding will help maintainers stay ahead by putting advanced AI tools directly into their hands to turn a flood of AI-generated findings into fast action [3]. Google DeepMind's Big Sleep and CodeMender have already shown success in autonomously finding and fixing deep, exploitable vulnerabilities in systems as complex as the Chrome browser [3].

The initiative operates at three levels: getting AI tooling, frameworks, and curated security prompts into the hands of critical maintainers; building trust in automated contributions from vetted sources; and scaling support to over 100,000 maintainers across the open-source community [4]. Package registries feature as leverage points, with the model being Seth Larson's work at the Python Software Foundation, whose influence has rippled across the entire Python ecosystem.
Greg Kroah-Hartman of the Linux kernel project emphasized that "grant funding alone is not going to help solve the problem that AI tools are causing," but noted that OpenSSF has the active resources needed to support projects that will help overworked maintainers with security triage and processing [2].

The stakes extend beyond individual projects. Billions of people rely on an Internet built on open-source software, and that reliance only works if the software supply chain beneath it is secure [3]. Winser frames the asymmetry bluntly: attackers only need to find one thing that works to win, whereas maintainers must filter all the noise and focus on what matters [4]. When the next AI version hits the market, attackers gain what amounts to a zero-day machine.

The initiative's organizing principle isn't capital but maintainers themselves. Everything must be maintainer-centric, moving security beyond vulnerability discovery to actually deploying fixes [4]. The signal-to-noise ratio in security reporting has collapsed, and this funding aims to restore it before the defensive posture hardens into something that blocks both threats and legitimate contributions alike.