4 Sources
[1]
The new rules for AI-assisted code in the Linux kernel: What every dev needs to know
If you try to mess around with Linux code using AI, bad things will happen. After months of heated debate, Linus Torvalds and the Linux kernel maintainers have officially codified the project's first formal policy on AI-assisted code contributions. The new policy reflects Torvalds' pragmatic approach, balancing the embrace of modern AI development tools with the kernel's rigorous quality standards.

The new guidelines establish three core principles: AI is treated as just another development tool; AI-assisted patches must carry an Assisted-by tag for transparency; and the human submitter, not the AI, bears full responsibility for the contribution. The Assisted-by tag serves as both a transparency mechanism and a review flag. It enables maintainers to give AI-assisted patches the extra scrutiny they may require without stigmatizing the practice itself.

The Assisted-by attribution was forged in the fire of controversy when Nvidia engineer and prominent Linux kernel developer Sasha Levin submitted a patch to Linux 6.15 entirely generated by AI, including the changelog and tests. Levin reviewed and tested the code before submission, but he didn't disclose to the reviewers that an AI had written it. That did not go over well with other kernel developers.

The upshot of all the subsequent fuss? At the 2025 North America Open Source Summit, Levin himself began advocating for formal AI transparency rules. In July 2025, he proposed the first draft of what would become the kernel's AI policy. He initially suggested a Co-developed-by tag for AI-assisted patches. Initial discussions, both in person and on the Linux Kernel Mailing List (LKML), debated whether to use a new Generated-by tag or repurpose the existing Co-developed-by tag. Maintainers ultimately settled on Assisted-by to better reflect AI's role as a tool rather than a co-author.

The decision comes as AI coding assistants have suddenly become genuinely useful for kernel development.
As Greg Kroah-Hartman, maintainer of the Linux stable kernel, recently told me, "something happened a month ago, and the world switched," with AI tools now producing real, valuable security reports rather than hallucinated nonsense.

The final choice of Assisted-by rather than Generated-by was deliberate and influenced by three factors. First, it's more accurate: most AI use in kernel development is assistive (code completion, refactoring suggestions, test generation) rather than full code generation. Second, the tag format mirrors existing metadata tags like Reviewed-by, Tested-by, and Co-developed-by. Finally, Assisted-by describes the tool's role without implying the code is suspicious or second-class.

This pragmatic approach got a kickstart when, in an LKML conversation, Torvalds said, "I do *not* want any kernel development documentation to be some AI statement. We have enough people on both sides of the 'sky is falling' and 'it's going to revolutionize software engineering.' I don't want some kernel development docs to take either stance. It's why I strongly want this to be that 'just a tool' statement."

Despite the Linux kernel's new AI disclosure policy, maintainers aren't relying on AI-detection software to catch undisclosed AI-generated patches. Instead, they're using the same tools they've always used: deep technical expertise, pattern recognition, and good, old-fashioned code review. As Torvalds said back in 2023, "You have to have a certain amount of good taste to judge other people's code."

Why? As Torvalds pointed out, "There is zero point in talking about AI slop. Because the AI slop people aren't going to document their patches as such." The hard problem isn't obvious junk; that's easy to reject regardless of origin.
The real challenge is credible-looking patches that meet the immediate spec, match local style, compile cleanly, and still encode a subtle bug or a long-term maintenance tax. The new policy's enforcement doesn't depend on catching every violation. It depends on making the consequences of getting caught severe enough to discourage dishonesty. Ask anyone who's ever been the target of Torvalds' ire for garbage patches. Even though he's a lot more mild-tempered than he used to be, you still don't want to get on his bad side.
[2]
Linux lays down the law on AI-generated code, yes to Copilot, no to AI slop, and humans take the fall for mistakes -- after months of fierce debate, Torvalds and maintainers come to an agreement
The open-source community's long-simmering identity crisis over artificial intelligence just got a much-needed dose of pragmatism. This week, the Linux kernel project finally established a formal, project-wide policy explicitly allowing AI-assisted code contributions, provided that developers follow strict new disclosure rules. The new guidelines mandate that AI agents cannot use the legally binding "Signed-off-by" tag, requiring instead a new "Assisted-by" tag for transparency. Ultimately, the policy legally anchors every single line of AI-generated code, and any resulting bugs or security flaws, firmly onto the shoulders of the human submitting it.

The move comes after a chaotic few months in the open-source world, resolving a fierce debate that peaked in January when Intel's Dave Hansen and Oracle's Lorenzo Stoakes clashed over how aggressively the kernel should police AI tools. Linus Torvalds, in his trademark blunt fashion, ultimately shut the argument down, calling the debate over outright bans "pointless posturing." Torvalds' stance, which forms the philosophical backbone of this new policy, is remarkably straightforward: AI is just another tool. Bad actors submitting garbage code aren't going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.

It's a highly reasonable, pragmatic approach, especially when contrasted with the panic that has gripped other corners of the open-source ecosystem. Until now, major projects have taken wildly different approaches to the AI question. Over the last two years, prominent Linux distributions like Gentoo, as well as the venerable Unix-like operating system NetBSD, moved to outright ban AI-generated submissions. NetBSD maintainers famously described LLM outputs as legally "tainted" due to the murky copyright status of the models' training data.
The core of this panic revolves around the Developer Certificate of Origin (DCO). As Red Hat pointed out in a thorough analysis late last year, the DCO requires humans to legally certify that they have the right to submit their code. Because LLMs are trained on massive datasets of open-source code that often carries restrictive licenses like the GNU General Public License, developers using Copilot or ChatGPT can't genuinely guarantee the provenance of what they are submitting. Red Hat warned this could inadvertently violate open-source licenses and shatter the DCO framework entirely.

Legal headaches aside, project maintainers have also been fighting a losing battle against sheer volume. The open-source world is currently drowning in what the community has dubbed "AI slop." The creator of cURL had to close bug bounties after being flooded with hallucinated code, whiteboard tool tldraw began auto-closing external PRs in self-defense, and projects like Node.js and OCaml have seen massive, >10,000-line AI-generated patches spark existential debates among maintainers.

The cultural friction of undisclosed AI code has been even more volatile. Late last year, NVIDIA engineer and kernel maintainer Sasha Levin faced massive community backlash after it was revealed he had submitted a patch to kernel 6.15 entirely written by an LLM without disclosing it, including the changelog. While the code was functional, it included a performance regression despite being reviewed and tested. The community pushed back hard against the idea of developers slapping their names on complex code they didn't actually write, and even Torvalds admitted the patch was not properly reviewed, partially because it was not labeled as AI-generated. The Linux kernel isn't the only community dealing with the fallout of undisclosed AI assistance.
Over in the gaming sphere, the legendary (and still quite alive) Doom modding community was cleaved in two last year when Christoph "Graf Zahl" Oelckers, the longtime lead developer of the mega-popular GZDoom source port, was caught using undisclosed AI-generated patches. When community members called him out on the lack of transparency, Oelckers took a remarkably cavalier attitude, essentially telling his critics to "feel free to fork the project." The community called his bluff, resulting in the birth of the new UZDoom source port as the overwhelming majority of contributors to GZDoom fled to the new fork.

The GZDoom incident and the Sasha Levin backlash highlight exactly why the Linux kernel's new policy is so vital. Most of the developer community is less angry about the use of AI and more frustrated by the dishonesty surrounding it. By demanding an Assisted-by tag and enforcing strict human liability, the Linux kernel is attempting to strip the emotion out of the debate. Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard. The bottom line is, if the code is good, then it's good. If it's hallucinatory AI slop that breaks the kernel, the human who clicked "submit" is the one who will have to answer to Linus Torvalds. In the open-source world, that's about as strong a deterrent as you can get.
[3]
The Linux kernel now allows AI-written code, but you're on the hook for it
* Linux allows AI-generated kernel code, but the community will treat it as your own contribution.
* AI tools can't add Signed-off-by tags; you must certify the DCO, check the license, and review all code.
* If your AI code breaks the kernel, the blame stops with you; review the code carefully or face the consequences.

In a world where AI code is entrenched within people's workflows, developers of all walks of life have had to draw a line somewhere. Some projects outright ban AI code, while others fully embrace it, and each side has its advantages and disadvantages. Well, it turns out that the world of Linux has finally agreed upon where AI code fits within kernel development. It's totally fine if you submit AI-generated code to the kernel; however, if something goes wrong with it, it's on your head. No pointing the finger at Claude Code this time.

Linux kernel contributors can use AI-generated code, but with caution: it's essentially treated as the contributor's own code. As spotted by the folks over at Hacker News, there's new documentation over on the Linux GitHub project for coding assistants. The document reveals that people can use AI-generated code, as long as it complies with the guidelines for submitting to the Linux kernel, fits within the license Linux uses, and is attributed to the bot correctly. So, does that mean you can attach your favorite LLM to the kernel, let it code away, and head out for the day? Well, not quite. While AI agents can now contribute code to the kernel, the documentation makes it very clear that, if anything does go wrong, you cannot simply get out of trouble by blaming your assistant: AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).
The human submitter is responsible for:

* Reviewing all AI-generated code
* Ensuring compliance with licensing requirements
* Adding their own Signed-off-by tag to certify the DCO
* Taking full responsibility for the contribution

That last bullet point is the real bombshell. You are, essentially, submitting the AI's code as if it were your own. If it ends up being a buggy mess and Linus himself gets mad, your head is going on the block. So, feel free to use AI code when contributing to the Linux kernel, but make sure you understand what, exactly, it's coding, or else you'll likely not be contributing to Linux for much longer.
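Under these rules, a compliant patch carries both trailers: the AI disclosure and the human's legally binding sign-off. A hypothetical sketch of what such a commit message could look like (the subject line, names, and tool attribution are illustrative, not taken from the kernel docs):

```
foo: fix out-of-bounds read in example_parse()

Harden the parser against truncated input buffers.

Assisted-by: Claude Code (Anthropic)
Signed-off-by: Jane Developer <jane@example.com>
```

Note that the Signed-off-by line belongs to the human alone; per the documentation, an AI agent must never add it.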
[4]
Linux rules on using AI-generated code - Copilot is OK, but humans must take 'full responsibility for the contribution'
* Linux positions AI as an assistance tool, not as a developer replacement
* Human contributors are still fully responsible for their submissions
* Transparency tagging will reveal where AI is used

Linux has confirmed that the use of generative AI to support coding is acceptable, but has established several requirements to ensure high-quality output. For example, code must be compatible with GPL-2.0-only and it must include proper SPDX identifiers. More importantly, though, while AI assistants like Microsoft Copilot may be accepted in the development process, human developers ultimately remain responsible for the output: reviewing code, ensuring licence compliance, and taking full accountability (as before).

Linux says AI is fine, but humans are still accountable. The move positions AI tools as assistants rather than human replacements, with AI agents barred from signing off code and only humans permitted to certify the Developer Certificate of Origin. A new 'Assisted-by' tag will be added for transparency, used to disclose AI involvement and detailing the model and tools used. "When AI tools contribute to kernel development, proper attribution helps track the evolving role of AI in the development process," the GitHub page reads.

Confirmation from one of the biggest open-source projects on the planet comes after months of internal debate. Finally, a sensible middle ground seems to have been reached, whereby AI assistance is broadly accepted, but 'AI slop' is not. The decision to implement transparency tagging is also noteworthy, with Linux founder Linus Torvalds previously dismissing total AI bans as unrealistic. Instead, liability for security flaws, copyright issues, and so on sits with the contributors personally. As for the move's impact on the industry, Linux has become one of the first and most influential projects to establish boundaries for AI in such a way.
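The SPDX requirement refers to the kernel's machine-readable license declarations: each source file opens with an SPDX-License-Identifier comment naming its license. A minimal sketch of how that looks at the top of a hypothetical C source file, following the kernel convention of putting the identifier on the first line:

```c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * example_widget.c - hypothetical driver source; everything below
 * this header is ordinary kernel C code
 */
```

Tooling can then verify licensing compliance mechanically by scanning for these identifiers rather than parsing free-form license boilerplate.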
Looking ahead, we could see more companies and projects adopt similar rules, while others may forge their own way, but Linux has certainly kickstarted a broader discussion about where AI fits in the development lifecycle.
After months of heated debate, Linus Torvalds and Linux kernel maintainers have codified the project's first formal policy on AI-assisted code contributions. The new guidelines introduce an Assisted-by tag for transparency while treating AI as just another development tool. Developers using AI must take full responsibility for the contribution, including any bugs or security flaws.
The Linux kernel project has officially codified its first formal policy on AI-assisted code contributions, ending months of fierce debate within the open-source community [1]. Linus Torvalds and kernel maintainers have established clear guidelines that position AI as a development tool rather than a replacement for human developers, balancing modern AI capabilities with the kernel's rigorous quality standards [2].
Source: XDA-Developers
The new policy introduces the Assisted-by tag as a transparency mechanism, requiring developers to disclose when AI tools like Microsoft Copilot contribute to their code [4]. This tag serves dual purposes: enabling kernel maintainers to apply extra scrutiny to AI-assisted patches while avoiding stigmatization of the practice itself [1].
Source: ZDNet
The policy makes developer accountability its cornerstone. AI agents cannot add the legally binding "Signed-off-by" tag; only humans can certify the Developer Certificate of Origin [3]. The human submitter must review all AI-generated code, ensure licensing compliance, and take full responsibility for the contribution, including any resulting bugs or security flaws [2]. Code must be compatible with GPL-2.0-only licensing and include proper SPDX identifiers [4]. This approach legally anchors every line of AI-generated code onto the shoulders of the person submitting it, eliminating any possibility of blaming the AI for mistakes [3].

The Assisted-by tag emerged from controversy when Nvidia engineer and prominent kernel developer Sasha Levin submitted a patch to Linux 6.15 entirely generated by AI, including the changelog and tests, without disclosing this to reviewers [1]. While Levin reviewed and tested the code, the lack of transparency provoked massive community backlash [2]. The code included a performance regression despite being reviewed, and Torvalds admitted the patch wasn't properly scrutinized partly because it wasn't labeled as AI-generated [2].

At the 2025 North America Open Source Summit, Levin himself began advocating for formal AI transparency rules, proposing the first draft in July 2025 [1]. Initial discussions on the Linux Kernel Mailing List debated whether to use Generated-by or Co-developed-by tags before settling on Assisted-by to accurately reflect AI's role as a tool [1].
Source: Tom's Hardware
Linus Torvalds took a characteristically blunt stance during the debate, stating he didn't want kernel documentation to take either the "sky is falling" or "it's going to revolutionize software engineering" position [1]. His pragmatism shaped the policy's philosophical backbone: AI is just another tool, and bad actors submitting garbage code won't read documentation anyway [2].

Kernel maintainers aren't relying on AI-detection software to catch undisclosed AI-generated code. Instead, they're using deep technical expertise, pattern recognition, and traditional code review [1]. As Torvalds noted in 2023, "You have to have a certain amount of good taste to judge other people's code" [1]. He pointed out there's "zero point in talking about AI slop" because people creating it won't document their patches as such [1].
The policy arrives as the open-source community struggles with an identity crisis over artificial intelligence [2]. Major projects have taken wildly different approaches: the Gentoo Linux distribution and the NetBSD operating system moved to outright ban AI-generated submissions, with NetBSD maintainers describing Large Language Model outputs as legally "tainted" due to murky copyright issues surrounding training data [2].

The open-source world has been drowning in what the community calls "AI slop." The creator of cURL closed bug bounties after being flooded with hallucinated code, whiteboard tool tldraw began auto-closing external pull requests, and projects like Node.js and OCaml faced massive AI-generated patches exceeding 10,000 lines [2].

The decision reflects a shift in AI capabilities. Greg Kroah-Hartman, maintainer of the Linux stable kernel, recently noted that "something happened a month ago, and the world switched," with AI tools now producing genuine, valuable security reports rather than hallucinated nonsense [1]. Most AI use in kernel development is assistive (code completion, refactoring suggestions, test generation) rather than full code generation [1].

The choice of Assisted-by over Generated-by was deliberate, influenced by three factors: accuracy in describing assistive rather than generative use, consistency with existing metadata tags like Reviewed-by and Tested-by, and avoiding implications that AI-assisted code is suspicious or second-class [1].

As one of the most influential open-source projects, Linux's approach could establish industry standards for AI-assisted code contributions [4]. The policy strips emotion from the debate by acknowledging reality: developers will use AI tools to code faster, and attempting to ban specific software is impractical [2]. The enforcement strategy doesn't depend on catching every violation but on making consequences severe enough to discourage dishonesty [1].

Summarized by Navi
12 Jan 2026•Technology
