3 Sources
[1]
Anthropic says its leak-focused DMCA effort unintentionally hit legit GitHub forks
An Anthropic-backed DMCA effort to remove its recently leaked Claude Code client source code from GitHub this week resulted in the accidental removal of many legitimate forks of its official public code repository. While that overzealous takedown has now been reversed, Anthropic still faces an extreme uphill battle in limiting the spread of its recently leaked code.

The DMCA notice that GitHub received late Tuesday focuses on a repository containing the leaked source code originally posted by GitHub user nirholas (archived here) and nearly 100 specifically named forks of that repository. In a note appended to that request, though, GitHub said it had acted to take down a network of 8,100 similar forked repositories because "the submitter alleged that all or most of the forks were infringing to the same extent as the parent repository." That expanded takedown affected many repositories that didn't contain leaked code but instead forked Anthropic's official public Claude Code repository, which the company shares to encourage public bug reports and fixes.

Many coders took to social media to complain about being swept up in the DMCA dragnet despite not sharing any leaked code. "I'm sorry that your people shipped your source code, and that your lawyers don't know how to read a repo," coder Robert McLaws wrote. "I will be filing a DMCA counter-notice."

By Wednesday, Anthropic had moved to fix the issue with GitHub, requesting that the site restrict its takedowns to the 96 fork URLs specifically listed in its takedown notice and to "reinstate all other repositories that were disabled by network-wide processing." Anthropic's head of Claude Code, Boris Cherny, said on social media that the overzealous takedowns were "not intentional," and Anthropic's Thariq Shihipar added that they were the result of "a communication mistake."
"The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended," an Anthropic spokesperson told TechCrunch. "We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks."

Closing the barn door

Even with the corrected focus on leaked code, Anthropic will struggle to stop or even materially slow the spread of its Claude Code client source code across the Internet. As of this writing, many copies of that source code are still easy to find on GitHub, including one Ars referenced in our own analysis yesterday. Copies of the leaked code have also appeared on other platforms, like the Germany-based Codeberg, which is outside the direct reach of the US DMCA (but which might be subject to similar local European laws).

Meanwhile, multiple enterprising coders have already used AI coding tools to develop so-called "clean room" reimplementations of the original Claude Code leak, converting the original TypeScript code into languages such as Python and Rust. Even if Anthropic could somehow remove every copy of its leaked code from the Internet, these functionally similar rewrites might be legally distinct (though there is still some argument over whether this kind of AI-written code can really claim the strict separation needed to avoid being considered a "derivative work").

Anthropic coders' own use of Claude Code to write pieces of the Claude Code client could also complicate the legal status of the leaked code. Cherny admitted in December that "in the last thirty days, 100% of my contributions to Claude Code were written by Claude Code." That kind of admission could be significant, because while the US Copyright Office offers protection to "AI-assisted" codebases, that protection generally doesn't extend to work generated entirely by AI.
[2]
Anthropic Takes Down GitHub Repositories as It Attempted to Limit Code Leak Damage
The issue got out of hand because instead of knocking out only the offending repos, the takedown also removed many legitimate ones.

Heard of throwing the baby out with the bathwater? Well, AI startup Anthropic appears to have taken down thousands of GitHub repositories, including several legitimate ones, as it attempted to take down some leaked source code. A day earlier, the company had confirmed that no confidential client data was exposed in the source code leak. "The error occurred during the release packaging. It was caused by human error, not a hack. We are implementing measures to prevent this from happening again," a company representative said.

The takedowns came close on the heels of that incident, in which the company found a recent release had unintentionally exposed parts of the internal source code for Claude Code, an application built by Anthropic to assist software developers. The leak spread across developer communities, with users analysing and reposting the code on GitHub. Anthropic responded by issuing a takedown request under US copyright law as it sought to remove the repositories hosting the leaked material.

However, instead of sorting things out quickly, the move appears to have had unintended consequences, whereby a lot more content was swept off GitHub than originally intended. Per GitHub's records, the notice was executed against around 8,100 repositories, including some legitimate forks of Anthropic's own publicly released Claude Code repository. "Because the reported network that contained the allegedly infringing content was larger than one hundred (100) repositories, and the submitter alleged that all or most of the forks were infringing to the same extent as the parent repository, GitHub processed the takedown notice against the entire network of 8.1K repositories, inclusive of the parent repository," GitHub said.
The unfortunate outcome of this process was that social media was abuzz for more than 12 hours, with users going bonkers when they found that their own code had been blocked. Of course, Anthropic was quick to apologise, saying that the move was accidental, and soon retracted the bulk of the takedown notices. Boris Cherny, Anthropic's head of Claude Code, took to his X handle to confirm that the matter was being resolved. "This was not intentional, we've been working with GitHub to fix it. Should be better now," he noted in a post.

Eventually, it was reported that only one repository and 96 forks with the accidentally released source code were taken down. The company later clarified that "the repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended." "We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks," the company said.

Looks like Anthropic, while winning a small battle against the Trump administration over its ejection from the Pentagon, is in haste to prepare for an IPO. Not surprising, given that arch-rival OpenAI is already on the job, having announced a fresh fundraise of $122 billion barely a couple of days back. Maybe it is time Dario Amodei and gang did what they're best at and left all the PR, including the bad kind, to their rivals.
[3]
Anthropic Counters Massive Code Leak With 8,000 Copyright Takedowns
Anthropic has issued roughly 8,000 copyright takedown requests. The decision followed the accidental exposure of the source code for Claude Code, the company's AI coding application. The leak reportedly revealed commercially sensitive information, including internal tools, proprietary techniques, and specific instructions for transforming its AI models into functional coding agents. On Tuesday, 31 March 2026, a software engineer discovered that Anthropic had, seemingly by accident, included access to the source code for the category-leading command-line application in a recent release. AI enthusiasts pored over the leaked code for clues about how Anthropic harnesses the LLM underlying the application, and shared it on GitHub. Anthropic issued a takedown notice under US digital copyright law, asking GitHub to take down repositories containing the offending code. According to GitHub's records, the notice was served on some 8,100 repositories, including legitimate forks of Anthropic's publicly released Claude Code repository.
AI startup Anthropic issued a DMCA takedown request after accidentally leaking Claude Code source code, but the effort swept up 8,100 GitHub repositories including legitimate forks of its public repository. The company quickly retracted the notice and restored access to affected repositories, but faces an uphill battle containing the spread of leaked code across the internet.
AI startup Anthropic found itself in a difficult position this week after an accidental leak of source code for Claude Code, its AI-powered application for software developers, prompted a DMCA takedown that removed far more than intended. The leaked Claude Code source code first appeared on GitHub when a software engineer discovered on March 31, 2026, that Anthropic had unintentionally included access to the complete source code in a recent release[1]. The accidental leak exposed commercially sensitive information, including internal tools, proprietary techniques, and specific instructions for transforming AI models into functional coding agents[3].
Source: Analytics Insight
In response to the leak spreading across developer communities, Anthropic issued copyright takedown requests under US copyright law late Tuesday. The DMCA takedown focused on a repository posted by GitHub user nirholas and nearly 100 specifically named forks[1]. However, GitHub processed the notice against an entire network of 8,100 repositories because "the submitter alleged that all or most of the forks were infringing to the same extent as the parent repository"[1]. This expanded action swept up numerous legitimate repositories that had forked Anthropic's official public Claude Code repository, which the company shares to encourage public bug reports and fixes[1].

The overzealous intellectual property protection attempt sparked immediate backlash from developers who found their legitimate work blocked. Coders took to social media to voice frustration, with developer Robert McLaws stating, "I'm sorry that your people shipped your source code, and that your lawyers don't know how to read a repo"[1]. Boris Cherny, Anthropic's head of Claude Code, acknowledged on social media that the takedowns were "not intentional," while Thariq Shihipar attributed the mishap to "a communication mistake"[1]. By Wednesday, Anthropic moved quickly to correct the error, requesting GitHub restrict takedowns to the 96 fork URLs specifically listed and reinstate all other repositories[1].
Source: Ars Technica
Even with corrected focus, Anthropic faces significant challenges containing the spread of its leaked code. Multiple copies remain easily accessible on GitHub, and the code has appeared on platforms like Germany-based Codeberg, which sits outside direct US DMCA jurisdiction[1]. More concerning for the company, enterprising developers have already used AI coding tools to create clean-room reimplementations, converting the original TypeScript code into Python and Rust[1]. These functionally similar rewrites might be legally distinct, though debate continues over whether AI-written code can claim the strict separation needed to avoid being considered derivative work[1].
Source: CXOToday
The incident raises complex questions about AI-generated code copyright. Cherny admitted in December that "in the last thirty days, 100% of my contributions to Claude Code were written by Claude Code"[1]. This admission could prove significant, as the US Copyright Office offers protection to "AI-assisted" codebases but generally doesn't extend that protection to work generated entirely by AI[1]. The company confirmed the error occurred during release packaging due to human error, not a hack, and stated no confidential client data was exposed[2]. As Anthropic reportedly prepares for an IPO while rival OpenAI recently announced a $122 billion fundraise, this incident highlights the growing pains facing AI companies as they balance open development practices with protecting proprietary technology[2].

Summarized by Navi