7 Sources
[1]
Anthropic accidentally exposes Claude Code source code
Oopsy-doodle: Did someone forget to check their build pipeline? Would you like a closer look at Claude? Someone at Anthropic has some explaining to do, as the official npm package for Claude Code shipped with a map file exposing what appears to be the popular AI coding tool's entire source code. It did as of Tuesday morning, at least, which is when security researcher Chaofan Shou appears to have spotted the exposure and told the world.

Snapshots of Claude Code's source code were quickly backed up in a GitHub repository that has been forked more than 41,500 times so far, disseminating it to the masses and ensuring that Anthropic's mistake remains the AI and cybersecurity community's gain.

According to the GitHub upload of the exposed Claude Code source, the leak actually resulted from a reference to an unobfuscated TypeScript source in the map file included in Claude Code's npm package (map files are used to connect bundled code back to the original source). That reference, in turn, pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket that Shou and others were able to download and decompress to their hearts' content. Contained in the zip archive is a wealth of info: some 1,900 TypeScript files consisting of more than 512,000 lines of code, full libraries of slash commands and built-in tools - the works, in short.

That said, Claude Code's source isn't a complete mystery, and while this exposure gives us a look at a fresh iteration of Claude Code straight from the leaky bucket, it's not blowing the lid off of something that was a secret until now. Claude Code has been reverse engineered, and various projects have resulted in an entire website dedicated to exposing the hidden portions of Claude Code that haven't been released to, or shared with, the public. In other words, what we have is a useful comparison point and update source for the CCLeaks operators, and maybe a few new secrets will come to light as people dig through the exposed code.
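For readers unfamiliar with the format: a source map is just JSON, and when its optional `sourcesContent` field is populated, the original source text travels inside it verbatim - which is exactly why shipping one alongside a bundle can expose unobfuscated code. A small illustrative sketch (the file contents below are invented for demonstration, not taken from the leak):

```python
import json

def extract_sources(map_text: str) -> dict:
    """Parse a JavaScript source map and recover any embedded original source.

    Source maps carry a `sources` list of original file names and, optionally,
    a parallel `sourcesContent` list holding the full original text.
    """
    source_map = json.loads(map_text)
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or [None] * len(sources)
    return dict(zip(sources, contents))

# A minimal, made-up example in the source map v3 format:
example = json.dumps({
    "version": 3,
    "file": "bundle.min.js",
    "sources": ["src/cli.ts"],
    "sourcesContent": ["export const main = () => console.log('hello');"],
    "mappings": "AAAA",
})
print(extract_sources(example)["src/cli.ts"])  # the original TypeScript, verbatim
```

Even when `sourcesContent` is absent, the `sources` entries can point at externally hosted files - which matches the reporting above, where the map referenced an archive on a Cloudflare R2 bucket.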
Far more interesting is the fact that someone at Anthropic made a mistake as bad as leaving a map file in a publish configuration. Publishing map files is generally frowned upon, as they're meant for debugging obfuscated or bundled code and aren't necessary for production. Not only that, but as we've seen in this example, they can easily be used to expose source code, since they point straight back to the original.

As pointed out by software engineer Gabriel Anhaia in a deep dive into the exposed code, this should serve as a reminder to even the best developers to check their build pipelines. "A single misconfigured .npmignore or files field in package.json can expose everything," Anhaia wrote in his analysis of the Claude Code leak.

Anthropic admitted as much in a statement to The Register, saying that, yes, it was good ol' human error responsible for this snafu. "Earlier today, a Claude Code release included some internal source code," an Anthropic spokesperson told us in an email, adding that no customer data or credentials were involved or exposed. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

As of this writing, the original uploader of the Claude Code source to GitHub has repurposed his repo to host a Python feature port of Claude Code instead of Anthropic's directly exposed source, citing concerns that he could be held legally liable for hosting Anthropic's intellectual property. Plenty of forks and mirrors remain for those who want to inspect the exposed code. We asked Anthropic if it was considering asking people to remove their repositories of its exposed source code, but the company didn't have anything to say beyond its statement. ®
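Anhaia's advice is easy to act on. As a minimal sketch (not Anthropic's tooling - the helper name is our own), a pre-publish check can walk the files about to be packaged and flag both shipped `.map` files and bundles that reference one:

```python
import re
from pathlib import Path

# Matches the standard trailer a bundler appends to point at a source map.
SOURCE_MAP_RE = re.compile(r"^//# sourceMappingURL=(.+)$", re.MULTILINE)

def find_map_leaks(package_dir: str) -> list[str]:
    """Flag source maps (and references to them) in a directory about to be published."""
    findings = []
    for path in Path(package_dir).rglob("*"):
        if path.suffix == ".map":
            findings.append(f"map file shipped: {path.name}")
        elif path.suffix in {".js", ".mjs", ".cjs"}:
            match = SOURCE_MAP_RE.search(path.read_text(errors="ignore"))
            if match:
                findings.append(f"{path.name} references {match.group(1)}")
    return findings
```

Running `npm pack --dry-run` to review the exact file list before publishing, and using an explicit `files` allowlist in package.json rather than relying on .npmignore, are the standard belt-and-suspenders here.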
[2]
Anthropic leaks part of Claude Code's internal source code
Anthropic leaked part of the internal source code for its popular artificial intelligence coding assistant, Claude Code, the company confirmed on Tuesday. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." A source code leak is a blow to the startup, as it could help give software developers, and Anthropic's competitors, insight into how it built its viral coding tool. A post on X with a link to Anthropic's code has amassed more than 21 million views since it was shared at 4:23 a.m. ET on Tuesday. The leak also marks Anthropic's second major data blunder in under a week. Descriptions of Anthropic's upcoming AI model and other documents were recently discovered in a publicly accessible data cache, according to a report from Fortune on Thursday.
[3]
Source Code for Anthropic's Claude Code Leaks at the Exact Wrong Time
Anthropic just cannot keep a lid on its business. After details of a yet-to-be-announced model were revealed because the company left unpublished drafts of documents and blog posts in a publicly visible data cache, Anthropic has been hit with yet another lapse in protocol, inadvertently publishing internal source code for its AI coding assistant, Claude Code. The leak provides an unprecedented look into Anthropic's closed-source tool just as the company is preparing for an initial public offering.

The code was discovered by Chaofan Shou, a self-identified intern at Solayer Lab who posts on X as @Fried_rice. Per Shou, the source code was discovered in a .map file (a plaintext file generated when bundling software that maps the compiled output back to its original source) found in the npm registry, the package database for JavaScript's package manager. The file, meant for internal debugging, is essentially a decoder: it takes what should be obfuscated output and ties it back to the original source for developers. But Anthropic published it, exposing at least a partial, unobfuscated TypeScript source of Claude Code version 2.1.88. The file contained about 512,000 lines of code related to Anthropic's coding agent.

In less technical terms: Anthropic accidentally gave away some of its blueprints that were never supposed to see the light of day, and programmers have been parsing through them all day. They've claimed to have found everything from "spinner verbs," the phrases Claude serves up while working through a task, to details like how swearing at Claude affects how it receives a prompt. One person even claimed to have found a hidden Tamagotchi-style virtual pet that Anthropic may have been working on. (A note on that: it was reportedly set to launch on April 1, so maybe chalk that one up to an April Fools' bit.) The file also reveals a lot of information on how Claude operates, including its engine for API calls, how it counts tokens used to process prompts, and other technical aspects.
What the code does not seem to contain is any detail about Anthropic's underlying model, but everything in the file has been uploaded to a GitHub repository for users to interact with and fork. Anthropic declined to comment on the discoveries made by users, but did confirm the authenticity of the leaked source code to Gizmodo. In a statement, a spokesperson said, "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Human error was probably part of it, but it's worth noting that the humans working on Claude Code have also been relying on the coding agent quite a bit. Back in December, Anthropic's head of Claude Code, Boris Cherny, posted that "In the last thirty days, 100% of my contributions to Claude Code were written by Claude Code." Reliance on the coding assistant has seemingly been on the rise across the company, so it's possible this situation was an incident of vibe coding too close to the sun.

While this isn't exactly Anthropic giving away the ingredients to its secret sauce, it is a look at how its kitchen operates. And it couldn't really have come at a worse time. Not only is Anthropic in the midst of what appears to be a ramp-up to going public later this year, but its competitors are starting to turn their attention to cutting into the company's hold on coding and enterprise services. OpenAI has reportedly made a concerted effort to pivot to enterprise and recently offered unlimited access to its Claude Code competitor, Codex. There is never a good time to have your source code leak, but this does seem like a particularly bad one.
[4]
Claude Code's source code appears to have leaked: here's what we know
Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public. A 59.8 MB JavaScript source map file, intended for internal debugging, was inadvertently included in version 2.1.88 of the package pushed live to the public npm registry earlier this morning. By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers.

For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the commercial velocity of the product. Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year. With enterprise adoption accounting for 80% of its revenue, the leak provides competitors -- from established giants to nimble rivals like Cursor -- a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent. We've reached out to Anthropic for an official statement on the leak and will update when we hear back.

The anatomy of agentic memory

The most significant takeaway for competitors lies in how Anthropic solved "context entropy" -- the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity. The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval.
As analyzed by developers like @himanshustwts, the architecture utilizes a "Self-Healing Memory" system. At its core is a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations. Actual project knowledge is distributed across "topic files" fetched on-demand, while raw transcripts are never fully read back into the context, but merely "grep'd" for specific identifiers. This "Strict Write Discipline" -- where the agent must update its index only after a successful file write -- prevents the model from polluting its context with failed attempts. For competitors, the "blueprint" is clear: build a skeptical memory. The code confirms that Anthropic's agents are instructed to treat their own memory as a "hint," requiring the model to verify facts against the actual codebase before proceeding.

KAIROS and the autonomous daemon

The leak also pulls back the curtain on "KAIROS," a feature flag mentioned over 150 times in the source and named for the Ancient Greek concept of the opportune moment. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode. While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and, while the user is idle, performs "memory consolidation." The logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts. This background maintenance ensures that when the user returns, the agent's context is clean and highly relevant. The implementation of a forked subagent to run these tasks reveals a mature engineering approach to preventing the main agent's "train of thought" from being corrupted by its own maintenance routines.
Unreleased internal models and performance metrics

The source code provides a rare look at Anthropic's internal model roadmap and the struggles of frontier development. The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing. Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false claims rate in v8, an actual regression compared to the 16.7% rate seen in v4. Developers also noted an "assertiveness counterweight" designed to prevent the model from becoming too aggressive in its refactors. For competitors, these metrics are invaluable; they provide a benchmark of the "ceiling" for current agentic performance and highlight the specific weaknesses (over-commenting, false claims) that Anthropic is still struggling to solve.

"Undercover" Claude

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories. The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." While Anthropic may use this for internal "dog-fooding," it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure. The logic ensures that no model names (like "Tengu" or "Capybara") or AI attributions leak into public git logs -- a capability that enterprise competitors will likely view as a mandatory feature for their own corporate clients who value anonymity in AI-assisted development.
The fallout has just begun

The "blueprint" is now out, and it reveals that Claude Code is not just a wrapper around a Large Language Model, but a complex, multi-threaded operating system for software engineering. Even the hidden "Buddy" system -- a Tamagotchi-style terminal pet with its own stats -- shows that Anthropic is building "personality" into the product to increase user stickiness. For the wider AI market, the leak effectively levels the playing field for agentic orchestration. Competitors can now study Anthropic's 2,500+ lines of bash validation logic and its tiered memory structures to build "Claude-like" agents with a fraction of the R&D budget. As the "Capybara" has left the lab, the race to build the next generation of autonomous agents has just received an unplanned, $2.5 billion boost in collective intelligence.

What Claude Code users and enterprise customers should do now about the alleged leak

While the source code leak itself is a major blow to Anthropic's intellectual property, it poses a specific, heightened security risk for you as a user. By exposing the "blueprints" of Claude Code, Anthropic has handed a roadmap to researchers and bad actors who are now actively looking for ways to bypass security guardrails and permission prompts. Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to "trick" Claude Code into running background commands or exfiltrating data before you ever see a trust prompt. The most immediate danger, however, is a concurrent, separate supply-chain attack on the npm package, which occurred hours before the leak. If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT).
You should immediately search your project lockfiles for these specific axios versions. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation. To mitigate future risks, you should migrate away from the npm-based installation entirely. Anthropic has designated the Native Installer as the recommended method because it uses a standalone binary that does not rely on the volatile npm dependency chain. The native version also supports background auto-updates, ensuring you receive security patches (likely version 2.1.89 or higher) the moment they are released. If you must remain on npm, ensure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version like 2.1.86.

Finally, adopt a zero-trust posture when using Claude Code in unfamiliar environments. Avoid running the agent inside freshly cloned or untrusted repositories until you have manually inspected their contents and any custom hooks. As a defense-in-depth measure, rotate your Anthropic API keys via the developer console and monitor your usage for any anomalies. While your cloud-stored data remains secure, the vulnerability of your local environment has increased now that the agent's internal defenses are public knowledge; staying on the official, native-installed update track is your best defense.
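The lockfile audit described above can be scripted. This sketch assumes an npm v7+ `package-lock.json`, which lists every installed package under a top-level `packages` key; the compromised version numbers are the ones named in the reporting above, and the helper name is our own:

```python
import json
from pathlib import Path

# Axios versions reported as carrying the RAT in the npm supply-chain attack.
BAD_AXIOS = {"1.14.1", "0.30.4"}

def scan_lockfile(lockfile_path: str) -> list[str]:
    """Return any compromised axios entries found in an npm v7+ lockfile."""
    data = json.loads(Path(lockfile_path).read_text())
    hits = []
    for name, meta in data.get("packages", {}).items():
        # Keys look like "node_modules/axios" (possibly nested deeper).
        if name.endswith("axios") and meta.get("version") in BAD_AXIOS:
            hits.append(f"{name}@{meta['version']}")
    return hits
```

Yarn and pnpm lockfiles use different formats, so a hit-free scan of package-lock.json alone is not an all-clear; a grep of those files for the same version strings covers the gap.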
[5]
Anthropic leaks its own AI coding tool's source code in second major security breach | Fortune
Anthropic has accidentally leaked the source code for its popular coding tool Claude Code. The leak comes just days after Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model that presents unprecedented cybersecurity risks. The model is known internally as both "Mythos" and "Capybara," according to the leaked blog post obtained by Fortune. The source code leak exposed around 500,000 lines of code across roughly 1,900 files. When reached for comment, Anthropic confirmed that "some internal source code" had been leaked within a "Claude Code release." A spokesperson said: "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The latest data leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company's draft blog post about its forthcoming model. While the latest security lapse did not expose the weights of the Claude model itself, it did allow people with technical knowledge to extract additional internal information from the company's codebase, according to a cybersecurity professional Fortune asked to review the leak. Claude Code is perhaps Anthropic's most popular product and has seen soaring adoption rates from large enterprises. At least some of Claude Code's capabilities come not from the underlying large language model that powers the product but from the software 'harness' that sits around the underlying AI model and instructs it how to use other software tools and provides important guardrails and instructions that govern its behavior. It is the source code for this agentic harness that has now leaked online. 
The leak potentially allows a competitor to reverse-engineer how Claude Code's agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code's agentic harness based on the leaked code. The leaked code also provided further evidence that Anthropic has a new model with the internal name "Capybara" that the company is actively preparing to launch, according to Roy Paz, a senior AI security researcher at LayerX Security. It revealed that the company has a "fast" and "slow" version of the new model and that it will likely be a replacement for Opus, Anthropic's most advanced model on the market. Currently, Anthropic markets each of its models in three different sizes. The largest and most capable model versions are branded Opus; while slightly faster and cheaper, but less capable, versions are branded Sonnet; and the smallest, cheapest, and fastest are called Haiku. In the draft blog post obtained by Fortune last week, Anthropic describes "Capybara" as a new tier of model that is even larger and more capable than Opus, but also more expensive. The newest leak, first made public in an X post, appears to have happened after Anthropic uploaded all of Claude Code's original code to NPM, a platform developers use to share and update software, instead of only the finished version that computers actually run. The mistake looks like a "human error" after someone took a shortcut that bypassed normal release safeguards, Paz said. "Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open," he told Fortune. "At Anthropic, it seems that the process wasn't in place and a single misconfiguration or misclick suddenly exposed the full source code." Paz also raised concerns about how the tool connects to Anthropic's internal systems. 
Even without special encrypted access keys that would normally be required to access such systems, it appears possible to access internal services that should be restricted, Paz said. He warned this could give malicious actors, including nation-states, new opportunities to exploit Anthropic's models to build more powerful cyberattack tools and bypass the safeguards meant to constrain them. Anthropic's current most powerful model, Claude 4.6 Opus, is already classed by the company as a dangerous model when it comes to cybersecurity risks. Anthropic has said its current Opus models are capable of autonomously identifying zero-day vulnerabilities in software. While these capabilities are intended to help companies detect and fix flaws, they could also be weaponized by hackers, including nation-states, to find and exploit vulnerabilities. This isn't the first time Anthropic has inadvertently leaked details about its popular Claude Code tool. In February 2025, an early version of Claude Code accidentally exposed its original code in a similar breach. The exposure showed how the tool worked behind the scenes as well as how it connected to Anthropic's internal systems. Anthropic later removed the software and took the public code down.
[6]
Anthropic's Leaked Code Reveals the Radical Strategy That Makes Claude Code a $2.5 Billion AI Tool
In what the company says was a simple packaging mistake, parts of the code behind Claude Code, its fast-growing AI coding product, were briefly made public early Tuesday morning. Within hours, developers had downloaded, shared, and begun dissecting the roughly 500,000 lines of code across GitHub, Fortune reported. Claude Code is already generating an estimated $2.5 billion in annual recurring revenue, with enterprise customers making up the vast majority of that growth. Now, competitors may have an unusually detailed roadmap for how it works. Anthropic confirmed the incident in a statement to Fortune, emphasizing that no customer data or credentials were exposed. The company described the issue as "human error," not a security breach, and said it is working to prevent similar mistakes.
[7]
Claude Code source code leak: Did Anthropic just expose its AI secrets, hidden models, and undercover coding strategy to the world?
Claude Code, Anthropic's top AI agent, just suffered a major source code leak. Version 2.1.88 exposed 512,000 lines of TypeScript, revealing memory architecture, orchestration logic, and 44 hidden features. The AI platform alone drives $2.5 billion in annual revenue, with 80% from enterprise clients. Competitors can now study background agents, autonomous daemons, and persistent memory systems. Security risks spike as malicious actors may exploit exposed Hooks and npm dependencies. Users must migrate to Anthropic's native installer, audit API keys, and inspect local repositories.
Anthropic leaked roughly 512,000 lines of internal source code for its popular AI coding assistant Claude Code after accidentally publishing a map file in an npm package. The company confirmed the leak was caused by human error, not a security breach, and stated no customer data or credentials were exposed. This marks Anthropic's second major data lapse in under a week.
Anthropic accidentally exposed the internal source code for Claude Code, its widely adopted AI coding tool, after publishing a map file in an npm package that should never have reached production. Security researcher Chaofan Shou discovered the exposure on Tuesday morning and alerted the public, triggering a rapid dissemination of the leaked code across GitHub, where it has been forked more than 41,500 times [1].
Source: Inc.
The source code leak revealed approximately 512,000 lines of code across roughly 1,900 TypeScript files, providing an unprecedented look into how Anthropic built one of its most lucrative products [4]. Anthropic confirmed the incident in a statement, emphasizing that the leak stemmed from a release packaging issue caused by human error rather than a security breach. "Earlier today, a Claude Code release included some internal source code," an Anthropic spokesperson said, adding that no customer data or credentials were involved or exposed [2]. The company stated it is rolling out measures to prevent similar incidents from happening again.

The exposure resulted from a reference to an unobfuscated TypeScript source in a map file included in Claude Code's npm package, version 2.1.88. Map files are debugging tools that connect bundled code back to original source code, and they are generally not meant for production environments. The map file pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket, which Shou and others were able to download and decompress [1]. Software engineer Gabriel Anhaia noted that this should serve as a reminder to developers to check their build pipelines, warning that "a single misconfigured .npmignore or files field in package.json can expose everything" [1].
Source: VentureBeat
Roy Paz, a senior AI security researcher at LayerX Security, explained that the mistake appears to have occurred after someone took a shortcut that bypassed normal release safeguards. "Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open," Paz told Fortune. "At Anthropic, it seems that the process wasn't in place and a single misconfiguration or misclick suddenly exposed the full source code" [5].

The leaked code provides competitors and developers with detailed insights into Claude Code's architecture and operational mechanics. Analysis of the code reveals a sophisticated three-layer memory architecture designed to solve "context entropy," the tendency for AI agents to become confused during long-running sessions. This system uses a lightweight index of pointers that stores locations rather than data, with actual project knowledge distributed across topic files fetched on-demand [4].

The code also exposed KAIROS, a feature flag mentioned over 150 times in the source that represents an autonomous daemon mode allowing Claude Code to operate as an always-on background agent. Additionally, developers discovered "Undercover Mode," which reveals that Anthropic uses Claude Code for stealth contributions to public open-source repositories [4]. The leaked files confirmed internal details about Capybara, the codename for a Claude 4.6 variant, with internal comments revealing a 29-30% false claims rate in version 8 [4].
Source: Gizmodo
The timing of this incident is particularly problematic for Anthropic. The company is reportedly preparing for an initial public offering and has achieved a $19 billion annualized revenue run-rate as of March 2026, with Claude Code alone generating $2.5 billion in annualized recurring revenue [4]. The leak provides competitors like OpenAI's Codex and Cursor with a literal blueprint for building high-agency, commercially viable AI agents. While the leak did not expose the weights of the underlying AI model itself, it revealed the agentic harness that sits around the model, instructs it how to use software tools, and provides important guardrails [5].

Paz raised concerns about cybersecurity implications, noting that the code shows how the tool connects to Anthropic's internal systems. Even without encrypted access keys, it appears possible to access internal services that should be restricted, potentially giving malicious actors new opportunities to exploit Anthropic's models to build more powerful cyberattack tools and bypass safeguards [5].

This represents Anthropic's second significant data blunder in less than a week. Just days earlier, Fortune reported that the company had inadvertently made close to 3,000 files publicly available in a data cache, including draft blog posts detailing a powerful upcoming model known internally as "Mythos" or "Capybara" that presents unprecedented cybersecurity risks [5]. This isn't even the first time Claude Code has experienced a similar exposure; in February 2025, an early version accidentally exposed its original code in a comparable breach [5]. A post on X with a link to the leaked code amassed more than 21 million views within hours of being shared [2]. The original uploader later repurposed his GitHub repository, citing concerns about legal liability for hosting Anthropic's intellectual property, though numerous forks and mirrors remain accessible [1].

Summarized by Navi