Anthropic accidentally exposes Claude Code source code in major packaging blunder

Reviewed by Nidhi Govil

Anthropic leaked roughly 512,000 lines of internal source code for its popular AI coding assistant Claude Code after accidentally publishing a map file in an npm package. The company confirmed the leak was caused by human error, not a security breach, and stated no customer data or credentials were exposed. This marks Anthropic's second major data lapse in under a week.

Anthropic Confirms Claude Code Source Code Leak

Anthropic accidentally exposed the internal source code for Claude Code, its widely adopted AI coding tool, after publishing a map file in an npm package that should never have reached production. Security researcher Chaofan Shou discovered the exposure on Tuesday morning and alerted the public, triggering rapid dissemination of the leaked code across GitHub, where it has been forked more than 41,500 times [1].

Source: Inc.

The source code leak revealed approximately 512,000 lines of code across roughly 1,900 TypeScript files, providing an unprecedented look into how Anthropic built one of its most lucrative products [4].

Anthropic confirmed the incident in a statement, emphasizing that the leak was caused by human error stemming from a release packaging issue rather than a security breach. "Earlier today, a Claude Code release included some internal source code," an Anthropic spokesperson said, adding that no customer data or credentials were compromised [2]. The company stated it is rolling out measures to prevent similar incidents from happening again.

How the Leak Happened Through an npm Package

The exposure resulted from a map file included in Claude Code's npm package version 2.1.88 that referenced unobfuscated TypeScript source. Map files are debugging tools that connect bundled code back to its original source, and they are generally not meant for production environments. The map file pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket, which Shou and others were able to download and decompress [1]. Software engineer Gabriel Anhaia noted that this should serve as a reminder to developers to check their build pipelines, warning that "a single misconfigured .npmignore or files field in package.json can expose everything" [1].
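Anhaia's warning refers to the npm packaging controls involved. As an illustrative sketch (not Anthropic's actual configuration), an explicit `files` allowlist in `package.json` keeps source maps and other debug artifacts out of the published tarball:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ],
  "bin": { "example-cli": "dist/cli.js" }
}
```

Bundlers link a bundle to its map via a trailing `//# sourceMappingURL=...` comment in the generated JavaScript, and running `npm pack --dry-run` before publishing lists exactly which files will ship, which is where a stray `.map` file would surface.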

Source: VentureBeat

Roy Paz, a senior AI security researcher at LayerX Security, explained that the mistake appears to have occurred after someone took a shortcut that bypassed normal release safeguards. "Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open," Paz told Fortune. "At Anthropic, it seems that the process wasn't in place and a single misconfiguration or misclick suddenly exposed the full source code" [5].

What the Leaked Internal Source Code Reveals

The leaked code provides competitors and developers with detailed insights into Claude Code's architecture and operational mechanics. Analysis of the code reveals a sophisticated three-layer memory architecture designed to solve "context entropy," the tendency for AI agents to become confused during long-running sessions. This system uses a lightweight index of pointers that stores locations rather than data, with actual project knowledge distributed across topic files fetched on-demand [4].
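As a rough sketch of that idea (the names, types, and structure here are illustrative assumptions, not the leaked implementation), the index carries only pointers to topic files and resolves their contents lazily:

```typescript
// Hypothetical pointer-index memory: the in-context index stores *locations*
// of topic files, never their contents; knowledge is fetched on demand.

type TopicPointer = { topic: string; path: string };

class MemoryIndex {
  private pointers = new Map<string, TopicPointer>();
  // Stand-in for on-disk topic files (a real agent would read these lazily).
  private store = new Map<string, string>();

  remember(topic: string, path: string, content: string): void {
    this.pointers.set(topic, { topic, path }); // index keeps only the pointer
    this.store.set(path, content);             // full content lives elsewhere
  }

  // Fetch a full topic file only when the agent actually needs it.
  recall(topic: string): string | undefined {
    const ptr = this.pointers.get(topic);
    return ptr ? this.store.get(ptr.path) : undefined;
  }

  // The context window carries just this small index, not the data itself.
  indexSize(): number {
    return this.pointers.size;
  }
}

const mem = new MemoryIndex();
mem.remember("build", "topics/build.md", "Run the bundler before tests.");
mem.remember("style", "topics/style.md", "Use 2-space indentation.");
console.log(mem.indexSize()); // prints 2
console.log(mem.recall("build"));
```

The design choice being mimicked is that a small, stable index resists "context entropy" better than dumping all project knowledge into the context at once.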

The code also exposed KAIROS, a feature flag mentioned over 150 times in the source that represents an autonomous daemon mode allowing Claude Code to operate as an always-on background agent. Additionally, developers discovered "Undercover Mode," which reveals that Anthropic uses Claude Code for stealth contributions to public open-source repositories [4]. The leaked files confirmed internal details about Capybara, the codename for a Claude 4.6 variant, with internal comments revealing a 29-30% false claims rate in version 8 [4].

Source: Gizmodo

Competitive and Security Implications for the AI Coding Tool

The timing of this incident is particularly problematic for Anthropic. The company is reportedly preparing for an initial public offering and has achieved a $19 billion annualized revenue run-rate as of March 2026, with Claude Code alone generating $2.5 billion in annualized recurring revenue [4]. The leak provides competitors such as OpenAI's Codex and Cursor with a literal blueprint for building high-agency, commercially viable AI agents. While the leak did not expose the weights of the underlying AI model itself, it revealed the agentic harness that sits around the model, instructing it how to use software tools and providing important guardrails [5].

Paz raised concerns about cybersecurity implications, noting that the code shows how the tool connects to Anthropic's internal systems. Even without encrypted access keys, it appears possible to access internal services that should be restricted, potentially giving malicious actors new opportunities to exploit Anthropic's models to build more powerful cyberattack tools and bypass safeguards [5].

Second Major Data Lapse in Under a Week

This represents Anthropic's second significant data blunder in less than a week. Just days earlier, Fortune reported that the company had inadvertently made close to 3,000 files publicly available in a data cache, including draft blog posts detailing a powerful upcoming model known internally as "Mythos" or "Capybara" that presents unprecedented cybersecurity risks [5]. This isn't even the first time Claude Code has experienced a similar exposure; in February 2025, an early version accidentally exposed its original code in a comparable breach [5]. A post on X with a link to the leaked code amassed more than 21 million views within hours of being shared [2]. The original uploader later repurposed his GitHub repository, citing concerns about legal liability for hosting Anthropic's intellectual property, though numerous forks and mirrors remain accessible [1].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited