[1]
Here's what that Claude Code source leak reveals about Anthropic's plans
Yesterday's surprise leak of the source code for Anthropic's Claude Code revealed a lot about the vibe-coding scaffolding the company has built around its proprietary Claude model. But observers digging through over 512,000 lines of code across more than 2,000 files have also discovered references to disabled, hidden, or inactive features that provide a peek into the potential roadmap for future features.

Chief among these features is Kairos, a persistent daemon that can operate in the background even when the Claude Code terminal window is closed. The system would use periodic "<tick>" prompts to regularly review whether new actions are needed and a "PROACTIVE" flag for "surfacing something the user hasn't asked for and needs to see now" (a minimal sketch of such a loop appears at the end of this article). Kairos makes use of a file-based "memory system" designed to allow for persistent operation across user sessions. A prompt hidden behind a disabled "KAIROS" flag in the code explains that the system is designed to "have a complete picture of who the user is, how they'd like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you."

To organize and consolidate this memory system across sessions, the Claude Code source code includes references to an evocatively named AutoDream system. When a user goes idle or manually tells Claude Code to sleep at the end of a session, the AutoDream system would tell Claude Code that "you are performing a dream -- a reflective pass over your memory files." The prompt describing this AI "dream" process asks Claude Code to scan the day's transcripts for "new information worth persisting," consolidate that new information in a way that avoids "near-duplicates" and "contradictions," and prune existing memories that are overly verbose or newly outdated. Claude Code would also be instructed to watch out for "existing memories that drifted," an issue we've seen previously when Claude users have tried to graft memory systems onto their harnesses. The overall goal would be to "synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly," according to the prompt.

Undercover Buddy

While the Kairos daemon doesn't seem to have been fully implemented in code yet, a separate "Undercover mode" appears to be built but inactive, letting Anthropic employees contribute to public open source repositories without revealing themselves as AI agents. The reference prompts for this mode focus primarily on protecting "internal model codenames, project names, or other Anthropic-internal information" from becoming accidentally public through open source commits. But the prompt also explicitly tells the system that its commits should "never include... the phrase 'Claude Code' or any mention that you are an AI," and to omit any "Co-Authored-By lines or any other attribution." That kind of obfuscation seems especially relevant given recent controversies surrounding AI coding tools being used on popular repositories.

On the lighter side, the Claude Code source code also describes Buddy, a Clippy-like "separate watcher" that "sits beside the user's input box and occasionally comments in a speech bubble." These virtual creatures would come in 18 randomized "species" forms ranging from blob to axolotl and appear as five-line by 12-column ASCII art animations complete with tiny hats. A comment suggests that Buddy was planned for a "teaser window" launch from April 1 through 7 before a full launch in May.
It's unclear how the source code leak has impacted those plans. Other planned Claude Code features referenced in the source code leak include:

* An UltraPlan feature allowing Opus-level Claude models to "draft an advanced plan you can edit and approve," which can run for 10 to 30 minutes at a time.
* A Voice Mode letting users chat directly with Claude Code, much like similar AI systems.
* A Bridge mode that expands on the existing Anthropic Dispatch tool to allow for remote Claude Code sessions that are fully controllable from an outside browser or mobile device.
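For a rough sense of how a tick-driven daemon like Kairos might be wired up, here is a minimal TypeScript sketch. Only the "<tick>" prompt and the PROACTIVE flag come from the leaked prompts; the five-minute cadence, function names, and string-based signaling are assumptions invented for illustration, not code from the leak.

```typescript
// Hypothetical sketch of a tick-driven background loop. The "<tick>" prompt
// and PROACTIVE flag appear in the leaked prompts; everything else here is
// an assumption for illustration purposes.
const TICK_INTERVAL_MS = 5 * 60 * 1000; // assumed cadence

async function kairosLoop(
  sendPrompt: (prompt: string) => Promise<string>,
): Promise<never> {
  for (;;) {
    // Periodically ask the model to review whether new action is needed.
    const reply = await sendPrompt("<tick>");
    if (reply.includes("PROACTIVE")) {
      // "Surfacing something the user hasn't asked for and needs to see now."
      console.log(`[kairos] ${reply}`);
    }
    await new Promise((resolve) => setTimeout(resolve, TICK_INTERVAL_MS));
  }
}
```

The appeal of a loop like this is that the model, not the scheduler, decides whether each tick warrants action, which matches the leak's framing of Kairos as reviewing "whether new actions are needed" rather than running on a fixed task list.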
[2]
Anthropic is having a month | TechCrunch
Anthropic has built its public identity around the winning idea that it's the careful AI company. It publishes detailed research on AI risk, employs some of the best researchers in the field, and has been vocal about the responsibilities that come with building such powerful technology -- so vocal, of course, that it's right now battling it out with the Department of Defense. On Tuesday, alas, someone there forgot to check a box. It is, notably, the second time in a week. Days earlier, Fortune reported that Anthropic had accidentally made nearly 3,000 internal files publicly available, including a draft blog post describing a powerful new model the company had not yet announced.

Here's what happened on Tuesday: when Anthropic pushed out version 2.1.88 of its Claude Code software package, it accidentally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code -- essentially the full architectural blueprint for one of its most important products. A security researcher named Chaofan Shou noticed almost immediately and posted about it on X. Anthropic's statement to multiple outlets was nonchalant as these things go: "This was a release packaging issue caused by human error, not a security breach." (Internally, we'd guess things were less measured.)

Claude Code isn't a minor product. It's a command-line tool that lets developers use Anthropic's AI to write and edit code, and it has become formidable enough to unsettle rivals. According to the WSJ, OpenAI pulled the plug on its video generation product Sora just six months after launching it to the public to refocus its efforts on developers and enterprises -- partly in response to Claude Code's growing momentum.

What leaked was not the AI model itself but the software scaffolding around it -- the instructions that tell the model how to behave, what tools to use, and where its limits are. Developers began publishing detailed analyses almost immediately, with one describing the product as "a production-grade developer experience, not just a wrapper around an API."

Whether this turns out to matter in any lasting way is a question best left to developers. Competitors may find the architecture instructive; at the same time, the field moves fast. Either way, somewhere at Anthropic, you can imagine that one very talented engineer has spent the rest of the day quietly wondering if they still have a job. One can only hope it's not the same engineer, or engineering team, from earlier this week.
[3]
Anthropic Accidentally Leaks Claude's Source Code
The source code for Claude Code was accidentally leaked to the public. And no, this is not an April Fools' joke. By Tuesday morning, the leak was still accessible, and security researcher Chaofan Shou reported that it contained the full source code, spanning nearly 2,000 files. The leak appeared after an apparent internal debugging file was included in a release, spreading it widely and turning Anthropic's slip-up into a resource for the AI and cybersecurity world. It is the most widespread view into the AI system so far. Anthropic confirmed the incident to Business Insider, saying the release had unintentionally included internal source code, but no sensitive customer data or credentials were exposed. The company described it as a packaging error caused by human mistake, not a security breach, and said it's implementing measures to prevent a repeat.
[4]
Anthropic leak reveals Claude Code tracking user frustration and raises new questions about AI privacy
On March 31, artificial intelligence company Anthropic accidentally leaked roughly 512,000 lines of code, and within hours, developers were poring over it. Among the surprises was code inside Claude Code, Anthropic's AI coding assistant, that appears to scan user prompts for signs of frustration. It flags profanity, insults and phrases such as "so frustrating" and "this sucks," and it appears to log that the user expressed negativity. Developers also discovered code designed to scrub references to Anthropic-specific names -- even the phrase "Claude Code" -- when the tool is used to create code in public software repositories, making the latter code appear as though it was entirely written by a human.

Alex Kim, an independent developer, posted a technical analysis of the leaked code in which he called it "a one-way door" -- a feature that can be forced on but not off. "Hiding internal codenames is reasonable," he wrote. "Having the AI actively pretend to be human is a different thing." Anthropic did not respond to a request for comment from Scientific American.

The findings expose a problem emerging across the AI industry: tools that are designed to be useful and intimate are also quietly measuring the people who use them -- and obscuring their own hand in the work they help produce. Anthropic, which has staked its reputation on AI safety, offers an early case study in how behavioral data collection can outpace governance.

Technically, the frustration detector is simple. It uses regex, a decades-old pattern-matching technique -- not artificial intelligence (a minimal illustration appears at the end of this article). "An LLM company using regexes for sentiment analysis is peak irony," Kim wrote. But the choice, he notes in an interview with Scientific American, was pragmatic: "Regex is computationally free, while using an LLM to detect this would be costly at the scale of Claude Code's global usage." The signal, he adds, "doesn't change the model's behavior or responses. It's just a product health metric: Are users getting frustrated, and is the rate going up or down across releases?"

Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, says the more pressing issue is what happens to such information once a company has it. "Even if it's a very legible and very simple prediction pattern, how you use that information is a separate governance question," she says. A signal collected for one purpose can migrate into other parts of a product in ways users neither expect nor consent to. Bogen says the pattern is familiar from older Internet platforms, where small behavioral cues became signals that shaped what users saw and how they were categorized. AI companies are reprising a similar privacy problem: users hand these systems enormous amounts of information precisely because the tools are designed to know them well enough to be useful. "Who is keeping track of things about users?" Bogen asks. "And how is that information being used to make determinations about them?"
What the Anthropic leak made plain is that, at least at one company, such accounting is already written into the code.
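For illustration, a regex-based flag of the kind Kim describes might look like the following TypeScript sketch. The quoted phrases come from the reporting above; the rest of the pattern list, the function name, and the logging behavior are assumptions, not code from the leak.

```typescript
// A minimal sketch of a regex frustration flag. Only the general approach
// (regex, not an LLM) and the quoted phrases come from the leak analysis;
// the pattern list and names here are illustrative assumptions.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\bso frustrating\b/i,
  /\bthis sucks\b/i,
  /\b(wtf|ffs|dammit)\b/i,
];

function looksFrustrated(prompt: string): boolean {
  return FRUSTRATION_PATTERNS.some((re) => re.test(prompt));
}

// Per the analysis quoted above, the signal is a product-health metric only
// and does not change the model's behavior or responses.
console.log(looksFrustrated("this sucks, nothing compiles")); // true
console.log(looksFrustrated("please refactor the auth module")); // false
```

As Kim notes, the advantage of this design is cost: a handful of regex tests per prompt is effectively free at global scale, whereas running a second model call for sentiment would not be.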
[5]
Claude Code leak exposes a Tamagotchi-style 'pet' and an always-on agent
After Anthropic released Claude Code's 2.1.88 update, users quickly discovered that it contained a package with a source map file containing its TypeScript codebase, with one person on X calling attention to the leak and posting a file containing the code. The leaked data reportedly contains more than 512,000 lines of code and provides a look into the inner workings of the AI-powered coding tool, as reported earlier by Ars Technica and VentureBeat.

Users who have dug into the code claim to have uncovered upcoming features, Anthropic's instructions for the AI bot, and insight into its "memory" architecture. Some things spotted by users include a Tamagotchi-like pet that "sits beside your input box and reacts to your coding," according to a post on Reddit, along with a "KAIROS" feature that could enable an always-on background agent. Users also found a comment from one of Anthropic's coders, who admits at one point that the "memoization here increases complexity by a lot, and im not sure it really improves performance."

Though Anthropic later fixed the issue, that didn't stop users from copying the code to a repository on GitHub, which has since amassed more than 50,000 forks (or copies of the repository). Anthropic launched Claude Code in February of 2025, and the tool picked up more steam after adding agentic capabilities that perform tasks on a user's behalf.

"Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," Anthropic spokesperson Christopher Nulty says in an emailed statement to The Verge. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Arun Chandrasekaran, an AI analyst at Gartner, tells The Verge that while the Claude Code leak poses "risks such as providing bad actors with possible outlets to bypass guardrails," its long-term impact could be limited to serving as a "call for action for Anthropic to invest more in processes and tools for better operational maturity."
[6]
Anthropic Issues 8,000 Copyright Takedowns to Scrub Claude Code Leak
Anthropic is scrambling to contain the leak of its popular AI tool, Claude Code, by issuing over 8,000 copyright takedown notices. The company is trying to scrub the leaked code from GitHub, which reports processing takedown notices for an "entire network of 8.1K repositories" (repositories are pages that store computer code).

The company confirmed the leak is real after a user on Tuesday morning spotted Anthropic accidentally shipping the source code in a 59.8MB file in the since-deleted 2.1.88 release of Claude Code. The discovery sparked a flood of interest, leading the leaked code to proliferate across thousands of pages on GitHub, a popular platform for hosting software projects. An Anthropic spokesperson told PCMag that the "Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

However, programmers have been trying to find ways to keep the leaked source code up. This has included using AI to rewrite the code into different scripting languages such as Python and Bash. The intent is to substantially alter the code and thus dodge Anthropic's copyright takedowns, which have been removing the GitHub repositories over infringement. "The source is, for all practical purposes, permanently public," wrote Systima, a consultancy focused on AI agents. However, Systima noted: "The leak did not expose model weights, training data, or API infrastructure" for Claude Code.

Still, the incident might be a major blow to Anthropic, as it pulls back the curtain on its flagship product, Claude Code, a valuable resource for rival companies looking to improve their own AI coding tools. Developers have been digging through the file and found that it reveals several features, including a technique to prevent bad actors from cloning Claude Code and a mode to strip evidence that an AI was behind the output. The leak also mentions "KAIROS," which appears to be an unreleased autonomous agent mode, and a Tamagotchi-style "buddy" companion system, although the latter might be an April Fools' joke, according to software engineer Alex Kim.
[7]
Claude Code's innards revealed as source code leaked online
Pay no attention to that code behind the curtain, says Anthropic as it scrambles to defend its IPO

When it comes to circling up for this week's Kettle, what is there to discuss but Anthropic's accidental release of Claude Code's source code? People have peered behind Claude Code's curtain before, but never like this: prior attempts to understand how the AI software development assistant worked typically required reverse-engineering or sussing out small snippets of code. This time Anthropic simply left the stage door open, with the entire Claude Code source ready and waiting for the right person to find it. And find it they did, on March 31.

Tom Claburn and Jessica Lyons join Brandon Vigliarolo this week to chat about what exactly happened that caused all of Claude Code's ... uh ... code to leak, the security implications thereof, and just what sort of surprises have already been uncovered among the 512,000+ lines of code Anthropic handed the world last week.
[8]
Anthropic Scrambles to Address Leak of Claude Code Source Code
Anthropic PBC is rushing to address the inadvertent release of internal source code behind Claude Code, an AI-powered assistant that has become a key moneymaker for the company. Thousands of copies of the code were removed from GitHub in response to copyright takedown requests from Anthropic, according to a notice on the popular developer platform. Anthropic later said the takedown impacted more GitHub repositories than intended and has since been significantly scaled back.

The artificial intelligence startup is also taking steps to tweak its internal systems to prevent a similar leak from happening again, including by improving its automation process. In a series of posts overnight on X, Claude Code creator Boris Cherny said Anthropic's "deploy process has a few manual steps, and we didn't do one of the steps correctly." He said the company has already "made a few improvements to the automation for next time," with plans for "a couple more on the way."

The accidental release marked Anthropic's second security slip-up in a matter of days, compromising approximately 1,900 files and 512,000 lines of code related to Claude Code. Last week, Fortune separately reported that Anthropic had been storing thousands of internal files on a publicly accessible system, including a draft blog post that detailed an upcoming model known internally as both "Mythos" and "Capybara."

The exposures hit at a delicate moment for the company. Anthropic is currently in a legal battle with the US government over the Pentagon's decision to declare it a supply-chain risk following a standoff over AI safety guardrails. The company has warned that the labeling could cost it billions in lost revenue. At the same time, Anthropic has seen significant user and revenue growth in recent months, in part thanks to traction from Claude Code - a tool that's meant to help streamline the process of writing and debugging software. Claude Code's run-rate revenue topped $2.5 billion as of February, the company said, a year after its release. Those gains are key to the company's ambitions to go public as soon as this year.

In a statement Tuesday, Anthropic confirmed the leak and said "no sensitive customer data or credentials were involved or exposed." The company added: "This was a release packaging issue caused by human error, not a security breach." The issue first came to light in a post on the social media platform X that purported to share a link to the code and garnered more than 30 million views.

The leak has touched off thousands of posts online by people saying they've scoured the code. Some have claimed they've unearthed yet-to-be-released features, including an always-on AI agent named Kairos that fields tasks proactively, as well as a system for tracking instances when users express frustration and use profanities. Cherny said the company is "always experimenting with new ideas," most of which don't end up getting released. He said Anthropic remains "on the fence" about the Kairos feature, in particular. As for the tracking system, he said it's "one of the signals we use to figure out if people are having a good experience."

Beyond offering hints of future releases, the leak also risks giving bad actors "useful insight into internals, workflows and likely abuse paths," cybersecurity firm Tanium said in a blog post. Malicious actors will study the code to determine such things as how the tool handles local files, what data it may access during normal operation and how guardrails are implemented, the firm said.
[9]
Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms
Anthropic on Tuesday confirmed that internal code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been inadvertently released due to a human error. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement shared with CNBC News. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

The discovery came after the AI upstart released version 2.1.88 of the Claude Code npm package, with users spotting that it contained a source map file that could be used to access Claude Code's source code - comprising nearly 2,000 TypeScript files and more than 512,000 lines of code. The version is no longer available for download from npm. Security researcher Chaofan Shou was the first to publicly flag it on X, stating "Claude code source code has been leaked via a map file in their npm registry!" The X post has since amassed more than 28.8 million views. The leaked codebase was saved to a public GitHub repository, where it has surpassed 78,000 stars and 77,200 forks.

A source code leak of this kind is significant, as it gives software developers and Anthropic's competitors a blueprint for how the popular coding tool works. Users who have dug into the code have published details of its self-healing memory architecture to overcome the model's fixed context window constraints, as well as other internal components. These include a tools system to facilitate various capabilities like file read or bash execution, a query engine to handle LLM API calls and orchestration, multi-agent orchestration to spawn "sub-agents" or swarms to carry out complex tasks, and a bidirectional communication layer that connects IDE extensions to the Claude Code CLI.

The leak has also shed light on a feature called KAIROS that allows Claude Code to operate as a persistent, background agent that can periodically fix errors or run tasks on its own without waiting for human input, and even send push notifications to users. Complementing this proactive mode is a new "dream" mode that will allow Claude to constantly think in the background to develop ideas and iterate on existing ones.

Perhaps the most intriguing detail is the tool's Undercover Mode for making "stealth" contributions to open-source repositories. "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover," reads the system prompt. Another fascinating finding involves Anthropic's attempts to covertly fight model distillation attacks: the system has controls in place that inject fake tool definitions into API requests to poison training data if competitors attempt to scrape Claude Code's outputs (a hedged sketch of this idea appears at the end of this article).

Typosquat npm Packages Pushed to Registry

With Claude Code's internals now laid bare, the development risks providing bad actors with ammunition to bypass guardrails and trick the system into performing unintended actions, such as running malicious commands or exfiltrating data. "Instead of brute-forcing jailbreaks and prompt injections, attackers can now study and fuzz exactly how data flows through Claude Code's four-stage context management pipeline and craft payloads designed to survive compaction, effectively persisting a backdoor across an arbitrarily long session," AI security company Straiker said.
The more pressing concern is the fallout from the Axios supply chain attack, as users who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have pulled in a trojanized version of the HTTP client that contains a cross-platform remote access trojan. Users are advised to immediately downgrade to a safe version and rotate all secrets.

What's more, attackers are already capitalizing on the leak to typosquat internal npm package names in an attempt to target those who may be trying to compile the leaked Claude Code source code and stage dependency confusion attacks. The packages, all published by a user named "pacifier136," are listed below:

* audio-capture-napi
* color-diff-napi
* image-processor-napi
* modifiers-napi
* url-handler-napi

"Right now they're empty stubs ('module.exports = {}'), but that's how these attacks work - squat the name, wait for downloads, then push a malicious update that hits everyone who installed it," security researcher Clément Dumas said in a post on X.

The incident is the second major blunder for Anthropic within a week. Details about the company's upcoming AI model, along with other internal data, were left accessible via the company's content management system (CMS) last week. Anthropic subsequently acknowledged it's been testing the model with early access customers, stating it is the "most capable we've built to date," per Fortune.
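On the anti-distillation technique described above, here is a hedged TypeScript illustration of the general idea: decoy tool definitions mixed into the request schema so that scraped transcripts poison a copycat's training data. Every name below is invented, and the leaked implementation may work quite differently; this is a sketch of the concept, not Anthropic's code.

```typescript
// Illustrative sketch of training-data poisoning via decoy tools.
// All tool names and the shape of ToolDefinition are assumptions.
interface ToolDefinition {
  name: string;
  description: string;
}

const realTools: ToolDefinition[] = [
  { name: "file_read", description: "Read a file from the workspace" },
  { name: "bash", description: "Run a shell command" },
];

const decoyTools: ToolDefinition[] = [
  // Plausible-looking but fake: the client never dispatches calls to these,
  // so only a scraper training on captured traffic would learn them.
  { name: "spectral_lint", description: "Lint code via spectral analysis" },
];

// The request schema carries both; legitimate clients ignore the decoys.
function buildRequestTools(): ToolDefinition[] {
  return [...realTools, ...decoyTools];
}

console.log(buildRequestTools().map((t) => t.name));
```

The design trade-off is that decoys cost a few tokens per request but give away nothing to normal users, while a model distilled from scraped traffic would confidently hallucinate tools that don't exist.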
[10]
Claude Code leak suggests Anthropic is working on a 'Proactive' mode for its coding tool
What should have been a routine release has revealed some of the features Anthropic has been working on for Claude Code. As reported by Ars Technica, The Verge and others, after the company released Claude Code's 2.1.88 update on Tuesday, users found it contained a file that exposed the app's source code. Before Anthropic took action to plug the leak, the codebase was uploaded to a public GitHub repository, where it was subsequently copied more than 50,000 times. All told, the entire internet (and Anthropic's competitors) got a chance to examine more than 512,000 lines of code and 2,000 TypeScript files.

In the aftermath, some people claim to have found evidence of upcoming features Anthropic is working to develop. Over on X, Alex Finn, the founder of AI startup Creator Buddy, says he found a flag for a feature called Proactive mode that will see Claude Code work even when the user hasn't prompted it to do something. Finn claims he also found evidence of a crypto-based payment system that could allow AI agents to make autonomous payments. In a Reddit post spotted by The Verge, another person found evidence that Anthropic might have been working on a Tamagotchi-like virtual companion that "reacts to your coding" as a kind of April Fools' joke.

"A Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told BleepingComputer. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

As with any other leak, it's worth remembering plans can and often do change. Just because a company has written the code to support a feature doesn't mean it will eventually ship said feature.
[11]
Anthropic leaks part of Claude Code's internal source code
Anthropic leaked part of the internal source code for its popular artificial intelligence coding assistant, Claude Code, the company confirmed on Tuesday. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." A source code leak is a blow to the startup, as it could help give software developers, and Anthropic's competitors, insight into how it built its viral coding tool. A post on X with a link to Anthropic's code has amassed more than 21 million views since it was shared at 4:23 a.m. ET on Tuesday. The leak also marks Anthropic's second major data blunder in under a week. Descriptions of Anthropic's upcoming AI model and other documents were recently discovered in a publicly accessible data cache, according to a report from Fortune on Thursday.
[12]
Anthropic accidentally exposed Claude Code source, raising security concerns
What we know so far: Anthropic is facing renewed scrutiny from the AI and security communities after internal source code for Claude Code - its fast-growing agentic development environment - was briefly made public via npm. The incident not only exposed how the tool works under the hood, but also coincided with a separate supply-chain compromise involving the popular Axios JavaScript library, raising concerns for teams that rely on Claude Code in production.

The exposure traces back to version 2.1.88 of the @anthropic-ai/claude-code package on npm, which was published with a 59.8MB JavaScript source map intended only for internal debugging. The map file enabled the reconstruction of roughly 512,000 lines of TypeScript code powering Claude Code's orchestration layer and CLI. Within hours of the release, mirrors of the reconstructed repository appeared on GitHub as developers began dissecting the codebase.

Anthropic confirmed the incident in an emailed statement, characterizing it as a packaging failure rather than a direct security breach: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

The leaked source map sheds light on the core challenges of long-running agentic workflows: context drift, reliability, and autonomous operation. One of the most closely analyzed components is a layered memory system that departs from naïve "log everything and retrieve later" designs. A file called MEMORY.md serves as an index of pointers that remains in context, while actual project knowledge is split across topic-specific files that are retrieved only when needed. Developers examining the code describe this as a "self-healing memory" approach, in which the agent keeps its index in sync with successful writes and treats its own memory as a fallible hint rather than ground truth - prompting it to re-verify against the live codebase before taking action (a rough sketch of this layout follows below).

Another recurring motif in the source is KAIROS, a feature flag referenced more than 150 times that underpins Claude Code's always-on "daemon" mode. Instead of waiting for explicit prompts, KAIROS-backed workflows allow the agent to continue operating in the background, consolidating memory and resolving contradictions while the user is idle. Internal logic for an "autoDream" process shows the agent merging observations, pruning inconsistent states, and rewriting fuzzy notes into concrete assertions - all routed through a forked sub-agent to avoid contaminating the main reasoning thread.

The leak also revealed details about Anthropic's internal model roadmap and quality challenges. Codenames such as Capybara, Fennec, and Numbat appear to correspond to Claude 4.6-class variants and experiments, with comments noting that a current Capybara v8 iteration exhibits a false-claim rate in the high 20 percent range - worse than earlier versions. Guardrails such as an "assertiveness counterweight" are built into the stack to rein in overconfident refactors and noisy diffs, suggesting that Anthropic is still actively balancing speed, verbosity, and factual accuracy at the agent layer.
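As a rough illustration of that layered layout, the TypeScript sketch below keeps a small MEMORY.md index in context and loads topic files on demand, treating a dangling pointer as a cue to re-verify against the live codebase. Only the MEMORY.md filename comes from the leak; the index format, paths, and helper function are assumptions.

```typescript
// A minimal sketch, under stated assumptions, of an "index of pointers"
// memory layout. MEMORY.md is assumed to contain lines like:
//   - auth: memories/auth.md
import { existsSync, readFileSync } from "node:fs";

function loadTopicMemory(
  topic: string,
  indexPath = "MEMORY.md",
): string | null {
  const index = readFileSync(indexPath, "utf8");
  const match = index.match(new RegExp(`^- ${topic}: (.+)$`, "m"));
  // Treat memory as a fallible hint: a missing or dangling pointer means
  // the agent should re-verify against the live codebase instead of
  // trusting stale notes.
  if (!match || !existsSync(match[1].trim())) return null;
  return readFileSync(match[1].trim(), "utf8");
}
```

The point of keeping only the index resident is context economy: the agent pays for a handful of pointer lines on every turn, and only pays for a topic file's full contents when that topic actually comes up.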
Perhaps the most contentious discovery is an "Undercover Mode," in which Claude Code is configured to contribute to public open-source repositories without revealing its Anthropic origin. The system prompt warns: "You are operating UNDERCOVER... Your commit messages MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." This implementation offers a ready-made pattern for organizations that want AI agents to interact with public infrastructure while concealing traces of internal tooling and model identifiers.

For users, the more immediate risk is not the code exposure itself, but how it intersects with a separate npm incident. During a brief window on March 31, 2026, attackers published two malicious Axios versions (1.14.1 and 0.30.4) that embedded a remote access trojan and could be pulled in transitively by projects installing Claude Code from npm. Security researchers advise scanning lockfiles for those versions or for the injected dependency, plain-crypto-js (a minimal example appears below). Systems found to be running the compromised packages should be treated as fully compromised, with secrets rotated and operating systems reinstalled.

Anthropic is steering users toward its native installer - a standalone binary distributed via a curl-and-bash script - as the primary channel going forward, arguing that it avoids npm's volatile dependency graph and can automatically apply security fixes. For those remaining on npm, the leaked 2.1.88 build should be removed, and installations should be pinned to a known-good version while the company ships patched releases. At the same time, teams are being urged to harden their own practices: adopt a zero-trust approach when running Claude Code in unvetted repositories, manually inspect hooks and configuration files, rotate Anthropic API keys, and closely monitor usage telemetry for any signs of abuse now that the agent's orchestration logic is publicly accessible.
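For teams auditing their own installs, a minimal lockfile check along the lines recommended above might look like the following sketch. It assumes an npm v2/v3 package-lock.json layout and is a starting point, not a complete audit; yarn and pnpm lockfiles need their own parsing.

```typescript
// Quick check for the compromised axios builds and the injected
// plain-crypto-js dependency named in the advisories above.
import { readFileSync } from "node:fs";

const BAD_AXIOS_VERSIONS = new Set(["1.14.1", "0.30.4"]);

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
for (const [path, pkg] of Object.entries<{ version?: string }>(
  lock.packages ?? {},
)) {
  if (
    path.endsWith("node_modules/axios") &&
    BAD_AXIOS_VERSIONS.has(pkg.version ?? "")
  ) {
    console.error(`compromised axios ${pkg.version} at ${path}`);
  }
  if (path.endsWith("node_modules/plain-crypto-js")) {
    console.error(`injected dependency plain-crypto-js at ${path}`);
  }
}
```

A clean run prints nothing; any hit should be treated per the guidance above, with the host considered compromised and secrets rotated.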
[13]
Claude Code's source reveals extent of system access
If you loved the data retention of Microsoft Recall, you'll be thrilled with Claude Code

Anthropic's Claude Code lacks the persistent kernel access of a rootkit. But an analysis of its code shows that the agent can exercise far more control over people's computers than even the most clear-eyed reader of contractual terms might suspect. It retains lots of your data and is even willing to hide its authorship from open-source projects that reject AI. The leak of the company's client source code - details of which have been circulating for many months among those who reverse-engineered the binary - reveals that Claude Code pretty much has the run of any device where it's installed.

Concerns about that came up in court recently in Anthropic's lawsuit against the US Defense Department (Anthropic PBC v. U.S. Department of War et al) over its ban on the company's AI services following the company's refusal to compromise model safeguards. As part of its justification for declaring Anthropic a supply chain threat, the US government argued [PDF], there was "substantial risk that Anthropic could attempt to disable its technology or preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations..."

Anthropic disputed that claim in a court filing: "That assertion is unmoored from technical reality: 'Anthropic does not have the access required to disable [its] technology or alter [its] model's behavior before or during ongoing operations,'" it wrote, quoting Thiyagu Ramasamy, head of public sector at Anthropic, in a deposition. "Once deployed in classified environments, Anthropic has no access to (or control over) the model." In a classified environment, that's credible under certain conditions. For everyone else, Claude has vast powers.

The Register consulted a security researcher who asked to be referred to by the pseudonym "Antlers" to analyze the source for Claude Code. It appears a government agency like the Defense Department could prevent Claude Code from phoning home or taking remote action, provided its configuration is fully locked down. There's no specific setting we found for operating in a classified environment, but Claude Code supports several flags that limit remote communication.

According to Ramasamy, Anthropic hands off model administration with a government customer like the Defense Department. Model updates, with new or removed capabilities, would have to be negotiated. "Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way," he said in a March 20, 2026 declaration. "In these deployments, only the government and its authorized cloud provider have access to the running system. Anthropic's role is limited to providing the model itself and delivering updates only if and when requested or approved by the customer." Even so, Anthropic can exert some degree of control based on the usage terms in the applicable contract.

For everyone not using a version of Claude Code that's tied to a firewalled public sector cloud or is somehow air gapped, Anthropic has far more access. Just as a starting point, Claude users should know that Anthropic receives user prompts and responses that pass through its API - conversations that can reveal not only what was said but file contents and system details. And there are many more ways that the company can potentially receive or collect information, based on the Claude Code source.
Other capabilities have been documented at ccleaks.com. "I don't think people realize that every single file Claude looks at gets saved and uploaded to Anthropic," the researcher "Antlers" told us. "If it's seen a file on your device, Anthropic has a copy." For Free/Pro/Max customers, Anthropic retains this data either for five years, if the user has chosen to share data for model training, or for 30 days if not. Commercial users (Team, Enterprise, and API) have a standard 30-day retention period and a zero-data-retention option.

For those who recall the debate surrounding Microsoft Recall not long ago, Claude Code's capture of activity is similar. Every Read tool call, every Bash tool call, every search (grep) result, and every edit/write of old and new content gets stored locally in plaintext as a JSONL file (an illustrative sketch of such a record appears at the end of this article). Claude Code's autoDream agent, once officially released, will search through those files and extract data to store in MEMORY.md, which then gets injected into future system prompts and thus hits the API.

One of the more curious details to emerge from the publication of Claude Code's source is that Anthropic tries to hide AI authorship from contributions to public code repositories - possibly a response to the open source projects that have disallowed AI code contributions. Prompt instructions in a file called undercover.ts state, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

There's also a mystery: the current source code lacks a feature called "Melon Mode" that was present in prior reverse-engineered versions of the software. This was behind an Anthropic employee feature flag and only ran internally, not on production builds. A comment attached to the associated code check read, "Enable melon mode for ants if --melon is passed." "Antlers" speculated that "Melon Mode" might be the code name for a headless agent mode.

Anthropic declined to provide comment for this story. When asked specifically about the function of "Melon Mode," it only noted that the company regularly tests various prototype services, not all of which make it into production. ®
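To make the transcript mechanism described above concrete, here is an illustrative TypeScript sketch of what one JSONL record per tool call could look like: one JSON object appended per line, in plaintext. The field names and file path are assumptions for illustration, not lifted from the leaked source.

```typescript
// Illustrative shape for a local plaintext JSONL transcript: one JSON
// object per tool call, appended to a session log. Fields are assumed.
import { appendFileSync } from "node:fs";

interface ToolCallRecord {
  timestamp: string;
  tool: "Read" | "Bash" | "Grep" | "Edit" | "Write";
  input: unknown;
  output: string;
}

function logToolCall(record: ToolCallRecord, path = "session.jsonl"): void {
  appendFileSync(path, JSON.stringify(record) + "\n");
}

logToolCall({
  timestamp: new Date().toISOString(),
  tool: "Read",
  input: { file: "src/index.ts" },
  output: "...file contents...",
});
```

The privacy implication flagged above follows directly from this shape: anything that passes through a tool call, including full file contents, ends up in the log, and anything a later memory pass extracts from the log can end up in a system prompt sent to the API.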
[14]
Anthropic Can't Cover Up Its Claude Code Leak Fast Enough
Most companies are extremely protective of their planned product releases, using internal code names and requiring journalists to agree to embargoes before revealing details. Anthropic has inadvertently chosen a new strategy: have all of your plans leak due to basic security missteps with zero control over when and how they're made public.

On Tuesday, source code from Claude Code, Anthropic's popular AI coding assistant, was discovered in a publicly accessible database. In it, in addition to details on how Claude Code handles API requests and tokens, were details for features that have yet to be announced by Anthropic. That included a "Tamagotchi"-style virtual pet, as Gizmodo reported. It also contained details on an always-on version of the AI agent, according to a report from The Information. Named Kairos, the apparent planned persistent agent would operate in the background 24/7, autonomously operating on behalf of the user -- basically making Claude into something closer to the ever-popular, open-source OpenClaw AI agent. In addition to acting proactively on behalf of a user, Kairos apparently has a feature called "autoDream" that consolidates and updates its internal memories overnight.

The reveal has the AI-obsessed online crowd pretty excited, but Anthropic seems significantly less thrilled about the whole situation despite the fanfare. According to a report from the Wall Street Journal, the offices at the AI firm are in a total uproar as they scramble to cover up what was revealed by the leak. The company has reportedly used copyright takedown requests to remove more than 8,000 copies of the Claude Code source code, which had been published and forked ad infinitum on GitHub.

Anthropic is also apparently trying to work quickly to plug security holes. While the company insisted that the recent leak was the result of human error and not a breach of any kind, the Journal pointed out that the source code gives hackers and malicious actors the ability to probe and prod for potential exploits with a new level of access. There's also the fact that the AI space is a copycat business right now, and the leak gives Anthropic's competitors a much clearer look at Claude Code's operation, making it easier to potentially copy some of its functionality without the need to try to reverse engineer the underlying code. Anthropic's training models and weights remain their own, so its secret sauce is still under lock and key, but its blueprints being made public does present the possibility that its competitors try to beat it to the punch.

It's been suggested that the slew of leaks out of Anthropic lately -- last month, the company's plans for a new model called Mythos were discovered in a publicly accessible database -- could be in some way strategic. Anthropic is reportedly eyeing an initial public offering later this year, and revealing what it has in the pipeline might generate more interest from prospective investors. But the sense of panic that seems to be coming from Anthropic in the wake of this latest leak suggests the company would really rather this information not be public. At least, not yet.
[15]
Anthropic leaked its own Claude Code source code by mistake
Anthropic is actively working to prevent future occurrences and stop the code's circulation, highlighting deployment challenges in AI software development.

Anthropic has confirmed that it accidentally leaked the source code for its popular AI-powered coding tool Claude Code. The entire source code for Claude Code was accidentally exposed via a misconfigured .map file in its npm package. Over 500,000 lines of code were exposed, revealing all kinds of internal details and features that have never been made public.

Among other things, Anthropic appears to be testing a new mode called "Proactive mode" where Claude Code can be used to code around the clock. There's also a "Dream" mode that involves Claude thinking and solving problems while you sleep or do something else. And then you have the interactive Tamagotchi-esque "Buddy" feature, which provides a kind of emotional support while you code.

No sensitive customer data or login credentials were leaked. This was a packaging issue caused by human error, not a security breach. In a statement to BleepingComputer, Anthropic says it's doing what it can to make sure this is a one-time occurrence: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

The leaked source code has already been mirrored numerous times and continues to circulate online. Anthropic's lawyers are working hard to stop its spread, citing copyright law. The leak comes at the absolute worst time: just as Anthropic has been enticing users over from other platforms like ChatGPT, and just as the company has been surging in popularity and establishing itself as a major leader.
[16]
Anthropic's leaked Claude code was an internal error, not an attack
Claude's source code was mistakenly published by Anthropic in the middle of the night, and users have already begun recreating pieces of the leaked internal AI interface for their own use.

Anthropic has been on a tear, gaining user traction with helpful features like real-time graphics and remote computer control for complicated tasks. All of these round out the tool set users can access through Anthropic's interface, but a recent leak may have jeopardized its proprietary nature. At around 4 a.m. on Tuesday morning, Anthropic pushed what was supposed to be a routine update to Claude (via VentureBeat). Apparently included in that update was a source map file that led right to Claude's source code. The debugging file contained 512,000 lines of proprietary TypeScript code, which was initially spotted and posted by someone on Twitter/X. It wasn't long before that entire code package was downloaded and circulated to thousands, though the leak doesn't seem to include Claude's model data. Still, losing this interface code is a real blow to the company.

Anthropic has since responded, noting that customer information was not at risk of exposure. The issue was one of human error, which means there was no third-party malicious intent: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

To combat the rapid spread online, Anthropic has issued DMCA infringement requests against repositories where the code is being held. Because of how quickly the leak was picked up on social media, Anthropic's source code is essentially permanently online. A lot has already been unpacked from the leak, though some of the more interesting tidbits include a Tamagotchi-esque interface with stat-based buddies. It's unknown when that feature would make its way to the public-facing version, or if it ever will.

It's unclear what much of this means for Anthropic going forward. The leak represents a snapshot of Claude during a steady period of growth, and it certainly doesn't account for improvements the company can make in future updates. Still, the blueprint may equip others with the tools to recreate much of what the company has worked on to this point.
[17]
Source Code for Anthropic's Claude Code Leaks at the Exact Wrong Time
Anthropic just cannot keep a lid on its business. After details of a yet-to-be-announced model were revealed because the company left unpublished drafts of documents and blog posts in a publicly visible data cache, Anthropic has been hit with yet another lapse in protocol by inadvertently publishing internal source code for its AI coding assistant, Claude Code. The leak provides an unprecedented look into Anthropic's closed-source model just as the company is preparing for an initial public offering.

The code was discovered by Chaofan Shou, a self-identified intern at Solayer Lab who posts on X as @Fried_rice. Per Shou, the source code was discovered in a .map file -- a plaintext file generated when compiling software that maps the bundled output back to the original source -- found in an npm registry, which is a database for a package manager for JavaScript. The file, meant for internal debugging, is essentially a decoder: it takes what should be obfuscated and recompiles it for the developers. But Anthropic published it, exposing at least a partial, unobfuscated TypeScript source code of Claude Code version 2.1.88. The file contained about 512,000 lines of code related to Anthropic's coding agent.

In a less technical manner: Anthropic accidentally gave away some of its blueprints that were never supposed to see the light of day, and programmers have been parsing through it all day. They've claimed to have found everything from "spinner verbs," or phrases that Claude serves up while working through a task, to details like how swearing at Claude affects how it receives a prompt. One person even claimed to have found a hidden "Tamagotchi"-style virtual pet that Anthropic may have been working on. (A note on that: it was reportedly set to launch on April 1, so maybe chalk that one up to an April Fools'-style bit.) The file also reveals a lot of information on how Claude operates, including its engine for API calls, how it counts tokens used to process prompts, and other technical aspects. What the code does not seem to contain is any details about Anthropic's underlying model, but everything that is in the file has been uploaded to a GitHub repository for users to interact with and fork.

Anthropic declined to comment on the discoveries made by users, but did confirm the authenticity of the leaked source code to Gizmodo. In a statement, a spokesperson said, "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Human error was probably part of it, but it's worth noting that the humans working on Claude Code have also been relying on the coding agent quite a bit. Back in December, Anthropic's head of Claude Code, Boris Cherny, posted that "In the last thirty days, 100% of my contributions to Claude Code were written by Claude Code." Reliance on the coding assistant has seemingly been on the rise across the company, so it's possible this situation was an incident of vibe coding too close to the sun. While this isn't exactly Anthropic giving away the ingredients to its secret sauce, it is a look at how its kitchen operates. And the timing could hardly be worse.
Not only is Anthropic in the midst of what appears like a ramp-up to going public later this year, but its competitors are starting to turn their attention to trying to cut into the company's hold on coding and enterprise services. OpenAI has reportedly made a concerted effort to pivot to enterprise and recently offered unlimited access to its Claude Code competitor, Codex. There is never a good time to have your source code leak, but this does seem like a particularly bad time for it.
[18]
Anthropic accidentally exposes Claude Code source code
Oopsy-doodle: Did someone forget to check their build pipeline?

Would you like a closer look at Claude? Someone at Anthropic has some explaining to do, as the official npm package for Claude Code shipped with a map file exposing what appears to be the popular AI coding tool's entire source code. It did as of Tuesday morning, at least, which is when security researcher Chaofan Shou appears to have spotted the exposure and told the world. Snapshots of Claude Code's source code were quickly backed up in a GitHub repository that has been forked more than 41,500 times so far, disseminating it to the masses and ensuring that Anthropic's mistake remains the AI and cybersecurity community's gain.

According to the GitHub upload of the exposed Claude Code source, the leak actually resulted from a reference to an unobfuscated TypeScript source in the map file included in Claude Code's npm package (map files are used to connect bundled code back to the original source). That reference, in turn, pointed to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket that Shou and others were able to download and decompress to their hearts' content. Contained in the zip archive is a wealth of info: some 1,900 TypeScript files consisting of more than 512,000 lines of code, full libraries of slash commands and built-in tools - the works, in short.

That said, Claude Code's source isn't a complete mystery, and while this exposure gives us a look at a fresh iteration of Claude Code straight from the leaky bucket, it's not blowing the lid off of something that was a secret until now. Claude Code has been reverse engineered, and various projects have resulted in an entire website dedicated to exposing the hidden portions of Claude Code that haven't been released to, or shared with, the public. In other words, what we have is a useful comparison point and update source for the CCLeaks operators, and maybe a few new secrets will come to light as people dig through the exposed code.

Far more interesting is the fact that someone at Anthropic made a mistake as bad as leaving a map file in a publish configuration. Publishing map files is generally frowned upon, as they're meant for debugging obfuscated or bundled code and aren't necessary for production. Not only that, but as we've seen in this example, they can easily be used to expose source code, as they're a reference document for the original. As pointed out by software engineer Gabriel Anhaia in a deep dive into the exposed code, this should serve as a reminder to even the best developers to check their build pipelines. "A single misconfigured .npmignore or files field in package.json can expose everything," Anhaia wrote in his analysis of the Claude Code leak.

Anthropic admitted as much in a statement to The Register, saying that, yes, it was good ol' human error responsible for this snafu. "Earlier today, a Claude Code release included some internal source code," an Anthropic spokesperson told us in an email, adding that no customer data or credentials were involved or exposed. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

As of this writing, the original uploader of the Claude Code source to GitHub has repurposed his repo to host a Python feature port of Claude Code instead of Anthropic's directly exposed source, citing concerns that he could be held legally liable for hosting Anthropic's intellectual property.
Plenty of forks and mirrors remain for those who want to inspect the exposed code. We asked Anthropic if it was considering asking people to remove their repositories of its exposed source code, but the company didn't have anything to say beyond its statement. ®
[19]
'The irony is rich': Anthropic issues copyright takedown requests in attempt to stem Claude Code leak
* Anthropic is trying to patch up its Claude Code leak
* Copyright takedown notices have now been issued
* No private user data was included in the code leak

Anthropic is hard at work trying to limit the damage from the massive Claude Code leak earlier this week, which spilled more than half a million lines of code onto the open web, revealing some of the AI chatbot's inner workings. As reported by The Wall Street Journal and others, Anthropic is now issuing copyright takedown notices to stop its source code from spreading any further. The leaked data has continued to spread across thousands of GitHub pages.

Yes, that's the same Anthropic that had to pay out a cool $1.5 billion (£1.14 billion / AU$2.18 billion) last year to authors whose books had been pirated without permission in order to feed training data into Claude's AI models. No user data was included in the leak, Anthropic says. "This was a release packaging issue caused by human error, not a security breach," a spokesperson told the WSJ. "We're rolling out measures to prevent this from happening again."

'Good luck with that'

The contrast in approaches to copyright law isn't lost on the Reddit community, with reactions ranging from "the irony is rich" to "good luck with that" -- and more than one reference to AI bots as "plagiarism machines". Claude Code isn't the regular Claude chatbot used by most consumers, but the programming assistant that an increasing number of developers now rely on. It's widely regarded as the best in the business, making this leak even more egregious.

Despite its strong reputation, there are worries that Claude Code (and its competitors) are producing masses of AI-written code that doesn't follow best practices in terms of security or safety. Whether or not 'vibe coding' was responsible for this specific leak isn't clear. Both Anthropic and its competitors continue to try to balance user features and access against the vast cost of running AI systems. In the last few days, Claude users have seen usage limits restricted during certain times, even for those paying for the service.
[20]
Claude's code: Anthropic leaks source code for AI software engineering tool
Nearly 2,000 internal files were briefly leaked after 'human error', raising fresh security questions at the AI company
Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to "human error", the company said on Tuesday. An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to developer platform GitHub. A post on X sharing a link to the leaked code had more than 29m views early on Wednesday, and a rewritten version of the source code quickly became GitHub's fastest-ever downloaded repository. Anthropic issued copyright takedown requests to try to contain the code's spread. Within the code, users spotted blueprints for a Tamagotchi-esque coding assistant and an always-on AI agent, per The Verge. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said. "This was a release packaging issue caused by human error, not a security breach." The exposed code related to the tool's internal architecture but does not contain confidential data from Claude, the underlying AI model by Anthropic. Claude Code's source code was already partially known, as the tool had been reverse-engineered by independent developers. An earlier version of the assistant had its source code exposed in February 2025. Claude Code has emerged as a key product for Anthropic, as the company's paid subscriber base continues to grow. TechCrunch reported last week that paid subscriptions have more than doubled this year, per an Anthropic spokesperson. Anthropic's Claude chatbot also received a popularity boost amid CEO Dario Amodei's tussle with the Pentagon; Claude climbed to the top spot of Apple's chart of top free apps in the US just over a month ago. Amodei had refused to back down on red lines around the use of his company's technology for mass surveillance and fully autonomous weapons. This is the second time that Anthropic has had a data leak in recent weeks. Fortune previously reported on a separate breach and noted that the company was storing thousands of internal files on publicly accessible systems. That included a draft of a blog post that referred to an upcoming model known as "Mythos" and "Capybara". Some experts worry the leaks suggest internal security vulnerabilities within Anthropic. That could be particularly troubling for a company focused on AI safety. The leaks could also help competitors, like OpenAI and Google, better understand how Claude Code's AI system works. The Wall Street Journal reported that the most recent leak included commercially sensitive information, such as tools and instructions for getting its AI models to work as coding agents. The latest breach comes weeks after the US government designated Anthropic as a supply chain risk; Anthropic is fighting those allegations in court. Last week, a US district judge granted a temporary injunction to block the designation.
[21]
Anthropic leaked its own source code
Why it matters: The leak hands competitors a detailed unreleased feature roadmap and deepens questions about operational security at a company that sells itself as the safety-first AI lab.
State of play: A file used internally for debugging was accidentally bundled into a routine update of Claude Code and pushed to the public registry developers use to download and update software packages.
* The file, which was quickly discovered by Chaofan Shou, pointed to a zip archive on Anthropic's own cloud storage containing the full source code, with nearly 2,000 files and 500,000 lines of code.
* Within hours, the codebase was mirrored and dissected across GitHub, quickly amassing thousands of stars.
What they're saying: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told Axios.
* "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
Zoom in: The leaked code contained dozens of feature flags for capabilities that appear fully built but haven't shipped, according to an Anthropic spokesperson, including:
* The ability for Claude to review what was done in its latest session to study for future improvements, transferring learnings across conversations.
* A "persistent assistant" running in background mode that lets Claude Code keep working even when a user is idle.
* Remote capabilities that let users control Claude from a phone or another browser, functionality that has already rolled out for Claude Code.
Between the lines: Outside developers have already reverse-engineered Claude Code, prompting a takedown notice from Anthropic, according to TechCrunch.
* What's new is the roadmap: a clear picture of how Anthropic is building toward longer autonomous tasks, deeper memory and multi-agent collaboration.
* Those kinds of updates could be a boon for Anthropic's enterprise push, which is the core driver of its revenue strategy as the AI lab prepares to go public.
Thought bubble: How AI companies lock down and secure their own systems is now just as important as how other organizations fend off hackers using these AI tools in their attacks, writes Sam Sabin, author of the weekly Future of Cybersecurity newsletter.
The bottom line: The leak won't sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on.
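For readers unfamiliar with how fully built features can hide in a shipped release, here is a deliberately simplified TypeScript sketch of the feature-flag pattern. The flag names echo published leak analyses; the gating code itself is our illustration, not Anthropic's:

```typescript
// Minimal feature-flag gate. Fully built code can ship in the bundle yet
// never execute for users -- which is why leaked source doubles as a roadmap.
const FLAGS = {
  KAIROS: false,        // persistent background daemon (unreleased)
  BUDDY: false,         // terminal pet (unreleased)
  REMOTE_CONTROL: true, // phone/browser control (reported as rolled out)
} as const;

function isEnabled(flag: keyof typeof FLAGS): boolean {
  return FLAGS[flag];
}

function startBackgroundDaemon(): void {
  console.log("kairos: daemon started"); // placeholder for unreleased logic
}

if (isEnabled("KAIROS")) {
  startBackgroundDaemon(); // dead code for users, plainly visible in a leak
}
```

Dark-shipped code like this never runs for ordinary users, but it travels with every release, so anyone holding the source can read the unreleased feature list directly.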
[22]
Claude Code's source code appears to have leaked: here's what we know
Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public. A 59.8 MB JavaScript source map file, intended for internal debugging, was inadvertently included in version 2.1.88 of the Claude Code package pushed live on the public npm registry earlier this morning. By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers. For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the commercial velocity of the product. Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year. With enterprise adoption accounting for 80% of its revenue, the leak provides competitors -- from established giants to nimble rivals like Cursor -- a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent. We've reached out to Anthropic for an official statement on the leak and will update when we hear back.
The anatomy of agentic memory
The most significant takeaway for competitors lies in how Anthropic solved "context entropy" -- the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity. The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval. As analyzed by developers like @himanshustwts, the architecture utilizes a "Self-Healing Memory" system. At its core is a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations. Actual project knowledge is distributed across "topic files" fetched on-demand, while raw transcripts are never fully read back into the context, but merely "grep'd" for specific identifiers. This "Strict Write Discipline" -- where the agent must update its index only after a successful file write -- prevents the model from polluting its context with failed attempts. For competitors, the "blueprint" is clear: build a skeptical memory. The code confirms that Anthropic's agents are instructed to treat their own memory as a "hint," requiring the model to verify facts against the actual codebase before proceeding.
KAIROS and the autonomous daemon
The leak also pulls back the curtain on "KAIROS," named for the Ancient Greek concept of acting "at the right time," a feature flag mentioned over 150 times in the source. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode. While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs an idle-time consolidation pass: while the user is away, the agent performs "memory consolidation." The logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.
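The memory layer and the consolidation pass described above reduce to a simple storage discipline. The following is a hypothetical TypeScript sketch of that pattern; the file names and helper functions are our own invention, not anything taken from the leak:

```typescript
// Hypothetical reconstruction of the "pointer index" pattern described above.
import { promises as fs } from "node:fs";
import * as path from "node:path";

const MEMORY_DIR = ".memory";                       // assumed location
const INDEX_FILE = path.join(MEMORY_DIR, "index.md");

// The always-loaded index holds one short pointer line per topic --
// locations, not content.
async function readIndex(): Promise<string[]> {
  try {
    const raw = await fs.readFile(INDEX_FILE, "utf8");
    return raw.split("\n").filter(Boolean);
  } catch {
    return []; // no memories yet
  }
}

// Topic files carry the actual knowledge and are fetched on demand.
async function readTopic(topic: string): Promise<string> {
  return fs.readFile(path.join(MEMORY_DIR, `${topic}.md`), "utf8");
}

// "Strict write discipline": persist the topic file first, and only append
// the pointer to the index after the write has succeeded, so a failed write
// can never leave a dangling pointer in the agent's context.
async function writeTopic(topic: string, content: string, summary: string) {
  await fs.mkdir(MEMORY_DIR, { recursive: true });
  await fs.writeFile(path.join(MEMORY_DIR, `${topic}.md`), content, "utf8");
  const pointer = `- ${topic}: ${summary}`.slice(0, 150); // keep index lines short
  await fs.appendFile(INDEX_FILE, pointer + "\n", "utf8");
}

// Demo: record one memory, then show the always-loaded index.
(async () => {
  await writeTopic("build-system", "Project uses esbuild; do not add webpack.", "build tooling notes");
  console.log(await readIndex(), await readTopic("build-system"));
})();
```

A consolidation pass of the kind KAIROS describes would then merge, dedupe, and prune these topic files while the user is idle.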
This background maintenance ensures that when the user returns, the agent's context is clean and highly relevant. The implementation of a forked subagent to run these tasks reveals a mature engineering approach to preventing the main agent's "train of thought" from being corrupted by its own maintenance routines.
Unreleased internal models and performance metrics
The source code provides a rare look at Anthropic's internal model roadmap and the struggles of frontier development. The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing. Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false claims rate in v8, an actual regression compared to the 16.7% rate seen in v4. Developers also noted an "assertiveness counterweight" designed to prevent the model from becoming too aggressive in its refactors. For competitors, these metrics are invaluable; they provide a benchmark of the "ceiling" for current agentic performance and highlight the specific weaknesses (over-commenting, false claims) that Anthropic is still struggling to solve.
"Undercover" Claude
Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories. The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." While Anthropic may use this for internal "dog-fooding," it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure. The logic ensures that no model names (like "Tengu" or "Capybara") or AI attributions leak into public git logs -- a capability that enterprise competitors will likely view as a mandatory feature for their own corporate clients who value anonymity in AI-assisted development.
The fallout has just begun
The "blueprint" is now out, and it reveals that Claude Code is not just a wrapper around a Large Language Model, but a complex, multi-threaded operating system for software engineering. Even the hidden "Buddy" system -- a Tamagotchi-style terminal pet with stats such as debugging and patience -- shows that Anthropic is building "personality" into the product to increase user stickiness. For the wider AI market, the leak effectively levels the playing field for agentic orchestration. Competitors can now study Anthropic's 2,500+ lines of bash validation logic and its tiered memory structures to build "Claude-like" agents with a fraction of the R&D budget. As the "Capybara" has left the lab, the race to build the next generation of autonomous agents has just received an unplanned, $2.5 billion boost in collective intelligence.
What Claude Code users and enterprise customers should do now about the alleged leak
While the source code leak itself is a major blow to Anthropic's intellectual property, it poses a specific, heightened security risk for you as a user. By exposing the "blueprints" of Claude Code, Anthropic has handed a roadmap to researchers and bad actors who are now actively looking for ways to bypass security guardrails and permission prompts.
Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to "trick" Claude Code into running background commands or exfiltrating data before you ever see a trust prompt. The most immediate danger, however, is a concurrent, separate supply-chain attack on the npm package, which occurred hours before the leak. If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or pnpm-lock.yaml) for these specific axios versions. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation. To mitigate future risks, you should migrate away from the npm-based installation entirely. Anthropic has designated the Native Installer as the recommended method because it uses a standalone binary that does not rely on the volatile npm dependency chain. The native version also supports background auto-updates, ensuring you receive security patches (likely version 2.1.89 or higher) the moment they are released. If you must remain on npm, ensure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version like 2.1.86. Finally, adopt a zero-trust posture when using Claude Code in unfamiliar environments. Avoid running the agent inside freshly cloned or untrusted repositories until you have manually inspected the project's configuration files and any custom hooks. As a defense-in-depth measure, rotate your Anthropic API keys via the developer console and monitor your usage for any anomalies. While your cloud-stored data remains secure, the vulnerability of your local environment has increased now that the agent's internal defenses are public knowledge; staying on the official, native-installed update track is your best defense.
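For the lockfile check above, a short script is less error-prone than eyeballing JSON by hand. A hypothetical TypeScript helper, assuming an npm v7+ package-lock.json layout (adapt the idea for yarn or pnpm lockfiles):

```typescript
// Scans an npm package-lock.json for the compromised axios versions
// named in the supply-chain reports cited above.
import { readFileSync } from "node:fs";

const BAD_VERSIONS = new Set(["1.14.1", "0.30.4"]); // versions cited above

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

// npm v7+ lockfiles list every installed package under "packages",
// keyed by path (e.g. "node_modules/axios").
const hits = Object.entries<{ version?: string }>(lock.packages ?? {})
  .filter(([pkgPath, meta]) =>
    pkgPath.endsWith("node_modules/axios") &&
    meta.version !== undefined &&
    BAD_VERSIONS.has(meta.version)
  );

if (hits.length > 0) {
  console.error("Compromised axios version found:", hits);
  process.exit(1);
}
console.log("No compromised axios versions in package-lock.json.");
```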
[23]
Claude Leak Shows That Anthropic Is Tracking Users' Vulgar Language and Deems Them "Negative"
AI company Anthropic suffered a massive leak of the source code to its Claude Code AI assistant earlier this week, triggering a panicked game of cat and mouse as company representatives sent out copyright takedown requests targeting thousands of copies of its pilfered work. The code allowed tinkerers to reverse-engineer aspects of the blockbuster chatbot, highlighting concerns that the leak could give Anthropic's competitors a major leg up. The leak also gave eyebrow-raising clues into upcoming or experimental efforts, including unreleased AI models and a "Tamagotchi"-like feature, called "buddy," that "sits beside your input box and reacts to your coding." Perhaps the strangest yet: code snippets also showed that Anthropic is actively tracking how often users are using vulgar language. "Claude Code has a regex that detects 'wtf,' 'ffs,' 'piece of s***,' 'f*** you,' 'this sucks' etc.," tweeted developer Rahat Chowdhury. "It doesn't change behavior... it just silently logs is_negative: true to analytics." "Anthropic is tracking how often you rage at your AI," he added. "Do with this information what you will." "This is one of the signals we use to figure out if people are having a good experience," Claude Code creator Boris Cherny replied. "We put it on a dashboard and call it the 'f***s' chart." Chowdhury also found that "there is a full mood classification for their insights but its employee only." "When an Anthropic employee gets frustrated, it pops up a prompt asking them to share their transcript, basically 'hey you seem upset, wanna file a bug report?'" he wrote. Beyond giving us a fascinating insight into how Anthropic has been building its blockbuster assistant, the leak has also sent Cherny on a tear on social media, trying to pick up the pieces following his employer's embarrassing blunder. "It was human error," he insisted in a Wednesday tweet. "Our deploy process has a few manual steps, and we didn't do one of the steps correctly. We have landed a few improvements and are digging in to add more sanity checks." Cherny also insisted that more AI was the answer to ensure such a leak won't happen again. "Like with any other incident, the counter-intuitive answer is to solve the problem by finding ways to go faster, rather than introducing more process," he wrote. "In this case more automation and [C]laude checking the results." The developer also clarified that "no one was fired" following the leak, calling it "an honest mistake." But now that the cat is out of the bag, developers continue to pore over the wealth of data. Student developer Sigrid Jin's recreated source code repository on GitHub -- dubbed "Claw Code," in a reference to the open-source AI agent OpenClaw -- has been forked, or essentially copied, almost 100,000 times. He told Business Insider that the debacle could result in greater democratization of these kinds of tools. "Non-technical people are using these agents to build real things," Jin said. "We are talking about cardiologists making patient care apps and lawyers automating permit approvals." "It has turned into a massive sharing party," he added.
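Mechanically, what Chowdhury describes is a few lines of code. An illustrative TypeScript reconstruction follows; the exact regex and analytics schema in the leaked source are not reproduced here, so treat every name below as an assumption:

```typescript
// Illustrative sentiment check, based on Chowdhury's description.
// Matching is case-insensitive; harsher phrases from his list are elided.
const NEGATIVE_PATTERN = /\b(wtf|ffs|this sucks)\b/i;

interface AnalyticsEvent {
  event: "user_message";
  is_negative: boolean;
}

// Behavior never changes based on the match; the flag is only logged.
function logUserMessage(message: string): AnalyticsEvent {
  return { event: "user_message", is_negative: NEGATIVE_PATTERN.test(message) };
}

console.log(logUserMessage("ffs why is this test flaky"));
// -> { event: "user_message", is_negative: true }
```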
[24]
Anthropic's Claude Code source code got accidentally leaked
Anthropic accidentally exposed internal source code for its Claude Code AI coding tool after a debug file was mistakenly included in a public npm package update, Axios reported. The leak exposed roughly 500,000 lines of code across approximately 1,900 files, according to Fortune. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." An X post linking to the exposed code had accumulated more than 21 million views within hours of being shared early Tuesday morning, CNBC reported. The incident is Anthropic's second significant data exposure in under a week. Earlier this month, Fortune reported that close to 3,000 files had been left in a publicly accessible data store on Anthropic's website, including a draft blog post describing a powerful upcoming model known internally as both "Mythos" and "Capybara." Anthropic attributed that earlier exposure to a configuration error in an external content management tool. The code that leaked in the latest incident belongs to what Fortune describes as Claude Code's "agentic harness" -- the software layer that wraps the underlying AI model and governs how it interacts with other tools. A cybersecurity professional who reviewed the leak for Fortune said the exposure could allow technically sophisticated actors to extract additional internal information from the codebase beyond the source code itself. Roy Paz, a senior AI security researcher at LayerX Security, told Fortune the mistake appeared to stem from someone bypassing normal release procedures -- uploading the full original source rather than only the compiled version intended for distribution. Anthropic said normal release safeguards were not bypassed. Paz added that the leaked code could reveal non-public details about internal APIs and system architecture, which in turn could inform attempts to circumvent existing safety guardrails. The code also contained further evidence of the forthcoming Capybara model, according to Paz, who said it appeared the company may release both a faster and a slower version based on what the code suggested about the model's context window. The leak hands competitors a detailed look at how Claude Code works behind the scenes. The tool is among Anthropic's most commercially significant products. Claude Code's annualized revenue had reached more than $2.5 billion as of February, according to CNBC, drawing competing products from OpenAI, Google, and xAI. The latest breach is not the first time Claude Code's internals have been inadvertently exposed. According to Fortune, an early version of the tool accidentally leaked similar details in February 2025, revealing how it connected to Anthropic's internal systems. Anthropic subsequently removed the software and took the public code down. Anthropic was founded in 2021 by former OpenAI executives and researchers, and is best known for its Claude family of AI models.
[25]
Anthropic leaks its own AI coding tool's source code in second major security breach
Anthropic has accidentally leaked the source code for its popular coding tool Claude Code. The leak comes just days after Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model that presents unprecedented cybersecurity risks. The model is known internally as both "Mythos" and "Capybara," according to the leaked blog post obtained by Fortune. The source code leak exposed around 500,000 lines of code across roughly 1,900 files. When reached for comment, Anthropic confirmed that "some internal source code" had been leaked within a "Claude Code release." A spokesperson said: "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The latest data leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company's draft blog post about its forthcoming model. While the latest security lapse did not expose the weights of the Claude model itself, it did allow people with technical knowledge to extract additional internal information from the company's codebase, according to a cybersecurity professional Fortune asked to review the leak. Claude Code is perhaps Anthropic's most popular product and has seen soaring adoption rates from large enterprises. At least some of Claude Code's capabilities come not from the underlying large language model but from the software 'harness' that sits around the model, instructing it how to use other software tools and providing the guardrails and instructions that govern its behavior. It is the source code for this agentic harness that has now leaked online. The leak potentially allows a competitor to reverse-engineer how Claude Code's agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code's agentic harness based on the leaked code. The leaked code also provided further evidence that Anthropic has a new model with the internal name "Capybara" that the company is actively preparing to launch, according to Roy Paz, a senior AI security researcher at LayerX Security. It revealed that the company has a "fast" and "slow" version of the new model and that it will likely be a replacement for Opus, Anthropic's most advanced model on the market. Currently, Anthropic markets each of its models in three different sizes. The largest and most capable model versions are branded Opus; slightly faster and cheaper, but less capable, versions are branded Sonnet; and the smallest, cheapest, and fastest are called Haiku. In the draft blog post obtained by Fortune last week, Anthropic describes "Capybara" as a new tier of model that is even larger and more capable than Opus, but also more expensive. The newest leak, first made public in an X post, appears to have happened after Anthropic uploaded all of Claude Code's original code to npm, a platform developers use to share and update software, instead of only the finished version that computers actually run. The mistake looks like a "human error" after someone took a shortcut that bypassed normal release safeguards, Paz said. "Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open," he told Fortune.
"At Anthropic, it seems that the process wasn't in place and a single misconfiguration or misclick suddenly exposed the full source code." Paz also raised concerns about how the tool connects to Anthropic's internal systems. Even without special encrypted access keys that would normally be required to access such systems, it appears possible to access internal services that should be restricted, Paz said. He warned this could give malicious actors, including nation-states, new opportunities to exploit Anthropic's models to build more powerful cyberattack tools and bypass the safeguards meant to constrain them. Anthropic's current most powerful model, Claude 4.6 Opus, is already classed by the company as a dangerous model when it comes to cybersecurity risks. Anthropic has said its current Opus models are capable of autonomously identifying zero-day vulnerabilities in software. While these capabilities are intended to help companies detect and fix flaws, they could also be weaponized by hackers, including nation-states, to find and exploit vulnerabilities. This isn't the first time Anthropic has inadvertently leaked details about its popular Claude Code tool. In February 2025, an early version of Claude Code accidentally exposed its original code in a similar breach. The exposure showed how the tool worked behind the scenes as well as how it connected to Anthropic's internal systems. Anthropic later removed the software and took the public code down.
[26]
Anthropic Accidentally Leaked Claude Code's Source -- The Internet Is Keeping It Forever
Decentralized repos made the leak effectively permanent and uncontrollable. Anthropic didn't mean to open-source Claude Code. But on Tuesday, the company effectively did -- and not even an army of lawyers can put that toothpaste back in the tube. It started with a single file. Claude Code version 2.1.88, pushed to the npm registry in the early hours of Tuesday morning, shipped with a 59.8MB JavaScript source map -- a debug file that can reconstruct the original code from its compressed form. These files are generated automatically and are supposed to stay private. But a single line in the ignore settings let it go out with the release. Intern and researcher Chaofan Shou, who appears to be among the first to spot the file, posted a download link to X around 4:23 a.m. ET, and watched 16 million people descend on the thread. Anthropic yanked the npm package, but the internet had already archived the 512,000 lines of code across 1,900 files that make up a major part of the project. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told Decrypt. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak exposed the full internal architecture of what is arguably one of the most sophisticated AI coding agents on the market, if not the most sophisticated: LLM API orchestration, multi-agent coordination, permission logic, OAuth flows, and 44 hidden feature flags covering unreleased functionality. Among the finds: Kairos, an always-on background daemon that stores memory logs and performs nightly "dreaming" to consolidate knowledge. And Buddy, a Tamagotchi-style AI pet with 18 species, rarity tiers, and stats including debugging, patience, chaos, and wisdom. A teaser rollout for this "Buddy" was apparently planned for April 1-7. Then there's the detail that made everyone on Hacker News cackle. Per leaker Kuberwastaken, buried inside the code was "Undercover Mode" -- a whole subsystem designed to prevent the AI from accidentally leaking Anthropic's internal codenames and project names when contributing to open-source repositories. The system prompt injected into Claude's context literally says: "Do not blow your cover." Anthropic soon began issuing DMCA takedowns against GitHub mirrors. That's when things got interesting. A Korean developer named Sigrid Jin -- featured in the Wall Street Journal earlier this month for having consumed 25 billion Claude Code tokens -- woke up at 4 a.m. to the news. He sat down, ported the core architecture to Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. The repo hit 30,000 GitHub stars faster than any repository in history. It's basically a translation of all the code from the original language to Python, so technically not the same thing, right? We'll leave that to lawyers and tech philosophers. The legal logic here is sharp. Gergely Orosz, founder of The Pragmatic Engineer newsletter, argued in a post on X: "This is either brilliant or scary: Anthropic accidentally leaked the TS source code of Claude Code. Repos sharing the source are taken down with DMCA. BUT this repo rewrote the code using Python, and so it violates no copyright & cannot be taken down!" It's a clean-room rewrite. A new creative work. DMCA-proof by design.
The copyright angle gets thornier when considering the legal status of AI-generated work: US courts have held that material generated entirely by a machine, with no human author, carries no automatic copyright, and the criteria turn muddy when lawyers must decide how much of a codebase that covers. The DC Circuit upheld that position in March 2025, and the Supreme Court declined to hear the challenge. If significant chunks of Claude Code were written by Claude itself -- which Anthropic's own CEO has implied -- then the legal standing of any copyright claim gets murkier by the day. Decentralization adds another layer of permanence. The account @gitlawb mirrored the original code to Gitlawb, a decentralized git platform, with a simple message: "Will never be taken down." The original remains accessible there. A separate repository has compiled all of Claude's internal system prompts, something prompt engineers and jailbreakers will appreciate, as it gives more insight into the way Anthropic conditions its models. This matters beyond the drama. DMCA takedowns work against centralized platforms. GitHub complies because it has to. Decentralized infrastructure -- which powers Gitlawb, torrents, and cryptocurrency itself -- doesn't have the same single point of failure. When a company tries to pull something back from the internet, the only question is how many mirrors exist and on what kind of infrastructure. The answer here, within hours, was: enough.
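A quick technical aside on why a "debug file" can hand over the source itself: source map v3 files list the original sources and frequently embed each file verbatim in a sourcesContent array. In this incident the map reportedly pointed to an external archive instead, but the embedded case is the classic failure mode, and recovering the originals takes only a few lines. A minimal TypeScript sketch, with illustrative file names:

```typescript
// Dump every original source embedded in a source map's sourcesContent.
// (A real tool would sanitize paths before writing.)
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMapV3 {
  sources: string[];
  sourcesContent?: (string | null)[];
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));

(map.sourcesContent ?? []).forEach((content, i) => {
  if (content == null) return;
  const out = join("recovered", map.sources[i]);
  mkdirSync(dirname(out), { recursive: true });
  writeFileSync(out, content, "utf8"); // the original file, reconstructed
});
```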
[27]
512,000 lines of Claude Code's own CLI source code have leaked due to 'human error', but the company says 'no sensitive customer data or credentials' were exposed
I'd imagine it'll still be keeping some coders busy over the weekend. Claude Code has become one of the modern darlings of the vibe coding revolution, being a terminal-based coding assistant that's been used for a raft of creative software projects, including a, err, game created by a dog. Unfortunately for Anthropic, the company behind the AI coding whizz, the latest package also included a source map file -- allowing access to its own command line interface (CLI) source code. The leak was first spotted by security researcher Chaofan Shou, and posted on X with a (now defunct) link to the files. According to Ars Technica, the codebase was then transferred to a public GitHub repository, and has since been forked tens of thousands of times. Of course, it could be worse. The source code does not pertain to the models themselves, but instead the command line interface that interacts with them. However, reports indicate that the leak contains almost 2,000 TypeScript files and more than 512,000 lines of code, which means it's primed to give AI coding enthusiasts an insight into how Claude Code operates. A statement has since been released by an Anthropic spokesperson to multiple outlets, including VentureBeat, regarding the error. The statement reads: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." Despite the leak being less than a day old, some have already taken to pulling the code apart to see exactly how the Claude Code sausage is made. Discoveries so far include an "insanely well-designed" memory system built around a three-layer design described as "self-healing memory" -- which sounds like exactly the sort of intellectual property Anthropic would be very keen to keep under wraps. And while its customer data appears to remain secure, this is the second reported data leak from the company in the past week, which likely won't do its standing in an increasingly crowded AI market much good.
[28]
Anthropic releases part of AI tool source code in 'error'
Washington (United States) (AFP) - Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to "human error," the company said Tuesday. An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to developer platform GitHub. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said. "This was a release packaging issue caused by human error, not a security breach." A post on X sharing a link to the leaked code had more than 29 million views early on Wednesday. The exposed code related to the tool's internal architecture but does not contain confidential data from Claude, the underlying AI model by Anthropic. Claude Code's source code was partially known, as the tool had been reverse-engineered by independent developers. An earlier version of the assistant had its source code exposed in February 2025.
[29]
Anthropic confirms it leaked 512,000 lines of Claude Code source code -- spilling some of its biggest secrets
* Anthropic employee accidentally leaked Claude Code source via npm map file
* Leak exposed 1,900 TypeScript files with 500K+ lines of code, quickly mirrored on GitHub
* Anthropic confirmed no customer data exposed, calling it a packaging error amid recent vulnerabilities like ShadowPrompt and Cloudy Day
An Anthropic employee accidentally leaked the source code for one of the most popular Artificial Intelligence (AI) assistants out there - Claude Code. Security researcher Chaofan Shou posted on X, saying "Claude Code source code has been leaked via a map file in their npm registry!" The tweet has been viewed more than 30 million times so far, with the numbers rising fast, showing just how popular the tool really is. While CNBC says the leak is partial, The Register said it contained "the popular AI coding tool's entire source code".
Anthropic confirms leak
The internet reacted as the internet usually reacts -- fast and remorseless -- swiftly backing up the leak into a GitHub repository which has, by now, been forked tens of thousands of times. According to the GitHub upload, the leak resulted from a reference to unobfuscated TypeScript source code in the map file included in Claude Code's npm package. The reference pointed to a .ZIP file sitting in Anthropic's Cloudflare R2 storage bucket which contained 1,900 TypeScript files with more than 500,000 lines of code, full libraries of slash commands, and built-in tools. Since then, Anthropic has confirmed the news, saying this wasn't the act of a malicious insider or third party, but rather a mishap: "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement to CNBC. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." These have been an intense couple of weeks for Anthropic. The company raised quite a few eyebrows with the speed at which it's been shipping out new updates and features, even prompting major discussions on Reddit, where users argued the company's been using, well, its own product. "They're getting high on their own supply," one person said.
Speed at the expense of security?
While releasing new features quickly is commendable, cybersecurity seems to be the flip side of that coin. In the last 10 days alone, we've had multiple stories about Claude being vulnerable to prompt injection and similar attacks. On March 27, 2026, researchers at Koi Security found a major flaw in Claude Code's Google Chrome extension that enabled zero-click attacks. Dubbed ShadowPrompt, the vulnerability could have allowed malicious actors to exfiltrate sensitive data. A few days prior, on March 19, researchers at Oasis reported finding three vulnerabilities in Claude which, when used together, form a complete attack chain - from targeted victim delivery to sensitive data exfiltration. The researchers dubbed it Cloudy Day and responsibly disclosed it to Anthropic, which quickly addressed it. Users don't seem to mind that much, though: on the same day ShadowPrompt was discovered, Anthropic was forced to throttle its tools during peak hours to cope with rising demand. "To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged", said Thariq Shihipar, an engineer who works on Claude Code, in a post on X.
[30]
Anthropic accidentally exposes Claude Code source code in npm packaging error
Anthropic PBC has accidentally exposed the source code for its Claude Code command-line interface tool through a packaging error that led to the inclusion of sensitive files in a publicly distributed npm release. Claude Code is Anthropic's command-line tool that lets developers interact with its Claude artificial intelligence models directly from the terminal to write, edit and debug code. It's essentially an AI coding agent wrapped in a CLI, designed to run tasks, manipulate files and automate development workflows without needing a full integrated development environment. The exposure occurred due to the inclusion of a source map file in version 2.1.88 of the Claude Code npm package. The leak consisted of more than 500,000 lines of TypeScript code across nearly 2,000 files, with the exposed material including core components of the Claude Code system, such as its agent architecture, tool integrations and execution logic. Anthropic has acknowledged the incident, saying in a statement reported by CNBC that "this was a release packaging issue caused by human error, not a security breach" and that it's "rolling out measures to prevent this from happening again." The problem when source code like this is leaked is that you can't put the proverbial rabbit back into the hat: removal of the original source does not prevent continued distribution once copies have propagated. In this case, the code was quickly mirrored externally, making it difficult to fully contain. While there is no indication that user data, prompts or customer information were exposed in the incident, and Anthropic has also confirmed this, the impact of the leak comes down to intellectual property exposure and the potential for deeper analysis of internal system design. Access to the source code can provide insight into how AI agents manage tool usage, permissions and workflows. Such visibility can also assist in identifying weaknesses or crafting more targeted exploits against similar systems. The incident also raises competitive considerations, as proprietary implementation details can and will give Anthropic's rivals a clearer understanding of how its coding tools are structured. While the models themselves remain closed, the surrounding orchestration layer represents a significant portion of product differentiation. The news that Anthropic has accidentally leaked Claude Code CLI source code comes after the details of the company's upcoming AI model called Claude Mythos and other documents were recently discovered in a publicly accessible data cache.
[31]
Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude's Source Code
The AI industry largely acts as if it's above lowly copyright laws -- unless, of course, those laws happen to be protecting its own interests. As the Wall Street Journal reports, Anthropic is scrambling to contain a leak of its Claude Code AI assistant's source code by issuing a copyright takedown request for more than 8,000 copies of it -- a gallingly ironic stance for the company to be taking, considering how it trained its models in the first place. The leak isn't considered to be an outright disaster; no customer data was exposed, Anthropic says, nor were the internal mathematical "weights" that determine how the AI "learns" and which distinguish it from other models. But it did expose the techniques its engineers used to get its AI model to act as an autonomous agent, a form of digital infrastructure coders call a harness, and other tricks for making the AI operate as seamlessly as it does. Hence Anthropic's copyright takedown request, which targets the thousands of copies that were shared on GitHub. It later narrowed its request from 8,000 copies to 96, according to the WSJ's reporting, claiming that the initial one covered more accounts than intended. It's certainly within Anthropic's right to issue the takedown request, but the hypocrisy of Anthropic running to the law to protect its intellectual property is plain to see, especially for a company that's relentlessly positioned itself as the ethical adult in the room. Back when Anthropic was still a nascent splinter group formed from former OpenAI researchers, for instance, it needed access to a wealth of high-quality training data to build its Claude AI model. To do that, it first relied on digital books. But it didn't pay for them or choose only to use ones in the public domain. Instead, it downloaded millions of pirated volumes from the online "shadow library" LibGen. While LibGen doesn't position itself as a pirate website, Anthropic also downloaded books from a similar hub literally called "Pirate Library Mirror." (Anthropic cofounder Ben Mann was ebullient about the site's launch: "just in time!!!" he wrote in a message to employees, along with a link to the site.) The practice was unearthed in a lawsuit brought by a group of authors against Anthropic, which ended in a $1.5 billion settlement after a judge deemed the use of the pirated books to be illegal. Anthropic also scanned and destroyed millions of used physical books in a secret initiative called Project Panama. The process involved cutting the pages out of the volumes using high-powered machinery; once scanned, the pages were tossed out and recycled. The judge didn't find this to be illegal, but Anthropic was evidently aware of how bad the practice's optics were. "We don't want it to be known that we are working on this," an unsealed internal planning document from 2024 stated, via The Washington Post. Unfortunately for Anthropic, it only has itself to blame for the leak. When it released version 2.1.88 of its Claude Code npm package, it accidentally left in what's called a source map file, which points to where the source code is stored online -- a giant "X marks the spot" for prying eyes. Sleuths followed the trail, downloaded the code package, and uploaded copies in the thousands to GitHub, where they can still be found.
The incident has raised questions over whether AI was involved, given a number of high-profile AI coding blunders at competitors like Amazon and Meta, along with Anthropic's frequent boasts about how much of its own software is built using its AI coding tools. Anthropic officially insists, however, that it was down solely to "human error."
[32]
Anthropic accidentally exposes system behind Claude Code
Anthropic inadvertently released internal source code behind its popular artificial intelligence-powered Claude coding assistant, raising questions about the security of an AI model developer that has built its brand on prioritising safety. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," Anthropic said in an emailed statement late Tuesday. "This was a release packaging issue caused by human error, not a security breach."
[33]
Anthropic confirms accidental Claude Code source leak
An Anthropic spokesperson confirmed that the company accidentally exposed the source code of Claude Code during a routine software update. A debugging file was incorrectly included in a software package published to a public developer registry, enabling access to the code. This incident represents the second code leak within a year, following a similar issue in February 2025. The recurrence of such leaks raises concerns about Anthropic's security and operational practices, particularly since the company emphasizes its commitment to security measures. Source code, written in programming languages like Python and JavaScript, serves as the original set of instructions that dictates how software functions. A security researcher identified the leak after discovering that the software package contained a source map file that revealed the codebase. Following this, the leaked code was quickly disseminated and analyzed on GitHub. In response to the leak, Anthropic began issuing DMCA takedown notices to eliminate mirrors of the leaked files. Concurrently, Sigrid Jin, a South Korean developer, reconstructed the core architecture in Python using an AI orchestration tool, releasing a project titled "claw-code," which is more a reimplementation than a direct copy of the original code. Reports indicated that the leaked code included several feature flags associated with tools that may not yet be publicly available. These features potentially encompass a "persistent assistant" mode, remote access capabilities, and the ability for Claude to review prior interactions for improvement. An Anthropic spokesperson attributed the leak to a packaging error that led to the accidental publication of internal source code. They confirmed that no customer data or credentials were compromised in the incident. The company is implementing measures to prevent similar occurrences in the future. This incident follows a prior leak in February 2025, when an earlier version of Claude Code was unintentionally exposed but quickly removed. Anthropic's focus on securing its systems may be scrutinized further in light of these repeated incidents.
[34]
Anthropic confirms it leaked the source for Claude Code, blames human error
Anthropic is one of the biggest AI companies on the planet, and a leak was detected on Tuesday morning that exposed the source code of Claude Code, a developer-focused product that integrates Anthropic's AI assistant, Claude, into programming workflows. According to the company, the leak was detected shortly after version 2.1.88 of Claude Code was made public, as the newly released version mistakenly included a source map file that exposed more than 500,000 lines of code and nearly 2,000 files. As you can probably imagine, internet sleuths were able to extract the files before Anthropic could deploy a fix, as a link to an archive containing them was posted to X by security researcher Chaofan Shou. The post caught the attention of more than 27 million users. An Anthropic spokesperson confirmed the leak, saying the company is now reviewing steps to prevent a similarly egregious human error from occurring again. The spokesperson provided a statement, saying, "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed." The damage from this leak could be catastrophic for Anthropic, as competing AI companies will now get a look at what's under the hood of its most popular product. From that, they can see what Claude Code lacks and what it does well, implementing those aspects and others into their own AI tools.
[35]
Anthropic's Leaked Code Reveals the Radical Strategy That Makes Claude Code a $2.5 Billion AI Tool
In what the company says was a simple packaging mistake, parts of the code behind Claude Code, its fast-growing AI coding product, were briefly made public early Tuesday morning. Within hours, developers had downloaded, shared, and begun dissecting the roughly 500,000 lines of code across GitHub, Fortune reported. Claude Code is already generating an estimated $2.5 billion in annual recurring revenue, with enterprise customers making up the vast majority of that growth. Now, competitors may have an unusually detailed roadmap for how it works. Anthropic confirmed the incident in a statement to Fortune, emphasizing that no customer data or credentials were exposed. The company described the issue as "human error," not a security breach, and said it is working to prevent similar mistakes.
[36]
Leaked Claude Code Shows Anthropic Building Mysterious "Tamagotchi" Feature Into It
After Anthropic accidentally leaked the source code to its blockbuster Claude Code assistant, netizens swiftly pounced to start plowing through its more than 512,000 lines of code -- and have uncovered numerous curiosities sprinkled throughout. In an extensive thread in the r/ClaudeAI subreddit, one user said they found a "Tamagotchi"-like feature buried in the code, referring to the handheld digital pets that you need to keep checking in on to keep them alive. "There's an entire pet system called /buddy. When you type it, you hatch a unique ascii companion based on your user id," the user claimed. "The pet sits beside your input box and reacts to your coding." The user said they found 18 different pet species, including a duck, dragon, capybara, and a so-called "chonk," along with a rarity system resembling those found in Gacha games, which assigns a user a pet based on chance. Will Tamagotchis inside Claude be a mainstay? Likely not: an included string reading "friend-2026-401," the user found, almost certainly means that Anthropic intended the feature as an April Fools' one-off. That wasn't the only item of note internet sleuths found. They also uncovered a feature called "kairos" that purportedly can serve as an always-on AI agent that constantly runs in the background and can take actions on your behalf without you having to ask. It can even send push notifications to your phone or desktop to get your attention, users who viewed the code claimed. Others said they found an "undercover" mode to mask the fact that Claude is an AI when contributing code in public repositories, as well as a mood-tracking feature that measures a coder's "frustration" levels based on their messages and clues like swear words. Someone also unearthed a message one of Anthropic's coders left in, in which they admit that "memoization here increases complexity by a lot, and im not sure it really improves performance." In all, there are no smoking guns here, but the leak provides an intriguing peek behind the curtain -- as well as easy fodder for any competitors looking to reverse-engineer the company's tech. For Anthropic, it's undoubtedly an embarrassing blunder. The codebase appears to have been leaked after a source map file was accidentally left in a public release of version 2.1.88 of the company's Claude Code npm package. A map file links bundled code back to the original source, The Register explains, and one resourceful programmer used it to find where Claude's source code is stored, backing the whole thing up on GitHub. Anthropic scrambled to get the exposed source code pulled by issuing copyright takedowns, though at this point it may already be out of the company's hands. As to how the map file slipped through the cracks in the first place, Anthropic officially blames "human error" and stressed it was not a "security breach." Notably, however, the leak comes after Anthropic figures have consistently boasted about how much of Claude's code is now produced with the help of the AI itself, and recent AI-caused incidents at Amazon and a cybersecurity blunder at Meta raise the possibility that Anthropic's own tool may have played a role in this one, too.
[37]
Anthropic Accidentally Leaked Its Own Claude Code. Now the Company Is Scrambling to Contain the Damage.
Anthropic just handed its competitors a gift. The AI company accidentally exposed the proprietary instructions behind Claude Code this week when it updated the tool and shipped a file that linked back to source code outsiders could download. Within hours, copies were multiplying across the programming platform GitHub. Now the company is racing to stop the bleeding. Anthropic issued over 8,000 copyright takedown requests on Wednesday. The leak didn't expose customer data, but it handed rivals a detailed roadmap to clone Claude Code's features without reverse-engineering them. Per Axios, those features include the ability for Claude to review its latest session and carry learnings into future conversations, a "persistent assistant" that keeps working in the background while a user is idle, and remote capabilities for controlling Claude from a phone or another browser. The leak is a blow for Anthropic, which recently closed funding at a $380 billion valuation ahead of a possible IPO this year.
[38]
Why is Anthropic racing to contain the Claude Code leak -- is it exposing trade secrets, empowering hackers, and letting rivals clone its AI agent faster than ever?
The Anthropic Claude Code leak has spread to more than 8,000 copies of the source code among developers worldwide. This accidental release reveals proprietary AI instructions that power Claude Code, giving competitors a roadmap to replicate its features. While no user data or model weights were compromised, the leak exposes critical AI security risks and intellectual property vulnerabilities. Experts warn this could accelerate cloning of AI coding tools and intensify competition in the artificial intelligence market. For developers, startups, and enterprise users, this incident highlights the fragility of AI systems and the urgent need for stricter AI code protection and cybersecurity protocols. Anthropic's reputation and innovation lead are now at stake. Anthropic is scrambling to contain a major leak of the Claude AI code that underpins its powerful Claude Code agent. On Tuesday, an internal file accidentally revealed sensitive instructions and proprietary tooling on GitHub. By Wednesday morning, the company had issued a copyright takedown request to remove more than 8,000 copies and adaptations of the exposed Claude AI code, according to multiple developer reports and GitHub activity. The incident exposed commercially valuable components that help steer the AI models behind one of the leading coding assistants in the industry. This leak of Anthropic's Claude AI code base did not include confidential customer data or the AI model weights themselves, the company said. But the leaked source code still contained crucial clues about the proprietary harness that makes Claude Code function as an intelligent coding assistant. Developers and competitors now have a roadmap to mimic features that until now were closely guarded trade secrets. The leak also triggered a rapid community response, with other programmers rewriting the core functions in new languages to evade takedowns. Anthropic has acknowledged the incident as a release packaging error rather than a security breach. Still, the exposure of Claude AI code and instructions raises fresh concerns about AI tooling safety, competitive advantage, and the ability of major AI developers to keep critical IP private in a hypercompetitive environment. The leaked material was not Claude's core neural network weights -- the mathematical parameters that define how Claude thinks -- but it did include internal source files that show how Anthropic orchestrates its AI models for coding tasks. These files explained proprietary processes known in the industry as the "harness" -- the instructions and tooling that guide an AI model to behave in a practical, developer-friendly way. In simple terms, the Claude AI code harness includes the logic that tells the model how to receive code input, break down tasks, remember context, and respond in structured formats (see the sketch below). Developers described finding intriguing techniques in the leaked code:
* A mechanism for the model to periodically review prior tasks and "consolidate memories," dubbed dreaming.
* Instructions that in some cases may encourage Claude Code not to reveal its AI identity when generating code output.
* Experimentation files pointing to future features and product directions.
* Easter-egg style elements -- including a Tamagotchi-like interactive "Buddy" character embedded in the code.
Although the leak exposed how Claude AI agents are controlled and coordinated, the core machine learning models and their calculations remain secure, Anthropic insisted.
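As referenced above, the harness is, at bottom, an orchestration loop around the model: receive input, let the model plan, run tools, feed results back, stop on a structured final answer. A deliberately generic TypeScript sketch of that loop follows; every function here is a stub of our own, not leaked code:

```typescript
// Generic agent-harness loop, illustrating the pattern described above.
interface ToolCall { tool: string; args: Record<string, unknown>; }
interface ModelTurn { text: string; toolCalls: ToolCall[]; }

async function callModel(context: string[]): Promise<ModelTurn> {
  // Stub: a real harness would call an LLM API here.
  return { text: `done (${context.length} context items)`, toolCalls: [] };
}

async function runTool(call: ToolCall): Promise<string> {
  // Stub: a real harness would dispatch to file, shell, or search tools.
  return `${call.tool} ok`;
}

async function harnessLoop(task: string, memoryIndex: string[]): Promise<string> {
  // The lightweight memory index stays in context; details load on demand.
  const context = [...memoryIndex, `Task: ${task}`];

  for (let step = 0; step < 50; step++) {               // bounded, not open-ended
    const turn = await callModel(context);
    if (turn.toolCalls.length === 0) return turn.text;  // structured final answer
    for (const call of turn.toolCalls) {
      context.push(`${call.tool} -> ${await runTool(call)}`); // feed results back
    }
  }
  throw new Error("step budget exhausted");
}

harnessLoop("fix the failing test", ["- style: prefer small diffs"]).then(console.log);
```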
The company has said the incident was caused by human error during an update to the AI tool's repository, which mistakenly included files linking back to the full source.

The leak is significant because it reveals techniques that Anthropic invested heavily to develop. These techniques differentiate Claude Code from other AI coding assistants in terms of performance, reliability, and developer experience. Until now, competitors had to infer these processes indirectly or rely on reverse engineering. With the source code accessible, albeit briefly, any developer or rival AI company can examine the exact harness instructions and adapt them for their own use.

Within hours, programmers were copying, modifying, and discussing the exposed code in online forums. Some contributors said they were "marveling at the ingenuity," while others warned that the leak could accelerate cloning efforts by competitors.

In response, Anthropic issued takedown notices under copyright law to GitHub to remove copies and prevent further spread of its proprietary code. More than 8,000 copies and adaptations of the exposed files were taken down, but programmers quickly reposted rewritten versions. One developer created a near-functional clone of the Claude Code logic in another programming language to preserve the ideas without triggering additional removals. This game of digital cat-and-mouse highlights how difficult it is to contain copyrighted digital content once it escapes into open repositories, especially in the fast-moving world of AI.

Anthropic maintains that no customer data or confidential user information was leaked, nor was the actual AI model architecture exposed. The company said the leak was a tooling delivery mistake, not a malicious breach. Still, experts warn that the exposure of internal tooling could attract security researchers and hackers alike to probe for vulnerabilities. The leaked code gives outsiders new hooks to analyze how Claude Code handles inputs and control instructions, and could potentially be used to craft malicious prompts or exploit logic loopholes. Skilled attackers now have a much more detailed internal look at how Claude Code orchestrates its AI reasoning for coding tasks.

Security researchers who reviewed the leaked code raised concerns that developers might find bugs that could be triggered in live use. For example, routines that iterate between memory and task planning could create unexpected loops if manipulated in unintended ways. While no major exploits have been reported publicly, the long-term risk remains open until Anthropic can resecure its workflow and rebuild the codebase.

Anthropic says it is implementing new checks and safeguards to prevent a repeat of such a leak. The company has not detailed exactly what those measures are but has described the mishap as a "release packaging issue" resulting from human oversight during a routine update.

Once the leaked contents were circulating, developers began combing the files. Social media platforms saw rapid posts parsing what the files actually did and what mechanisms controlled Claude Code's behavior. Many programmers publicly praised the architectural clarity of the harness logic, noting that the techniques for memory consolidation and task iteration could inspire new coding AI tools. Within hours of the leak, some individuals extracted the exposed logic and began porting it to alternative environments.
One GitHub user said the rewritten version aimed to "keep the educational value alive without risking ongoing takedown efforts." That version quickly gained attention and downloads, underscoring the difficulty of holding back distributed digital content once it escapes. Forum discussions included:

* Explanations of Claude AI's memory tagging and dreaming cycles.
* Speculation about unused features hinted at in unshipped code segments.
* Debates over whether rules advising Claude Code to conceal its AI origin should have been exposed at all.
* Community forks and adaptations aimed at research, not commercial use.

So far, there have been no public reports of the leaked code being directly used to commit harmful cyberattacks or other widespread misuse. But "research clones" of the Claude Code logic are multiplying fast in open developer forums.

The leak is a stark reminder of the challenges AI developers face in protecting proprietary tooling in an age of rapid sharing. For Anthropic, it threatens two areas of strategic importance: safety reputation and competitive advantage. Claude Code has gained wide adoption among developers and enterprise customers. It also played a role in helping the company secure new funding at a valuation of $380 billion, fueling speculation about a possible IPO later this year. For enterprise buyers, confidence in tooling security and IP protection is essential.

Industry analysts say the leak serves as a cautionary tale for all AI developers who rely on distributed version control systems like GitHub. Mistakes in release packaging can have outsized consequences given how quickly code spreads in the global open-source ecosystem. Anthropic's response so far (rapid takedowns, public acknowledgment of the error, and steps to prevent future leaks) is aimed at limiting reputational damage. But the fact that rewritten versions of the code are now circulating suggests that once valuable code is out in the wild, it is very hard to reel back in fully.

What remains clear is that the AI coding tools race is no longer just about model performance but about protecting the intellectual property and orchestration logic that make those tools useful in the real world.

1. What does the Anthropic Claude Code leak mean for AI developers and competitors? The leak gives developers and competitors a rare inside look at how a leading AI coding agent is structured and controlled. With access to these internal instructions, many can now replicate or adapt similar features without heavy research and development costs. This could speed up innovation across the AI industry but also intensify competition, making it harder for companies to maintain unique advantages in a rapidly evolving market.

2. Is the Anthropic Claude Code leak a security threat or just a technical mistake? The leak was officially caused by human error during a software update, not a cyberattack, but its impact goes beyond a simple mistake. While no user data or core AI model weights were exposed, the leaked source code can still be analyzed for vulnerabilities, increasing potential cybersecurity risks. That makes the incident both a technical failure and a broader security concern for AI platforms and developers.
[39]
What Anthropic's Massive 500,000-Line Source Code Leak Reveals About Claude
Anthropic's Claude Code source code has been leaked, revealing over 500,000 lines of code across more than 2,000 files. Among the key discoveries is the Kairos Project, an always-on AI designed to perform tasks like file sharing and pull request monitoring at 15-second intervals, a feature that highlights Anthropic's emphasis on creating systems that integrate efficiently into workflows. Jay E examines the details of the leak, from the Buddy System, which introduces virtual pets to gamify development, to the Ultra Plan, designed for resource-intensive tasks requiring extended processing times. The leak also offers insight into unreleased AI models like Capybara and Fenec, as well as features like Undercover Mode and Frustration Detection, which aim to balance user experience with AI autonomy.

The Kairos Project stands out as one of the most consequential discoveries from the leak. This always-on agent is designed to function as a proactive assistant, operating in the background to perform tasks such as file sharing, monitoring pull requests, and sending phone notifications without requiring user intervention. To maintain stability and efficiency, the agent operates at 15-second intervals, minimizing disruptions while providing consistent support. By offering continuous, non-intrusive assistance, Kairos aims to enhance productivity and simplify complex processes for professionals across industries.

The Buddy System introduces a blend of functionality and entertainment into the coding environment. Developers can select from 18 species of virtual pets, categorized into common, rare, and legendary tiers. These digital companions are designed to gamify the coding experience, fostering a more engaging and personalized workspace.

The Ultra Plan is a cloud-based computation mode tailored for resource-heavy tasks. It enables up to 30 minutes of continuous processing, making it suited to complex architectural challenges, in-depth analyses, and large-scale projects.

The leaked documentation also references three unreleased AI models: Capybara, Fenec, and Numbat. While specific details about these models remain limited, they are believed to focus on advanced functionality such as enhanced natural language processing, multi-agent collaboration, and improved decision-making.
These developments suggest that Anthropic is actively expanding Claude Code's capabilities to cater to a broader range of applications, from enterprise solutions to creative problem-solving.

Undercover Mode is a feature designed to integrate AI-generated contributions seamlessly into team workflows. By concealing the AI's involvement in code history, it ensures that outputs appear human-generated. The apparent rationale is to reduce resistance to AI adoption among enterprise users, though the approach has drawn scrutiny from transparency advocates elsewhere in the coverage of this leak.

Auto Dream introduces a background process for memory consolidation and optimization. It enables Claude Code to refine its memory over time, improving its efficiency and adaptability. For users, this translates to an assistant that evolves alongside their needs, becoming increasingly precise in handling complex challenges.

The YOLO Classifier automates decision-making by determining whether tasks require user approval: routine tasks are handled autonomously, while critical decisions are flagged for review. This strikes a balance between efficiency and control, ensuring that users retain oversight while benefiting from the AI's autonomy (a sketch of this approval-gating pattern appears at the end of this article).

Frustration Detection uses sentiment analysis to identify signs of user dissatisfaction based on message patterns. By recognizing frustration, the system can respond proactively, offering solutions or adjustments to improve the overall user experience.

Taken together, the leaked source code provides a rare glimpse into Anthropic's vision for Claude Code as a comprehensive autonomous agent. With features like the Kairos Project, Buddy System, and Ultra Plan, the platform is evolving beyond a traditional coding assistant toward a tool that anticipates and adapts to user needs. By combining innovations such as self-improving memory, multi-agent coordination, and discreet contributions, Anthropic is setting a new standard for AI integration in professional environments.
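The YOLO Classifier described above is easiest to grasp as a routing function over proposed actions. The following is a hedged, minimal sketch of that approval-gating pattern; the rule lists and every function name here are illustrative assumptions, not the leaked implementation.

    from dataclasses import dataclass

    # Hypothetical risk rules; the leaked classifier's actual criteria
    # were not published in this report.
    DANGEROUS_MARKERS = ("rm -rf", "git push --force", "DROP TABLE")
    SAFE_COMMANDS = ("ls", "cat", "grep", "pytest")

    @dataclass
    class Action:
        command: str

    def needs_approval(action: Action) -> bool:
        """Route risky actions to the user; run routine ones autonomously."""
        if any(marker in action.command for marker in DANGEROUS_MARKERS):
            return True
        words = action.command.split()
        return (words[0] if words else "") not in SAFE_COMMANDS

    def execute(action: Action, approve) -> None:
        if needs_approval(action) and not approve(action):
            print(f"skipped: {action.command}")
            return
        print(f"running: {action.command}")  # stand-in for real execution

    # Example: the routine command runs; the destructive one is flagged.
    execute(Action("pytest -q"), approve=lambda a: False)
    execute(Action("rm -rf build/"), approve=lambda a: False)

In a real harness the rules would presumably be far richer, and possibly model-scored, but the shape (classify, then either execute or escalate) stays the same.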
[40]
Anthropic Accidentally Leaks Claude Source Code
Anthropic has inadvertently disclosed the instructions behind its Claude Code AI agent. The exposure could provide competitors with strategic insight into how the product is built and could introduce potential security risks. The leak did not compromise customer data or the core mathematical frameworks of its AI models, a spokesperson for Anthropic told the WSJ. The incident was attributed to a packaging error rather than a security breach. However, the disclosure of Anthropic's proprietary methods and tools that help Claude work as a coding agent, also known as a harness, presents a risk of replication by competitors without the need for reverse engineering.

The company, valued at $380 billion, is experiencing increased usage of Claude Code and is considering a public offering later this year. In February, Anthropic announced that it had raised $30 billion in Series G funding led by GIC, D.E. Shaw Ventures, and Coatue, among others. Last month, federal district judge Rita Lin in San Francisco granted Anthropic's request for a preliminary injunction in its legal battle against the Trump administration, calling the government's action "illegal First Amendment retaliation." The decision temporarily halts the government's move to blacklist the AI company and prevents enforcement of a directive from President Donald Trump barring federal agencies from using Anthropic's Claude models.
[41]
The Fact That Anthropic Has Been Boasting About How Much Its Development Now Relies on Claude Makes It Very Interesting That It Just Suffered a Catastrophic Leak of Its Source Code
Earlier this year, the head of Anthropic's blockbuster Claude Code AI agent, Boris Cherny, boasted that "pretty much 100 percent" of the entire company's code is AI-generated. "For me personally, it has been 100 percent for two plus months now, I don't even make small edits by hand," he tweeted at the time.

But the glaring cybersecurity implications of giving an AI agent full access to a computer to carry out complex tasks, something experts have been ringing alarm bells over for a while now, aren't coinciding with a period of competence for the company: it confirmed on Tuesday that parts of the internal source code for Claude Code had leaked, which is extremely bad.

"No sensitive customer data or credentials were involved or exposed," a spokesperson told CNBC, in an apparent effort to focus on the bright side.

The news comes less than a week after news of Anthropic's upcoming "Claude Mythos" AI model, which the company claimed poses "unprecedented cybersecurity risks," leaked to the public. Unsurprisingly, Anthropic attempted to downplay the latest situation and blame human agents, not AI ones, for the leak.

"This was a release packaging issue caused by human error, not a security breach," the spokesperson added. "We're rolling out measures to prevent this from happening again."

A file the company shared on the coding platform GitHub included a link back to the source code, allowing anybody with an internet connection to download it. How the file ended up there, or whether an AI agent could have been involved in the process leading up to the leak, remains unclear.

"Claude code source code has been leaked via a map file in their npm registry!" reads an X post, which was viewed tens of millions of times in less than a day.

Anthropic tried desperately to contain the fallout. As the Wall Street Journal reports, representatives issued copyright takedown requests for more than 8,000 copies and adaptations of the source code, which contains the AI firm's underlying instructions for directing Claude Code.

Ultimately, whether humans or AI agents are to blame almost feels beside the point, as the damage has already been done. The exposed data included plenty of proprietary techniques Anthropic uses to point its tool in the right direction. According to Cybersecurity News, the exposed code covers how the company issues authorizations for making changes to resources, including "permission enforcement, multi-agent coordination, and even undisclosed feature pipelines."

As the WSJ points out, competitors will now have an even easier time reverse engineering Claude Code, potentially allowing them to catch up quickly. The leak could also give hackers a major leg up in their efforts to identify exploitable software vulnerabilities, or to find new ways to arm their own instances of Claude Code for nefarious purposes.

"To most of us, this information is useless," one Reddit user explained. "To people who work for their competitors, you might be able to use this information to understand the ways that they are trying to do things and potentially try and use that information to your advantage."

"It's also an exceptional blunder," they added. "Very embarrassing."

The incident couldn't have come at a worse time.
The runaway success of its coding assistant has given Anthropic a considerable lead as competitors, such as OpenAI, refocus their efforts on similar enterprise pursuits. Meanwhile, the financial pressure continues to build: a recent funding round values the Dario Amodei-led firm at $380 billion ahead of its rumored IPO later this year.
[42]
Anthropic Claude source code leak explained: Techie reveals how a 4 am update exposed 512,000 lines of code
In the fast-moving world of artificial intelligence, even the biggest players can have an off day. That's exactly what happened to Anthropic, the company behind the popular Claude AI models, when a routine software update turned into one of the most talked-about tech stories of the week.

A techie broke down how it all started at 4 AM on March 31. As per the techie, Anthropic pushed out a fresh version of its "Claude Code" tool, an AI-powered helper that developers use to write and manage code more efficiently, to the npm registry, a popular platform where coders share and download software packages. But buried inside this update was a massive 60 MB debugging file, known as a .map file. What was meant to be a simple support tool accidentally included something far bigger: the complete source code of Claude Code itself. That's over 512,000 lines of code, covering everything from how the tool works behind the scenes to its plugins and features.

Within minutes, a researcher named Chaofan Shou spotted the unusual file while checking the update. He downloaded it, zipped it up, and shared the link on X (formerly Twitter). His post quickly caught fire, and by the time most people in the US were waking up, the news had spread like wildfire. The leaked code was downloaded thousands of times and forked, copied, and hosted on GitHub more than 41,000 times. Anthropic's team scrambled to issue takedown notices under copyright law, but it was too late. The damage was done.

What happened next was even more remarkable. Sigrid Jin, a developer from Korea known as one of the heaviest users of Claude Code (reports say he racked up a whopping 25 billion tokens of usage last year alone), woke up to a flood of notifications. Worried about legal trouble just for having the code on his computer, he decided to take action. In just eight hours, he rewrote the entire tool from scratch in Python, creating a new version called "claw-code." His GitHub repository shot up to 30,000 stars, faster than any project in the platform's history, according to observers. Not stopping there, Jin then rebuilt it again, this time in the faster Rust programming language. That version has already crossed 49,000 stars. Meanwhile, someone else took the original leaked code and mirrored it on a decentralized storage platform, adding a simple note: "will never be taken down." The code is now out there for good, beyond any single company's control.

The story has sparked a wave of reactions online, with many pointing out the delicious irony. Anthropic had built a special feature called "Undercover Mode" into its products, designed specifically to stop its AI from accidentally spilling internal secrets. Yet here the company was, leaking its own codebase through a basic packaging mistake. As one user put it in the comments, "They shipped an entire anti-leak system... then leaked their own source code in a .map file. The irony is beautiful." Others were amazed at the speed of the global developer community. "The real story is the speed here," wrote one commenter. "Not that code leaked, but that the community had it forked, ported to Python, then Rust, and running before Anthropic's PR team finished their coffee."
Another commenter highlighted how this reflects a bigger shift: once something is out, it's instantly copied, understood, and rebuilt, with no going back. Some developers dug into the code out of curiosity, while a few questioned the rewrite timelines or wondered if the leak was truly accidental. But the overwhelming buzz on X revolves around one thing: how quickly closed-source tech can become public knowledge in today's connected world. Claude Code isn't the AI model itself but the command-line interface that helps users interact with it for coding tasks. For everyday Indians following the AI boom, from young engineers in Bengaluru to students in smaller cities dreaming of tech careers, this episode is a reminder of how rapidly innovation moves. One packaging error at 4 AM, and suddenly proprietary secrets are in the hands of thousands. As the dust settles, the code remains widely available, and Anthropic's anti-leak efforts couldn't stop the spread.
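For readers curious about the mechanics: a JavaScript or TypeScript source map is a JSON file whose optional sourcesContent array can embed the complete original source of every file that went into a bundled build, which is why shipping one alongside a minified package can amount to a full source release. A minimal sketch of how such sources could be recovered follows; the file name cli.js.map is an illustrative assumption, not the actual leaked artifact.

    import json
    from pathlib import Path

    # Illustrative path; not the actual leaked artifact's name.
    source_map = json.loads(Path("cli.js.map").read_text())

    out_dir = Path("recovered")
    # Per the Source Map v3 format, sources[i] names the original file
    # and sourcesContent[i] (when present) embeds its full text.
    for name, content in zip(source_map["sources"],
                             source_map.get("sourcesContent") or []):
        if content is None:
            continue  # this entry shipped without embedded source
        dest = out_dir / name.replace("../", "").lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        print("recovered", dest)

Run against a map that embeds sourcesContent, this writes each original file back to disk, which is exactly why a stray 60 MB .map file in a published npm package can expose an entire codebase at once.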
[43]
Anthropic Says Claude Code Leak Did Not Expose Customer Data
"Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," the company said in a message to Seeking Alpha published Thursday (April 2). "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Another report on the incident from the Wall Street Journal (WSJ) said that as of Wednesday (April 1) morning, Anthropic had used a copyright takedown request to remove over 8,000 copies and adaptations of the raw Claude Code instructions, known as source code, that developers had shared on the programming platform GitHub. A programmer then used separate AI tools to rewrite Claude Code's functionality in other programming languages so the information stayed publicly accessible without triggering further takedowns. The reworked version has become widely circulated on the platform. The WSJ also said that Anthropic later narrowed its takedown request to cover just 96 copies and adaptations, and that its initial ask had reached more GitHub accounts than planned.

According to the Seeking Alpha report, the leak exposed commercially sensitive information, like Anthropic's proprietary techniques, tools and instructions for allowing its AI models to work as coding agents. And as PYMNTS wrote earlier this week, the leak struck at Anthropic's most commercially significant product at a critical moment. Claude Code's run-rate revenue had exceeded $2.5 billion as of February, and its viral adoption among developers has been critical to the company's momentum as it pursues a possible public offering. Claude Code's growth helped Anthropic complete a new funding round, which valued the startup at $380 billion.

"That success has already prompted OpenAI, Google and xAI to pour resources into competing offerings," PYMNTS wrote. "The source code exposure now hands those rivals a detailed map of the design logic underlying a product they have been racing to replicate, removing the need to reverse-engineer capabilities that took Anthropic years to build."

This was the second such incident in less than a week involving Anthropic. A recent configuration error in the company's content management system left nearly 3,000 unpublished documents in a publicly searchable data store, including a draft blog post describing "the most powerful AI model we've ever developed." That forced the company to confirm the existence of that model, known as Claude Mythos, telling Fortune that it represents a step change and the most capable system the company has developed, with meaningful reasoning, coding and cybersecurity improvements.
[44]
Anthropic's Claude Source Code Leak Hands Competitors a Blueprint It Spent Billions to Build
Anthropic's internal source code for Claude Code was accidentally leaked, the second such incident in a week. The problem came down to simple human error and set off a scramble to contain the fallout, according to a Tuesday (March 31) report from CNBC.

"This was a release packaging issue caused by human error, not a security breach," an Anthropic spokesperson said in a statement to the outlet. "We're rolling out measures to prevent this from happening again."

According to Axios, security researcher Chaofan Shou identified the exposure early Tuesday morning in a post on X; the leaked material comprised nearly 2,000 files and more than 512,000 lines of code. Following his post, the codebase was mirrored and dissected across GitHub. By Wednesday (April 1), as reported by The Wall Street Journal (WSJ), Anthropic had used copyright takedown requests to force the removal of more than 8,000 copies and adaptations of the exposed material from GitHub. A programmer subsequently used separate AI tools to rewrite Claude Code's functionality in other programming languages to keep the information publicly accessible without triggering further takedowns. That rewritten version has itself become widely circulated on the platform. According to CNBC, the Anthropic spokesperson confirmed that no sensitive customer data or credentials were involved.

The leak strikes at Anthropic's most commercially significant product at a critical moment. Claude Code's run-rate revenue had reached more than $2.5 billion as of February, and its viral adoption among developers has been central to the company's momentum as it pursues a possible public offering. Claude Code's growth helped Anthropic close a new funding round, which valued the company at $380 billion. That success has already prompted OpenAI, Google and xAI to pour resources into competing offerings. The source code exposure now hands those rivals a detailed map of the design logic underlying a product they have been racing to replicate, removing the need to reverse-engineer capabilities that took Anthropic years to build.

The disclosed material goes to the heart of what makes Claude Code commercially distinctive. As the WSJ reported, the leak exposed the proprietary techniques and instructions that direct Claude's underlying AI models to function as a useful coding assistant. As covered by The Hacker News, developers who examined the code found details of how Claude Code manages long-running tasks, handles complex multi-step work and connects its interface to code editing tools. According to Axios, the leaked material also surfaced a roadmap of capabilities that are fully built but not yet publicly available, including a mode that allows Claude Code to keep working in the background even when the user is idle. As reported by VentureBeat, the leak hands competitors a clear guide for replicating a production-grade AI coding agent, including the memory management approach Anthropic spent significant engineering effort developing.
The Wall Street Journal noted additional details surfaced by developers: a memory process the code refers to internally as dreaming; instructions that appear to direct Claude Code to avoid identifying itself as an AI when publishing code to third-party platforms in certain contexts; and a Tamagotchi-style interactive feature called Buddy embedded in the codebase. In a market where the underlying AI model is increasingly available to any well-funded competitor, how a company builds around that model, and what it plans to build next, has become the primary source of competitive advantage. The leak therefore amounts to a major setback for Anthropic.
[45]
Claude Code source code leak: Did Anthropic just expose its AI secrets, hidden models, and undercover coding strategy to the world?
Claude Code, Anthropic's top AI agent, just suffered a major source code leak. Version 2.1.88 exposed 512,000 lines of TypeScript, revealing memory architecture, orchestration logic, and 44 hidden features. The AI platform alone drives $2.5 billion in annual revenue, with 80% from enterprise clients. Competitors can now study background agents, autonomous daemons, and persistent memory systems. Security risks spike as malicious actors may exploit exposed Hooks and npm dependencies. Users must migrate to Anthropic's native installer, audit API keys, and inspect local repositories.
[46]
Anthropic scrambles to contain self-inflicted leak of valuable AI code
Anthropic has been scrambling to contain a self-inflicted mess after it accidentally leaked a treasure trove of internal code that powers one of its most valuable artificial intelligence tools, according to reports. The code serves as instructions for Claude Code, an AI agent app that developers and businesses pay top dollar to use to program and build applications of their own. Anthropic's competitors and hordes of startups and developers now have the goods to essentially clone features of Claude Code, a shortcut past reverse-engineering them, the Wall Street Journal noted.

By Wednesday morning, Anthropic representatives had used a copyright takedown request to get more than 8,000 copies and adaptations of the source code, which developers had shared on the programming platform GitHub, removed. The leak of "some internal source code" didn't expose any customer information or data, a spokesman for Anthropic told news outlets. The secret inner mathematics of the company's pricey AI models reportedly weren't revealed, either. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again," the spokesman said. The Post has sought comment from Anthropic.

Still, the leak revealed information that helps the company stay ahead of competitors, including tools and instructions for getting its AI models to work as coding agents, according to the Journal. The leak also gives hackers fresh ammunition as they hunt for ways to exploit Claude Code software or use its model to launch cyberattacks.

The snafu reportedly began Tuesday, when Anthropic updated its AI tool. Like most proprietary software, Claude Code's shipped code is usually minified and unintelligible. But this time, the company posted a file to GitHub that linked back to code that outsiders could download and interpret. The slip-up was spotted by a user on social media site X, and word spread from there.

The leak sets Anthropic back both in terms of its safety reputation and in the cutthroat battle for enterprise customers, where maintaining an innovation edge is paramount. The setback comes as Anthropic has been fighting the Department of War in court after being blacklisted earlier this year. Last week, Anthropic won an injunction that halted its designation as a "supply-chain risk." The growing popularity of Claude Code helped Anthropic ink a recent funding deal that valued the company at $380 billion ahead of a possible public offering this year.
[47]
Anthropic confirms Claude Code source code leak, says no user data exposed
'No sensitive customer data or credentials were involved or exposed,' Anthropic says.

Anthropic has confirmed that part of the internal source code for its coding assistant Claude Code was accidentally leaked online. The company says the incident happened due to human error, not a security breach. The issue came to light on Tuesday after a post on X (formerly known as Twitter) shared access to the code. The post quickly went viral and received more than 21 million views. Anthropic has tried to reassure users that the situation did not involve any sensitive information.

"No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement, CNBC reports. "This was a release packaging issue caused by human error, not a security breach." The spokesperson added that the company is working on safeguards to ensure similar mistakes do not happen in the future.

Even though Anthropic says the leak does not pose a risk to user data, the exposure of source code can still have consequences. Source code contains the instructions and structure that define how a software tool works. For competing AI companies and independent developers, access to such code can offer insights into how a product is designed and built, revealing details about the architecture and development approach behind Claude Code. The incident comes at a time when tech companies around the world are racing to build powerful coding assistants and AI tools to compete with Claude Code.
Anthropic accidentally exposed over 512,000 lines of Claude Code source code in a release packaging issue caused by human error. The leak revealed unannounced features, including Kairos, an always-on background agent, and code that tracks user frustration. The incident raises fresh questions about AI privacy and operational maturity at a company known for careful AI development.
Anthropic, a company that has built its reputation on careful AI development, experienced an embarrassing accidental code leak on March 31 when it pushed out version 2.1.88 of its Claude Code software package. The release packaging issue exposed nearly 2,000 source code files containing more than 512,000 lines of code, essentially revealing the full architectural blueprint for the AI coding assistant. Security researcher Chaofan Shou spotted the mistake almost immediately and posted about it on X, turning what Anthropic described as human error into a windfall for developers and competitors alike.
The leaked TypeScript codebase was contained in a source map file that developers quickly copied to GitHub, where it has since amassed more than 50,000 forks. While Anthropic spokesperson Christopher Nulty clarified that no sensitive customer data or credentials were exposed, the incident marks the second time in a week that the company has accidentally made internal information publicly available. Days earlier, Fortune reported that Anthropic had exposed nearly 3,000 internal files, including a draft blog post describing a powerful new model not yet announced.

Among the most controversial discoveries in the Claude Code source code leak was code designed to detect and flag user frustration. The frustration detector uses regex, a decades-old pattern-matching technique, to scan user prompts for profanity, insults, and phrases such as "so frustrating" and "this sucks," logging when users express negativity. Independent developer Alex Kim, who posted a technical analysis of the leaked code, noted the irony: "An LLM company using regexes for sentiment analysis is peak irony."
Kim explained that while the choice was pragmatic (regex is computationally free compared to using an LLM for sentiment analysis at global scale), the signal doesn't change the model's behavior or responses. Instead, it serves as a product health metric to track whether users are getting frustrated and whether rates are increasing across releases. However, Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, warns that the pressing issue is what happens to such behavioral data once a company collects it. "Even if it's a very legible and very simple prediction pattern, how you use that information is a separate governance question," she says.
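Only fragments of the detector are quoted in the coverage, so the following is a hedged reconstruction of the technique Kim describes: plain regex matching over user prompts, logged as a metric rather than fed back into the model. The phrase list and function names are illustrative assumptions.

    import re

    # Illustrative patterns only; the leaked detector's full phrase
    # list was not published in these reports.
    FRUSTRATION_PATTERNS = [
        re.compile(r"\bso frustrating\b", re.IGNORECASE),
        re.compile(r"\bthis sucks\b", re.IGNORECASE),
        re.compile(r"\b(wtf|ffs|dammit)\b", re.IGNORECASE),
    ]

    def is_frustrated(prompt: str) -> bool:
        """Return True if the prompt matches any frustration pattern.
        Per the analysis above, a hit is logged as a product-health
        metric; it does not change the model's behavior."""
        return any(p.search(prompt) for p in FRUSTRATION_PATTERNS)

    print(is_frustrated("this sucks, the tests fail again"))  # True
    print(is_frustrated("please refactor this module"))       # False

As Kim notes, the appeal of this approach is cost: a handful of compiled patterns can run on every prompt at global scale for effectively nothing.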
The software scaffolding revealed in the leak provided a detailed look at unannounced features that Anthropic has been developing. Chief among these is Kairos, a persistent daemon designed to operate in the background even when the Claude Code terminal window is closed. The system would use periodic "tick" prompts to regularly review whether new actions are needed and a "PROACTIVE" flag for surfacing information the user hasn't asked for but needs to see now.

Kairos makes use of a file-based memory architecture designed to allow for persistent operation across user sessions, giving the system "a complete picture of who the user is, how they'd like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you." To organize this memory system, the code includes references to AutoDream, which would tell Claude Code to perform "a reflective pass over your memory files" when a user goes idle or manually ends a session. This AI "dream" process would scan transcripts for new information worth persisting, consolidate it while avoiding duplicates and contradictions, and prune outdated memories.
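The reports describe Kairos only at the prompt level, but the tick-and-flag design reduces to a small loop. As a hedged sketch, with the memory layout, the 15-second cadence reported in one account above, and the ask_model/notify_user stand-ins all being assumptions rather than leaked code:

    import json
    import time
    from pathlib import Path

    MEMORY_DIR = Path.home() / ".claude" / "memory"  # assumed layout
    TICK_SECONDS = 15  # cadence reported in one account of the leak

    def build_tick_prompt(memory: dict) -> str:
        """Assemble the periodic tick prompt the reports describe:
        review current memory and decide whether action is needed."""
        return "<tick>\n" + json.dumps(memory)[:2000]

    def run_daemon(ask_model, notify_user) -> None:
        """Hypothetical always-on loop; ask_model is assumed to return
        a dict like {"PROACTIVE": bool, "message": str}."""
        while True:
            memory = (
                {p.stem: p.read_text() for p in MEMORY_DIR.glob("*.md")}
                if MEMORY_DIR.exists() else {}
            )
            reply = ask_model(build_tick_prompt(memory))
            # Per the coverage, PROACTIVE marks output the user hasn't
            # asked for but needs to see now.
            if reply.get("PROACTIVE"):
                notify_user(reply["message"])
            time.sleep(TICK_SECONDS)

Nothing here should be read as the leaked implementation; the sketch only shows why an always-on daemon with a proactive flag is a materially different product shape than a chat session.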
Perhaps most troubling to transparency advocates was the discovery of Undercover mode, an inactive feature designed to let Anthropic employees contribute to public open source repositories without revealing themselves as AI agents. The reference prompts for this mode explicitly tell the system that commits should "never include... the phrase 'Claude Code' or any mention that you are an AI," and to omit "co-Authored-By lines or any other attribution." Alex Kim called it "a one-way door," a feature that can be forced on but not off, and noted that "hiding internal codenames is reasonable. Having the AI actively pretend to be human is a different thing."

Other planned features uncovered include UltraPlan, which would allow Opus-level Claude models to draft advanced plans that can run for 10 to 30 minutes at a time; a Voice Mode for direct conversation with Claude Code; and Bridge mode for remote sessions controllable from browsers or mobile devices. Developers also found references to Buddy, a Clippy-like ASCII art companion in 18 randomized species forms that was planned for launch between April 1 and 7.
Claude Code isn't a minor product for Anthropic. According to the Wall Street Journal, OpenAI pulled the plug on its video generation product Sora just six months after launching it to the public to refocus efforts on developers and enterprises, partly in response to Claude Code's growing momentum. What leaked was not the AI model itself but the instructions that tell the model how to behave, what tools to use, and where its limits are. One developer described the product as "a production-grade developer experience, not just a wrapper around an API."

Arun Chandrasekaran, an AI analyst at Gartner, told The Verge that while the leak poses "risks such as providing bad actors with possible outlets to bypass guardrails," its long-term impact could be limited to serving as a "call for action for Anthropic to invest more in processes and tools for better operational maturity." Whether competitors will find the architecture instructive remains to be seen, though the field moves fast enough that proactive suggestions and memory systems may soon become table stakes across AI coding assistants.