3 Sources
[1]
Exclusive: Gottheimer presses Anthropic on source code leaks
Driving the news: Gottheimer, a leading House Democrat on AI and cybersecurity issues, sent the letter after source material powering Claude Code leaked this week for the second time in over a year.

* "Claude is a critical part of our national security operations. If it is replicated, we sacrifice the competitive edge we have worked so diligently to maintain in all facets of our national security," he wrote.

Zoom in: Gottheimer opened the letter by saying he supports Anthropic in its dealings with the Pentagon.

* "I was deeply concerned with the Trump Administration's decision to block Claude from all government contracts, and how that decision could significantly set us back in our AI race against China."

Gottheimer pressed Amodei on why the company has rolled back certain internal safety protocols.

* He pointed to a Chinese Communist Party-backed group hacking Claude last year, writing that he's concerned the upcoming Claude model Mythos will be used to conduct more cyberattacks.
* "Anthropic needs to do everything it can to prevent CCP-backed companies like DeepSeek from conducting distillation campaigns."

Between the lines: Gottheimer is threading a needle: clearly disagreeing with the Trump administration's blacklisting of Anthropic while demanding the company improve its security so advanced AI can still be used in national security operations.

The other side: "A Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told Axios following the leak Tuesday.

* "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

What we're watching: Gottheimer asked Anthropic to answer a number of questions about Claude's capabilities, risks of malicious use and susceptibility to further attacks from outside parties.
[2]
House Democrat pushes Anthropic on safety protocols, source code leak
Rep. Josh Gottheimer (D-N.J.) pressed Anthropic on Thursday about recent changes to its internal safety protocols following reports that part of the source code for the AI firm's Claude Code tool was accidentally leaked.

The company narrowed its AI safety policy pledge in late February, removing a previous commitment to halt development of its AI models if they outpace its safety procedures. Going forward, Anthropic said it would grade itself on "nonbinding but publicly-declared" goals.

"Given that we know Claude has been a repeated target of malign Chinese Communist Party (CCP) actors, combined with Claude Code's source code recently being leaked to the public, I don't understand why Anthropic would risk walking back any of its security measures," Gottheimer wrote in a letter to Anthropic CEO Dario Amodei.

"The safety and security of our AI systems are critical to our national security, and Anthropic's protocols should properly protect against this threat," he added.

Several outlets reported Wednesday that Anthropic accidentally leaked a portion of Claude Code's internal source code. A spokesperson told CNBC that no "sensitive customer data or credentials were involved or exposed" and that the issue was "caused by human error, not a security breach."

"We're rolling out measures to prevent this from happening again," the company spokesperson said in a statement. The Hill has reached out to Anthropic for comment.

Gottheimer also raised concerns Thursday about the ability of CCP-linked actors to use Anthropic's AI tools to conduct cyberattacks, as well as efforts to copy and reproduce its AI products. The company revealed in November that Chinese hackers had used its coding tool to conduct a large-scale cyberattack with limited human involvement. The congressman pressed the firm for answers about how it plans to prevent similar malicious uses of its tools and whether it expects such risks to increase with new models.

In February, Anthropic also said it identified three "industrial-scale campaigns" by China-based AI labs to "illicitly extract Claude's capabilities to improve their own models" in a process referred to as distillation. Gottheimer questioned whether these AI labs can distill information about "how AI is used in U.S. national security" and how the company is preventing access to such information.

"We cannot allow the CCP to reverse engineer and exploit American AI, built for our national defense," he added. "Claude is a critical part of our national security operations. If it is replicated, we sacrifice the competitive edge we have worked so diligently to maintain in all facets of our national security. Anthropic needs to do everything it can to prevent CCP-backed companies like DeepSeek from conducting distillation campaigns."
[3]
Why Anthropic's Massive Code Leak Is Now a National Security Concern in D.C.
Representative Josh Gottheimer, a New Jersey Democrat and leading House voice on AI and cybersecurity, wrote to Anthropic CEO Dario Amodei on Thursday warning of potential risks tied to the leak, according to a letter obtained by Axios. His message reflects mounting pressure from lawmakers as AI systems become increasingly embedded in U.S. defense and intelligence operations.

Leaks Raise Lawmaker Concerns

The letter follows what Gottheimer described as the second time in just over a year that source material powering Claude has leaked publicly. "Claude is a critical part of our national security operations. If it is replicated, we sacrifice the competitive edge we have worked so diligently to maintain in all facets of our national security," he wrote, according to Axios.

At the same time, Gottheimer made clear he is at odds with the Trump administration's approach to Anthropic. He voiced support for the company amid its ongoing tensions with the Pentagon, writing, "I was deeply concerned with the Trump Administration's decision to block Claude from all government contracts, and how that decision could significantly set us back in our AI race against China."
Rep. Josh Gottheimer is pressing Anthropic CEO Dario Amodei for answers after Claude Code's internal source code leaked for the second time in over a year. The House Democrat warns the breach could compromise U.S. competitive edge in AI, particularly as Chinese actors have previously targeted Claude systems. The incident comes amid broader tensions over AI safety protocols and government contracts.
Rep. Josh Gottheimer, a leading House Democrat on AI and cybersecurity, sent a formal letter to Anthropic CEO Dario Amodei following the second Anthropic source code leak in just over a year [1]. The latest incident involved internal source code from the AI model Claude Code becoming publicly accessible, raising immediate concerns about national security implications. "Claude is a critical part of our national security operations. If it is replicated, we sacrifice the competitive edge we have worked so diligently to maintain in all facets of our national security," Gottheimer wrote in his letter [1].

An Anthropic spokesperson characterized the incident as a "release packaging issue caused by human error, not a security breach," emphasizing that no sensitive customer data or credentials were involved or exposed [1][2]. The company stated it is "rolling out measures to prevent this from happening again" [2].
.Gottheimer's letter also challenged Anthropic's recent decision to narrow its AI safety commitments. In late February, the company removed a previous pledge to halt development of its AI models if they outpace safety procedures, replacing it with "nonbinding but publicly-declared" goals
2
. "Given that we know Claude has been a repeated target of malign Chinese Communist Party (CCP) actors, combined with Claude Code's source code recently being leaked to the public, I don't understand why Anthropic would risk walking back any of its security measures," the congressman wrote2
.The timing of these policy changes has drawn scrutiny from lawmakers who view robust internal safety protocols as essential to protecting U.S. technological advantages. Gottheimer emphasized that "the safety and security of our AI systems are critical to our national security, and Anthropic's protocols should properly protect against this threat"
2
.The congressman's concerns extend beyond accidental leaks to deliberate targeting by Chinese Communist Party-backed actors. Anthropic revealed in November that Chinese hackers had used its coding tool to conduct a large-scale cyberattack with limited human involvement
2
. Additionally, the company identified three "industrial-scale campaigns" in February by China-based AI labs attempting to illicitly extract Claude's capabilities through distillation campaigns—a process designed to replicate AI model performance .
Gottheimer specifically warned about the upcoming Claude model Mythos, expressing concern it could be exploited to conduct more cyberattacks [1]. "We cannot allow the CCP to reverse engineer and exploit American AI, built for our national defense," he wrote, adding that "Anthropic needs to do everything it can to prevent CCP-backed companies like DeepSeek from conducting distillation campaigns" [2]. The congressman questioned whether these AI labs could distill information about how AI is used in U.S. national security operations and how the company plans to prevent such access [2].
Gottheimer is navigating complex political terrain, supporting Anthropic while demanding stronger security measures. He opened his letter by expressing opposition to the Trump Administration's decision to block Claude from all government contracts, stating he was "deeply concerned" about how that decision "could significantly set us back in our AI race against China" [1][3]. This positions him as threading a delicate needle: disagreeing with the administration's blacklisting of Anthropic while simultaneously demanding improved security to ensure advanced AI can still support Pentagon operations [1].
Gottheimer has requested that Anthropic answer specific questions about Claude's capabilities, risks of malicious use, and vulnerability to attacks from outside parties [1]. As AI systems become increasingly embedded in U.S. defense and intelligence operations, this incident signals that congressional oversight will intensify [3]. The U.S. competitive edge in AI development now depends not just on innovation speed but on maintaining robust protections against both accidental exposure and deliberate theft by adversaries.

Summarized by Navi