11 Sources
[1]
NSA spies are reportedly using Anthropic's Mythos, despite Pentagon feud | TechCrunch
The National Security Agency is said to be using Mythos Preview, Anthropic's recently announced model that it withheld from public release, Axios reports. The news comes weeks after the NSA's parent agency, the Department of Defense, labeled Anthropic a "supply chain risk," after the company refused to allow Pentagon officials unrestricted access to its model's full capabilities. Anthropic announced Mythos earlier this month as a frontier model designed for cybersecurity tasks, but claimed the model was too capable of offensive cyberattacks to be released publicly. As a result, the AI firm limited access to Mythos to around 40 organizations, of which it has publicly named only a dozen. The NSA appears to be among the undisclosed recipients, and is said to be using Mythos primarily for scanning environments for exploitable vulnerabilities. The UK's AI Security Institute has also confirmed it has access to Mythos. The U.S. military's expanding use of Anthropic's tools comes as it simultaneously argues in court that those tools can threaten national security. The Pentagon's dispute originated when Anthropic refused to make Claude available for mass domestic surveillance and autonomous weapons development. The NSA's access to Mythos comes as Anthropic's relationship with the Trump administration appears to be thawing. Last Friday, Anthropic chief executive Dario Amodei met with White House chief of staff Susie Wiles and Secretary of the Treasury Scott Bessent. The White House reportedly called the meeting productive. TechCrunch has reached out to the NSA for comment. Anthropic declined to comment.
[2]
The White House-Anthropic Spat Needs to End
Anthropic PBC's latest artificial-intelligence model, called Claude Mythos Preview, has an unusual capability: It's a world-class hacker. In fact, the company says, Mythos has already uncovered "thousands of high-severity vulnerabilities, including some in every major operating system and web browser." Some of the flaws had gone undetected for decades. Disturbed by these results, as Bloomberg News recently reported, Anthropic withheld the model from the public, published a detailed system card, briefed the government and worked with a group of key companies (in an initiative called Project Glasswing) to patch vulnerabilities -- exactly the kind of public-spirited process that one would hope for from a responsible AI lab. It's more than a little puzzling, then, that the government is simultaneously trying to designate Anthropic a national-security threat. AI's potential to undermine cybersecurity has been clear for some time. Frontier models can scan large datasets quickly, find long-hidden flaws, facilitate malicious activity and otherwise make life easier for criminals or foreign spies. "As AI reduces the expertise needed for cyberattacks and automates complex operations," one recent study warned, "adversaries can launch more frequent, sophisticated, and stealthy campaigns at low cost." In the longer term, these capabilities could prove broadly beneficial, as AI systems are tailored to routinely find weaknesses and accelerate fixes. Anthropic has described Glasswing as "an urgent attempt to put these capabilities to work for defensive purposes." Unfortunately, this process has been marred by a needless spat with the White House. Anthropic has a contract with the Defense Department to offer a specialized version of Claude for military purposes. It purportedly wanted to limit use of its tool for things like mass surveillance and autonomous weaponry. The Pentagon asserted the right to use it for "all lawful purposes." 
After an operatic showdown, Defense Secretary Pete Hegseth vowed to label Anthropic a "supply-chain risk" and the president directed federal agencies to stop using its products within six months. (The two sides are now in court.) Even if one sympathizes with the Pentagon's point of view, such a designation -- extremely unusual for a domestic company -- makes little sense. The administration has elsewhere argued that Anthropic is essential for protecting national security, while the military has reportedly continued to use its tools during the conflict with Iran, which would be rather unwise if the company were as big a threat as the administration alleges. Where Mythos is concerned, this effort is doubly wrongheaded. Defense contractors, which should be prioritizing the tool, may not want to use it for fear of violating the designation. Federal agencies that might normally collaborate with an initiative like Glasswing may hesitate. (Some of them might get access to a version of the tool, details TBD.) Other potential partners may worry about legal uncertainties or political blowback. In effect, the White House is impeding the rollout and adoption of a critical security tool. Perhaps more to the point, this is a crazy way to treat a cutting-edge American company. Anthropic's boss met with the White House last week; on Tuesday, the president said, "I think we'll get along with them just fine." Perhaps a detente is in view. But however this fight turns out, it will almost certainly erode the trust of other AI labs, distort business priorities, obstruct innovation, and invite rent-seeking and corruption. Perhaps worse, as the Mythos saga shows, it could prove actively harmful to national security -- especially as other models develop similar capabilities. Many things are uncertain about the AI revolution. That's all the more reason for the government and the industry to get on the same page. 
[3]
The NSA is reportedly using Anthropic's new model Mythos
Despite the months-long feud between Anthropic and the Pentagon, the National Security Agency is using the AI company's new Mythos Preview, according to Axios, which spoke to two sources with knowledge of the matter. Anthropic announced Mythos Preview at the beginning of April, describing it as a general-purpose language model that is "strikingly capable at computer security tasks." But back in February, Trump ordered all government agencies to stop using Anthropic's services after the company refused to budge on certain safeguards for military uses during contract talks. The news comes days after Anthropic CEO Dario Amodei met with White House chief of staff Susie Wiles and other officials, reportedly to discuss Mythos. The White House later said the meeting on Friday was "productive and constructive," though President Trump said he had "no idea" about it when asked by reporters, Reuters reports. According to Axios' sources, the NSA is one of the roughly 40 organizations Anthropic gave access to Mythos Preview, and one said it's "being used more widely within the department" too. The company is still embroiled in a legal battle with the US government. Anthropic filed lawsuits against the Department of Defense in two courts in March after the Trump administration labeled it a "supply chain risk," and the Pentagon filed a response shortly after. While Anthropic was granted a preliminary injunction by one court to temporarily block this designation, federal judges in the other denied its motion to lift the label.
[4]
US security agency still using Mythos despite ban - government using new security tool despite Pentagon's 'supply chain risk' designation
Despite trading blows with the company, the US government is still using Anthropic's products
* NSA reportedly using Anthropic's Mythos Preview AI despite the Pentagon labeling the company a supply-chain risk
* Mythos, part of Project Glasswing, capable of discovering and exploiting zero-days
* Anthropic previously refused a DoD request to weaken guardrails; lawsuits followed, while the Trump administration met the CEO to discuss cooperation
The US National Security Agency (NSA) is using Anthropic's Mythos Preview AI tool, despite the Pentagon deeming the company a supply-chain risk earlier this year. Citing sources familiar with the matter, Axios said Mythos Preview is being used "more widely" within the department. At this moment, neither the US Department of Defense (DoD) nor the NSA has commented on the news. Anthropic hasn't spoken about it, either. In February this year, the US government asked Anthropic to remove the guardrails set up for its AI tools, which the company declined over fears they could be used for "mass domestic surveillance" and "fully autonomous weapons."
What is Mythos Preview?
Mere days after Anthropic CEO Dario Amodei declared the company "cannot in good conscience accede" to the DoD's request, the US government deemed the AI company a "supply chain risk". Anthropic responded with two federal lawsuits, claiming violations of protected speech. Liz Huston, a spokeswoman for the White House, commented on the lawsuits saying Anthropic was "a radical left, woke company," adding that, "Under the Trump Administration, our military will obey the United States Constitution - not any woke AI company's terms of service." Mythos Preview is a newer Anthropic generative AI (GenAI) model, and part of Project Glasswing. It was first mentioned earlier in April, when the company said it would not be releasing it to the public because it was too dangerous.
Apparently, the tool was able, with very little input, to discover and exploit software vulnerabilities, including "zero-day" flaws. Access to Project Glasswing was limited to a handful of large software companies, such as Apple, Cisco, Microsoft, and Nvidia, among others, which were offered a head start to secure their products. In its report, Reuters said that US President Donald Trump's administration met with Anthropic's CEO last week to discuss working together, for the first time since the dispute began. Via Reuters
[5]
Scoop: NSA using Anthropic's Mythos despite blacklist
Why it matters: The government's cybersecurity needs appear to be outweighing the Pentagon's feud with Anthropic.
* The department moved in February to cut off Anthropic and force its vendors to follow suit. That case is ongoing.
* The military is now broadening its use of Anthropic's tools while simultaneously arguing in court that using those tools threatens U.S. national security.
Breaking it down: Two sources said the NSA was using Mythos, while one said the model was also being used more widely within the department.
* It's unclear how the NSA is currently using Mythos, but other organizations with access to the model are using it predominantly to scan their own environments for exploitable security vulnerabilities.
* Anthropic restricted access to Mythos to around 40 organizations, contending that its offensive cyber capabilities were too dangerous to allow for a wider release.
* Anthropic only announced 12 of those organizations. One source said the NSA was among the unnamed agencies with access.
* The NSA's counterparts in the U.K. have said they have access to the model through the country's AI Security Institute.
Driving the news: Anthropic CEO Dario Amodei met White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent on Friday to discuss the use of Mythos within government and Anthropic's wider plans and security practices.
* Sources said next steps after the meeting were expected to focus on how departments other than the Pentagon engage with the model. Both sides described the meeting as productive.
* Anthropic and the Pentagon declined to comment. The NSA and Office of the Director of National Intelligence did not respond to requests for comment.
Zoom out: The breakdown between the Pentagon and Anthropic came during tense contract renegotiations earlier this year.
[6]
One of the most controversial US agencies is reportedly taste-testing Anthropic's uber-powerful Mythos AI
The agency's reported use of Mythos highlights a widening split inside the US government over AI risk
The US government's AI fight just got harder to square. The National Security Agency is reportedly using Anthropic's Mythos Preview even as senior Pentagon officials keep pushing to cut the company off over supply chain concerns. It shows how quickly real security needs can outrun official policy. Since February, the Defense Department has been trying to block Anthropic and push vendors to do the same. Yet, according to an Axios report, the NSA appears to be moving ahead with one of the company's most powerful models anyway, suggesting cybersecurity demand is carrying more weight than the feud now playing out inside government.
Why Mythos access is so limited
Mythos stands out because Anthropic appears to be keeping it on a short leash. Sources said the company limited access to around 40 organizations because of the model's offensive cyber capabilities, and only some of those users have been publicly named. One source said the NSA was among the unnamed agencies with access. That makes this look less like a normal chatbot deployment and more like a high-stakes security tool. Sources said groups with access have mainly used Mythos to scan their own systems for exploitable vulnerabilities, which helps explain why national security officials would still want it despite the clash over trust.
Washington's AI contradiction
The bigger issue is the contradiction now sitting in plain sight. One part of the government is treating Anthropic as a risk, while another is reportedly testing its top model. That makes the blacklist look less settled than advertised. The fight appears to run deeper than procurement. Defense officials wanted Anthropic to make Claude available for all lawful purposes, while the company resisted uses tied to mass domestic surveillance and autonomous weapons.
Some officials took that as proof Anthropic could not be counted on when the military needed it, a claim the company disputes.
What happens after this
The next question is whether Mythos stays an NSA exception or becomes a broader opening across government. Sources said a recent meeting between Anthropic CEO Dario Amodei, White House chief of staff Susie Wiles, and Treasury Secretary Scott Bessent focused on Mythos in government and the company's broader security posture. Both sides described the meeting as productive. If more agencies move in, this episode will look like a preview of how Washington handles powerful AI when internal policy fights collide with tools officials do not want to give up.
[7]
NSA Is Using Anthropic's Powerful Claude Mythos AI as CEO Meets With White House: Report - Decrypt
An administration source told Axios that every federal agency except the Department of Defense wants access to Anthropic's AI tools. The National Security Agency is running Anthropic's Claude Mythos Preview inside its classified networks, according to two sources cited by Axios -- a surprising development given that the NSA falls under the Department of Defense, which declared Anthropic a supply-chain risk in March and is currently fighting the company in federal court. Claude Mythos is not a standard enterprise tool. When Anthropic unveiled the model earlier this month, it restricted access to a handful of vetted organizations, arguing that the model poses serious offensive security risks. Anthropic's own technical documentation found that Mythos was able to identify critical vulnerabilities in every widely used operating system and web browser. The company judged it too dangerous for open release. Most organizations with access are using the model defensively, scanning their own infrastructure for weaknesses before adversaries do. The initiative, branded Project Glasswing, includes Microsoft, Google, Apple, Amazon Web Services, JPMorgan Chase, and Nvidia. What the NSA is doing with Mythos is less clear, though the agency's mission is not purely defensive. A third source told Axios the model is being used more broadly within the intelligence department. The Pentagon's hostility toward Anthropic traces to negotiations that went bad. In July 2025, the two sides signed an agreement making Claude the first frontier AI model cleared for use on classified networks. Talks soured when the Pentagon sought to renegotiate, demanding the military be allowed to use Claude "for all lawful purposes" without restriction. Anthropic refused, drawing two firm lines: no autonomous weapons, and no domestic mass surveillance. 
When negotiations collapsed, Defense Secretary Pete Hegseth declared Anthropic a supply-chain risk in late February -- an unprecedented designation, and the first ever applied to an American company. A California federal judge blocked the move, but then a D.C. appeals court denied Anthropic's separate bid to halt the blacklisting while litigation plays out. The two sides remain in court. While the legal fight grinds on, the rest of the administration is moving in a different direction. On April 17, Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. Anthropic described the session as "productive", Reuters reported. The White House said the parties "discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology." President Trump, asked by reporters about the meeting, said he had "no idea" Amodei had been at the White House, after he previously ordered the administration not to use Anthropic's "woke" models. Bessent and Federal Reserve Chair Jerome Powell have separately been encouraging major bank CEOs to test Mythos and be prepared for security threats, and an administration source told Axios that every federal agency except the Defense Department wants access to Anthropic's tools. The NSA's reported use of Mythos comes as questions mount about whether the model's capabilities can be contained at all. Decrypt reported last week that researchers at Vidoc Security reproduced several of Mythos's most alarming cybersecurity findings using publicly available models -- including OpenAI's GPT-5.4 and Anthropic's own Claude Opus 4.6 -- without any special access to Mythos itself. Anthropic did not immediately respond to a request for comment by Decrypt.
[8]
Anthropic had a 'productive and constructive' meeting with White House officials after the preview release of its new cybersecurity-challenging AI model
Anthropic and the US government have not been on the best of terms in recent months, as the US Department of Defense deemed the company a "supply chain risk" after its refusal to remove safeguards designed to prevent its products being used for autonomous weapon and mass surveillance purposes. The company has since sued, but that hasn't stopped Anthropic CEO Dario Amodei from attending a meeting at the White House after the announcement of its new Claude Mythos AI model earlier this month. Mythos may have caused a stir all the way to the very top, as it's said to be capable of identifying thousands of high-severity cybersecurity vulnerabilities, and can write its own exploits to demonstrate them. The model is currently in the preview stage, with a few dozen companies given access to its capabilities. The Anthropic chief spoke to US Treasury Secretary Scott Bessent and White House chief of staff Susie Wiles last Friday, according to BBC News, in a meeting that was described by the White House as "productive and constructive". "We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology," the White House said in a reported statement. "The conversation also explored the balance between advancing innovation and ensuring safety." This friendly-sounding chat stands in stark contrast to comments made by US President Donald Trump about Anthropic in the middle of its dispute with the US Department of Defense in February. Trump called it a "radical, left, woke company", and claimed that the US government "will not do business with them again." Despite these remarks, it appears that Claude Mythos has prompted a calmer approach from the US government in regard to dealing with Anthropic's AI developments.
Certainly, an AI model with these sorts of capabilities could represent a potential security risk if it fell into the wrong hands, and a powerful tool for any company, or government, to wield. For Anthropic's part, a spokesperson for the company said: "The meeting reflected Anthropic's ongoing commitment to engaging with the US government on the development of responsible AI. We are grateful for their time and are looking forward to continuing these discussions."
[9]
Anthropic becomes impossible for White House to ignore
Anthropic's new Mythos model is keeping the company's foot in the White House's door despite the Trump administration blacklisting the firm's products from military and government work earlier this year. Mythos, Anthropic's most advanced model to date, has drawn interest from various parts of the federal government, giving the artificial intelligence firm a chance to smooth over its rocky relationship with the Trump administration. Less than two months ago, President Trump condemned Anthropic as an "out of control radical left" company, prompting uncertainty about the firm's future in a Republican-led Washington. But in the two weeks since Anthropic unveiled Mythos, the White House has seemingly softened its tone on the company. Trump told CNBC's "Squawk Box" on Tuesday the AI firm "tends to be on the left," but "we get along with them." "They're very smart, and I think they can be of great use. I like smart people, I like high-IQ people and they definitely have high IQs," he said. "I think we'll get along with them just fine." Days earlier, Anthropic CEO Dario Amodei met with administration officials at the White House, including chief of staff Susie Wiles and reportedly Treasury Secretary Scott Bessent, to discuss Mythos and other cybersecurity concerns. The White House had some "very good talks" with the company, Trump said, adding the firm is "shaping up." The comments, combined with Anthropic's recent meetings with the White House and other federal agencies, signal a potential reconciliation with the company even as it fights the government in court. Anthropic sued the federal government in two courts last month over the Pentagon's decision to label the company a "supply chain risk" after negotiations over safety guardrails fell apart. The designation, typically reserved for companies of foreign adversaries, prohibits the use of Anthropic's products by the Department of Defense. 
The AI firm is also challenging Trump's social media directive addressed to civilian agencies to stop using Anthropic's products. A California judge granted a preliminary injunction to temporarily halt the designation and directive, while a Washington, D.C., court of appeals panel rejected Anthropic's request for an emergency stay. As both sides prepare to square off in court, Anthropic released Mythos, an AI model that can spot decades-old security vulnerabilities in major web browsers and software. The model can help companies or organizations more easily find and repair these gaps, but it could also present new risks when in the hands of bad actors looking to exploit vulnerabilities. The company released it to a limited group of technology and cybersecurity companies and Wall Street banks. Speculation quickly swirled over whether Mythos would be accessible to the government, given the company's ongoing feud with the administration. "It really does feel like this is the government cutting off its nose to spite its face," Jessica Tillipman, the associate dean for government procurement law studies at the George Washington University Law School, told The Hill. Despite the ban, the White House was quick to engage with the company again following the release of Mythos. Bessent and Federal Reserve Chair Jerome Powell met with Wall Street executives on the issue earlier this month. Bessent also joined Vice President Vance for a call with Amodei and other AI leaders to discuss cybersecurity associated with AI models, CNBC reported. And now, federal agencies are requesting access to Mythos for their own defensive work. An Anthropic official told The Hill the company has briefed teams at the National Institute of Standards and Technology and the Cybersecurity and Infrastructure Security Agency. 
Further, Axios reported this week the National Security Agency -- housed under the Pentagon -- is using Mythos despite the ban, while the Treasury Department requested access, per Bloomberg. Trump on Tuesday went as far as to say a deal to use Anthropic in the Defense Department is "possible." The administration's tone is decidedly dialed back from just a few weeks ago, when various voices in Trump's orbit were focused on Anthropic's political leanings and seemed less interested in engaging with the company. Dean Ball, a co-author of Trump's White House AI Action Plan who supported Anthropic in its clash with the Pentagon, suggested the release of Mythos underscores the Trump administration's mistaken approach to the company. "I think the gods are trying to tell us something about the correct way that we are dealing with the government, if they're trying to tell the government something about how to deal and not deal with this technology and with this industry," said Ball, who left the White House last summer. "You can't say that it looked particularly wise," he added. Even if the White House does not agree ideologically with the company, technology experts say Anthropic's capabilities might force the Trump administration to work with it. Anthropic is one of only a handful of companies producing the country's most advanced models. "The government is on some level losing some of its leverage because there are only a handful of frontier models that they can tap into. It's a highly concentrated market," Tillipman said, adding, "It's clear to me that they [the government] were grasping for whatever to try to use as leverage, because Anthropic wouldn't budge and they wound up backing themselves into this legal position." Matt Mittelsteadt, a senior frontier security researcher at the Institute for AI Policy and Strategy, added it would be in the government's best interest to have multiple model vendors to boost cybersecurity. 
"If you don't have optionality, if you put your eggs all in one basket, then you are going to find yourself limited," Mittelsteadt told The Hill on Tuesday. "Diversity is almost certainly going to be key here." "Things like vulnerability discovery, even if we have two models that are roughly both very good at this, because they are designed differently, because their cognition ... is different, it is very likely they're going to be surfacing slightly different things," he said.
[10]
NSA Is Running Anthropic's Mythos AI: Report - Amazon.com (NASDAQ:AMZN), Alphabet (NASDAQ:GOOG)
The U.S. National Security Agency (NSA) is allegedly using Anthropic's restricted artificial intelligence model, Claude Mythos Preview, for cyber defense despite the Pentagon blacklisting the company by designating it a supply chain risk. The AI model has also reportedly been used more broadly across the Defense Department, Axios reported. Mythos was announced earlier this month. It is part of Anthropic's "Project Glasswing," which allows select organizations to access the unreleased Claude Mythos Preview model. Anthropic has described Mythos as its "most capable yet for coding and agentic tasks," a reference to software that can take actions with less direct human prompting. Despite the ongoing legal battle, the government is reportedly considering granting federal agencies access to Anthropic's advanced AI model "Mythos." In an email, Gregory Barbaccia, CIO at the White House Office of Management and Budget, said that the department is preparing safeguards to give agencies access to Mythos. This will reportedly be a modified version of the advanced AI model. The email did not specify a timeline for the rollout. Anthropic CEO Dario Amodei visited the White House on Friday to meet with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent in an effort to resolve the company's ongoing lawsuit with the Pentagon. The White House described the meeting with Amodei as "productive" and "constructive," CNBC reported. The administration said the talks focused on balancing innovation with "safety" and managing risks tied to scaling powerful models. Anthropic echoed the tone, calling the discussion "productive," in a statement to the publication. Despite the high-level engagement, Trump appeared out of the loop. When asked about Amodei's visit on a runway in Arizona, he responded "Who?" and later added he had "no idea." 
The meeting marks a shift from the recent tensions, when the administration labeled Anthropic a national security risk and ordered agencies to stop using its technology. The company responded with lawsuits challenging the move, and those cases are ongoing.
[11]
US security agency is using Anthropic's Mythos despite blacklist, Axios reports
April 19 (Reuters) - The United States National Security Agency is using Anthropic's Mythos Preview AI tool despite the Pentagon hitting the company with a formal supply-chain risk designation, Axios reported on Sunday. The Mythos Preview model was being used more widely within the department, Axios said, citing sources. Reuters could not immediately verify the report. Anthropic, the NSA and the Department of Defense did not immediately respond to requests for comment outside regular business hours. The NSA is part of the Defense Department. Last week, U.S. President Donald Trump's administration and Anthropic's CEO discussed working together for the first time since a dispute earlier this year between the Pentagon and the AI firm over how that company's models should be used. The talks came amid growing fears the artificial intelligence startup's latest model Mythos will supercharge cyberattacks. The model is the company's "most capable yet for coding and agentic tasks," Anthropic has previously said, referring to the model's ability to act autonomously. Its capabilities to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts have said. (Reporting by Gursimran Kaur in Bengaluru; Editing by Bill Berkrot)
The National Security Agency is reportedly using Anthropic's powerful Mythos Preview model to scan for exploitable vulnerabilities, even as the Pentagon has designated the AI company a supply chain risk. The contradiction highlights the government's struggle to balance cybersecurity needs with its ongoing dispute over AI safety guardrails, mass surveillance, and autonomous weapons development.
The National Security Agency is using Mythos Preview, Anthropic's recently announced AI model designed for computer security tasks, according to multiple sources cited by Axios [5]. The NSA appears to be among roughly 40 organizations that Anthropic granted access to the model, and sources indicate Mythos is "being used more widely within the department" [3]. Organizations with access to Mythos are primarily using it to scan their own environments for exploitable vulnerabilities [5]. The UK's AI Security Institute has also confirmed it has access to the model [1].

Anthropic announced Mythos earlier in April as a frontier model with offensive cyber capabilities so potent that the company deemed it too dangerous for public release [1]. The model has already uncovered "thousands of high-severity vulnerabilities, including some in every major operating system and web browser," with some flaws having gone undetected for decades [2]. The tool can discover and exploit zero-day vulnerabilities with very little input [4]. As part of Project Glasswing, Anthropic limited access to key companies including Apple, Cisco, Microsoft, and Nvidia to patch cybersecurity vulnerabilities before broader deployment [4].

The military use of AI has created a sharp contradiction within the government. The Department of Defense labeled Anthropic a supply chain risk in February after the company refused to allow Pentagon officials unrestricted access to Claude for "all lawful purposes" [1]. Anthropic declined the Department of Defense request over concerns the tools could be used for mass surveillance and autonomous weapons development [3][4]. The Trump administration subsequently ordered all government agencies to stop using Anthropic's services within six months [3]. Anthropic responded with two federal lawsuits claiming violations of protected speech, and while one court granted a preliminary injunction to temporarily block the supply chain risk designation, another denied the motion [3].

Dario Amodei, Anthropic's CEO, met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent on Friday to discuss the use of Mythos within government and the company's security practices [5]. Both sides described the meeting as productive, with next steps expected to focus on how departments other than the Pentagon engage with the model [5]. The White House reportedly called the meeting "productive and constructive," though President Trump said he had "no idea" about it when asked by reporters [3]. Days later, Trump stated, "I think we'll get along with them just fine" [2].

The government's cybersecurity needs appear to be outweighing the Pentagon's feud with Anthropic [5]. The military is broadening its use of Anthropic's tools while simultaneously arguing in court that using those tools threatens national security [5]. Critics argue the supply chain risk designation makes little sense, noting the administration has elsewhere argued that Anthropic is essential for protecting national security, while the military has reportedly continued to use its tools during the conflict with Iran [2]. Defense contractors may hesitate to use Mythos for fear of violating the designation, and federal agencies that might collaborate with Project Glasswing may hold back [2]. As frontier models develop similar capabilities for cyberattacks, the standoff could prove actively harmful to national security by impeding the rollout of critical security tools [2].

Summarized by Navi