3 Sources
[1]
Pentagon's Biggest Champion of Blacklisting Anthropic Has a Few Million Reasons for His Stance
When the Pentagon attempted to strong-arm Anthropic over the company's refusal to allow its model to be used for domestic surveillance or fully autonomous weapons deployment, there were apparent standoffs behind the scenes with Defense Secretary Pete Hegseth, but the public push to punish Anthropic came from Emil Michael, the Trump administration's Undersecretary of Defense for Research and Engineering and the Pentagon's Chief Technology Officer. According to a report from The Lever, he may have had some ulterior motives for how hard he pushed to ban the AI firm from government dealings.

A financial disclosure filed by Michael when he took office and spotted by The Lever revealed that he holds "vested and unvested stock" in Perplexity, an Anthropic rival, and that he also served on the company's board. Per the filing, he owns $2-$10 million in Perplexity stock. While Perplexity doesn't have a contract directly with the Department of Defense, it did enter an agreement last year to deploy its AI search engine to all federal agencies, and it is one of three companies being considered for approval to host government-provided AI systems on their own servers.

Michael also had holdings in and served as an advisor to the Sam Altman-run Tools for Humanity, maker of the eye-scanning orb for human verification. It's probably worth noting that Altman's other company, OpenAI, is the one taking over the Pentagon contract that Anthropic was forced to forfeit.

Michael having his hand in an emerging tech industry isn't really a surprise given his history. He was previously the senior vice president of business and chief business officer at Uber, serving as founder Travis Kalanick's right-hand man until both were pushed out by investors. Michael recently said he'll "never forget...nor forgive" the shareholders for ousting them. The fact that Michael is someone who holds a grudge probably doesn't offer much assurance for Anthropic in its quest to get its contract back.
That might lead to some complicated conversations down the line. On Tuesday, a judge overseeing one of the lawsuits filed by Anthropic against the Department of Defense for allegedly retaliating against the company seemed inclined to side with the AI firm. US District Judge Rita Lin described the Pentagon's decision to designate Anthropic as a supply chain risk as "an attempt to cripple Anthropic."

It's not like the Pentagon has weaned itself off of Anthropic's tech, either, despite all its bluster and accusations of the company being a security threat. The Department of Defense reportedly used Anthropic's Claude in the earliest days of its attack on Iran, and it appears to still be relying on the AI despite its own warnings about the technology.
[2]
Scoop: Altman told staff he tried to "save" Anthropic in Pentagon clash
Why it matters: The messages reveal Altman's conflicted thinking as OpenAI swooped in to secure a Pentagon contract Anthropic had just lost -- portraying himself as a peacemaker even as his company emerged as the primary beneficiary.

* The Slack messages, paraphrased below, trace Altman's thinking from Feb. 24 through March 2 as the dispute escalated into one of the most dramatic confrontations yet between the AI industry and the federal government.

Behind the scenes: The standoff had been public for 10 days -- Axios first reported Feb. 14 that the Pentagon was weighing cutting ties with Anthropic -- when Altman chimed into a Slack channel with employees who were anxious about the escalation on Feb. 24.

* Altman wrote that it might be a good time to work out a deal for classified use of OpenAI's systems with Defense Under Secretary Emil Michael, who was leading the negotiations.
* Michael ended up calling Altman first later that afternoon, according to a source with knowledge of the discussions. The two sides began exchanging draft contract language the next day.

On Feb. 26, Altman sent an all-staff message saying OpenAI shared Anthropic's red lines and wanted to help de-escalate -- while making clear he still hoped to strike his own deal with the Pentagon.

* He acknowledged that the optics may not look good in the short term, but stressed the nuance of the situation and said he was committed to acting on principle rather than appearances.

On Feb. 27, Altman relayed to a core group of staff that negotiations between the Pentagon and Anthropic had taken a turn for the worse due to the perception that Amodei was playing to the press.

* Later that day, as the Pentagon's 5pm deadline approached, Altman told the group that the Pentagon believed it could offer Anthropic an off-ramp from the supply chain risk designation.
* Altman remarked that he found it strange to be working so hard to "save" a rival whose CEO had, in his view, spent years trying to destroy OpenAI.
* That night, Altman announced to all staff that OpenAI had reached a deal with the Pentagon, and said he was asking the government to extend the same terms to other AI companies to de-escalate.

Between the lines: When the dust settled, OpenAI had secured a Pentagon deal but arguably lost a public relations fight due to the perception Altman had capitalized on Amodei's principled stand.

* The messages suggest Altman believed -- or at least told employees -- that he was acting in good faith to help his rival and resolve a dispute that has major implications for the entire AI industry.
* Many on the Anthropic side have a less charitable view of Altman's motives -- including Amodei, who called Altman's peacemaker framing "straight up lies" in a leaked memo. He later apologized.

What we're watching: Anthropic is still being used by the Pentagon and a deal remains possible -- but a March 2 Slack message shows Altman acknowledging a critical obstacle he hadn't anticipated.

* OpenAI's contract included a carve-out requiring the Pentagon to negotiate a separate agreement before deploying ChatGPT in intelligence agencies like the NSA.
* The Pentagon confirmed it was offering OpenAI's deal to other companies, but would not give Anthropic the ability to opt out of working with intelligence agencies.

The reason: Claude is already so embedded across those agencies that the carve-out OpenAI had secured could not be offered to Anthropic, Altman told employees. He lamented that he hadn't thought of that ahead of time.
[3]
Behind the Curtain: How Anthropic's Pentagon deal could get revived
Why it matters: Anthropic is suing the Trump administration for nixing use of Claude, the company's large language model, after the company refused to allow its AI to be used for fully autonomous warfare or mass surveillance of Americans (which the Pentagon says is already illegal). So a compromise seems undoable.

Defense Secretary Pete Hegseth is pretty dug in. A source familiar with his thinking told us: "Unless they come back to the government and say, 'We're going to agree to any lawful use,' there isn't anything to talk about. ... The secretary's bottom line is 'all lawful uses.'"

* But there is a path, sources on both sides tell us.

Behind the scenes: In private, some leaders inside the federal government want Anthropic AI, both for warfighting and cyber defense. These officials believe Anthropic is a big reason the U.S. is probably 6-12 months ahead of China in leveraging AI for national defense. Claude is also being used extensively for the Iran war.

* Also in private, lots of people at Anthropic or advising its CEO, Dario Amodei, believe they were within inches of a deal that would have and should have satisfied both sides. They're pushing Amodei and the Pentagon to quietly revive the talks.

An Anthropic court filing included a one-paragraph email from Emil Michael, the Pentagon official negotiating with the company, saying "we are very close here" to an agreement on language. The note was dated March 4, five days after Hegseth declared the company a "Supply-Chain Risk to National Security."

* The day after the letter, Michael tweeted: "I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI." Earlier, Michael said on X that Amodei "is a liar and has a God-complex."

One possibility: Anthropic agrees to parameters on ensuring AI is used lawfully, and arranges to donate to Trump Accounts, and/or back other AI policies both sides support.

* Brad Gerstner could be a middleman.
Gerstner's firm, Altimeter Capital, is an Anthropic investor, and he's an Amodei adviser. He's also an architect of Trump Accounts, which provide $1,000 to every newborn whose parents enroll, and allow parents and others to contribute until the child turns 18. Older kids can get Trump Accounts through philanthropic efforts like last year's massive gift by Michael and Susan Dell.

Amodei and top Anthropic officials have said publicly they want the country, not just themselves, to be enriched by their fast-growing company. So they could conceivably fund Trump Accounts.

* It's also possible they strike a deal by compromising on language governing how the Pentagon will use Anthropic's Claude. There are also several Trump policies Amodei supports that could help grease a deal.
* It's doubtful the exact language OpenAI agreed to will suffice, which remains a substantial and perhaps insurmountable obstacle.

Either way, soon-to-be-released models are likely to stir government urgency to strike some kind of deal. Anthropic, in private discussions with government officials, is warning that the next big advancement will supercharge offensive and defensive cyber capabilities.

* That means the chances of something big, bad and very public hitting U.S. infrastructure or institutions will rise substantially. If Anthropic continues to outperform other models, the government's cyberwarriors will push to keep access to Claude.

The bottom line: Any deal probably requires marriage-counselor-level mediation between Hegseth and Amodei. This is truly the territory of: Conservative defense secretaries are from Venus, and liberal Silicon Valley CEOs are from Mars. They couldn't think, talk or act more differently.

* In fact, take the two men out of it, and almost everyone else involved in the talks tells us we wouldn't be writing this column, because Anthropic AI would be the default AI of the U.S. military -- until something superior comes along.

Axios' Maria Curi contributed reporting.
Emil Michael, the Pentagon's Chief Technology Officer who led efforts to blacklist Anthropic over AI usage restrictions, holds $2-$10 million in stock in rival firm Perplexity. Meanwhile, Sam Altman claims he tried to mediate the dispute even as OpenAI secured the Pentagon contract Anthropic lost, sparking questions about conflicts of interest in government dealings.
The Pentagon clash with Anthropic over AI usage restrictions has taken a new turn with revelations about potential conflicts of interest. Emil Michael, the Trump administration's Undersecretary of Defense for Research and Engineering who spearheaded efforts to blacklist AI firm Anthropic, holds $2-$10 million in vested and unvested stock in Perplexity, an Anthropic rival [1]. The financial disclosure, filed when Michael took office, also shows he served on Perplexity's board and held positions with Tools for Humanity, run by Sam Altman [1]. These financial holdings raise questions about the motivations behind the Department of Defense's aggressive stance against Anthropic, particularly as Perplexity entered an agreement to deploy its AI search engine across federal agencies and is among three companies being considered to host government-provided AI systems [1].
Source: Gizmodo
The conflict centers on Anthropic's refusal to allow its large language model Claude to be used for fully autonomous warfare or mass surveillance of Americans. Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk to national security on February 27, demanding the company agree to "all lawful uses" of its AI [3]. US District Judge Rita Lin described the Pentagon's decision as "an attempt to cripple Anthropic" during court proceedings [1]. Yet even as the Pentagon threatened Anthropic, the Department of Defense continued using Claude during the earliest days of its attack on Iran, highlighting the military's dependence on the technology [1].
Source: Axios
As the dispute escalated, Sam Altman positioned himself as a mediator while OpenAI ultimately secured the Pentagon contract that Anthropic lost. Internal Slack messages reveal Altman told staff on February 24 that it might be a good time to work out a deal for classified use of OpenAI's systems with Emil Michael [2]. Michael contacted Altman that same afternoon, and the two sides began exchanging draft contract language the next day [2]. By February 27, as the Pentagon's 5pm deadline approached, Altman remarked to staff that he found it strange to be working so hard to "save" a rival whose CEO had spent years trying to destroy OpenAI [2].
Source: Axios
That night, Altman announced OpenAI had reached a deal with the Pentagon and said he was asking the government to extend the same terms to other AI companies to de-escalate [2]. However, Anthropic CEO Dario Amodei called Altman's peacemaker framing "straight up lies" in a leaked memo, though he later apologized [2]. The AI industry's engagement with the federal government has exposed deep tensions between commercial interests and ethical boundaries, with OpenAI emerging as the primary beneficiary despite Altman's claims of acting in good faith [2].
Despite the public confrontation, sources on both sides indicate a potential Anthropic-Pentagon deal could still be revived. Federal officials privately acknowledge that Anthropic is a major reason the U.S. maintains a 6-12 month lead over China in leveraging AI for national defense, and Claude is being used extensively for the Iran war [3]. A court filing revealed that Emil Michael sent an email on March 4 stating "we are very close here" to an agreement on language, just five days after Hegseth's supply chain risk designation [3]. However, Michael tweeted the following day that there was no active negotiation with Anthropic, having previously called Amodei "a liar" with "a God-complex" [3].

One possibility involves Anthropic agreeing to parameters ensuring lawful military use of its large language model while potentially backing Trump Accounts or other AI policies both sides support [3]. Brad Gerstner, whose firm Altimeter Capital is an Anthropic investor, could serve as mediator [3]. A critical obstacle emerged when OpenAI's contract included a carve-out requiring separate agreements before deploying ChatGPT in intelligence agencies like the NSA; the Pentagon cannot offer Anthropic the same terms because Claude is already deeply embedded across those agencies [2]. Anthropic is warning government officials that upcoming models will supercharge offensive and defensive cyber capabilities, potentially increasing the urgency for a deal as threats to U.S. infrastructure rise [3]. The situation illustrates how autonomous warfare concerns and mass surveillance debates are reshaping government contracts in the AI era.