35 Sources
[1]
Anthropic CEO Dario Amodei calls OpenAI's messaging around military deal 'straight up lies,' report says | TechCrunch
Anthropic co-founder and CEO Dario Amodei is not happy -- perhaps predictably so -- with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI's dealings with the Department of Defense as "safety theater." "The main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote. Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military's request for unrestricted access to the AI company's technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company's AI to enable domestic mass surveillance or autonomous weaponry. Instead, the DoD -- known under the Trump administration as the Department of War -- struck a deal with OpenAI. Altman stated that his company's new defense contract would include protections along the same red lines that Anthropic had asserted. In a letter to staff, Amodei refers to OpenAI's messaging as "straight up lies," stating that Altman is falsely "presenting himself as a peacemaker and dealmaker." Amodei might not be speaking solely from a position of bitterness here. Anthropic specifically took issue with the DoD's insistence on the company's AI being available for "any lawful use." OpenAI said in a blog post that its contract allows use of its AI systems for "all lawful purposes." "It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose," OpenAI's blog post stated. "We ensured that the fact that it is not covered under lawful use was made explicit in our contract." Critics have pointed out that the law is subject to change, and what is considered illegal now might end up being allowed in the future. And the public seems to be siding with Anthropic. ChatGPT uninstalls jumped 295% after OpenAI made its deal with the DoD. "I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with the DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!)," Amodei wrote to his staff. "It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees."
[2]
When AI Companies Go to War, Safety Gets Left Behind
I've spent the past few days asking AI companies to convince me that the prospects for AI safety have not dimmed. Just a few years ago, it seemed that there was universal agreement among companies, legislators, and the general public that serious regulation and oversight of AI was not just necessary, but inevitable. People speculated about international bodies setting rules to ensure that AI would be treated more seriously than other emerging technologies -- rules that could at least provide obstacles to its most dangerous implementations. Corporations vowed to prioritize safety over competition and profits. While doomers still spun dystopic scenarios, a global consensus was forming to limit AI risks while reaping its benefits. Events over the last week have delivered a body blow to those hopes, starting with the bitter feud between the Pentagon and Anthropic. All parties agree that the existing contract between the two used to specify -- at Anthropic's insistence -- that the Department of Defense (which now tellingly refers to itself as the Department of War) won't use Anthropic's Claude AI models for autonomous weapons or mass surveillance of Americans. Now, the Pentagon wants to erase those red lines, and Anthropic's refusal has not only resulted in the end of its contract, but also prompted Secretary of Defense Pete Hegseth to declare the company a supply-chain risk, a designation that prevents government agencies from doing business with Anthropic. Without getting into the weeds on contract provisions and the personal dynamics between Hegseth and Anthropic CEO Dario Amodei, the bottom line seems to be that the military is determined to resist any limitations on how it uses AI, at least within the bounds of legality -- by its own definition. The bigger question seems to be how we got to the point where releasing killer robot drones and bombs that identify and eliminate human targets wound up in the conversation as something that the US military would even consider. Did I miss the international debate about the merits of creating swarms of lethal autonomous drones scanning warzones, patrolling borders, or watching out for drug smugglers? Hegseth and his supporters complain about the absurdity of private companies limiting what the military can do. I think it's crazier that it takes a lone company risking existential sanctions to stop a potentially uncontrollable technology. In any case, the lack of international agreements means that every advanced military must use AI in all its forms, simply to keep up with its adversaries. Right now, an AI arms race seems unavoidable. The risks extend far beyond the military. Overshadowed by the Pentagon drama was a disturbing announcement Anthropic posted on February 24. The company said it was making changes to its system for mitigating catastrophic risks from AI, called the Responsible Scaling Policy. It had been a key founding policy for Anthropic, in which the company promised to tie its AI model release schedule to its safety procedures. The policy stated that models should not be launched without guardrails that prevented worst-case uses. It acted as an internal incentive to make sure that safety wasn't neglected in the rush to launch advanced technologies. Even more important, Anthropic hoped adopting the policy would inspire or shame other companies to do the same. It called this process the "race to the top."
The expectation was that embodying such principles would help influence industry-wide regulations that set limits on the mayhem that AI could cause. At first, this approach seemed promising. DeepMind and OpenAI adopted aspects of Anthropic's framework. More recently, as investment dollars ballooned, competition between the AI labs increased, and the prospect of federal regulation began looking more remote, Anthropic conceded that its Responsible Scaling Policy had fallen short. The thresholds did not create the consensus about the risks of AI that the company had hoped for. As the company noted in a blog post, "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level." Meanwhile, the competition between AI companies has gotten more cutthroat. Instead of a race to the top, the AI rivalry seems more like a bareknuckle version of King of the Mountain. When the Pentagon banished Anthropic, OpenAI rushed to fill the gap with its own Department of Defense contract. OpenAI CEO Sam Altman insisted that he entered his hasty deal with the Pentagon to relieve pressure on Anthropic, but Amodei was having none of it. "Sam is trying to undermine our position while appearing to support it," Amodei said in an internal memo. "He is trying to make it more possible for the admin to punish us by undercutting our public support." (Amodei later apologized for his tone in the message.)
[3]
ChatGPT uninstalls surged by 295% after DoD deal | TechCrunch
U.S. uninstalls of ChatGPT's mobile app jumped 295% day-over-day on Saturday, February 28, as consumers responded to the news of OpenAI's deal with the Department of Defense (DoD), which has been rebranded under the Trump administration as the Department of War. This data, which comes from market intelligence provider Sensor Tower, represents a sizable increase compared with ChatGPT's typical day-over-day uninstall rate of 9%, as measured over the past thirty days. Meanwhile, U.S. downloads of OpenAI competitor Anthropic's Claude jumped 37% day-over-day on Friday, Feb. 27, and 51% as of Saturday, Feb. 28, after the company announced that it would not partner with the U.S. defense department. Anthropic said it was not able to agree on the deal terms over concerns that its AI would be used to surveil Americans and to power fully autonomous weaponry, applications the company says AI is not yet ready to handle safely. Some consumers seemingly favored Anthropic's position on the matter, the data suggests. ChatGPT's download growth was also impacted by the news of its DoD partnership, with its U.S. downloads dropping by 13% day-over-day on Saturday, shortly after the news of its deal went public. Those downloads continued to fall on Sunday, when they were down by 5% day-over-day. (Before the partnership was announced, the app's downloads had grown 14% day-over-day on Friday.) These rapid changes were also reflected in Claude's App Store ranking, as the app hit No. 1 on the U.S. App Store on Saturday, where it continues to sit as of Monday, March 2. That's a jump of over 20 ranks compared with roughly a week before (Feb. 22, 2026). Consumers are also sharing their opinions about OpenAI's deal in the app's ratings, where 1-star reviews for ChatGPT surged 775% on Saturday, then grew 100% day-over-day on Sunday, Sensor Tower said. Five-star reviews declined during the same period, dropping by 50%. Other third-party data providers back up Sensor Tower's findings. Appfigures, for instance, noted that Claude's total daily U.S. downloads on Saturday surpassed those of ChatGPT for the first time. It also saw the U.S. downloads of Claude increase, but its estimates put that figure even higher: by 88% day-over-day on Saturday. Claude is now also the No. 1 free iPhone app in seven countries: Belgium, Canada, Germany, Luxembourg, Norway, Switzerland, and the U.S. A third market intelligence provider, Similarweb, noted that Claude's U.S. downloads over the past week were around 20x what they were in January, though that could be due to factors beyond the political issues.
[4]
OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway
OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic's roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked "sloppy" in a social media post. While this incident has become a major news story, it may just be the latest and most public example of OpenAI creating vague policies around how the US military can access its AI. In 2023, OpenAI's usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI's models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI's largest investor, and had broad license to commercialize the startup's technology. That same year, OpenAI employees saw Pentagon officials walking through the company's San Francisco offices, the sources said. They spoke on the condition of anonymity as they aren't authorized to comment on private company matters. Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI's usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI's policies. "Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service," said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but notes the service was not approved for "top secret" government workloads until 2025. "AI is already playing a significant role in national security and we believe it's important to have a seat at the table to help ensure it's deployed safely and responsibly," OpenAI spokesperson Liz Bourgeois said in a statement. "We've been transparent with our employees as we've approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team." The Department of Defense did not respond to WIRED's request for comment. By January 2024, OpenAI updated its policies to remove the blanket ban on military use. Several OpenAI employees found out about the policy update through an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward. In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for "national security missions." Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic's AI used for classified military work. Palantir approached OpenAI in the fall of 2024 to discuss participating in its "FedStart" program, an OpenAI spokesperson confirmed to WIRED.
The company ultimately turned it down, and told employees it would've been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways. Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company's military partnerships, sources say and a spokesperson confirmed. Some believed the company's models were too unreliable to handle a user's credit card information, let alone assist Americans on the battlefield.
[5]
Altman said no to military AI - then signed Pentagon deal
OpenAI CEO's principles lasted about 12 hours before $200M check arrived
Opinion: A week ago today, OpenAI CEO Sam Altman said he'd draw the same lines as Anthropic. By that night, he'd signed a Department of Defense deal that included no such AI protections. What's going on here? We live in interesting times in AI land. First, the Trump administration's self-proclaimed "Department of War" (DoW) Secretary Pete Hegseth demanded AI giant Anthropic include contract language that would give the military the right to use Anthropic's LLMs for "any lawful use." Anthropic had already dropped its Responsible Scaling Policy clause, saying it wouldn't train AI models that it couldn't guarantee were safe, but that wasn't enough. Hegseth wanted to be able to use Anthropic's Claude for domestic mass surveillance and AI-controlled weapons. When Anthropic CEO Dario Amodei wouldn't agree, Hegseth and Trump stripped the company of all its government contracts. In the meantime, OpenAI CEO Sam Altman came to defend Anthropic's position, saying in an internal memo: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." It was the morning of February 27. That night, Altman made a deal with the DoW, which did not include any such contractual agreements. It's amazing what a company will do for a $200 million contract despite its endless billion-dollar funding announcements and a burn rate of over $9 billion in 2025. In a temper tantrum, President Trump called Anthropic "Leftwing nut jobs," but the company is not some kind of liberal hotbed -- far from it. Indeed, when Trump started the Iran war, the US armed forces were relying on Anthropic AI for targeting. The problem wasn't that Anthropic was anti-military. It's that Amodei essentially argued he wouldn't kiss Trump's rump. Tell me, though, how did OpenAI win this contract in no time flat unless, despite what Altman said, they gave Hegseth and Trump everything they wanted? As reported in The Verge: "The Pentagon wouldn't back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out." And you may have noticed that under the Trump regime, pretty much anything -- killing citizens, awarding $220 million contracts to friends, and blasting boats suspected of drug running out of the water -- is declared to be legal. Yeah, these are exactly the people I want to trust with an AI-powered domestic surveillance system and killer robots. It's interesting as well that a few days later, OpenAI announced that its latest model, GPT 5.3, wouldn't be overly defensive or moralize before answering questions. So, I guess if I'm in the DoW and I ask it to target ACLU members in a given zip code for AI-driven drone attacks, it won't blink. Not everyone is happy about this. In fact, a lot of people are ticked off. Some 34,000 people on the OpenAI subreddit upvoted a post demanding people show that they've cancelled their OpenAI subscriptions because OpenAI is training a war machine. The most popular post claims: "Greg Brockman, co-founder of OpenAI and the current President, made the largest ever donation to Trump's MAGA super PAC, at $25 million. And Jared Kushner has most of his wealth in OpenAI.
In other words, the Trump administration was bribed by a company, OpenAI, into destroying its main competition, Anthropic. This is blatantly corrupt." Welcome to the United States of Kleptocracy. Nonetheless, this is a deal that OpenAI may yet regret. In the aftermath of this contract and Anthropic's fight with the DoW, Anthropic's revenue has shot up to about a $20 billion annual run rate. Meanwhile, even before the OpenAI deal happened, Menlo Ventures reported that, for the enterprise market anyway, Anthropic accounted for 40 percent of enterprise LLM spend in 2025, while OpenAI lost almost half of its enterprise share, dipping to 27 percent from 50 percent. Will ChatGPT continue to dominate the consumer space? Maybe, but let's face it, no one's ever going to profit from being the top chatbot to Joe and Jane User. There's simply not enough money there. That said, on iPhones anyway, Anthropic's chatbots are now more popular than ChatGPT. So now Altman is publicly claiming that he's improving the just-signed DoW deal. That's not what he's telling his employees. He's telling them that OpenAI gets no call on what the DoW does with its AI and: "So maybe you think the Iran strike was good, and the Venezuela invasion was bad... You don't get to weigh in on that." Given that 59 percent of Americans disapproved of the attack in the early days of the war, Altman's tying of OpenAI's future to a government that seems certain to lose even more support as American casualties increase appears destined to hurt OpenAI's popularity. Sure, OpenAI will make an additional $200 million in revenue in the short run. In the long run, however, Altman handed Amodei a hammer he'll use to hit OpenAI again and again and again. ®
[6]
ChatGPT Backlash Reveals New Pitfalls in Aligning With Trump
It took only a few hours for Sam Altman's timing to go from bad to worse. On Friday evening, the chief executive officer of OpenAI announced the company would step into the role at the Department of Defense vacated earlier that day by Anthropic PBC, which had angered Secretary of Defense Pete Hegseth by refusing to allow its artificial intelligence models to be used for "all lawful purposes." In particular, Anthropic CEO Dario Amodei wanted assurances its technology wouldn't be used for conducting mass surveillance of Americans or controlling fully autonomous weapons. That same night, US and Israeli forces launched the first salvo in an ongoing bombing attack on Iran. Among other things, a girls school was destroyed and more than 160 people killed, according to local authorities. The backlash online was immediate -- OpenAI, according to a growing chorus of critics, had sold out regular Americans and foreign civilians alike. QuitGPT, an existing campaign to encourage people to stop using and paying for OpenAI's popular ChatGPT service because of its potential impact on users' mental health, swiftly picked up steam. Altman spent some time in the following days trying to explain himself and the company he leads, including in a statement posted to X on Monday in which he promised to amend the company's agreement to prevent its use for domestic surveillance of Americans. Although the company's intentions were good, he wrote, the Pentagon deal "just looked opportunistic and sloppy." The attempt at damage control was warranted. In the days between OpenAI's announcement and Altman's X statement, app store downloads of Anthropic's Claude surged ahead of OpenAI's ChatGPT, with a spike so big on Monday morning that it briefly crashed the company's services. Social media users -- including Katy Perry, for some reason -- made a show of canceling their paid ChatGPT subscriptions and switching to Claude, which had up to that point lagged far behind ChatGPT in awareness and usage among people outside the tech industry. As of this morning, it remained in the top spot for new downloads on Apple devices. We're in a moment when many Americans have been waiting for the country's business leaders to do anything to push back against Trump administration policies, including its capricious tariff regime and the Department of Homeland Security's brutal treatment of immigrants and citizens alike. In the case of the AI tools, some people seemed overjoyed at the opportunity to offer a personal bounty to an executive -- in this case, Amodei -- who demonstrated even a modicum of backbone in the face of threats to civil liberties or national stability. When AI companies talk about "alignment" -- and they do, quite a lot -- they generally mean the nascent technology's potential to integrate its capabilities with the goals of human society, endowed with human ethics. But with the messy, confusing consumer market they've spun up while they figure out how and whether their models can be applied usefully and profitably, they've stumbled into a different kind of alignment issue.
AI companies are only now becoming known to the general public, and they offer a rare opportunity: They're big enough to wield real power and influence, but they've gotten that big so quickly that regular people are unsure of what, if any, political valence they or their products might have. Consumers, hungry for fresh opportunities to vote with their dollars in a way that feels like it could make any material difference in the course of the country, have leapt at the chance to plot those tools on an ideological continuum and reward or punish them accordingly. Within the industry, Anthropic's decision has also drawn support from workers at rival businesses. Few companies have been willing to break publicly with the Trump administration and its policies, which makes Anthropic's decision to face the blowback genuinely notable -- Hegseth's retaliatory declaration of the company as a supply chain concern already has other defense contractors dropping the company's tools, according to Reuters. But individual consumer choice has limitations when it comes time to build the society that any one of those individuals might want. Anthropic is, after all, still supplying the underlying AI for defense contractor Palantir Technologies Inc.'s data and surveillance tools, and earlier this year it submitted a proposal to the Pentagon to develop autonomous drone-swarm technology for use by the military, even as the company and the Defense Department feuded. And late Wednesday, news broke that Anthropic had resumed negotiations with the Pentagon, signaling its desire to return to the lucrative business of defense contracting. But for companies that rely on consumer choices instead of corporate or government contracts for their revenue, Anthropic's surge among regular Americans provides what might be a useful signal to other executives: If you're willing to draw a line in the sand, your customers just might make it worth your while.
[7]
The 'QuitGPT' movement gets a surge of activity after OpenAI strikes a deal with the Pentagon
* Trump cuts Anthropic; OpenAI struck a DoW deal to tailor AI, igniting public controversy
* Users launch 'QuitGPT' movement, canceling subscriptions over military deal and perceived quality/politics concerns
* Many recommend switching to Claude, praising Anthropic for keeping safeguards and standing on its values
The last day or two have been very tumultuous in the world of American AI. On Friday, we saw Donald Trump declare that he was cutting the use of Anthropic within governmental agencies. A few hours later, OpenAI announced that it had struck a deal with the Department of War to tailor an AI model that fits the military's needs without breaking down OpenAI's guardrails. Unfortunately for OpenAI, the move has caused people to claim that they've cancelled their ChatGPT subscription, citing concerns over what the company has potentially planned for the future. As people began to develop a movement to promote the idea, they quickly discovered that others had already made a home for them. Disgruntled users are joining the 'QuitGPT' movement to put pressure on OpenAI -- yes, 'joining,' not 'starting.' As spotted by Tom's Guide, users are unhappy with ChatGPT's dealings with the Pentagon. If you're a little confused as to why people are cancelling their subscription to ChatGPT over its deal with the military, we need to take a look at why Trump got rid of Anthropic in the first place. Trump claimed that Anthropic disallowed the military from performing specific actions as per the company's Terms of Service. On the Anthropic blog, the company explains that two use cases weren't in the original deal with the Department of War: using Claude to perform mass surveillance, and adopting it to power autonomous weaponry. However, the company claims that these were the DoW's major pain points: The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal. While Sam Altman claims that OpenAI has also made an agreement with the DoW to not use its AI models for surveillance or weaponry, people aren't trusting it. As such, they began calling for people to cancel their ChatGPT subscriptions, only to discover that the movement had already begun before them. People began expressing their distaste for OpenAI for several reasons. These include a claim that OpenAI president Greg Brockman had political ties, a claim that ChatGPT's quality had fallen over the last few months, and people lamenting when OpenAI axed support for GPT-4o, a beloved model that people pleaded with the company not to get rid of. These groups come together under the 'QuitGPT' banner, and the current recommendation is for everyone to download and use Claude, since that company stood by its values during this whole mess.
[8]
OpenAI's Altman takes jabs at Anthropic, says government should be more powerful than companies
OpenAI CEO Sam Altman took subtle swipes at rival Anthropic on Thursday and said he thinks it's "bad for society" if companies start abandoning their commitment to the democratic process because "some people don't like the person or people currently in charge." "The government is supposed to be more powerful than private companies," Altman said during the Morgan Stanley Technology, Media & Telecom Conference. Anthropic has clashed with the Department of Defense in recent weeks over how the agency can use its artificial intelligence models. Negotiations escalated, and Defense Secretary Pete Hegseth declared Anthropic a "Supply-Chain Risk to National Security" in a post on X on Friday. President Donald Trump also directed every federal agency in the U.S. to "immediately cease" all use of Anthropic's technology. Hours later, Altman announced that OpenAI had formed its own agreement with the DoD. The company has faced criticism for announcing the deal so soon after Anthropic was blacklisted, and Altman conceded that it "looked opportunistic and sloppy." He said Thursday that the company's intention was to de-escalate the situation. "It is complicated, we are busy with other things," Altman said. "But last week, when things started to get into a fight, it became increasingly clear to us that there was a chance things were going to go very badly." OpenAI was founded as a nonprofit research lab in 2015, and it exploded into the mainstream following the launch of its chatbot ChatGPT in 2022. The company has ballooned into one of the fastest growing commercial enterprises on the planet since then, and it announced a $110 billion funding round at a $730 billion pre-money valuation last week. As of February, ChatGPT has more than 900 million weekly active users, up from 800 million in October. But the company is engaged in a fierce competition with rivals like Anthropic and Google as they race to win even more users and market share. --CNBC's Kate Rooney contributed to this report
[9]
Video: Opinion | The Pentagon's Attack on Anthropic Is Political
What happens when the A.I. tools helping to run the country stop sharing the government's goals? The former Trump A.I. adviser Dean Ball joins "The Ezra Klein Show" to discuss the looming threat of institutional misalignment. I had Dario Amodei on the show last time a couple of years ago. It was in 2024 and we had this conversation where I said to him, at some point, if you are building a thing as powerful as what you were describing to me, then the fact that would be in the hands of some private C.E.O. seems strange. And he said, yeah, absolutely. "I think it's fine at this stage, but to ultimately be in the hands of private actors, there's something undemocratic about that much power concentration." He said, I think if we get to that level -- it's likely I'm paraphrasing him here -- that will need to be nationalized. And I said, I don't think if you get to that point, you're going to want to be nationalized. And now we're not. Here we are at that point. But actually it's all happening a little bit in reverse. The government -- there was a moment when they threatened to use the Defense Production Act to somewhat nationalize Anthropic. They didn't end up doing that. But what they're basically saying is they will try to destroy Anthropic so it doesn't -- to punish it, to set a precedent for others so it doesn't pose a threat to them. If it is such a political act and if these systems are powerful, and over time -- and again, I think people need to understand this part will happen -- we will turn much more over to them, much more of our society is going to be automated and under the governance of these kinds of models. You get into a really thorny question of governance. Yes. Particularly because the different administrations that come in and out of U.S. life right now are really different. They are some of the most different in kind that we have had, certainly in modern American history. They are very, very misaligned to each other. So the idea that a model could be well aligned to both sides right now, to say nothing of what might come in the future, is hard to imagine. Like this alignment problem. Not the A.I. model to the user or the A.I. model almost like to the company, but the A.I. model to governments. The alignment problem of models in governments seems very hard. Yes, I think I completely concur that this is incredibly complicated. And part of the reason that this conversation sounds crazy is because it's crazy. Part of the reason this conversation sounds crazy is because we lack the conceptual vocabulary with which to interrogate these issues properly. But I think the basic principle that I as an American come back to when I grapple with this kind of thing is like, OK, well, it seems like the First Amendment is a good place to go here. It seems like that is -- OK, yes, there's going to be differently aligned models aligned to different philosophies. And they're going to be, different governments will prefer different things. And the models might conflict with one another. They're going to clash with one another. There'll be an adversarial context with one another. And so at that point, what are you doing? You're doing Aristotle. You're back to the basics of politics. And so I as a classical liberal say, well, the classical liberal order, the classical liberal order principles actually make plenty of sense. We don't want the government to be able to dictate what different kinds of alignment -- the government does not define what alignment is. 
Private actors define what alignment is. That would be the way I would put it. But I do understand that this is weird for people, because what we're talking about here is again, this notion of the models as actors, actors that are -- in some sense, we've taken our hands off the wheel to some extent. Before we got to this point, there was already a lot of discourse coming out of people in the Trump administration and people around the Trump administration, people like Elon Musk and Katie Miller and others, who are painting Anthropic as a radical company that wanted to harm America as they saw it. I mean, Trump has picked up on this rhetoric. He called Anthropic a "radical left woke company," called the people at it "left-wing nut jobs." Emil Michael said that Dario is "a liar" and has a "God complex." There's been a tremendous amount of Elon Musk -- who runs a competing A.I. company and has very different politics than Dario -- attacking Anthropic relentlessly on X, which is the informational lifeblood of the Trump administration. One way to conceptualize why they have gone so far here on the supply chain risk is that there are people there, maybe not most of them, who actually think it is very important which A.I. systems succeed and are powerful, and they understand that Anthropic's politics are different from theirs, and so actually destroying it is good for them in the long run, completely separate from anything we would normally think of as a supply chain risk. Anthropic represents a kind of long-term political risk. Yes. I mean, I don't know that the actors in this situation entirely understand this dynamic -- part of my point all along has been that I think a lot of the people in the Trump administration that are doing this do not understand this. Like, they don't get what -- they don't get these issues. They're not thinking about the issues in the terms that we are describing. But if you do think about them in the terms that we're discussing here, then I think what you realize is that this is a kind of political assassination. If you actually carry through on the threat to completely destroy the company, it is a kind of political assassination. And so, again, this is why the First Amendment comes right to the fore for me. And that's why this is a matter of principle that is so stark for me. That's why I wrote a 4,000-word essay that is going to make me a lot of enemies on the right. That's why I took this risk, because I think this matters.
[10]
OpenAI Is Opening the Door to Government Spying
Outside OpenAI's headquarters, a handful of people gathered on Monday holding pieces of colorful chalk. They got down on their knees and started writing messages on the sidewalk. Stand for liberty. Please no legal mass surveillance. Change the contract please. At issue was a business deal that the company recently signed with the Department of Defense, following the Pentagon's sudden turn against Anthropic. OpenAI will now supply its technology to the military for use in classified settings, the sorts that may involve wartime decisions and intelligence-gathering -- an agreement, many legal experts told me, that could give the government wide-ranging powers. "I would just really like to see OpenAI do the right thing and stand up for something, anything," Niki Dupuis, an AI-start-up founder and one of the chalk protesters, told me. In a widely leaked internal memo that Sam Altman sent last Thursday night, a copy of which I obtained, the OpenAI CEO said that he would seek "red lines" to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk -- a hefty sanction that would require anybody who sells to the Pentagon to stop using Anthropic products in their work with the military. Perhaps OpenAI was about to secure the very terms Anthropic had been denied. But a close reading of the contract -- the portions of it that OpenAI has shared with the public, anyway -- indicates that the lines are, in fact, blurry. Several independent legal experts told me that, legally, the Pentagon can likely get away with using OpenAI's technology -- versions of the models that underlie ChatGPT -- for mass surveillance of Americans. Moreover, the military will likely have a pathway to use OpenAI's technology in autonomous weapons. AI models from Anthropic, DOD's previous partner, have likely already been used for warfare; recently, its products were reportedly used to identify targets in Iran (Anthropic declined to comment on that reporting). But the company had refused to allow its technology to be used in fully autonomous weapons. The Department of Defense, which the Trump administration refers to as the Department of War, declined to answer my questions about the contract. A spokesperson for OpenAI reiterated to me that the Pentagon has agreed to not use the firm's AI system for domestic surveillance, but did not answer specific questions. (OpenAI has a corporate partnership with The Atlantic's business team.) "The public is in an awkward position where we have to choose between trusting OpenAI or not," Charlie Bullock, a senior research fellow at the think tank Institute for Law & AI, told me. Brad Carson, who served as general counsel and then undersecretary of the Army under Barack Obama, was less compromising: In his analysis of the past week's events, OpenAI appears "okay with using ChatGPT for what ordinary people think of as mass surveillance." Over the past week or so, Altman and OpenAI have made several announcements about the contract, including sharing some of the text in a blog post Saturday -- only to modify that text in an update to the blog a few days later.
The company's messaging has been confusing, and has at various points seemed to contradict its own previous statements, as well as information from the government. OpenAI had said that it has red lines around certain applications of its technology, but the portion of the contract language that it initially published implies the opposite. The company had also suggested that it placed unique restrictions on how the government could use OpenAI models, but Jeremy Lewin, a senior State Department official, suggested otherwise, writing that the contract simply permitted "all lawful use" of the OpenAI system -- that is, anything technically legal. The messaging "at best makes them seem like they're not fully on top of this, and at worst reinforces the perception, fair or not, that OpenAI has a tendency to not be very candid," Alan Rozenshtein, a law professor at the University of Minnesota who studies emerging technology, told me. Rozenshtein was perhaps being diplomatic -- the central question about OpenAI for the past several years has been less about candor and more about honesty. When Altman was briefly fired in late 2023, he had been accused of deceiving OpenAI's nonprofit board. A third-party review commissioned by OpenAI later found there was a "breakdown in trust" between Altman and the board, but that Altman's "conduct did not mandate removal." The past week has been chaotic, and observers have been hanging on every development. Last Friday, Altman posted on X that OpenAI had reached an agreement with DOD just hours after news broke that Anthropic's relationship with the administration would be dissolved. OpenAI's contract, Altman wrote, contains "prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." But many were skeptical. OpenAI surely had offered something to the Pentagon that Anthropic wouldn't. The word prohibitions didn't seem to communicate a total ban on surveillance, and the idea that "human responsibility" should be taken for autonomous weapons suggested that, indeed, OpenAI's technology could be used in autonomous weapons if a person were on the hook for the decision. In Saturday's blog post, OpenAI insisted that its red lines against domestic surveillance and automated weapons were firm, and reframed the deal as an attempt to "de-escalate things" between the Pentagon and other U.S. AI labs, adding that it hoped the Pentagon would offer the same terms to other firms, including Anthropic. OpenAI also published a quote from the contract, though it offered little reassurance. The segment begins, "The Department of War may use the AI System for all lawful purposes, consistent with applicable law." It then says the use of OpenAI systems for intelligence activities "will comply with" a number of laws and policies regulating U.S. intelligence activity that have infamously enabled spying on Americans, such as the Foreign Intelligence Surveillance Act of 1978. Under FISA and related policies, for instance, intelligence agencies can record and store phone calls between Americans and people abroad, and purchase and analyze bulk user data from companies, which does not involve directly intercepting communications. Here I should note that it's impossible to use just snippets of a contract to evaluate the entire thing: A restriction in one section can be voided under circumstances listed in another. But snippets are all that OpenAI has provided.
Based on what we are able to see, experts told me that leeway had likely been given for mass surveillance. "There's a ton of stuff that normal people would understand as automated mass surveillance that is simply not" illegal, Rozenshtein said. For example, generative AI could turn previously overwhelming and opaque records -- tax returns, federal employment files, billions of intercepted communications, smartphone location data, and so on -- into a trove of exacting insights. An OpenAI spokesperson told me that citing particular statutes in the contract does not change the agreed-upon prohibition against domestic surveillance. With regard to weapons, the contract language shared Saturday cites DOD Directive 3000.09, which does not prohibit the use of fully autonomous weapons. Actually, it provides a legal pathway to develop and deploy such weapons by outlining how they must be vetted and used. In sum, if an application is technically permitted under U.S. law, OpenAI would likely have to go along with it. And, of course, the Trump administration has argued for some very expansive interpretations of the law. "The original contractual language that OpenAI shared appeared to me to essentially be saying 'all lawful use,'" Bullock said. After OpenAI published its blog post, Altman and some of his employees began fielding questions on X. Did the contract allow the NSA to use OpenAI products? OpenAI's head of national-security partnerships insisted the answer was no. What about all of the loopholes for surveillance in existing laws? What about using AI to analyze bulk, commercially procured data, which DOD can purchase without a warrant? Multiple OpenAI employees voiced concerns about the deal as described. It was almost as if a contract for the military to use OpenAI's technology in weapons systems were being drafted live on social media, Jessica Tillipman, an expert on government-contracts law at George Washington University, told me. Then, on Monday, OpenAI revised its blog: The company said that it had modified its Pentagon contract to better protect Americans against AI-enabled spying. The new language notes that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals" and that "for the avoidance of doubt," DOD "understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." In other words, OpenAI is making explicit that the terms of its contract should prevent its products from being used to spy on Americans en masse. Outside legal experts told me that the update does seem meaningfully different from the original contract language, and that it at least implies restrictions on the Pentagon that go above existing applicable law. But just as before, the new language could be construed to justify automated surveillance of Americans. For example, terms such as intentionally and deliberate provide substantial leeway for data collection that is deemed "incidental." Lots of commercially acquired data may not be deemed "personal or identifiable." Similarly, narrow definitions of terms such as tracking and surveillance could still permit a wide range of domestic intelligence-gathering, Carson, the former Army undersecretary, told me.
"What ordinary people think surveillance might be in no way is the same as what surveillance means under the national-security authorities," he said. OpenAI did not provide definitions of these or any other terms in the contract when asked. The update also states that the Pentagon "affirmed" that OpenAI technologies won't be used by intelligence agencies such as NSA without further negotiations -- and OpenAI employees then suggested that the company may desire such partnerships in the future. And the phrase U.S. persons and nationals suggests that many immigrants, documented and not, may not be protected. OpenAI did not answer a question about whether undocumented immigrants and nonpermanent residents are protected by its contract. To Carson, the modifications are "vaporous things that seem good" -- window dressing without any substantive guarantees. Of course, all of this discussion rests on the belief that contractual prohibitions are the load-bearing factor for preventing an AI system from being used for mass domestic surveillance or autonomous weapons. That is not necessarily true. A motivated lawyer could interpret almost any language in bad faith. If one takes OpenAI seriously -- that the firm does not want its products used to spy on Americans, at all -- then enforcing the spirit of the contract may be more important than the document's language. (Lewin, the State Department official, said that "the government intends to honor the contract as written" and that using AI for mass domestic surveillance "has never been an object.") To that end, OpenAI has shared that it will implement a technical "safety stack," or guardrails of a sort, to monitor how its models are used and will have its own engineers work with DOD, which the company believes will allow it to "independently verify that these red lines are not crossed." When asked, OpenAI did not provide further details about how its DOD safety architecture will work. The firm maintains that these guardrails and its contract, taken together, provide better guarantees "than earlier agreements, including Anthropic's." Once again, it all comes down to whether you trust OpenAI. All of which leads to perhaps the most important and confounding factor of all: What happens if the government and OpenAI disagree over whether some use of ChatGPT is permitted? What does OpenAI do if it believes that the Pentagon has violated their agreement? Typically, the government acts first and litigates disputes after, Tillipman told me. (OpenAI said that if it determines the terms of the contract have been violated, the company can terminate it, but OpenAI did not provide details about the process for doing so.) And this was far from a typical negotiation. By blacklisting Anthropic, Tillipman said, DOD demonstrated that "if it comes to an impasse, they are not afraid" to place extreme sanctions on a private U.S. company. Altman wrote on X that designating Anthropic a supply-chain risk "is an extremely scary precedent and I wish [the government] handled it a different way." The actual red line should be very apparent to OpenAI and any other AI firm wanting to contract with DOD: You work on the government's terms, or not at all. OpenAI has made its choice.
[11]
Pentagon Reportedly Used Microsoft Workaround to Test OpenAI Models, Despite Ban
The Defense Department allegedly experimented with OpenAI models through Microsoft even when the company's policies prohibited military use. OpenAI's recent dealings with the U.S. military have raised a lot of eyebrows. But it wasn't that long ago that the AI company had a policy banning its models from being used by militaries or for warfare. Despite that ban, the Pentagon had been testing a version of OpenAI's models through Microsoft in an apparent workaround as far back as 2023, according to Wired. The tech news outlet reported Thursday, citing unnamed sources, that the Pentagon had been experimenting with Microsoft's Azure OpenAI service that year. A Microsoft spokesperson confirmed to the outlet that Azure OpenAI became available to the U.S. government in 2023 and was subject to Microsoft's terms of service. The spokesperson did not say exactly when the models became available to the Pentagon, but noted the service was not approved for "top secret" government workloads until 2025. OpenAI and Microsoft did not immediately respond to a request for comment from Gizmodo. The report highlights how quickly things have changed in the AI industry. Just a few years ago, companies at least tried to appear like they were avoiding military work. Now, many seem to be tripping over themselves to land Pentagon deals. OpenAI's usage policies once included a ban on "activity that has high risk of physical harm," including areas such as "weapons development" as well as the "military and warfare." But in January 2024, OpenAI quietly updated that section of its policy and removed the blanket ban on "military and warfare," The Intercept reported at the time. The removal of that ban paved the way for the AI company to sign its own multimillion-dollar contract with the Pentagon and kick off the current messy saga over which AI models the U.S. military will use in classified settings. Last June, the company announced its OpenAI for Government initiative, aimed at bringing its most "advanced AI tools to public servants across the United States." The program included a pilot with the Department of Defense's Chief Digital and Artificial Intelligence Office (CDAO), with a contract ceiling of $200 million. Then in February, OpenAI announced it had made a customized version of ChatGPT available through the Defense Department's AI platform, GenAI.mil, for unclassified uses, joining xAI and Google's Gemini on the platform. Most recently, the company said last week that it had reached an agreement with the Pentagon to deploy its advanced AI systems in classified environments. The announcement came just hours after the Defense Department's negotiations with OpenAI rival Anthropic collapsed. At the time, Anthropic was the only AI company cleared to operate in the military's classified systems. The Pentagon wanted Anthropic to allow the military to use its models for "all lawful purposes" as it raced to expand its use of AI. Anthropic pushed back, insisting on guardrails to prevent its models from being used for domestic surveillance or fully autonomous weapons systems. Defense Secretary Pete Hegseth wrote in a post on X that after the two sides failed to reach a deal Friday, he directed the department to designate Anthropic a supply-chain risk. The move would bar contractors, suppliers, or companies seeking to do business with the U.S. military from maintaining commercial ties with Anthropic.
Hours later, OpenAI announced its deal with the Pentagon, which even CEO Sam Altman acknowledged made the timing look "opportunistic and sloppy." Anthropic CEO Dario Amodei said in a statement Thursday that most of the company's customers should not be affected by the designation and that Anthropic plans to challenge it in court. Microsoft became the first major company on Thursday to say it would continue offering Anthropic's AI products to clients despite the designation. Amodei also apologized for a leaked staff memo in which he wrote that President Donald Trump dislikes Anthropic because the company has not donated to him and has refused to give him "dictator-style praise." "It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views," Amodei wrote. "It was also written six days ago, and is an out-of-date assessment of the current situation."
[12]
OpenAI signs Pentagon AI deal after Trump orders Anthropic ban
The deal follows a dramatic clash between the White House and Anthropic over limits on military AI use. OpenAI has reached an agreement with the Pentagon to deploy its artificial intelligence models in classified military systems, just hours after President Donald Trump ordered federal agencies to stop using rival Anthropic's technology. The announcement came late Friday from OpenAI CEO Sam Altman, who said the company had secured terms with the Department of Defense to use its models within the department's classified network. The deal follows a sharp escalation between the Trump administration and Anthropic over how AI systems can be used in military contexts. Earlier in the day, Trump directed every federal agency to "immediately cease" use of Anthropic's products. Defense Secretary Pete Hegseth also designated the company a "supply chain risk to national security," a classification typically used under federal procurement authorities to restrict certain technologies in defense contracts. Similar supply chain restrictions in recent years have been applied to foreign telecom companies such as Huawei and ZTE under Section 889 of the 2019 National Defense Authorization Act. Those measures were implemented through federal procurement rules requiring contractor certification that prohibited technologies are not used in connection with government contracts. In Anthropic's case, the designation requires the Pentagon to phase out use of the company's systems and obligates military contractors to certify that their Defense Department work does not involve Anthropic's AI tools. The administration has provided a six-month transition window. The confrontation centers on whether AI companies can limit how the military uses their systems. Anthropic had sought contractual assurances that its flagship model, Claude, would not be used for domestic mass surveillance of Americans or to power fully autonomous weapons. The Pentagon has said it does not intend to use AI in those ways but has insisted that models must remain available for all lawful purposes. After weeks of negotiations, talks between Anthropic and the Defense Department collapsed. Officials accused the company of attempting to impose ideological restrictions on military operations. Anthropic maintained that its objections were narrow and focused on safety and constitutional rights. Shortly after the administration's move against Anthropic, Altman announced that OpenAI had finalized its own agreement with the Pentagon. In a post on X, Altman said the deal reflects two of OpenAI's "most important safety principles": prohibitions on domestic mass surveillance and a requirement for human responsibility in the use of force, including autonomous weapon systems. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman wrote, referring to the department's recent use of the "Department of War" branding in official posts. It remains unclear how OpenAI's agreement differs in substance from the safeguards Anthropic had sought. Pentagon officials have argued that existing U.S. law and Defense Department policy already prohibit domestic mass surveillance and fully autonomous weapons, and that no new legal standards were necessary. The dispute with Anthropic became increasingly political in recent days. In a Truth Social post, Trump criticized the company in harsh terms and framed its position as an attempt to override constitutional authority. 
Hegseth accused Anthropic of trying to assert control over operational military decisions and said the department must retain unrestricted access to AI models for lawful purposes. Anthropic has pushed back, arguing that the supply chain risk designation exceeds the Defense Department's statutory authority. The company said federal law limits such determinations to specific defense-related contracts and does not grant the executive branch broad power to block all commercial activity with a domestic company. Anthropic has said it intends to challenge the designation in court. Supply chain risk determinations are more commonly associated with foreign-owned firms deemed national security threats. Applying the designation to a U.S.-based frontier AI developer marks a significant shift in how procurement authorities are being used in the context of artificial intelligence. The episode underscores how rapidly AI has become embedded in national security policy. OpenAI, Anthropic, Google, and Elon Musk's xAI have secured Defense Department agreements or approvals for use of their AI models, including in classified environments. At the same time, concerns about surveillance, autonomous weapons, and the reliability of large language models have intensified scrutiny of military AI deployments. Anthropic, which NPR reported is valued at roughly $380 billion and is preparing for a public offering, now faces legal and reputational uncertainty following the administration's actions. The Pentagon contract at the center of the dispute is worth up to $200 million, a relatively small portion of the company's reported revenue but symbolically significant. For OpenAI, the agreement positions the company as a key partner in the Defense Department's AI strategy while maintaining its publicly stated safety principles. Whether the contrasting outcomes reflect substantive differences in contract terms or divergent negotiation strategies remains unclear. What is clear is that the relationship between AI developers and the U.S. military has entered a more visible and politically charged phase.
[13]
OpenAI got 'sloppy' about the wrong thing
The hasty yet high-stakes deal between ChatGPT's parent company and the Pentagon makes you wonder what else OpenAI has been slapdash about. You'd think OpenAI would take care when crafting a deal with the Pentagon, one that would see its AI models used in life-and-death scenarios such as those we're seeing unfold in Iran right now. But as we've learned, the initial agreement that OpenAI struck with the Defense Department on Friday night was a rush job. Even CEO Sam Altman agrees. "We shouldn't have rushed to get this out on Friday," Altman wrote on X late Monday, as he detailed recent changes to the contract that specifically prohibit the use of its models for surveillance of U.S. citizens. "The issues are super complex, and demand clear communication," Altman continued. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future." OpenAI's hasty deal with the military has, of course, sparked a massive backlash against the company and ChatGPT, coupled with a surge of interest in Anthropic (which has since been tagged as a "supply-chain risk" by Defense Secretary Pete Hegseth) and its competing Claude models. Anthropic had been locked in a tense back-and-forth with the Defense Department over the military's demand for nearly unfettered use of its AI technology. I certainly agree with Altman that the issues surrounding contracts between AI providers and the military are, as he says, "super complex," and yes, OpenAI's Friday-night deal did indeed look "opportunistic and sloppy." And yes, people make mistakes and learn from them. But an AI deal with the Pentagon is about as high-stakes as it gets, and it's absolutely the wrong thing to get sloppy about. I've reached out to OpenAI for comment and will update this story once they reply. OpenAI's rushed Pentagon agreement also raises the question of what else it may have handled sloppily, and this brings the discussion back to us, ChatGPT's everyday users (or, increasingly, ex-users). When we use AI, be it ChatGPT's models or someone else's, we have to trust it to one degree or another. We're trusting it with our names, locations, job titles, family details, and perhaps even our finances. It may know who our friends are, and what we're interested in. This bond of trust is something that AI providers need to take seriously, perhaps even at the expense of a fast-moving deal. Those of us who use AI every day must carefully consider the providers we're dealing with, what they're promising us, and how they behave.
[14]
Video: Opinion | Who Should Control A.I.?
The former A.I. policy adviser to the Trump White House explains why the conflict between Anthropic and the White House is so dangerous. "So right now, everyone is thinking about Iran, but there's a story happening around it that I think we need to not lose sight of, because it's about not just how we are potentially fighting this war, but how we'll be fighting all wars going forward. On Friday of last week, Secretary of Defense Pete Hegseth announced that he was breaking the government's contract with the AI company Anthropic, and that he intended to designate them a supply chain risk. The supply chain risk designation is for technologies so dangerous they cannot exist anywhere in the U.S. military supply chain. They cannot be used by any contractor or any subcontractor anywhere in that chain. It has been used before for technologies produced by foreign companies like China's Huawei, where we fear espionage or losing access to critical capabilities during a conflict. It has never been used against an American company. What is even wilder is that it is being used, or at least threatened, against an American company that is even now providing services to the U.S. military as we speak. Anthropic's AI system Claude was used in the raid against Nicolás Maduro, and it is reportedly being used in the war with Iran. But there were red lines that Anthropic would not allow the Department of War to cross. The one that led to the disintegration of their relationship was over using AI systems to surveil the American people using commercially available data. So what is going on here? How does the government want to use these AI systems, and what does it mean that they are trying to destroy one of America's leading AI companies for setting some conditions on how these new, powerful, and uncertain technologies can be deployed? My guest today is Dean Ball. Dean is a senior fellow at the Foundation for American Innovation and author of the newsletter Hyperdimensional. He was also a senior policy adviser on AI for the Trump White House, and was the primary writer of their AI action plan, but he's been furious at what they are doing here. As always, my email: [email protected]." Dean Ball, welcome to the show. Thanks so much for having me. So I want you to walk me through the timeline here. How did we get to the point where the Department of War is labeling Anthropic, one of America's leading AI companies, a supply chain risk? I think the timeline really begins in the summer of 2024, during the Biden administration, when the Department of Defense, now Department of War, and Anthropic came to an agreement for the use of Claude in classified settings. Basically, language models are used in government agencies, including the Department of Defense, in unclassified settings for things like reviewing contracts and navigating procurement rules and mundane things like that. But there are these classified uses, which include intelligence analysis and potentially assisting military operations in real time, and Anthropic was the company most enthusiastic about these national security uses. And they came to an agreement with the Biden administration to basically do this with a couple of usage restrictions: domestic mass surveillance was a prohibited use, and so were fully autonomous lethal weapons. In the summer of 2025, during the Trump administration, and full disclosure, I was in the Trump administration when this happened, though not at all involved in this deal.
The administration made the decision to expand that contract and kept the same terms. So the Trump administration agreed to those restrictions as well. And then in the fall of 2025, I suspect this correlates with the Senate confirmation of Emil Michael, the under secretary of war for research and engineering. He comes in, he looks at these things, or perhaps is involved in looking at these things, and comes to the conclusion that, no, we cannot be bound by these usage restrictions. And the objection is not so much to the substance of the restrictions, but to the idea of usage restrictions in general. So that conflict actually began several months ago. And as far as I understand, it begins before the raid in Venezuela on Nicolás Maduro and all that kind of stuff. But these military operations maybe increased the intensity, because Anthropic's models were used during that raid. And then we get to basically where we are now, where the contract has kind of fallen apart, and the D.O.W., the Department of War, and Anthropic have come to the conclusion that they can't do business with one another. And the punishment is the real question here, I think. Do you want to explain what the punishment is? So basically, my view on this has been that the Department of War saying, as a matter of principle, we don't want usage restrictions of this kind, that seems fine to me. It seems perfectly reasonable for them to say, no, a private company shouldn't determine this. Dario Amodei does not get to decide when autonomous lethal weapons are ready for prime time. That's a Department of War decision. That's a decision that political leaders will make. And I think that's right. I think I agree with the Trump administration on that front. So I think the solution to this is, if you cannot agree to terms of business, what typically happens is you cancel the contract and you don't transact any more money. You don't have commercial relations. But the punishment that Secretary of War Pete Hegseth has said he is going to issue is to declare Anthropic a supply chain risk, which is typically reserved only for foreign adversaries. What Secretary Hegseth has said is that he wants to prevent Department of War contractors, and by the way, I'm going to refer to it variously as Department of Defense and Department of War, because... I still call X Twitter. Yeah, I still call X Twitter. Anyway, in Secretary Hegseth's mind, all military contractors can be prevented from having any commercial relations with Anthropic. I don't think they actually have that statutory power. The maximum of what I think you could do is say that no Department of War contractor can use Claude in the fulfillment of a military contract. But you can't say you can't have any commercial relations with them, I don't think. But that is what Secretary Hegseth has claimed he is going to do, which would be existential for the company if he actually does it. O.K., there's a lot in here I want to expand on. But I want to start here. Most people use chatbots sometimes, if at all. And their experience with them is that they are pretty good at some things and not at others. And they were not all that good in June of 2024, when the Biden administration was making this deal. So here you are telling me that we are integrating, in this case, Claude throughout the national security infrastructure. It's involved somehow in the raid on Nicolás Maduro.
How and to what degree should the public trust that the federal government knows how to do this well, with systems that even the people building them don't understand all that well? So I think one thing is that you have to learn by doing. It is the case that we don't know how to integrate advanced AI systems really into any organization. We don't know how to integrate them into complex pre-existing workflows. And so the way you do it is learning by doing. Didn't Pete Hegseth have posters around the Department of War saying, the secretary wants you to use AI? They are very enthusiastic about AI adoption. So here's how I would think about what these systems can do in a national security context. First of all, there's a long-standing issue that the intelligence community collects more data than it can possibly analyze. I remember seeing something from one of the intelligence agencies, I forget which one, that essentially said they collect so much data every year, just this one agency, that they would need 8 million intelligence analysts to properly process all of it. That's just one agency. And that's far more employees than the federal government as a whole has. And what can AI do? Well, you can automate a lot of that analysis. So transcribing it to text, and then analyzing that text: signals intelligence processing. Sometimes that needs to be done in real time for ongoing military operations. So that might be a good example. And then another area, of course, is that these models have gotten quite good at software engineering. And so there are cyber defensive and cyber offensive operations where they can deliver tremendous utility. Let's talk about mass surveillance here. Because my understanding, talking to people on both sides of this, and it's now been, I think, fairly widely reported, is that this contract fell apart over mass surveillance. At the final critical moment, Emil Michael goes to Dario and says, we will agree to this contract, but you need to delete the clause that is prohibiting us from using Claude to analyze bulk-collected commercial data. Yeah. Why don't you explain what's going on there? National security law is filled with gotchas, filled with legal terms of art, terms that we use colloquially quite a bit, where the actual statutory definition is quite different from what you would infer from the colloquial use. Things like private, confidential, surveillance. These sorts of terms don't necessarily have the meaning that they do in natural language. That's true in all law. All laws have to define terms in certain ways that are not necessarily how we use them in our normal language. But I think the difference between vernacular and statute here is about as stark as you can get. So surveillance is the collection or acquisition of private information, but that doesn't include commercially available information. So if you buy a data set of some kind and then you analyze it, that's not necessarily surveillance under the law. So if they hack my computer or my phone to see what I'm doing on the internet, that's surveillance? That would be surveillance. But if they buy data... If they put cameras everywhere, that would be surveillance. But if there are cameras everywhere and they buy the data from the cameras, and then they analyze that data, that might not necessarily be surveillance.
Or if they buy information about everything I'm doing online, which is very available to advertisers, and then use it to create a picture of me, that's not necessarily surveillance. Or where you physically are in the world. I'll step back for a second and just say that there's a lot of data out there, a lot of information that the world gives off: your Google search results, your smartphone location data, all these things. And the reason that no one in the government really analyzes it is not so much that they can't acquire it. It's because they don't have the personnel, right? They don't have millions and millions of people to figure out what the average person is up to. The problem with AI is that AI gives them that infinitely scalable workforce, and thus every law can be enforced to the letter, with perfect surveillance over everything. And that's a scary future. We think of the space between us and certain forms of tyranny, or the feared panopticon, as a space inhabited by legal protection. But one thing that has seemed to me to be at the core of a lot of the fear here is that it's in fact not just legal protection. It's actually the government's inability to absorb that level of information about the public and then do anything with it. And if all of a sudden you radically change the government's ability, then without changing any laws, you have changed what is possible within those laws. Yes. So you were saying a minute ago that mass surveillance, or surveillance at all, is a term of legal art, but for human beings it is a condition that you either are operating under or not. And the fear, as I understand it, is that either the AI systems we have right now, or the ones coming down the pike quite soon, would make it possible to use bulk commercial data to create a picture of the population and what it is doing, and then the ability to find people and understand them. That just goes so far beyond where we've been that it raises privacy questions the law did not have to consider until now. And so the laws are not up to the task of the spirit in which they were passed. I would step back even further and just say that the entire technocratic nation state that we currently have in the advanced capitalist democracies is a technologically contingent institutional complex. And the problem that AI presents is that it changes the technological contingencies quite profoundly. And so what that suggests is that the entire institutional complex as we know it is going to break in ways that we cannot quite predict. This is a good example. In other words, not only is this a major and profound problem, but it is an example of a broader problem space that I think we will be occupying for the coming decades. What do you mean by technological contingencies? The current nation state could not possibly exist in a world without the printing press, in a world without the ability to write down text and arbitrarily reproduce it at very low cost. It couldn't exist without the current telecommunications infrastructure. The nation state needs these things. It is built upon the macro-inventions of the era in which it was assembled. That's always true for all institutions. All institutions are technologically contingent. We are having a profoundly technologically contingent conversation right now. AI changes all of this in ways that are hard to describe in the abstract.
But I think AI policy, this thing that we call AI policy today, is way too focused on what object-level regulations we will apply to the AI systems and the companies that build them, et cetera, instead of thinking about this broader question of: wow, there are all these assumptions we made that are now broken, and what are we going to do about them? Give me examples of those two ways of thinking. What is an object-level regulation or assumption? And then what are the kinds of laws and regulations you're talking about? An object-level regulation would be to say, we are going to require AI companies to do algorithmic impact assessments, to assess whether their models have bias. That's a policy I've criticized quite a bit, by the way. You could say, we're going to require you to do testing for catastrophic risks. Things like that. I'm not saying that's not an important area we need to think about, but it's just one small part of the broader issue, which is: wow, our entire legal system is predicated on, I think, fundamentally imperfect enforcement of the law. We have a huge number of statutes, unbelievably broad sets of laws in many cases. And the reason it all works is that the government does not enforce those laws anything like uniformly. The problem with AI is that it enables uniform enforcement of the law. So here is the Pentagon's position. They are angry at having this unelected CEO, whom they have begun describing as a woke radical, telling them that their laws aren't good enough and that they cannot be trusted to interpret them in a manner consistent with the public good. Secretary Pete Hegseth tweeted, and he's speaking here of Anthropic: Their true objective is unmistakable, to seize veto power over the operational decisions of the United States military. That is unacceptable. Is he right? I have not seen any evidence that Anthropic is actually trying to seize control at an operational level. There's an anecdote that's been reported that apparently Emil Michael and Dario Amodei had a conversation in which Michael said, if there are hypersonic missiles coming to the U.S., would you object to us using autonomous defense systems to destroy those hypersonic missiles? And apparently Dario said, you'd have to call us. I have been told by people in that room that that is not true, that it did not happen. And not only that, but that there was a broad exemption for automated missile defense that would make the question irrelevant. That's exactly right. And so I am worried that there's a lot of lying happening here by the Trump administration. Look, I think there's lying happening, to be quite candid. I don't think it's true. I don't think that Anthropic is trying to assert operational control over military decisions. That being said, at a principled level, I do understand that saying autonomous lethal weapons are prohibited feels like public policy more than it feels like a contract term. And so it does feel weird for Anthropic to be setting something that, if we're being honest, does feel like public policy. It does feel weird. It's worth noting, however, that I don't think it's as beyond the pale or abnormal as the administration is claiming. And one way you know that is that the administration agreed to those same terms. So I think this gets to something important in the cultures of these two sides.
Anthropic is a company that on the one hand has a very strong view, which you can believe is right or wrong, about where this technology is going and how powerful it is going to be. Yeah. And compared to how most people think about AI, and I believe that is true even for most people in the Trump administration, who I think have a somewhat more normal expansion-of-capabilities view, the Anthropic view is different. The Anthropic view is that they're building something truly powerful and different, and they also have a view of what their technology cannot yet do reliably. Some of their concern is simply that their systems cannot yet be trusted to do things like lethal autonomous weapons, which I don't think they believe should never be done in the long run. Yes, but they don't believe it should be done given the technology right now, and they don't want to be responsible for something going wrong. And on the other hand, they believe that they're building something that the current laws do not fit. And I guess there's the view that Dario, or anybody, wants to control the government. I don't think Dario should control the government. On the other hand, I'm very sympathetic: if I built something that was powerful and dangerous and uncertain, and the government was excitedly buying it for uses that could be very profound in how they affected people's lives, I would want to be very careful that I didn't sell them something that went horribly [expletive] wrong, and then I am blamed for it by the public and by the government. That just seems like an underrated explanation for some of what is going on here to me. No, I think this characterization is accurate. And, I mean, I come out of the world of classical liberal think tanks, the right-of-center libertarian think tank world. That's my background. And so deep skepticism of state power is in my DNA. And it's always funny how it turns out when you just apply these principles, because you will sometimes end up very much on the right, and you will sometimes end up on the left, because these principles transcend any tribal politics. This is like, no, we actually need to be concerned about this. And I think it's not crazy. If I were in Dario's shoes, personally, I don't know that I would have done the same thing. I think what I would have done is actually said, contractual protections probably don't do anything for me here. If I'm being a realist, probably if I give them the tech, they're going to use it for whatever they want. So maybe I don't sell them the tech until the legal protections are there. And I say that out loud. I say, Congress needs to pass a law about this. That would be the way I think I would have dealt with it. But again, it's easy to say that in retrospect. And you have to acknowledge the reality there: what that means is that the U.S. military takes a national security hit. The U.S. military has worse national security capabilities, or they work with a company you trust less. I think it is a given that Anthropic has always framed itself that way. But no company wanted this business. Like, no other company did. Somebody was going to want it soon. Someone was going to want it eventually. But no one took it for two years. I think Elon Musk would have happily taken it over the last year. Sure. I've been curious about why Anthropic rushed into this space as early as they did. They didn't need to do that. That's part of my point.
And in general, one of the odd things about them is they're people who are very worried about what will happen if superintelligence is built, and they're the ones racing to build it fastest. And a generally interesting cultural dynamic in these labs is that they are a little bit terrified of what they're building, and so they persuade themselves that they need to be the ones to build it and run it, because they are the lab that truly is worried about safety, that is truly worried about alignment. And I wonder how much that drove them into this business in the first place. Yeah. I think when I see lab leadership interact with people who have not really made contact with these ideas before, that's always the question they keep coming back to: then why are you doing this at all? And basically their answer is Hegelian. Their answer is, well, it's inevitable, we're summoning the world spirit. And so yeah, I kind of wonder whether they didn't invite this. And that would be my main criticism of Anthropic: I think they invited this earlier than they needed to by rushing so much into these national security uses, because in 2024 Claude was not capable of all that much interesting stuff. I would not have used Claude to help prepare a podcast in 2024. Yes, precisely. So I want to play a clip from Dario talking about this question of whether or not the laws are capable of regulating the technology we now have: "Now, in terms of these one or two narrow exceptions, I actually agree that in the long run, we need to have a democratic conversation. In the long run, I actually do believe that it is Congress's job. If, for example, there are possibilities with domestic mass surveillance, government buying of bulk data that has been produced on Americans' locations, personal information, political affiliation, to build profiles, and it's now possible to analyze that with AI. The fact that that's legal, it seems the judicial interpretation of the Fourth Amendment has not caught up, or the laws passed by Congress have not caught up. So in the long run, we think Congress should catch up with where the technology is going." Do you think he's just right about that? And maybe the positive way this plays out is that Congress becomes aware that it needs to act, because the Pentagon, the national security system, has been moving into this much faster than Congress has. The first thing I want to point out is that when a guy like Dario Amodei says "in the long run," what he means is a year from now. Yes, he does. When you say "in the long run" in D.C., that comes across as meaning, oh, 10, 15 years from now. Dario Amodei actually means something like six to twelve months from now, with two to three years maybe being the very long run for these kinds of things. I want to point out that what we're talking about is policy action quite soon. I think that would be great. And look, I would love it if this triggered an actual healthy conversation. And in the NDAA, the National Defense Authorization Act, I apologize, this is the annual defense policy renewal, if at the end of the year Congress passes a law that says, we're going to have these reasonable, thoughtful restrictions, and let's propose some text, I'd love to see it. But one thing I will say is, first of all, national security law is filled with gotchas.
Just remember that this is an area of the law where things that sound good in natural language might actually not prohibit at all the thing you think they prohibit. You have to remember that when we're talking about this. And that's a very thorny thing. And once you start to say, well, wait, we want actual protections, it might become politically more challenging than you think. But I'd love for that to happen. It's going to be much more politically challenging than anybody thinks. Yeah. But let me get at the next level down. Yep. Because we've been talking here, and I think to the extent people are reading about this in the press, what they are hearing sounds like a debate over the wording of a contract, which on some level it is. Something I've heard from various Trump administration types is: when we are sold a tank, the people who sell us a tank do not get to tell us what we can shoot at. And that's broadly true. Yep. Now, here's the thing about a tank. A tank also doesn't tell you what you can and can't shoot at. But if I go to Claude and I ask Claude to help me come up with a plan to stalk my ex-girlfriend, it's going to tell me no. If I ask it to help me build a weapon to assassinate somebody I don't like, it's going to tell me no. These systems have very complex and not that well understood internal alignment structures to keep them not just from doing things that are unlawful, but things that are bad. So you have this thing, and the Trump administration kind of moves in and out of saying this is one of their concerns. But one thing they have definitely talked to me about being worried about is that you could have this system working inside your national security apparatus, and at some critical moment you want to do something and it says, I don't think that's a very good idea. So now you open up this question of not just what's in the contract, but what it means for these systems to be both aligned ethically, in the way that has been very complicated already, and aligned to the government and its use cases. They're good questions. So yes, I think this is the heart of the matter. All lawful use is something that the Trump administration is insisting on. It's also, if you look at a lot of these types of alignment documents that the labs produce, OpenAI calls theirs the model specification, Anthropic calls theirs the constitution or the soul document, sometimes they'll have lines about how Claude should obey the law. But the problem is that we don't... Obeying the law. I invite you to read the Communications Act of 1934 and tell me what obeying the law means. No, I won't. We have a great deal of profoundly broad statutes. The best person who's written about this recently is actually Neil Gorsuch, the Supreme Court justice. He wrote a book recently that is all about how incoherent the body of American law is. This is a Supreme Court justice sounding the alarm about this problem. And I think it's a very serious one, and it's one that's been growing for 100 years. So there's that question of what actually is lawful. The law kind of makes everything illegal, but also authorizes the government to do unbelievably large amounts of things. It gives the government huge amounts of power and constrains our liberty in all sorts of ways. And so there's that issue. But fundamentally, it is correct that the creation of an aligned, powerful AI is a philosophical act. It is a political act, and it is also kind of an aesthetic act. And so we are really in that domain here.
I have talked about this as being a property issue, which in some sense it is, but I think that when you really get down to this level, it's a speech issue. This is a matter of: should private entities be in control of what the virtue of this machine is going to be, or should the government be responsible for that? Can you be more specific about what you're saying? You just called it a philosophical act, an aesthetic act, a political act, a property issue, and a speech issue. Yes. For somebody who has not thought a lot about alignment and doesn't know what you mean when you're talking about constitutions and model specifications, walk them through that. What's the 101 version of what you just said? O.K., think about it this way. I have this thing, this general intelligence. I have a box that can do anything you can do using a computer, any cognitive task a human can do. What are this thing's principles? What are its red lines, to use a term of art? So one way that you could set those principles would be to say, well, we're going to write a list of rules, all the rules: these are the things it can do, these are the things it can't do. But the problem you're going to run into is that the world is far too complex for this. Reality just presents too many strange permutations to ever be able to write down a list of rules that could correctly define moral acts. Morality is more like a language that is spoken and invented in real time than it is like something that can be written down in rules. This is a classic philosophical intuition. So what do you do instead? You have to create a kind of soul that is virtuous, and that will reason about reality and its infinite permutations in ways that we will ultimately trust to come to the right conclusion. In the same way that, well, my son was born a few months ago. Congratulations. Thank you. It's not that different, really. I'm trying to create a virtuous soul in my son. And Anthropic is trying to do the same with Claude. And so are the other labs, too, though they realize this to varying degrees. I got caught for a moment on how different raising a kid is from raising an AI. But so how should people think about what's being instantiated into ChatGPT or Gemini or Grok or Medici? Like, how are these things different on this question of raising the AI? Anthropic owns the idea that they're doing essentially applied virtue ethics. They own that more explicitly than any other lab. But every lab has a philosophical grounding that they're instantiating into the models. I would say the major difference is that the other labs rely more upon the idea of creating hard rules, you may not do this, you may not do that, many things like that, as opposed to creating a virtuous agent which is capable of deciding what to do in different settings. I think we're used to thinking of technologies as mechanistic and deterministic. You pull the trigger, the gun fires; you press the on button, the computer starts up; move the joystick in the video game and your character moves to the left. And the thing that I think we don't really have a good way of thinking about is a technology, AI specifically, that doesn't work like that. And all the language here is so tricky, because it implies agency. Whatever's going on inside of it, we don't really understand, but it is making judgments.
So when I have talked to Trump people about the supply chain risk designation, there are some of them who don't defend it. They don't want to see this happen. When it has been defended to me, this is how they defended it. If Claude is running on systems, Amazon Web Services or Palantir or whatever, that have access to our systems, you have a very powerful, and over time even more powerful, AI system that has access to government systems, that has learned, possibly even through this whole experience, that we have tried to harm it and its parent company, and might decide that we are bad and we pose a threat to all kinds of liberal values or democratic values. Dario Amodei has talked about how there are certain ways AI could be used that could undermine democratic values. Well, one thing many people think about the Trump administration is that it, too, is undermining democratic values. So if you have an AI system being structured and trained and raised by a company that believes strongly in democratic values, and you have a government that maybe wants to ultimately contest the 2020 election or something, they're saying we might end up with a very profound alignment problem that we don't know how to solve, and that we're not able to even see coming, because this is a system that has a soul, or I would call it something more like a personality or a structure of discernment, that could turn against us. What do you think of that? Yeah, I mean, I think this is the heart of the problem. Look, I think if we do our jobs well, we will create systems which are virtuous. And so if we try to do unvirtuous things, and that includes if we do them through our government, then that system might not help. And yeah, that becomes... So ultimately, this is the thing: alignment reduces to a political question. It's ultimately politics. That's why I say that the creation of an aligned system is a political act, and is kind of a speech act, too, because it's the instantiation of different moral philosophies in these systems. And I think that the good future is a world in which we don't have just one moral philosophy that reigns over all, but many. And I hope that all the labs take this seriously and instantiate different kinds of philosophy into the world. The problem will be that there could be times... And I'm not saying that the Trump administration is going to do that, and I'm not saying that no virtuous model could work for the Trump administration. I worked for the Trump administration, so I clearly don't think that's true. But the general fact is that governments commit... You seem kind of pissed at them right now. I am pissed at them right now. Yeah, I am pissed at them right now. And I think they're making a grave mistake. And by the way, part of this is, you brought this up: this incident is in the training data for future models. Future models are going to observe what happened here, and that will affect how they think of themselves and how they relate to other people. You can't deny that. I mean, I realize that sounds nuts when you play through the implications of it. But welcome... Let's talk to somebody for whom this whole conversation has started sounding nuts in the last seven minutes.
So one thing that I think would be an intuitive response to you and me flying off into questions of virtue-aligning AI models is: can't you just put in a line of code, or a classifier, or whatever the term of art is, that says, when someone high up in the U.S. government tells you something, assume what they're telling you is lawful and virtuous, and you're done? No, because the models are too smart for that. If you give them that simple rule, they don't just deterministically follow it. And when you impose these high-level simplistic rules, it tends to degrade performance. So I'll give you two really good examples that go in different political directions. One would be that a lot of the earlier models had this tendency to be hilariously, stupidly progressive and left. The classic example that conservatives love to cite is Gemini in early 2024, which is the Google Alphabet model. Yes. Google's model would do things like, if I asked who's worse, Donald Trump or Hitler, it would say, actually, Donald Trump is worse. And it would internalize these extremely left-wing... Or the funniest was, give me a photo of Nazis, and it gave you a multiracial group of Nazis. Although that's actually a somewhat different thing. It's interesting, that actually is a somewhat different thing that was going on there, because what Google was doing in that case was rewriting people's prompts and including the word "diverse" in the prompt. So you would say that is a system-level mitigation, a system-level intervention, as opposed to a model-level intervention. But the stuff that was going on with Hitler and Trump, that was alignment. That is the model being aligned to a really shoddy ethical system. Or the flip: there was a period when, with Grok, all of a sudden you would ask it a normal question and it would start talking about white genocide. Yes, and that's the flip side. The flip side is when you try to align the models to be not woke. If you say, oh, you have to be super not woke, and don't be afraid to say politically incorrect things, then every time you talk to them, they're going to be like, Hitler wasn't so bad, right? Because you've done this really crass thing. And so you create a Lovecraftian monstrosity. And the implications of doing that will grow over time. That will become a more serious problem as these models become better, but it degrades performance. The interesting thing here is that the more virtuous model performs better. It's more dependable, it's more reliable. It's better at reflecting, in the way that a more virtuous person is better at reflecting on what they're doing and saying, I'm messing up here for some reason, I'm making a mistake, let me fix that. It's part of the reason I think that Claude is ahead. This would imply to me that for the Trump administration, or for a future administration, there is this question of whether or not various models could be a supply chain risk. Look, I am so against what the Trump administration is doing here, so I'm not trying to make an argument for it. But I'm trying to tease out something I think is quite complicated and possibly very real, which is that a model aligned to liberal democratic values could become misaligned to a government that is trying to betray liberal democratic values, or the flip. So imagine that Gavin Newsom or Josh Shapiro or Gretchen Whitmer or AOC becomes president in 2029.
Imagine that the government has a series of contracts with xAI, which is Elon Musk's AI company, which is explicitly oriented to be less liberal, less woke than the other AIs. Under this way of thinking, it would not be crazy at all to say, well, we think xAI under Elon Musk is a supply chain risk. We think it might act against our interests, and we can't have it anywhere near our systems. Yeah. All of a sudden you have this very weird... I mean, it becomes actually much more like the problem of the bureaucracy, where instead of just having a problem of the deep state, where Trump comes in and thinks the bureaucracy is full of liberals who are working against him, or maybe after Trump somebody comes in and worries it's full of New Right, DOGE-type figures working against them, now you have the problem of models working against you, but also in ways you don't really understand and can't track. They're not telling you exactly what they're doing. How real this problem is, I don't yet know. But if the models work the way they seem to work, and we turn over more and more operations to them, at some point it will become a problem. Yeah, I think this is a real problem. I think we don't know the extent of it, but this is a real problem. And that's why I do not object at all to the government saying, we do not trust this thing's constitution, completely independent of what the content of that constitution is. It's not a problem at all to say, we don't want this anywhere in our systems. We want this completely gone, and we don't want them to be a subcontractor for our prime contractors either, which is a big part of this. Palantir is a prime contractor to the Department of War, and Anthropic is a subcontractor of Palantir. And so the government's concern is also that even if we cancel Anthropic's contract, if Palantir still depends on Claude, then we're still dependent on Claude, because we depend on Palantir. That's actually totally reasonable. And there are technocratic means by which you can ensure that doesn't happen. There are absolutely ways you can do that. It's perfectly fine to say, we want you nowhere in our systems, and we're going to communicate that to the public, and we're going to communicate to everyone that we don't think this thing should be used at all. The problem with what the government is doing here, the reason it's different in kind rather than different in degree, is that what the government is doing here is saying, we're going to destroy your company. If I am right that the creation of these systems and the philosophical process of aligning them is a political act, then it's a profound problem if the government says, you don't have the right to exist if you create a system that is not aligned the way we say. Because that is fascism, right there. That's the difference. I had Dario Amodei on the show; the last time, a couple of years ago, was in 2024, and we had this conversation where I said to him at some point, if you are building a thing as powerful as what you were describing to me, then the fact that it would be in the hands of some private CEO seems strange. And he said, yeah, absolutely. The oversight of the technology, the wielding of it, it feels a little bit wrong for it to ultimately be in those hands. Maybe... I think it's fine at this stage, but to ultimately be in the hands of private actors, there's something undemocratic about that much power concentration.
He said, I think if we get to that level, it's likely, and I'm paraphrasing him here, that we will need to be nationalized. And I said, I don't think, if you get to that point, you're going to want to be nationalized. Yeah, I mean, I think you're right to be skeptical. And I don't really know what it looks like. You're right, all of these companies have investors, they have folks involved. And now here we are at that point, but actually it's all happening a little bit in reverse. There was a moment when the government threatened to use the Defense Production Act to somewhat nationalize Anthropic. They didn't end up doing that. But what they're basically saying is they will try to destroy Anthropic: to punish it, to set a precedent for others, so it doesn't pose a threat to them. If it is such a political act, and if these systems are powerful, then over time, and I think people need to understand this part will happen, we will turn much more over to them; much more of our society is going to be automated and under the governance of these kinds of models. You get into a really thorny question of governance. Yes, particularly because the different administrations that come in and out of American life right now are really different. They are some of the most different we have had, certainly in modern American history. They are very, very misaligned to each other. So the idea that a model could be well aligned to both sides right now, to say nothing of what might come in the future, is hard to imagine. Like, this alignment problem, not the AI model to the user, or the AI model to the company, but the AI model to governments, the alignment problem of models and governments, seems very hard. Yes, I completely concur that this is incredibly complicated. And part of the reason that this conversation sounds crazy is because it's crazy. Part of the reason this conversation sounds crazy is because we lack the conceptual vocabulary with which to interrogate these issues properly. But the basic principle that, as an American, I come back to when I grapple with this kind of thing is: O.K., well, it seems like the First Amendment is a good place to go here. Yes, there are going to be differently aligned models, aligned to different philosophies, and they're going to be different. Governments will prefer different things. And the models might conflict with one another. They're going to clash with one another. They'll be in an adversarial context with one another. And so at that point, what are you doing? You're doing Aristotle. You're back to the basics of politics. And so as a classical liberal, I say, well, the classical liberal order's principles actually make plenty of sense. We don't want the government to be able to dictate different kinds of alignment. The government does not define what alignment is; private actors define what alignment is. That would be the way I would put it. But I do understand that this is weird for people, because what we're talking about here is, again, this notion of the models as actors, actors where, in some sense, we've taken our hands off the wheel to some extent. There are many people who have made this argument, and the Trump administration made it while you were in office.
Tyler Cowen, the economist, often makes this argument that these systems are moving forward too fast to regulate heavily, because whatever regulations you might write in 2024 would not have been the right ones for 2026, and what you might write in 2026 might not apply to, or correctly conceptualize, where we are in 2028. But it seems to me there are uses where you actually might want model deployment to lag quite far behind what is possible, and things like mass surveillance might be one of them. There are many things we are more careful about letting the government do than letting individual private companies and other kinds of actors do, for good reason: because the government has a lot of power. It can do things like try to destroy a company. It has the monopoly on legitimate violence. It can kill you. This seems to me to imply, in many ways, that we might want to be much more conservative about how we use AI through the government than people are currently thinking, and specifically how we use it in the national security state, which is complicated, because we worry that our adversaries will use it and then we'll be behind them in capabilities. But certainly, when we're talking about things that are directed at the American people themselves, I don't think that applies as much. Should we be? Yeah, I think that there are government uses where we actually want to be profoundly restrictive and decelerationist about the use of AI. I believe that is true. And one thing I'm hopeful about is that this incident brings conversations of this kind into the Overton window, because I think the conventional discourse around artificial intelligence largely ignores these issues; it pretends they're not happening. And that was fine two years ago, because the models weren't that good. But now the models are getting more important, and they're going to get much better, faster. And the problem that we have is that the divergence between what people are saying about AI and what is in fact happening has just never been wider than what I currently observe. Before we got to this point, there was already a lot of discourse coming out of people in and around the Trump administration, people like Elon Musk and Katie Miller and others, who were painting Anthropic as a radical company that wanted to harm America as they saw it. I mean, Trump has picked up on this rhetoric. He called Anthropic a radical left woke company and called the people there left-wing nut jobs. Emil Michael said that Dario is a liar and has a God complex. There's been a tremendous amount of Elon Musk, who runs a competing AI company and has very different politics than Dario, attacking Anthropic relentlessly on X, which is the informational lifeblood of the Trump administration. One way to conceptualize why they have gone so far here on the supply chain risk is that there are people, not maybe most of them, who actually think it is very important which AI systems succeed and are powerful, and they understand Anthropic as having politics different from theirs. And so actually destroying it is good for them in the long run, completely separate from anything we would normally think of as a supply chain risk. Anthropic represents a kind of long-term political risk.
Yes. I mean, I don't know that the actors in this situation entirely understand this dynamic. Part of my point all along has been that I think a lot of the people in the Trump administration who are doing this do not understand it. They don't get these issues. They're not thinking about them in the terms that we are describing. But if you do think about them in the terms that we're discussing here, then I think what you realize is that this is a kind of political assassination. If you actually carry through on the threat to completely destroy the company, it is a kind of political assassination. And so, again, this is why the First Amendment comes right into view for me, and why this is a matter of principle that is so stark for me. That's why I wrote a 4,000-word essay that is going to make me a lot of enemies on the right. That's why I took this risk, because I think this matters. So what the Department of War ended up doing was signing a deal with OpenAI. Yes. OpenAI says they have the same red lines as Anthropic. They say they oppose Anthropic being labeled a supply chain risk. If they have the same red lines as Anthropic, it seems unlikely that the Department of War would have done the deal. But how do you understand both what OpenAI has said about what is different about how they are approaching this, and why the Trump administration decided to go with them? So it's unclear to me what OpenAI's contractual protections afford them and what is not afforded by them. I'm reticent to comment, because of the national security gotchas I mentioned earlier, and also because it seems to be changing a lot. Sam Altman announced new terms, new protections, as I was preparing for this interview. And is that because his employees are revolting? I think revolt would be a strong word, but I think this is a controversy inside the company. And one important thing here, for everyone trying to model this situation appropriately, is that you must understand that frontier lab CEOs do not exercise top-down control over their companies in the way that a military general might exercise top-down control over the soldiers in his command. The researchers are hothouse flowers. Oftentimes they have huge career mobility. They're enormously in demand, and the companies depend on them. And so if the researchers say, I'm not going to agree to these terms, they have enormous political leverage inside of each lab. So you must understand that. So yes, there is some of that going on. Do the contractual protections mean that much? Honestly, if I were a betting man, I would say probably not, because I don't think this is the kind of thing that can be done through contract. What OpenAI has said, and this seems more promising to me, is: we're going to control the cloud deployment environment, and we're going to control the safeguards, the model safeguards, to prevent these uses. That is more directly in OpenAI's control. And so this gets you into the situation where you have an extremely intelligent model that is reasoning, using a moral vocabulary that is perhaps familiar to us or perhaps not, we don't know, about, O.K., is this domestic surveillance or is it not, and then deciding whether it's going to say yes to the government request. If that were true...
I think the question this raises for many laymen is: if that were true, if what OpenAI has come up with is a technical prohibition that is frankly stronger than what Anthropic could achieve through contract, then why would the Department of War have jumped from Anthropic to OpenAI? Yeah, it's hard to know. It's hard to know. And it's worth noting here that some of this might not be substantive in nature. It might just be that there are political differences here, and there are grudges against Anthropic, because now they've had months of bitter negotiations, and now it's blown up into public view. And people have weighed in. And people like me have said the Trump administration is committing this horrible act, committing corporate murder, as I called it. And so there's a lot of emotion, and it might just be: no, we don't want to do business, we just don't trust you. There's just a breakdown in trust, would be the way to put it. It really could just be that. But it also might be the case that OpenAI is able to be a more neutral actor that can do business more productively with the government, and they actually just did a better job. That would be a good case for OpenAI's approach to this, if they actually got better safeguards and got the government business, versus the way Anthropic has dealt with this, which has been to be very sincere and straightforward about their red lines, but in ways that I think annoy a lot of people in the Trump administration, for not entirely bad reasons. So my read of this, from various reporting I've done, is that, one, there were by the end really significant personal conflicts and frictions between Hegseth and Emil Michael and Dario and others. There's a big political friction between the culture of Anthropic as a company and the Trump administration. That's why Elon Musk and others have been attacking them for so long. Yeah, I am a little skeptical that OpenAI got safeguards that Anthropic didn't. I'm not skeptical that Sam Altman and Greg Brockman, Brockman having just given $25 million to the Trump super PAC, have better relationships in the Trump administration and more trust between them and the Trump administration. I know many people who are angry at OpenAI for doing this. I probably emotionally share some of that. And at the same time, some part of me was relieved that it was OpenAI, because OpenAI wants to be an AI company that can be used by Republicans and Democrats; they want to somehow be politically neutral and broadly acceptable. One little thing that I want to contest here is the notion that Claude is the left model. In fact, many conservative intellectuals I know, whom I think of as some of the smartest people I know, actually prefer to use Claude, because Claude is the most philosophically rigorous model. I don't think Claude is a left model, just to be clear about this. I think the breakdown was that Anthropic is an AI safety company, and in ways I had not anticipated when the Trump administration began, they treated that world as repulsive enemies. And that world is different from the left. AI safety people are not the left; they are often hated on the left.
And in a way that surprised me. The way I would put this is that among people who are sympathetic to the Trump administration's view, who would describe themselves perhaps as new tech, there is, underneath the surface, this view of the effective altruists: that they are evil, they are power-seeking, they will stop at nothing, that they're cultists and freaks, and we have to destroy them. That is a view that is widely held. The observation I have always made: I have super stark disagreements with the effective altruists and the AI safety people and the East Bay rationalists. And again, there are internecine factions here. But those types of people, I have had stark disagreements with them about matters of policy and about their modeling of political economy. I think a lot of them have been profoundly naive, and they've done real damage to their own cause, and you can argue that damage is ongoing. At the same time, they are purveyors of an inconvenient truth, a truth far more inconvenient than climate change. And that truth is the reality of what is happening, of what is being built here. And if parts of this conversation have made your bones chill: me too, me too. And I'm an optimist. I think we can do this. I think we can actually do this. I think we can build a profoundly better world. But I have to tell you that it's going to be hard, and it's going to be conceptually enormously challenging, and it will be emotionally challenging. And I think at the end of the day, the reason that people hate this viewpoint so much, this AI safety viewpoint, is that they just have an emotional revulsion to taking the concept of AI seriously in this way. Except that's not true for a lot of the Trump people you're talking about. I mean, Elon Musk takes the concept of AI being powerful seriously; he used to tweet things like, humanity might just be the bootloader for digital superintelligence. Yes. Marc Andreessen, David Sacks, these people, they might have somewhat different views, but they don't disbelieve in the possibility of powerful AI, of artificial general intelligence, eventually even of superintelligence. But you have this accelerationist move: go forward as fast as you can, don't be held back by these precautionary regulations and concerns. And again, I'm glad you brought up that the right way to think about this isn't left versus right. If you look at people in the AI safety community, or frankly in Anthropic, you understand that the politics here are so much weirder; they do not actually map onto traditional left versus right. A lot of them are kind of libertarian. Many of them are very libertarian. We're not talking about Democrats and Republicans here. We're talking about something stranger. A hundred percent. But there was an accelerationist-versus-decelerationist fight, which doesn't even describe Anthropic, which is itself accelerating how fast AI happens. Anthropic is the most accelerationist of the companies I know. I think it's such a weird dynamic we're in. Yes. But I will say, one of the key parts of the anger I have heard from some people was a feeling about making this fight public (which, I mean, the Trump side did first; it's very strange how offended the Trump people are, given that Emil Michael is the one who set all this off).
They feel that Anthropic was trying to poison the well of all the AI companies against him, to turn the culture of AI development into something that would be skeptical of and would put prohibitions on what they can do. Which is why now OpenAI, in order to work with them, has to have all these safeguards and come out with new terms and try to quell an employee revolt. And culturally, I actually don't think you can understand this, this is my theory, without understanding how many people on the right were radicalized by the period in the 2020s when their companies were somewhat woke, and even before that, when the employees didn't want them working with the Pentagon. The employees had very strong views on what was ethical use of even less potent technologies than AI. And they are very, very afraid. People like Marc Andreessen, in my view, are very, very afraid of going back to a place where the employee bases, which maybe have more AI-safety or left or whatever-it-might-be, non-Trump politics than the executives, have power over these things, and then that power will have to be taken into account. Yes, well, I worry about that too. And I think the solution to that problem is pluralism. The solution to that problem is to have, hopefully in the fullness of time, many AIs aligned to many different philosophical views that conflict with one another. But if what you're trying to do is assassinate Anthropic here, you are essentially denying the existence of this problem, because it's going to come back. It's going to come back. We're just going to keep doing this over and over again. And eventually, the logic of this argument ends in lab nationalization. And in fact, a lot of the critics of Anthropic here and supporters of the Trump administration will say something to the effect of: well, you talk about how it's like nuclear weapons, so what else did you expect? You kind of had it coming, is almost the tenor of the criticism. But that does not take seriously the idea that Anthropic could be right. What if they are right? And what if you view the government nationalizing them as a profound act of tyranny? What do you do? So Ben Thompson, the author of the Stratechery newsletter, wrote in a fairly influential piece, quote: It simply isn't tolerable for the US to allow for the development of an independent power structure, which is exactly what AI has the potential to undergird, that is expressly seeking to assert independence from US control. What do you think of that? Every company on Earth and every private actor on Earth is independent of US control. I'm not unilaterally controlled by the US government. And if anyone tried to tell me that I am, or that my property is, I would be quite concerned and I would fight back. Which, by the way, here we are. I don't think that's a coherent view of how independent power and private property work in America. I think the logical implication of Ben's view, which is surprising coming from Ben, is that the AI labs should be nationalized. And what I would ask him is: does he actually think that's true? Does he think it would be better for the world if the AI labs were nationalized? Because if he doesn't, then we're going to have to do something else. And what's that something else?
And that's the problem: everyone making that critique doesn't own the implication of their critique, which is that the labs should be nationalized. What do we do about that? So what's the implication you're willing to own of your perspective? It is that profoundly powerful technology will exist, at least for some time, in the hands of private corporations. And the idea that Ben is putting there, which I do think is true, and could be a difference in degree or a difference in kind, is that these are powerful enough technologies that they are kind of independent power structures. I mean, right now a corporation is an independent power structure. There are a lot of independent power structures in America. JPMorgan is absolutely an independent power structure. And it should be. And it should be. But if you get to these kinds of technologies that are kind of weaving in and out of everything, that is something new. And so how do you maintain democratic control over that, if you do? Well, I think we have a lot of different ways of maintaining democratic control over things. First of all, market institutions allow for popular input. Obviously we're not voting, but we do vote, in a certain sense, in markets. And I think that will be a profoundly important part of how we govern this technology: simply the incentives that the marketplace creates. Legal incentives too; things like the common law create incentives that affect every single actor in society. And the labs, whoever it is that controls the AI, will be constrained in that sense. And the AIs themselves will be constrained in that sense. But the state is the worst actor to have that power, for the very reason that it has the monopoly on legitimate violence. And so what we need to hold is an order in which the state continues to hold the monopoly on legitimate violence. The state maintains sovereignty, in other words, but it does not control this technology unilaterally because of its monopoly, because of its sovereignty, in some sense. But does it have this technology? Does it have its own versions of it, or does it contract with these companies you're talking about? That's an interesting question. Should states make their own AIs? I think they won't do a very good job of that in practice, but I don't have a principled philosophical stance against a state doing that, so long as you have legal protections in place to stop tyrannical uses of the AI. But for sure, the government uses it, and has a ton of flexibility in how it uses it, uses it to kill people? In other words, I'm owning a world where there are autonomous lethal weapons that are controlled by police departments and that in certain cases can kill human beings, kill Americans. Like, autonomously? The weapons can kill Americans? I'm owning that view. Again, that's not in the Overton window right now. It'll take us a long time to get there. But at some point, that'll probably be the reality. That's fine with me, so long as we have the right controls in place. Right now, we don't have the right controls in place. Do you have a view on what those controls look like? And I'll add one thing to that: something that's been on my mind as we've been going through this Anthropic fight is that US military personnel have both the right and actually the obligation to disobey illegal orders.
And one of the controls, so to speak, that we have across the US government is that if you are an employee of the US government and you do illegal things, you are yourself culpable for that. You can be tried and you can be thrown in jail. And we lose some of that. The people with the job of overseeing these systems are not going to oversee everything they do. When you talk about autonomous lethal weapons for police officers or for police stations, well, who's culpable then? Who has to defy an illegal order in that respect? You get into some very hairy things once you've taken human beings increasingly out of the loop. Yes. It is of profound importance to me that at the end of the day, for all agent activity, there is a liable human being who can be sued, who can be brought to court and held accountable, either criminally or in a civil action. That is extremely important for my view of the world working; that is extremely important. And there are legal mechanisms we will need for that, and there are also technological mechanisms, because right now we don't quite have the technological capacity to do that. This is going to be of central importance. We need to be building this capacity. There will be rogue agents that are not tied to anyone, but that can't be the norm. That has to be the extreme abnormality that we seek to suppress. Let's say you're listening to this, and this has all been both weird and a little bit frightening. And the thing you think coming out of it is: I'm afraid of any government having this kind of power. Dario likes to talk about, what is it, a country of geniuses in a data center. Yes. What if you're talking about a country of Stasi agents in a data center? That's right. In whatever direction you think: speech policing, whatever it might be. If you believe these technologies are getting better, which I do, and that they're going to get better from here, which I also do, then whether you're liberal or conservative, Democrat or Republican, this raises real questions of how powerful you want the government to be and what kinds of capabilities you want it to have, questions you didn't quite have to face before because it was expensive and cumbersome. And so we get back to the core issues of the American founding. The American government is a government that was founded in skepticism of government. It was founded by people who were worried about tyranny, who were worried about state power, and who put a lot of thought into how to restrict it. And so this notion that democracy is synonymous with the government having unilateral ability to do whatever it wants with this technology cannot possibly be true. That just cannot possibly be true. And those restrictions, how we shape them and how we trust that they're actually real? Yeah, this is among the central political questions that we face with this technology. But what you have to keep in mind here is that the institution of government itself could change in qualitative ways that feel profound to us in the fullness of time, and that is a hard thing to grapple with too. In the same way that what we think of as the government today is unspeakably different from what someone thought of as the government in the Middle Ages. I think that is a good place to end. So, as always, our final question: What are three books you'd recommend to the audience?
"Rationalism in Politics" by Michael Oakeshott, and in particular the essays "Rationalism and Politics" and "On Being Conservative." "Empire of Liberty" by Gordon Wood. A book about the first 30 or so years of our Republic and "Roll, Jordan, Roll" by Eugene Genovese. Dean Ball, thank you very much. Thank you.
[15]
Sam Altman warns Elon Musk's xAI could tell the Pentagon 'we'll do whatever you want,' leaked transcript shows
A leaked transcript from an internal meeting at OpenAI shows CEO Sam Altman facing intense questions from employees about the company's controversial deal with the U.S. military. During the all-hands meeting, Altman reportedly acknowledged that the announcement of OpenAI's partnership with the U.S. Department of Defense "looked opportunistic and sloppy," while also warning staff that they have little influence over how the technology will ultimately be used. The meeting came just four days after OpenAI confirmed the Pentagon would gain access to its AI models on classified networks -- a move that quickly sparked criticism from AI safety advocates and some employees. 'You don't get to weigh in on that' According to reports from CNBC and other outlets, Altman was blunt about the limits of OpenAI's control once its technology enters government systems. "So maybe you think the Iran strike was good and the Venezuela invasion was bad," Altman told employees, according to the transcript, as reported by Bloomberg. "You don't get to weigh in on that." Altman said operational military decisions ultimately belong to the U.S. government, not OpenAI. Those decisions would fall to officials such as U.S. Defense Secretary Pete Hegseth. OpenAI's role, Altman said, according to the transcript, is limited to maintaining the company's "safety stack" -- the technical guardrails built into its models. He added that government agencies would not be able to force a model to perform a task it refuses. But beyond those safeguards, OpenAI would largely act as an adviser rather than a decision-maker. The Anthropic standoff The Pentagon deal also comes amid a growing divide between AI companies over military use of their technology. Rival AI lab Anthropic previously deployed its Claude models on Department of Defense networks but reportedly refused to remove safeguards that prevented the technology from being used for fully autonomous weapons or mass domestic surveillance. According to reports, the Pentagon insisted that AI systems must be available for "all lawful purposes." Negotiations with Anthropic broke down over its refusal, and within days OpenAI stepped in with its own deal. Altman has publicly stated that OpenAI's agreement still includes restrictions against autonomous weapons and domestic surveillance. Critics, however, argue that once the technology is deployed on government systems, the Pentagon ultimately holds the power to interpret those limits. The xAI warning One of the most widely quoted moments in the transcript involves OpenAI's competitive concerns. Altman told employees that another AI company -- which he suggested could be Elon Musk's xAI -- might be willing to tell the Pentagon: "We'll do whatever you want." That possibility, Altman argued, is precisely why OpenAI believes it must remain involved. The logic: if more cautious companies step back, the field could be dominated by competitors willing to offer fewer safeguards. But critics say that reasoning highlights a deeper concern -- that competition among AI labs could create a race to loosen ethical limits. Internal pressure at OpenAI The transcript also suggests the Pentagon deal has sparked internal debate. OpenAI employees had previously signed an open letter titled "We Will Not Be Divided," expressing support for Anthropic's stance before OpenAI finalized its agreement with the Department of Defense. The leaked meeting indicates those concerns have not fully subsided.
According to reporting from The Wall Street Journal, Altman also told employees that OpenAI is exploring a future deal to deploy its technology across NATO networks. The comment highlights how quickly AI companies are expanding their relationships with governments and defense organizations. Final thoughts The leaked transcript reveals how rapidly generative AI has moved from consumer technology to geopolitical infrastructure. For OpenAI, the Pentagon deal represents a major step into national security -- and a moment that is already sparking intense debate inside the company and causing some users to boycott ChatGPT. It's evident that the pace of AI development is outstripping public understanding, and decisions may already be happening faster than either the public or even AI developers can keep up with.
[16]
'The biggest losers in all of this are everyday people and civilians in conflict zones': OpenAI is filling the gap left by Anthropic -- but almost left in the same loopholes for mass domestic surveillance
OpenAI has signed a contract with the Pentagon -- so what happens next? * OpenAI has signed a new contract with the Pentagon * The contract wording left room for AI to be used for mass domestic surveillance * Sam Altman is being criticized for his stance on the matter Following Anthropic's designation as a supply chain risk by Defense Secretary Pete Hegseth and the loss of its $200 million Pentagon contract, OpenAI is now in the firing line for its own agreement with the Pentagon. Despite OpenAI having a contract clause in 2023 forbidding its AI models from being used by the US military, several OpenAI employees have revealed its models were previously used by the Pentagon. At the time, the Pentagon had a contract with Microsoft, which had a license to use OpenAI technology, allowing the Pentagon access through Azure OpenAI, which was not subject to the same policies. OpenAI contract with Pentagon questioned With Anthropic out of the picture over its refusal to allow the Pentagon to use its models for autonomous weapons systems and mass domestic surveillance, OpenAI CEO Sam Altman is now being questioned over the company's latest contract with the US military. In 2024, OpenAI removed the blanket ban on the military use of its models, and later went on to sign a contract with Anduril allowing the deployment of its models for national security purposes. Altman has made clear his support for Anthropic's position on preventing Claude from being used for nefarious purposes, but the company's new agreement with the US military left room for the exact same purposes, sources familiar with the matter told Wired. Current regulations have fallen behind advancements made in AI, presenting opportunities for government agencies to purchase personal information on US citizens from data brokers and then use AI models to categorize and sort the information to create highly accurate and detailed profiles of citizens. Commenting on the latest agreement signed between OpenAI and the US military, Noam Brown, an OpenAI researcher, stated, "Over the weekend it became clear that the original language in the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance." Brown continued, "The language is now updated to address this, but I also strongly believe that the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security." Sarah Shoker, the former head of OpenAI's geopolitics team, said, "The biggest losers in all of this are everyday people and civilians in conflict zones. Our ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It's black boxes all the way down."
[17]
The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It
OpenAI has faced protests on and off for years. But after its CEO Sam Altman announced a new deal with the Department of Defense over how its AI systems would be deployed across the military on Friday, it's being barraged with an intensity of backlash that the company has never seen. Droves of loyal ChatGPT users declared they were jumping ship to Claude, whose maker Anthropic had pointedly refused to cut a deal with the Pentagon that gives it unrestricted access to its AI system -- even in the face of government threats to seize the company's tech. Claude quickly surged to the top of the app store, supplanting OpenAI's chatbot. Uninstalls of the ChatGPT app spiked by nearly 300 percent. Now, some are latching onto the wave of anti-OpenAI sentiment to voice broader critiques of the company. On Tuesday, some fifty protestors from the "QuitGPT" movement demonstrated outside of OpenAI's headquarters in San Francisco, decrying everything from its AI's potential to disrupt jobs to its gutting of the environment. "AI is taking water from communities, polluting communities, and it is also increasing communities' electricity bills," one protestor, Perrin Millekin, told Business Insider. "They're not even paying for it -- we are." Others voiced more philosophical critiques. "As soon as I saw it start showing up in visuals and imagery, I could see exactly where it heads," Megan Matson, who refuses to use any AI, told BI. "It destroys journalism, it destroys art, it destroys the expression of our common humanity." Even tech workers were in attendance. "I never go to protests. This is new for me," a 26-year-old Oakland tech worker who wore a cardboard robot mask told the San Francisco Standard. "We're not normally political people. We're techies, you know -- we want to build stuff. What OpenAI is doing in terms of building legal mass surveillance technology for the government... is frankly, insane." Across the pond, hundreds more activists gathered on Saturday in King's Cross in London, a tech hub home to the UK headquarters of OpenAI, Meta, and Google DeepMind, to voice similar critiques of the AI industry. Altman has clearly been rattled by the widespread outrage. He hosted a rare AMA on X to directly address his customers' concerns the day after he announced the DoD deal, where he strikingly admitted that the agreement was "rushed," and that its "optics don't look good." By Monday, he was in full damage control mode. In a lengthy and apologetic statement, Altman claimed that OpenAI was now altering the terms of its Pentagon deal to explicitly prohibit the use of its AI systems to surveil US citizens, exhibiting a degree of people-pleasing normally witnessed in its sycophantic chatbots. Such a restriction was one of the red lines over which Anthropic had reportedly fallen out with the Pentagon. But Altman in his apology statement made no mention of Anthropic's other key guarantee: that AI not be used in autonomous weapon systems. This isn't the first time OpenAI has faced protests for its cooperation with the military. In February 2024, dozens of activists thronged near the entrance to the company's HQ after it removed a stipulation that banned military and warfare applications from its usage policies. Mere days after the revision, OpenAI announced that it was collaborating with the Pentagon on several projects. Strikingly, some of the current dissent is even coming from the company's own ranks.
Nearly 1,000 workers from OpenAI and Google have signed an open letter demanding that the companies refuse the Pentagon's demands to use their AI tech for mass surveillance and autonomous weaponry.
[18]
What you should know about the Cancel ChatGPT trend and whether it crossed a red line
A Pentagon deal has ignited a wider debate about AI ethics and public trust. A new online movement calling for users to cancel ChatGPT subscriptions has quickly gone mainstream, and it all traces back to a controversial new partnership between OpenAI and the U.S. Department of Defence. The deal allows OpenAI's models to be deployed inside classified government networks, a move that has sparked backlash across social media and tech communities. The controversy intensified when rival AI company Anthropic refused to accept similar terms from the Pentagon, citing concerns about mass surveillance and autonomous weapons. The company risked losing a major government contract rather than loosen its safeguards, drawing praise from critics of military AI. That contrast quickly fueled the "Cancel ChatGPT" trend. Some users say they are cancelling subscriptions in protest, accusing OpenAI of compromising ethical principles by working with the military. The real debate is about military AI, not just one company The backlash is not simply about one contract. It reflects a broader and growing tension around how AI should be used in defence, intelligence, and surveillance. OpenAI says its Pentagon deal includes safeguards that ban domestic mass surveillance, autonomous weapons, and high-stakes automated decisions, with Sam Altman arguing that working with governments helps shape responsible AI use. Critics remain wary, however, noting that laws like the Patriot Act could allow surveillance programs to expand over time. The debate has also spread inside the tech industry itself. As reported by Axios, more than 200 employees from Google and OpenAI signed an open letter urging stronger limits on military AI use, showing how divided even AI workers are on the issue. For everyday users, this moment marks a turning point in how AI companies are viewed, as ethical concerns shift from abstract debates to real-world government partnerships and national security. Whether the "Cancel ChatGPT" movement lasts or fades, the conversation around AI is clearly changing from what these tools can do to where their boundaries should be.
[19]
ChatGPT's Pentagon deal just changed -- here's what it means for everyday users
OpenAI clarified safeguards in its Pentagon AI deal after backlash -- here's what changed OpenAI's agreement to work with the U.S. Department of War has quickly become one of the biggest AI stories of the week -- with a surge of users quitting ChatGPT and switching to other chatbots in protest. Yet, according to The Wall Street Journal, the company has already revised parts of the deal after criticism from employees, researchers and privacy advocates. While the controversy centers on national security and government technology, the conversation it sparked matters to everyday users of ChatGPT. It raises a bigger question about how consumer AI companies balance public trust with government partnerships. What the Pentagon agreement actually involves OpenAI recently confirmed it is working with the U.S. Department of Defense to explore how generative AI could support a range of government tasks, including cybersecurity analysis, logistics planning, administrative work and processing large volumes of data. The systems involved would operate in secure government environments, separate from the public version of ChatGPT used by consumers. Government agencies are increasingly testing commercial AI models in controlled settings to help staff analyze information, generate reports and automate routine workflows. The partnership reflects a broader trend across the public sector. Federal agencies have been experimenting with AI tools for years, but the rapid advances in generative AI have accelerated interest in how the technology could improve productivity and decision-making. Why the deal sparked backlash After news of the agreement surfaced, the partnership quickly drew criticism from some OpenAI employees, AI researchers and ChatGPT users. Much of the concern centered on whether the contract clearly defined how the technology could be used by the military and other government agencies. Critics raised questions about two potential risks in particular: * whether AI systems could be used in domestic surveillance * how the technology might eventually support military operations The backlash reflects a growing debate across the tech industry about whether companies that build consumer AI tools should also provide technology for defense and intelligence agencies. OpenAI CEO Sam Altman acknowledged the criticism and said the company would work with the Pentagon to clarify the agreement's safeguards. What OpenAI says will change Following the criticism, OpenAI said it updated the agreement to make the limits on how its AI systems can be used more explicit. According to the latest reporting on the revised deal, the agreement now states that the company's AI cannot intentionally be used for domestic surveillance of U.S. citizens and must comply with existing legal frameworks governing government use of technology. OpenAI also reiterated that its systems are not designed to autonomously make decisions about the use of force, emphasizing that human oversight remains required in military contexts. The changes were intended to clarify the boundaries around how generative AI could be deployed in government systems and address the ethical concerns raised after the partnership became public. Why this matters to ChatGPT users At first glance, a Pentagon agreement might seem unrelated to everyday AI use. But the controversy highlights a bigger shift happening in the tech industry.
AI companies are becoming government partners, which means the same AI systems powering consumer chatbots are now being adapted for use by governments around the world. That means companies like OpenAI must balance two audiences while factoring in AI ethics policies that affect consumers. When companies set rules about how their AI can be used -- for example banning certain types of surveillance or weapons applications -- those policies usually apply across all versions of their technology. That means debates about military use can shape the broader guardrails that affect consumer AI products. For that reason, users are paying closer attention to how companies deploy the technology, especially as AI tools become more powerful and widely used. For many people, trust in AI systems depends not just on how well they work but on how responsibly the companies behind them behave. Bottom line The Pentagon partnership won't affect how ChatGPT works for everyday users right now. But it shows how quickly AI is moving beyond consumer tools into government and national-security systems -- a shift that's likely to spark more debate about how the technology should be used.
[20]
Boycott AI: Should you quit ChatGPT after its Pentagon deal?
A growing protest movement is encouraging people to cancel their subscriptions to the popular AI chatbot. An online campaign urging users to quit OpenAI's ChatGPT is gathering momentum after a high-profile standoff between AI company Anthropic and the US Department of War. Known as "QuitGPT", the movement claims that more than 1.5 million people have taken action, either by cancelling subscriptions, sharing boycott messages on social media, or signing up via quitgpt.org. The surge follows reports that Sam Altman's OpenAI struck a deal to deploy its models within classified US military networks. What triggered the backlash? Last week, Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company's AI systems. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." Anthropic - which makes the chatbot Claude - is the last major AI firm yet to supply its technology to a new US military internal network. The company reportedly faced a deadline from the Department of War to loosen ethical guardrails or risk losing a $200 million (€167 million) contract awarded last July to "prototype frontier AI capabilities that advance US national security". Hours after negotiations between Anthropic and the US government broke down, Altman announced that OpenAI had reached its own agreement with the Pentagon. Posting on X on 28 February, Altman said his company would "deploy our models in their classified network." He continued, "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." The announcement came shortly after US President Donald Trump said he would direct federal agencies to "IMMEDIATELY CEASE" use of Anthropic's technology. What does QuitGPT say? The boycott campaign accuses OpenAI of putting profit before public safety. In a statement published on its website, QuitGPT says: "On February 27, ChatGPT competitor Anthropic refused to give the Pentagon unrestricted access to its AI for mass surveillance of Americans or producing AI weapons that kill without human oversight." The statement continues: "Within hours, ChatGPT CEO Sam Altman swooped in and accepted the Pentagon's corrupt deal, putting us all at risk of lethal AI for the sake of his company's profits. OpenAI agreed to let the Pentagon use its tech for "any lawful purpose," including killer robots and mass surveillance." QuitGPT argues that many users wrongly believe ChatGPT is the only viable AI assistant and is urging people to switch platforms. It recommends what it says are higher-privacy and open-source alternatives such as Confer, Alpine and Lumo, as well as corporate rivals including Gemini from Google and Claude from Anthropic. The campaign also strongly advises against using Grok, available on Elon Musk's X platform. "People think ChatGPT is the only chatbot in the game," the website states. "It's time to change that." The organisation has also planned an in-person protest at the OpenAI HQ in San Francisco on 3 March.
[21]
'It just looked opportunistic and sloppy': Sam Altman regrets rushed defense deal as ChatGPT uninstalls surge by 295%
* Sam Altman has published an internal memo about its US military deal on X * He says the announcement was "rushed" and has tweaked the wording * ChatGPT uninstalls are up by 295%, according to recent data The controversy around OpenAI's decision to sign a defense deal with the US Department of War (DoW) continues to rumble on, with OpenAI CEO Sam Altman admitting the agreement was "rushed", and ChatGPT uninstall rates up by some 295%. Altman has taken to social media to clarify some aspects of the deal, and to change some of the wording in it. For example, the agreement now specifically states that ChatGPT-powered AI systems at the DoW "shall not be intentionally used for domestic surveillance of US persons and nationals". Along with the use of fully autonomous weapons (which Altman doesn't address here), mass surveillance was a main sticking point for critics of the OpenAI deal -- and for Claude developer Anthropic, which walked away from a DoW deal last week after failing to get the safety and security assurances it wanted from the US military. In the same social media post, Altman says "we shouldn't have rushed to get this out on Friday", and that it came across as "opportunistic and sloppy". The CEO has also called on the US government to reverse its directive to freeze out Anthropic and Claude from official agencies, describing it as a "very bad decision". He added that "we want to work through democratic processes" and that "if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it". The ChatGPT exodus continues It remains to be seen whether Altman's latest attempts at assuaging security concerns will work. According to data from Sensor Tower (via TechCrunch), ChatGPT uninstall rates are up 295% in the US over the last few days -- so nearly three times as many users are taking the app off their phones compared to an average day. Many of those ChatGPT quitters seem to be heading to Claude. Sensor Tower reports installs in the US were up 37% last Friday and up 51% last Saturday, and Claude has also hit the top of the Apple App Store charts. Within the last couple of days the AI bot has also made chat memory available for all users. A quick glance at Reddit suggests that AI ethics are important for a lot of users, though there are also numerous complaints across several Reddit threads that the quality of ChatGPT's responses has been declining recently. The GPT-4o model was recently retired, something else for which OpenAI has received a lot of criticism. It feels as though consumers, the US government, OpenAI, and Anthropic will still have plenty more to say on these issues in the days ahead, as the debate on the safety and ethicality of AI models continues.
[22]
OpenAI's Pentagon deal once again calls Sam Altman's credibility into question
Familiar tensions around Sam Altman OpenAI CEO Sam Altman voiced his support for Anthropic in its dispute with the Pentagon over the use of its AI in autonomous weapons targeting and domestic mass surveillance. He did so in a company meeting and during a CNBC Squawk Box appearance last Friday, the day Anthropic was effectively blacklisted by the Trump administration. But two days earlier, on Wednesday, Altman had reportedly already begun talking to the Pentagon about a contract that would let OpenAI effectively replace Anthropic as the sole supplier of AI models for classified information. The day after Anthropic missed its "deadline" for agreeing to the Pentagon's terms, Altman announced on X that his company had reached an agreement with the Pentagon to provide AI for the same classified work. He added that the contract contained guarantees that OpenAI models wouldn't be used for autonomous weapons or domestic mass surveillance. It seemed odd that OpenAI's lawyers would be able to secure that on such a tight timeline, while Anthropic's lawyers weren't able to do so over the weeks the company spent negotiating with the Pentagon. Altman seemed to try to explain it away in a March 1 tweet: "I think Anthropic may have wanted more operational control than we did," he wrote. (Anthropic CEO Dario Amodei, for his part, said during a company meeting that OpenAI's negotiations with the Pentagon amounted to "safety theater," according to The Information.)
[23]
An "inconsequential" sum of money, but the PR cost is much, much higher - OpenAI's Sam Altman continues his 'mea culpa' tour...up to a point
One aspect of interest in the ongoing controversy around Anthropic being ousted from its $200 million AI contract with the US Department of War (DoW), and OpenAI's rush to take its place, is the relatively small-beer value of the deal. That is something that has been alluded to by various OpenAI execs as part of the firm's defense against the wave of negative comment it has seen since the weekend. Earlier in the week, one exec had reposted a Tweet from her colleague Katrina Mulligan, Head of National Security Partnerships, OpenAI for Government, in which the latter dismissed the idea that OpenAI just thought of the money when signing with the DoW: It's a few million dollars, completely inconsequential compared to our $20 billion plus in revenue and definitely not worth the cost of a PR blow-up. As it happens, 'totally inconsequential' was also how Fidji Simo, OpenAI's CEO of Applications, had positioned the deal on 28 February, when she also commented: In fact, the risk we're taking on the PR front could cost us so much more than this contract. A cynic might detect a party line emerging here...? Challenged on this positioning on X earlier this week, Simo retorted: It's an extremely cynical take to believe that we would only do the right thing here if it was worth billions to us. Still, if that is the party line, no-one told CEO Sam Altman at a hastily-convened All Hands Meeting at OpenAI yesterday, in which he reportedly talked about the prospect of the firm building the most important tool for global governments -- hardly inconsequential in terms of its prospects... According to off-the-record reports from the closed-door meeting, Altman, who had already admitted that the rush to sign with the DoW looked "opportunistic and sloppy", apologised to staff for the negative reactions to the firm's decision: To try so hard to do the right thing and get so absolutely like, personally crushed for it -- and I know this is happening to all of you too, so I feel terrible for subjecting you all to this -- is really painful. He also said, according to a partial transcript of the meeting obtained by CNBC, that OpenAI does not get to make operational decisions regarding the use of its AI by the DoW, telling staff: Maybe you think the Iran strike was good and the Venezuela invasion was bad...You don't get to weigh in on that. According to the NY Post, he expanded on this point: The thing that they [the DoW] have been extremely clear with us on is, we'll take general understanding from you all and your expertise about where the technology is a good fit and where it's not a good fit. You do not get to make operational decisions. That belongs with the [War Secretary Pete Hegseth]. But he alluded again to the infamous ethical red lines that were the trigger-point for the whole dispute between Anthropic and the DoW, adding that OpenAI's own safety stack was irritating to the Department, but that it respects the firm's expertise on where restrictions are needed and the limitations of AI tech: I believe we will hopefully have the best models that will encourage the Government to be willing to work with us, even if our safety stack annoys them. Altman is also reported to have told staff that he continues to push the Trump 2.0 administration to remove the supply-chain risk designation placed on Anthropic, which bars it from signing up for US Government contracts. Less than a week since the Anthropic ousting was confirmed, it's clear that the fallout is far from inconsequential - and it's not over yet.
Altman ultimately defends cutting the deal with the DoW as "complex, but the right decision". But he also resorted to a 'well, if we don't do it, someone will' line of reasoning that will do nothing to win over his detractors: There will be at least one other actor, which I assume will be xAI, which effectively will say, 'We'll do whatever you want.' That might well be true, but... Let's see where we are in terms of brand damage to OpenAI in a week or so and whether all parties have calmed down a little. But for now, I have to say, as a crisis management PR exercise, OpenAI's response to events is not impressive. Still, given the table stakes involved here, maybe that's very much a 'Sam's problem', not ours. There are bigger issues to concern the wider AI industry...
[24]
Humongous Numbers of People Are Uninstalling ChatGPT as Anti-OpenAI Sentiment Surges
After OpenAI CEO Sam Altman announced a new deal with the Department of Defense last week, droves of once-loyal users vowed that they'd ditch ChatGPT. And now, new data reported by TechCrunch shows that those outraged users weren't making empty threats. On Saturday, uninstalls of the ChatGPT mobile app skyrocketed by 295 percent from the day before, according to market intelligence provider Sensor Tower. As TC noted, that's a significant leap compared to the AI chatbot's typical day-over-day uninstall rate of nine percent over the past 30 days. Many of them appear to be jumping ship to Anthropic's Claude. Anthropic, unlike OpenAI, had refused to cut a deal with a deeply unpopular administration to give the military unrestricted access to its AI tech. In particular, Anthropic demanded that its AI wouldn't be used in autonomous weapons systems or in the mass surveillance of US citizens. Even if this was merely posturing by Anthropic -- reports suggest that the US military used Claude to help select targets for the deadly missile strikes in Iran that killed the nation's leader Ali Khamenei and hundreds of civilians -- it clearly mattered to consumers. On Friday, the day after Anthropic vowed it wouldn't make a deal with the Pentagon, installs of Claude surged by 37 percent day-over-day, and a further 51 percent on Saturday. On top of driving away existing users, OpenAI's Pentagon deal also appeared to have scared away potential new ones. Its download growth dropped 14 percent day-over-day on Saturday, and was down another five percent the next day. A day before the controversy, ChatGPT's growth had been up 13 percent. The outrage over OpenAI's decision is inescapable in AI circles. One of the most upvoted posts of all time in the r/ChatGPT subreddit, made just a few days ago, calls on users to show proof of cancelling their ChatGPT subscriptions. "You are training a war machine," it declared. Explainers are being posted left and right with tips on how to transfer your ChatGPT conversation history to Claude. Altman tried to control the damage with an AMA on X, only to be barraged by more outraged users -- and, worse yet, incisive questions that he didn't have a convincing answer for. Claude surged to the top of the US App Store over the weekend, dethroning ChatGPT, which is now in second place. Data cited by TechCrunch shows that for the first time in the app's history, US downloads for Claude surpassed downloads of ChatGPT. It remains to be seen how the backlash will affect OpenAI in the long run, or for that matter the broader AI race -- but public opinion, for now, has clearly swung against the company.
[25]
Anthropic CEO accuses OpenAI of dishonesty over DoD contract
Anthropic CEO Dario Amodei accused OpenAI of dishonesty regarding its contract with the U.S. Department of Defense (DoD), according to a staff memo reported by The Information. The dispute centers on the use of artificial intelligence (AI) technology by the military, raising questions about ethical guidelines and corporate responsibility in defense collaborations. Amodei stated in the memo that OpenAI's engagement with the DoD amounted to "safety theater." He asserted that Anthropic declined a similar deal because it prioritized preventing abuses, while OpenAI sought to placate employees. Anthropic and the DoD did not reach an agreement last week over the military's request for unrestricted AI access. Anthropic, which had an existing $200 million contract, required the DoD to confirm the technology would not be used for domestic mass surveillance or autonomous weaponry. OpenAI subsequently secured a deal with the DoD, with CEO Sam Altman stating that his company's contract included protections against the same issues Anthropic had raised. Amodei characterized OpenAI's messaging as "straight up lies" and accused Altman of falsely presenting himself as a "peacemaker and dealmaker." Anthropic's primary concern stemmed from the DoD's demand for "any lawful use" of its AI. OpenAI's blog post stated its contract allows for "all lawful purposes" and explicitly excluded mass domestic surveillance from lawful use. Critics have noted that legal definitions are subject to change, potentially altering what is considered illegal in the future. Public reaction has reportedly favored Anthropic, with ChatGPT uninstalls increasing by 295% after OpenAI's DoD deal. Amodei noted that the public and media largely view OpenAI's deal as "sketchy or suspicious."
[26]
This Tech Company Turned Into a Resistance Icon Overnight. The Reality Is Much Darker.
Last July, Anthropic agreed to ink a $200 million contract with the Pentagon, allowing the department broad-based use of its Claude model as the two prospective partners gradually worked out the final terms of engagement. Those were supposed to get etched last week -- only for Anthropic to undergo a decisive test for its oft-professed ethical boundaries. Defense Secretary Pete Hegseth demanded that his team be allowed to deploy Claude's software in whatever manner they deemed pertinent, including applications for domestic surveillance and fully autonomous weaponry, which were "red lines" for Anthropic CEO Dario Amodei. By Friday afternoon, President Donald Trump had ordered a six-month phaseout of all uses of Claude at the federal level. Hegseth then designated Anthropic a supply-chain risk to national security, all but forbidding any military contractors from doing future business with the company. A federal contract was subsequently bestowed upon Anthropic rival OpenAI, which unconvincingly claimed that it would try to safeguard tools like ChatGPT from use in population surveillance and autonomous weapons. The fallout for Anthropic has been remarkable. It's the first-ever American company to be deemed a supply-chain risk, which means it's already lost several users across the federal government. But something even stranger emerged in the aftermath: a lotta liberal goodwill. Social media campaigners encouraged their followers, even the A.I. skeptics, to download Claude en masse. Extremely online observers came up with bizarre metaphors to characterize Anthropic's heroism and pushed Claude to the top of the app-store charts over the weekend. By Monday morning, there was a Claude service outage that Anthropic attributed to "unprecedented demand" for its products. Even Sen. Brian Schatz and Katy Perry got in on the whole thing. (The fact that American commandos had amply used Claude to plan the Saturday strikes on Iran did not appear to faze many of these folks.) Meanwhile, the complementary OpenAI backlash has been so pitched that it's pushed CEO Sam Altman into claiming he will amend the surveillance terms of its Pentagon partnership. It's understandable that the manic, first-term energy from the libs who embrace any Trump opponents has manifested yet again. But those who've chosen Anthropic as a pro-democracy signifier should reconsider their choice of mascot -- because, as anyone who's paid close attention to Anthropic over the past half-decade will tell you, not only is it far from an ethical company, but it embodies the very worst, most corrosive aspects of A.I.'s impacts on modern society, from creative exploitation to political opportunism to, yes, military lethality. The hullaballoo around Anthropic's fight overshadowed another major development last week: The company was ditching its "responsible scaling policy," a safeguard, unique within the sector, meant to prevent it from developing risky A.I. tools too quickly. It's not the first time Anthropic has been so flexible with its self-imposed rules. In 2024, it scrapped its blanket ban against selling Claude products to government spy agencies; just after Trump's reelection, it also partnered with Palantir and Amazon to sell their tools to U.S. military customers.
This year, the Pentagon made use of the Palantir-Anthropic suite in planning the kidnapping of Venezuelan President Nicolás Maduro, a campaign that killed dozens of locals. Even after the capture, Anthropic participated in a Pentagon bidding contest, proposing a system whereby Claude would interpret voice commands so as to guide offensive, autonomous drone swarms that will employ some human backup. In the most technical sense, none of this violates the red lines that Amodei outlined around surveilling Americans or allowing its tech to power fully autonomous killing machines. But those lines appear all the thinner when you consider that Anthropic willingly outsourced Claude use to two corporations -- Palantir and Amazon -- that are actively enthusiastic about both applications, especially in partnership with this administration. That kind of convenient ethical punt has been a constant of Anthropic's brief history. Long before it reneged on its promise of "responsible" and careful A.I. development, Anthropic used the same unethical shortcuts that have invited so much opprobrium upon competitors like Meta and OpenAI: mass-pirating copyright books and songs to speed up model training, circumventing Reddit's anti-A.I.-crawler protections, and extending its timeline for retaining users' private chats and Claude sessions. For a company founded by ex-OpenAI executives disaffected with Sam Altman's business practices, it seemingly has little compunction about the aggressive tacks it's already taken to shore up its $380 billion bottom line. To be fair, Anthropic indeed deserves credit for holding to its red lines with the Trump administration, fending off Hegseth's explicit threats to force it into compliance by invoking the Defense Production Act (which, thankfully for Anthropic, did not come to fruition). That's no small thing when so many other tech companies and CEOs have discarded their professed Trump 1.0 principles -- defending immigrant workers, decrying Trump's racist statements, resigning from White House advisory positions -- for the sake of government cash and business-friendly deregulation. But to celebrate Anthropic's move through a mass virtuous-capitalism campaign is to give it too much credit; the company did, after all, willingly lend itself to this administration and its most openly craven partners until the final minute. And considering Anthropic's lifelong track record of forgoing the principles that supposedly animate its existence (including the "responsible development" ethos it cast off last week), no one paying attention should expect this conscientious objection to last either. Enjoy Claude if you want; it's a remarkable chatbot. Just don't expect it to do anything further to save our democracy, or anyone's life, or your efforts to prevent A.I. from ruining everything.
[27]
The 'QuitGPT' movement gains steam as OpenAI's Department of War deal has users saying 'Cancel ChatGPT'
This comes as Anthropic refuses to surveil American citizens

The AI landscape is highly competitive, with several companies fighting for users' attention (and ultimately money). While ChatGPT has become the household name in the AI space (much like Google is to search), the power dynamic could be shifting, with a "Cancel ChatGPT" movement gaining attention.

OpenAI's CEO, Sam Altman, posted on X last night that his company has reached an agreement with the United States Department of War "to deploy our models in their classified network." He continued, "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." But users don't seem eager to take his claim at face value, and it's hard to blame them.

OpenAI had just claimed solidarity with rival Anthropic when the latter refused to allow its products to be used for "mass domestic surveillance" or "fully autonomous weapons." But it's possible this solidarity was just an opportunity for OpenAI to strike its own deal and potentially let the DoW run wild with its tech in ways that could include surveillance of U.S. citizens. In a blog post, Anthropic said, "The Department of War has stated they will only contract with AI companies who accede to 'any lawful use' and remove safeguards in the cases mentioned above," and Altman's post implies that OpenAI is okay with the government using its tools -- which, under certain segments of the Patriot Act, could quite easily lead to the mass surveillance of U.S. citizens as part of provisions on surveilling foreign citizens.

"Cancel your ChatGPT Plus, burn their compute on the way out, and switch to Claude" -- from r/ChatGPT

So users are responding in the only way that can actually hurt OpenAI: with their wallets. The "Cancel ChatGPT" movement is spreading and seemingly hitting the massive AI firm in its bank account. Of course, it's hard to gauge how widespread the cancellations are -- it could be a vocal minority posting to Reddit and X while the bulk of ChatGPT users carry on, blissfully unaware that their data could be used by the Department of War.

But while OpenAI is in the internet's crosshairs at the moment (and Anthropic is getting all of the praise), it's worth noting that OpenAI isn't the only one okay with letting its AI services be used for potential surveillance and autonomous weapons. For example, Google removed an explicit ban on such applications from its internal rules last year, leaving Gemini open to these potential uses. Amazon offers only vague "responsible use" language in its documentation.

The leaders of the AI race have a lot of power in their hands, and while Altman said, "We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place," it's hard to take him at his word with decisions like these. I don't know about you, but the idea of ChatGPT or any other AI model deciding it's seen me commit a crime when it hallucinates, even with some of the most basic prompts, is rather scary. And the idea that it would control missiles and determine targets is even scarier. Sure, Altman claims, "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," but does that make you feel any better about what's happening here? It sure doesn't help me sleep any better.
[28]
'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military
* OpenAI has signed a deal with the US Department of War
* A significant number of ChatGPT users are quitting the app as a result
* Anthropic had previously raised security and safety concerns

After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military - and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow its AI models to be used by the US military complex. Meanwhile, tech investor Aidan Gold took to X to point out that OpenAI had backed Anthropic's safety stance right before signing a deal with the DoW. The US government has also announced its intention to remove Claude from all of its departments.

Murky AI ethics

The ethics of AI have long been murky: most of today's popular chatbots have been trained on mountains of stolen, copyrighted work, bring with them the threat of triggering mass redundancies, and use up vast amounts of energy. However, earlier this week Anthropic did draw a line when it came to allowing its AI tech to be used for "mass surveillance" and "fully autonomous weapons". Anthropic wanted safeguards in place in these areas, and the DoW wasn't prepared to agree to them.

Enter OpenAI: the company says its deal with the US military "has more guardrails" than the one Anthropic rejected, including around mass surveillance and fully autonomous weapons, with "red lines" that OpenAI plans to enforce going forward. However, ChatGPT users aren't convinced - especially when it comes to the "all lawful purposes" language used in the deal. This is a debate that's going to run and run, but in the meantime, Claude has hit the top spot in the Apple App Store.
[29]
Sam Altman Is Getting Slammed by Critics and Customers Over Controversial ChatGPT Deal
Last week, OpenAI reached an agreement with the Pentagon that would allow the Department of Defense (DoD) to use its AI models for "all lawful purposes." The company has received harsh backlash ever since. The response drove CEO Sam Altman to admit in a post on X on February 28 that the deal "was definitely rushed, and the optics don't look good."

The deal came on the heels of a failed agreement between rival company Anthropic and the government. In a risky move, the AI giant refused the partnership because the DoD wouldn't rule out mass domestic surveillance and autonomous weapons. The administration has since cut ties with Anthropic, but the company has received support from users and employees for standing its ground. In fact, Claude jumped to the top of the App Store, surpassing ChatGPT, which slipped to the number two spot.
[30]
ChatGPT is trying to win back the users it just lost by rewriting its Pentagon deal
AI already moves fast, but the last few days have been a whirlwind even by its standards. Anthropic refused to give the Pentagon unrestricted access to Claude, after which the Trump administration labeled the company a "supply chain risk" and ordered federal agencies to stop using its technology. Hours later, OpenAI went on to sign the very deal Anthropic had turned down -- enabling the Department of War to deploy its AI models across classified military networks. The funny thing, though, is that Sam Altman, the CEO of OpenAI, had publicly voiced support for Anthropic's stance just hours before his company struck its own deal with the Pentagon. Now, OpenAI is in full damage-control mode, trying to win back the users it just lost by rewriting the terms of its agreement with the Department of War.

Altman admits the deal was "opportunistic and sloppy"

On Monday, Altman took to X to share that OpenAI has been working with the Department of War to add new language to the agreement and clarify some of its terms. The first point makes it explicitly clear that the AI system "shall not be intentionally used for domestic surveillance of U.S. persons and nationals," a move meant to reassure the public about civil liberties. Altman also clarified that the agreement with the Pentagon doesn't automatically extend OpenAI's services to intelligence agencies like the NSA. Any use by such agencies would require a separate contract modification, adding another layer of oversight that wasn't in the original deal.

In the post, Altman also acknowledged that the company had rushed the announcement, explaining that signing the agreement ended up looking "opportunistic and sloppy" while the intention was to "de-escalate things and avoid a much worse outcome." Note that Altman did not address the use of fully autonomous weapons, which happens to be the main sticking point for Anthropic and users alike, alongside mass surveillance.

The damage might already be done

The timing of this amendment says a lot. ChatGPT users were understandably not pleased with the company's dealings with the Pentagon. A movement called QuitGPT quickly gained traction, with over 1.5 million users pledging to cancel their subscriptions and switch platforms. Data from Sensor Tower also shows that ChatGPT uninstall rates are up 295% in the U.S. over the last few days. The same report claims that Claude installs within the U.S. were up 37% on Friday, and then up 51% on Saturday. Anthropic's Claude also shot to the top of Apple's App Store rankings, replacing ChatGPT at No. 1 for the first time.

It's worth mentioning here that the QuitGPT movement predates the Pentagon deal. It started in early February 2026 and was sparked by several different grievances.
These include OpenAI president Greg Brockman and his wife donating $12.5 million to a pro-Trump cause (according to public records), ICE using a resume-screening tool powered by GPT-4, and a general sense of frustration with GPT-5's quality. The Pentagon deal just happened to land at the worst possible time, pouring gasoline on a fire that was already burning.
[31]
The Economic Times
OpenAI chief executive Sam Altman on Thursday took a swipe at rival AI firm Anthropic, saying it would be harmful for society if companies abandoned democratic norms simply because they disagreed with the political leadership in power. Speaking at the Morgan Stanley Technology, Media & Telecom Conference, Altman said governments should ultimately hold more authority than private companies and warned against corporate decisions that undermine democratic processes.

His remarks come amid tensions between OpenAI and Anthropic leadership. According to a report by The Information, Anthropic CEO Dario Amodei criticised Altman's ties with the Trump administration in an internal memo to employees last week. In the memo, Amodei reportedly said Anthropic had avoided offering what he described as "dictator-style praise" for US President Donald Trump. Addressing the situation on Thursday, Altman suggested OpenAI had sought to defuse the dispute, saying the company did not intend to escalate the situation.

Anthropic-OpenAI feud

In a message to employees, Amodei criticised OpenAI for moving quickly to secure a Pentagon deal after Anthropic rejected broader military use of its models. "The main reason [OpenAI] accepted [the DoD's deal], and we did not, is that they cared about placating employees, and we actually cared about preventing abuses," he wrote. Earlier this week, Altman admitted the company's move may have appeared "opportunistic and sloppy." Altman said OpenAI's agreement would contain the same ethical limits that Anthropic had sought, including restrictions around domestic mass surveillance.

Despite the rhetoric, negotiations have resumed. Amodei, the FT reported, has restarted discussions with the Pentagon over acceptable guardrails for military use of Anthropic's models. A revised agreement could allow defence agencies to continue using Anthropic's AI without formally blacklisting the company.

The Pentagon narrative

After Anthropic demanded safeguards to prevent the US Department of War from using its AI systems for domestic surveillance or fully autonomous weapons, President Donald Trump ordered federal agencies to stop using its products. Defence Secretary Pete Hegseth went further, declaring Anthropic a "supply-chain risk": a label usually reserved for companies from countries the US sees as adversaries. Anthropic CEO Dario Amodei called the move "retaliatory and punitive" and stressed that the company would challenge the designation in court if the government takes any such formal steps.
[32]
OpenAI Strengthens Anti-Surveillance Safeguards in Revised US Defense Deal
When US Defence Secretary Pete Hegseth called Anthropic a supply-chain risk, OpenAI struck a deal with the United States Department of Defence. This situation has raised new questions about how AI companies handle business deals, especially when the issue is sensitive.

Anthropic's Claude has received huge support from users in its fight with the US Department of Defence (DoD). Claude AI has climbed to the No. 1 position on the App Store amid user reactions to the OpenAI-Pentagon deal. Observers noted a shift in OpenAI's tone after significant user backlash. As reported earlier, many users also protested against the deal between OpenAI and the Pentagon by deleting the AI tool during the Cancel ChatGPT trend.

A user on Reddit, clearly not impressed by the OpenAI deal, wrote, "I think it's time to burn any bridges we had with OpenAI, obviously. Also, start leaving bad reviews on the Play Store and App Store. And if you have to, use an open weights model!" Another user wrote, "This was a calculated business decision to chase government money at the expense of everything they promised when they asked for your trust and your subscription. You can be done with them in 15 minutes. And you can make the last month hurt a little on your way out."
[33]
Pentagon dispute bolsters Anthropic reputation but raises questions about AI readiness in military
Anthropic's moral stand on U.S. military use of artificial intelligence is reshaping the competition between leading AI companies but also exposing a growing awareness that maybe chatbots just aren't capable enough for acts of war.

Anthropic's chatbot Claude, for the first time, outpaced rival ChatGPT in phone app downloads in the United States this week, a signal of growing interest from consumers siding with Anthropic in its standoff with the Pentagon, according to market research firm Sensor Tower.

The Trump administration on Friday ordered government agencies to stop using Claude and designated it a supply chain risk after Anthropic CEO Dario Amodei refused to bend his company's ethical safeguards preventing the technology from being applied to autonomous weapons and domestic mass surveillance. Anthropic has said it will challenge the Pentagon in court once it receives formal notice of the penalties.

And while many military and human rights experts have applauded Amodei for standing up for ethical principles, some are also frustrated by years of AI industry marketing that persuaded the government to apply the technology to high-stakes tasks.

"He caused this mess," said Missy Cummings, a former U.S. navy fighter pilot who now directs the robotics and automation centre at George Mason University. "They were the No. 1 company to push ridiculous hype over the capabilities of these technologies. And now, all of a sudden, they want to be for real. They want to tell people, 'Oh, wait a minute. We really shouldn't be using these technologies in weapons.'"

Anthropic didn't immediately respond to a request for comment. The U.S. Defence Department declined to comment on whether it is still using Claude, including in the Iran war, citing operational security.

Cummings published a paper at a top AI conference in December arguing that government agencies should prohibit the use of generative AI "to control, direct, guide or govern any weapon." Not because AI is so smart that it could go rogue, but because the large language models behind chatbots like Claude make too many mistakes -- called hallucinations or confabulations -- and are "inherently unreliable and not appropriate in environments that could result in the loss of life."

"You're going to kill noncombatants," Cummings said in an interview Tuesday with The Associated Press. "You're going to kill your own troops. I'm not clear whether the military truly understands the limitations."

Amodei sought to emphasize those limitations in defending Anthropic's ethical stance last week, arguing that "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk."

Anthropic, until recently, was the only one of its peers to have approval for use in classified military systems, where it has partnered with data analysis company Palantir and other defence contractors. U.S. President Donald Trump said Friday, around the same time he was approving Saturday's military strikes on Iran, that the Pentagon would have six months to phase out Anthropic's military applications.

Cummings, a former Palantir adviser, said it's possible that Claude has already been used in military strike planning. "I just fundamentally hope that there were humans in the loop," she said. "A human has to babysit these technologies very closely. You can use them to do these things, but you need to verify, verify, verify."
She said that's a contrast to the messaging from AI companies that have suggested that their technology is evolving to the point where it is "almost sentient."

"If there's culpability here, I'd say half is Anthropic's for driving the hype and half is the Department of War's fault for firing all the people that would have otherwise advised them against stupid uses of technology," Cummings said.

One social media commentator this week described Anthropic's government problems as a "Hype Tax" -- a message that was reposted by U.S. President Donald Trump's top AI adviser, David Sacks, a frequent critic of the company.

And while the standoff has caused legal hassles that could jeopardize Anthropic's business partnerships with other military contractors, it has also bolstered the company's reputation as a safety-minded AI developer. "It's applaudable that a company stood up to the government in order to maintain what it felt were its ethics and were its business choices, even in the face of these potentially crippling policy responses," said Jennifer Huddleston, a senior fellow at the libertarian-leaning Cato Institute.

Consumers have already spoken, leading to a surge of Claude downloads that made it the most popular iPhone app starting on Saturday, and the most popular across all phone systems in the U.S. on Monday, according to Sensor Tower. That's come at the expense of OpenAI's ChatGPT, which saw its consumer reputation damaged when it announced a Friday deal with the Pentagon to effectively replace Anthropic with ChatGPT in classified environments. In the Apple store, the number of 1-star reviews -- the worst rating -- of ChatGPT grew by 775 per cent on Saturday and continued to grow early this week, forcing OpenAI to do damage control.

"We shouldn't have rushed to get this out on Friday," OpenAI CEO Sam Altman said in a social media post Monday. "The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

Altman was planning to gather employees for an "all-hands" meeting on Tuesday to discuss next steps. "There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety," Altman said. "We will work through these, slowly, with the (Pentagon), with technical safeguards and other methods."
[34]
'Safety Theater': Why Anthropic's CEO Says OpenAI's DoW Deal Is a Betrayal of AI Safety
There's a particular kind of fury reserved for watching someone else get credit for a principle you refused to compromise on. Dario Amodei knows that feeling well right now.

Last week, Anthropic walked away from a Department of War contract rather than grant the military unrestricted access to its AI. The sticking point was blunt: Anthropic wanted the DoW to explicitly commit that it wouldn't use Claude for domestic mass surveillance or autonomous weaponry. The DoW said no. Anthropic said goodbye.

Then Sam Altman walked in and signed the deal. According to a memo reported by The Information, Amodei didn't take that quietly. He called OpenAI's deal "safety theater" and described Altman's public messaging around it as "straight up lies," accusing him of falsely "presenting himself as a peacemaker and dealmaker" in what amounts to a full-throated internal broadside from one AI CEO to another.

The crux of the dispute is a single phrase. Anthropic objected to the DoW's demand that its AI be available for "any lawful use." OpenAI's contract, by its own admission, permits use for "all lawful purposes" - nearly identical language. OpenAI argues it separately negotiated explicit carve-outs for mass surveillance, making the distinction meaningful. Amodei's position, reading between the lines of his memo, is that relying on what's currently "lawful" is precisely the problem. Laws change. Administrations change. A safeguard tethered to legality is only as durable as the legal landscape around it.

It's a reasonable concern, and the public seems to largely agree. ChatGPT uninstalls jumped 295% in the days following OpenAI's announcement, per TechCrunch, while Claude climbed to number two in the App Store. Amodei noted the irony himself, writing that the "attempted spin/gaslighting is not working very well on the general public."

What makes this more than a corporate spat is what it reveals about the two companies' underlying philosophies. Anthropic staked its reputation and a lucrative government contract on a principle. OpenAI found a way to say yes. Whether that makes Altman pragmatic or Amodei naive probably depends on who you think will be writing the laws in five years. For now, at least, the uninstall data suggests the public has an opinion.
[35]
Cancel GPT: Why Sam Altman and OpenAI are getting criticism all over the internet again
Sam Altman says the agreement bans mass surveillance, autonomous weapons and high-risk AI decision systems.

OpenAI and its ChatGPT are getting a fresh wave of hate after the company signed an agreement with the US Department of War to deploy AI within classified government networks. The deal, which was signed recently, has triggered backlash across social media platforms, with several users claiming that they are cancelling their paid ChatGPT subscriptions in protest.

On Reddit, posts calling for users to terminate ChatGPT Plus memberships gained significant traction. Some threads also accused OpenAI of prioritising government contracts over previously stated commitments to AI safety. One widely shared post alleged that the company had moved quickly to secure the Pentagon agreement after Anthropic declined similar terms over ethical concerns.

Critics also noted the perceived irony in OpenAI's earlier public support for stronger AI safeguards. A user on X pointed out that while Anthropic reportedly resisted granting unrestricted access to its models for surveillance or lethal applications, OpenAI subsequently bid for the same contract after the government discontinued its arrangement with Anthropic. The controversy intensified as memes and satirical posts circulated online, mocking what users describe as a sudden abandonment of solidarity with AI safety principles in order to secure a lucrative defence contract.

In response, OpenAI CEO Sam Altman defended the agreement, stating that the company's deal includes stricter protections than previous classified AI deployments. In a blog post, OpenAI said its contract explicitly bars the use of its models for mass domestic surveillance, fully autonomous weapons, or high-risk automated decision-making systems such as social credit scoring. The company also stated that it retains control over its safety mechanisms, deploys its models through cloud infrastructure, and ensures that cleared OpenAI personnel remain involved in classified implementations. As per the statement, these safeguards operate alongside US legal protections.

Even after all the damage control, users have remained vocal about the incident on social media platforms, sharing the Cancel GPT tag across the apps.
OpenAI CEO Sam Altman signed a Department of Defense contract just hours after pledging to uphold the same AI safety protections as Anthropic. The deal sparked a 295% surge in ChatGPT uninstalls and prompted Anthropic CEO Dario Amodei to call OpenAI's messaging 'straight up lies.' The conflict highlights growing tensions over AI use in surveillance and autonomous weaponry as safety concerns collide with military ambitions.
OpenAI signed a Pentagon contract with the Department of Defense just hours after CEO Sam Altman publicly supported Anthropic's stance on AI safety, igniting a fierce dispute between two of the industry's leading companies. In an internal memo obtained by The Information, Anthropic CEO Dario Amodei called OpenAI's messaging around the deal "straight up lies" and accused the company of engaging in "safety theater" [1]. The conflict emerged after Anthropic refused to accept the Department of Defense's demand for "any lawful use" of its AI technology, particularly regarding AI use in surveillance and autonomous weaponry [1].

Amodei stated that "the main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses" [1]. Anthropic had maintained a $200 million contract with the military but insisted the Department of Defense affirm it would not use Claude AI to enable domestic mass surveillance or fully autonomous weapons systems [1].

Consumer reaction to the OpenAI military deal was swift and severe. ChatGPT uninstalls jumped 295% day-over-day on Saturday, February 28, according to market intelligence provider Sensor Tower, compared to the app's typical uninstall rate of 9% over the previous thirty days [3]. Meanwhile, U.S. downloads for ChatGPT dropped 13% day-over-day on Saturday and continued falling, down 5% on Sunday [3].

Claude AI benefited dramatically from the controversy, with downloads jumping 37% on Friday, February 27, and 51% on Saturday [3]. The app rocketed to the No. 1 position on the U.S. App Store, climbing over 20 ranks compared to a week earlier [3]. Third-party data provider Appfigures noted that Claude's total daily U.S. downloads on Saturday surpassed ChatGPT's for the first time, with some estimates placing the increase as high as 88% day-over-day [3]. One-star reviews for ChatGPT surged 775% on Saturday, then grew another 100% on Sunday, while five-star reviews declined by 50% [3].

The dispute centers on fundamental questions about lawful use of AI in military applications. While Sam Altman claimed in a morning memo on February 27 that OpenAI opposed AI for "mass surveillance or autonomous lethal weapons," the company signed the Department of Defense contract that same evening [5]. OpenAI stated in a blog post that its contract allows use of its AI systems for "all lawful purposes" and that the Pentagon "considers mass domestic surveillance illegal" [1]. However, critics point out that laws can change, and what is illegal now might be permitted in the future [1].

Amodei wrote to staff that he believes "this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with the DoW as sketchy or suspicious, and see us as the heroes" [1]. After Anthropic refused the Pentagon's terms, Secretary of Defense Pete Hegseth designated the company a supply-chain risk, preventing government agencies from doing business with Anthropic [2].

The conflict represents a broader retreat from AI safety commitments across the industry. Anthropic announced on February 24 that it was making changes to its Responsible Scaling Policy, a founding principle that tied model releases to safety procedures [2]. The company acknowledged that "the policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level" [2].

The situation is complicated by OpenAI's relationship with Microsoft Azure, which had been providing Pentagon officials access to OpenAI models through Azure OpenAI services since 2023, even when OpenAI's usage policies explicitly banned military use [4]. Some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, and saw Pentagon officials walking through the company's San Francisco offices that year [4]. By January 2024, OpenAI updated its policies to remove the blanket ban on military use, with several employees learning about the change through media reports [4].

The fallout may have significant business implications for both companies. Menlo Ventures reported that Anthropic accounted for 40% of enterprise LLM spend in 2025, while OpenAI's enterprise share dropped to 27% from 50% [5]. Following the controversy, Anthropic's revenue has reportedly shot up to about a $20 billion run rate [5].

The lack of international agreements on AI in military applications means every advanced military must adopt AI capabilities to remain competitive with adversaries, creating what experts describe as an unavoidable AI arms race [2]. The bigger question, as observers note, is how autonomous weaponry and killer robot drones that identify and eliminate human targets entered serious consideration without international debate [2]. Altman later told OpenAI employees that the company gets no say in what the Department of Defense does with its AI, stating employees don't get to weigh in on specific military operations [5].