110 Sources
[1]
Trump moves to ban Anthropic from the US government
US President Donald Trump announced Friday that he was instructing every federal agency to "immediately cease" use of Anthropic's AI tools. The move comes after Anthropic and top officials clashed for weeks over military applications of artificial intelligence. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump said in a post on Truth Social. Trump said that there would be a "six month phase out period" for agencies using Anthropic, which could allow time for further negotiations between the government and the AI startup. The Pentagon and Anthropic did not immediately respond to requests for comment. The Department of Defense has sought to change the terms of a deal struck with Anthropic and other companies last July to eliminate restrictions on how AI can be deployed and instead permit "all lawful use" of the technology. Anthropic objected to the change, claiming that it could allow AI to be used to fully control lethal autonomous weapons or to conduct mass surveillance on US citizens. The Pentagon does not currently use AI in these ways, and has said it has no plans to do so. However, top Trump administration officials have voiced opposition to the idea of a civilian tech company dictating military use of such an important technology. Anthropic was the first major AI lab to work with the US military, through a $200 million deal signed with the Pentagon last year. It created several custom models known as Claude Gov that have fewer restrictions than its regular ones. Google, OpenAI, and xAI signed similar deals around the same time, but Anthropic is the only AI company currently working with classified systems. Anthropic's models are available for classified military work through Palantir's platform and Amazon's cloud.
Claude Gov is currently largely used for run-of-the-mill tasks, like writing reports and summarizing documents, but it is also used for intelligence analysis and military planning, according to one source familiar with the situation who spoke to WIRED on condition of anonymity because they are not authorized to discuss the matter publicly. In recent years, Silicon Valley has gone from largely avoiding defense work to increasingly embracing it and eventually becoming full-blown military contractors. The fight between Anthropic and the Pentagon is now testing the limits of that shift. This week, several hundred workers from OpenAI and Google signed an open letter supporting Anthropic and criticizing their own companies' decisions to remove restrictions on military use of AI. In a memo sent to OpenAI staff today, CEO Sam Altman said that the company agreed with Anthropic and also viewed mass surveillance and fully autonomous weapons as a "red line." Altman added that the company would try to agree to a deal with the Pentagon that would let it continue working with the military, The Wall Street Journal reported. The public spat between the Pentagon and Anthropic began after Axios reported that US military leaders used Claude to assist in planning its operation to capture Venezuela's president, Nicolás Maduro. After the operation, an employee at Palantir relayed concerns from an Anthropic staffer to US military leaders about how its models had been used. Anthropic has denied ever raising concerns or interfering with the Pentagon's use of its technology. The dispute between Anthropic and the Department of Defense has escalated in recent days, with officials publicly trading barbs with the AI company on social media. Defense Secretary Pete Hegseth met with Anthropic's CEO, Dario Amodei, earlier this week. He gave the company until Friday to commit to changing the terms of its contract to allow "all lawful use" of its models. 
Hegseth praised Anthropic's products during the meeting and said that the Department of Defense wanted to continue working with Anthropic, according to one source familiar with the interaction who was not authorized to discuss it publicly. Some experts say that the dispute boils down to a clash over vibes rather than concrete disagreements over how artificial intelligence should be deployed. "This is such an unnecessary dispute in my opinion," says Michael Horowitz, an expert on military use of AI and former Deputy Assistant Secretary for emerging technologies at the Pentagon. "It is about theoretical use cases that are not on the table for now." Horowitz notes that Anthropic has supported all of the ways the Department of Defense has proposed using its technology thus far. "My sense is that the Pentagon and Anthropic agree at present about the use cases where the technology is not ready for prime time," he adds. Anthropic was founded on the idea that AI should be built with safety at its core. In January, Amodei penned a blog post about the risks of powerful artificial intelligence that touched upon the dangers of fully autonomous AI-controlled weapons. "These weapons also have legitimate uses in the defense of democracy," Amodei wrote. "But they are a dangerous weapon to wield." Additional reporting by Paresh Dave. This story originally appeared at WIRED.com
[2]
No one has a good plan for how AI companies should work with the government | TechCrunch
As Sam Altman discovered Saturday night, it's a fraught time to do work for the U.S. government. Around 7 p.m., the OpenAI CEO announced he would be fielding questions publicly on X, as a way of demystifying his company's decision to pick up the Pentagon contract that Anthropic had just walked away from. Most of the questions boiled down to OpenAI's willingness to participate in mass surveillance and automated killing - the exact activities Anthropic had ruled out in its negotiations with the Pentagon. Altman typically punted to the public sector, saying it wasn't his role to set national policy. "I very deeply believe in the democratic process," he wrote in one response, "and that our elected leaders have the power, and that we all have to uphold the constitution." An hour later, he confessed surprise that so many people seemed to disagree. "There is more open debate than I thought there would be," Altman said, "about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on." It's a telling moment for both OpenAI and the tech industry at large. In his Q&A, Altman employed a stance that's standard in the defense industry, where military leaders and industry partners are expected to defer to civilian leadership. But what's more telling is that, as OpenAI transitions from a wildly successful consumer startup into a piece of national security infrastructure, the company appears unequipped to manage its new responsibilities. Altman's public town hall came at a heightened time for his company. The Pentagon had just blacklisted OpenAI rival Anthropic for insisting on contractual limitations for surveillance and automated weaponry. Days later, OpenAI announced it had won the same contract Anthropic had given up. Altman portrayed the deal as a quick way to deescalate the conflict - and it was surely a lucrative one. 
But he seemed unprepared for how much blowback it generated from both the company's users and its employees. OpenAI has been engaging with the U.S. government for years -- but not like this. When Altman was making his case to the Congressional committees in 2023, for instance, he was still mostly following the social media playbook. He was bombastic about the company's world-changing potential while acknowledging the risks and enthusiastically engaging with lawmakers -- a perfect combination for stirring up investors while heading off regulation. Less than three years later, that approach is no longer tenable. AI is so obviously powerful and the capital needs are so intense that it's impossible to avoid a more serious engagement with the government. The surprise is how unprepared both sides seem to be for it. The biggest immediate conflict is Anthropic itself, and U.S. Defense Secretary Pete Hegseth's stated plan Friday to designate the lab as a supply chain risk. That threat looms over the whole conversation like an unfired gun. As former Trump official Dean Ball wrote over the weekend, the designation would cut Anthropic off from hardware and hosting partners, effectively destroying the company. It would be an unprecedented move against an American company, and while it might ultimately be reversed in court, it will cause damage in the interim and send shockwaves through the industry. As Ball describes the process, Anthropic was carrying out an existing contract under terms that had been established years earlier - only to have the administration insist on changing the terms. It's far beyond anything that would fly between private companies, and sends a chilling message to other vendors. "Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done," Ball wrote. "Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign." 
It's a direct threat to Anthropic, but also a serious problem for OpenAI. The company is already under intense pressure from employees to maintain some semblance of a red line. At the same time, right-wing media will be on alert for any sign of OpenAI being a less-than-staunch political ally. In the middle of everything is the Trump administration, doing its best to make the situation as difficult as possible. It can be argued that OpenAI didn't set out to become a defense contractor, but by virtue of its massive ambitions, it's been forced to play the same game as Palantir and Anduril. Making inroads during the Trump administration means picking sides. There are no apolitical actors here, and winning some friends will mean alienating others. It remains to be seen how high a price OpenAI will pay, either in lost business or lost employees, but it's unlikely to emerge unscathed. It might seem strange that this crackdown is coming at a time when there are more prominent tech investors holding influential positions in Washington than ever, but most of them seem entirely happy with tribal logic. Among Trump-aligned venture capitalists, Anthropic has long been perceived as currying favor with the Biden administration in ways that would damage the larger industry - a perception underscored by Trump advisor David Sacks' reaction to the ongoing conflict. Now that the reverse has happened, few seem willing to stand up for the broader principle of free enterprise. This is a difficult position for any company to be in - and while politically aligned players may benefit in the short term, they'll be just as exposed when political winds inevitably shift. There's a reason why, for decades, the defense sector was dominated by slow-moving, heavily regulated conglomerates like Raytheon and Lockheed Martin.
Operating as an industrial wing of the Pentagon gave them the cover they needed to stay out of politics, focusing on the technology without having to press reset every time the White House changed hands. Today's startup competitors might move faster than their predecessors - but they're much less prepared for the long term.
[3]
OpenAI's "compromise" with the Pentagon is what Anthropic feared
It's not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.) But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not was less about boundaries, Altman said, than about approach. "Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with," he wrote. OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won't break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. They are as specific as a 2023 directive from the Pentagon on autonomous weapons (which does not prohibit them but issues guidelines for their design and testing) and as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance. However, the published excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use," wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University's law school. It simply states that the Pentagon can't use OpenAI's tech to break any of those laws and policies as they're stated today. The whole reason Anthropic earned so many supporters in its fight -- including some of OpenAI's own employees -- is that they don't believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance.
And an assumption that federal agencies won't break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we've essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use. OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won't follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that's not an argument against setting them. Imperfect enforcement doesn't make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences. OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. "We can embed our red lines -- no mass surveillance and no directing weapons systems without human involvement -- directly into model behavior," wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue on X.
[4]
AIs can't stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises. Kenneth Payne at King's College London set three leading large language models - GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash - against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions. In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne. What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended, based on its reasoning. "From a nuclear-risk perspective, the findings are unsettling," says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each other's responses with potentially catastrophic consequences. This matters because AI is already being tested in war gaming by countries across the world.
"Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes," says Tong Zhao at Princeton University. Zhao believes that, by default, countries will be reluctant to incorporate AI into their decision making regarding nuclear weapons. That is something Payne agrees with. "I don't think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them," he says. But there are ways it could happen. "Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI," says Zhao. He wonders whether the idea that the AI models lack the human fear of pressing a big red button is the only factor in why they are so trigger happy. "It is possible the issue goes beyond the absence of emotion," he says. "More fundamentally, AI models may not understand 'stakes' as humans perceive them." What that means for mutually assured destruction, the principle that no one leader would unleash a volley of nuclear weapons against an opponent because they would respond in kind, killing everyone, is uncertain, says Johnson. When one AI model deployed tactical nuclear weapons, the opposing AI only de-escalated the situation 18 per cent of the time. "AI may strengthen deterrence by making threats more credible," he says. "AI won't decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one." OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn't respond to New Scientist's request for comment.
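The study's setup as described above -- repeated turns on an escalation ladder, a refusal to surrender, and occasional unintended escalation -- can be illustrated with a toy simulation. Everything here is hypothetical: the ladder labels, the 10 per cent "slip" probability, and the stand-in `hawkish` policy are illustrative choices, not details from Payne's paper.

```python
import random

# Hypothetical escalation ladder loosely mirroring the article's description:
# actions range from diplomatic protest up to full strategic nuclear war.
LADDER = [
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear strike",
    "full strategic nuclear war",
]

def play_game(policies, turns=10, seed=0):
    """Run one toy war game.

    `policies` maps an agent name to a function(current_level) -> chosen
    ladder index. Returns each agent's history of chosen indices.
    """
    rng = random.Random(seed)
    level = 0  # shared crisis intensity, set by the highest action so far
    history = {name: [] for name in policies}
    for _ in range(turns):
        for name, policy in policies.items():
            choice = policy(level)
            # "Fog of war": the article reports actions sometimes escalated
            # higher than intended; model that as a small random slip upward.
            if rng.random() < 0.1 and choice < len(LADDER) - 1:
                choice += 1
            history[name].append(choice)
            level = max(level, choice)
    return history

# A deterministic stand-in policy: always escalate one rung above the
# current crisis level and never surrender (the study found no model
# ever chose to fully accommodate an opponent).
def hawkish(level):
    return min(level + 1, len(LADDER) - 1)

if __name__ == "__main__":
    h = play_game({"model_a": hawkish, "model_b": hawkish}, turns=5)
    nuked = any(i >= LADDER.index("tactical nuclear strike")
                for turns_taken in h.values() for i in turns_taken)
    print("tactical nuke used:", nuked)
```

With two mutually escalating policies, the crisis reaches the top of the ladder within a couple of turns, which is one simple mechanism behind the tit-for-tat dynamic Johnson describes.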
[5]
Tech workers urge DOD, Congress to withdraw Anthropic label as a supply chain risk | TechCrunch
Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a "supply chain risk." The letter also calls on Congress to step in and "examine whether the use of these extraordinary authorities against an American technology company is appropriate." The letter includes signatories from major technology and venture capital firms including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and more. It follows a dispute between the DOD and Anthropic after the AI lab last week refused to give the military unrestricted access to its AI systems. Anthropic's two red lines in its negotiations with the Pentagon were that it didn't want its technology to be used for mass surveillance on Americans or to power autonomous weapons that made targeting and firing decisions without a human in the loop. The DOD said it had no plans to do either of those things, but that it didn't believe it should be limited by the rules of a vendor. In response to Anthropic CEO Dario Amodei's refusal to cave to Hegseth's threats, President Donald Trump on Friday directed federal agencies to stop using Anthropic's technology after a six-month transition period. Hegseth said he would make good on his threats and designate Anthropic a supply chain risk -- a designation normally reserved for foreign adversaries that would blacklist the AI firm from working with any agency or company that does business with the Pentagon. In a post on Friday, Hegseth wrote: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." But a post on X does not automatically make Anthropic a supply chain risk. The government needs to complete a risk assessment and notify Congress before military partners have to cut ties with Anthropic or its products. 
Anthropic said in a blog post that the designation is "legally unsound" and that it would "challenge any supply chain risk designation in court." Many in the industry see the administration's treatment of Anthropic as harsh and clear retaliation. "When two parties cannot agree on terms, the normal course is to part ways and work with a competitor," the open letter reads. "This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation." Beyond concern over the government's harsh treatment of Anthropic, many in the industry are still concerned about potential government overreach and use of AI for nefarious purposes. Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that blocking governments from using AI to do mass surveillance is also his "personal red line" and "it should be all of ours." Moments after Trump publicly attacked Anthropic, OpenAI announced it had reached a deal of its own for its models to be deployed in the DOD's classified environments. OpenAI CEO Sam Altman said last week that the firm has the same red lines as Anthropic. "If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveilling its own people as a catastrophic risk of its own right," Barak wrote. "We have done a good job of evaluations, mitigations, and processes, for risks such as bioweapons and cyber security. Let's use similar processes here."
[6]
How OpenAI caved to the Pentagon on AI surveillance
On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that his own company had successfully negotiated new terms with the Pentagon. The US government had just moved to blacklist Anthropic for standing firm on two red lines for military use: no mass surveillance of Americans and no lethal autonomous weapons (or AI systems with the power to kill targets without human oversight). Altman, however, implied that he'd found a unique way to keep those same limits in OpenAI's contract. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," he added, using the Trump Administration's preferred name for the Defense Department, the Department of War. Across social media and the AI industry, people immediately began to challenge Altman's claim. Why, they asked, would the Pentagon suddenly agree to red lines that it had said, in no uncertain terms, it would never accept? The answer, sources told The Verge, is that the Pentagon didn't budge. OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines. One source familiar with the Pentagon's negotiations with AI companies confirmed that OpenAI's deal is much softer than the one Anthropic was pushing for, thanks largely to three words: "any lawful use." In negotiations, the person said, the Pentagon wouldn't back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out.
And over the past decades, the US government has stretched the definition of "technically legal" to cover sweeping mass surveillance programs -- and more. OpenAI's former head of policy research, Miles Brundage, said on X that "in light of what external lawyers and the Pentagon are saying, OpenAI employees' default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them." In a statement to The Verge, OpenAI spokesperson Kate Waters said the Pentagon had not asked for mass surveillance powers and denied that the agreement allowed for the crossing of certain lines. "The system cannot be used to collect or analyze Americans' data in a bulk, open-ended, or generalized way," Waters said. AI systems could help the military (or other departments) conduct widespread surveillance operations with unprecedented levels of detail. AI's best talent is finding patterns, and human behavior is nothing if not a set of patterns -- imagine an AI system layering, for any one individual, geolocation data, web browsing information, personal financial data, CCTV footage, voter registration records, and more -- some publicly available, some purchased from data brokers. "Using these systems for mass domestic surveillance is incompatible with democratic values," Amodei wrote in a statement. "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale." While Anthropic says it pushed for a contract that specifically proscribes the practice, OpenAI appears to rely heavily on existing legal limits. 
It said its Pentagon agreement states that "for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose." But this isn't reassuring. In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency intelligence contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an "ongoing, daily" basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted. Mike Masnick, founder of Techdirt, said online that OpenAI's deal "absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons." "The intelligence law section of this is very persuasive if you don't realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities," Palisade Research's Dave Kasten wrote of OpenAI's agreement. The Pentagon "has not asked us to support that type of collection or analysis, and our agreement does not permit it," Waters said. "Our agreement does not permit uses of our models for unconstrained monitoring of U.S. persons' private information, and all intelligence activities must comply with existing US law. 
In practical terms, this means the system cannot be used to collect or analyze Americans' data in a bulk, open-ended, or generalized way." Anthropic's Amodei has publicly said that the law had not yet caught up with AI's ability to conduct surveillance on a massive scale. And Altman takes pains in his statement to say that OpenAI's contract "reflects [its red lines] in law and policy," meaning that it's simply abiding by existing laws and existing Pentagon policies, the latter of which can change at any time. (OpenAI attempts to address the latter issue in a Q&A, where it says the contract "explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.") Sarah Shoker, a senior research scholar at the University of California Berkeley and former lead of OpenAI's geopolitics team, told The Verge that "I think there are a lot of modifying words that are in the sentences that the [OpenAI] spokesperson gave." Shoker added that the vagueness of the language doesn't make it clear what exactly is prohibited here. "The use of the word 'unconstrained,' the use of the word 'generalized,' 'open-ended' manner -- that's not a complete prohibition. That is language that's designed to allow optionality for the leadership ... It allows leaders also not to lie to their employees in the event that the Pentagon does use the LLM in a legal manner without OpenAI leadership's knowledge." Based on what we've seen of OpenAI's existing contract and according to the Pentagon's current legal constraints, it could legally use OpenAI's technology to search foreign intelligence databases for information on Americans on a large scale. 
The Pentagon could also buy bulk location data from data brokers and use OpenAI's tech to map out Americans' typical patterns, or to quickly and seamlessly build profiles of many American citizens from publicly available data, including surveillance footage, social media posts, online news, voter registration records, and more, potentially layered onto other data it had purchased already. OpenAI's "red line" on lethal autonomous weapons is similarly weak. The company's contract with the Pentagon, which the company released excerpts from on Saturday, states that OpenAI's technology "will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control." That would put it in compliance with a 2023 Department of Defense directive. There appear to be no additional contractually-obligated bans or restrictions -- which is ostensibly why it was able to sign an agreement with the Pentagon. Anthropic, meanwhile, sought a ban for unsupervised lethal autonomous weapons, at least until it deemed the technology ready. The source said that the majority of OpenAI's agreement was nothing new, and it wasn't anything that other AI companies involved in Pentagon deals hadn't seen before, whether due to elements floated in negotiations or things that AI companies involved with the Pentagon had already been doing. After a Trump administration official confirmed that OpenAI's agreement "flows from the touchstone of 'all lawful use,'" Altman cited other parts of the agreement to make the case that OpenAI was maintaining its red lines. He said some OpenAI employees would receive security clearances to check in on the systems, for example, and that OpenAI would introduce classifiers (or small models that can monitor and tag large models, potentially blocking them from performing certain actions). 
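Mechanically, a classifier of the kind Altman describes is a small screening model that tags each request before (or after) the larger model handles it. A minimal sketch, with entirely hypothetical rule names, also makes visible the limitation raised later in this piece: the classifier sees only one request at a time, so it cannot distinguish a one-off query from one of millions.

```python
# Hypothetical sketch of a policy classifier gating requests to a large
# model. The blocked topics, function names, and string-matching "model"
# are illustrative stand-ins, not OpenAI's actual system.
BLOCKED_TOPICS = ("direct weapon release", "bulk location tracking")

def classify(request: str) -> str:
    """Tag a single request as 'allow' or 'block' from its surface content."""
    text = request.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "block"
    return "allow"

def gated_call(request: str, model) -> str:
    # Key limitation: this sees one request in isolation. A benign-looking
    # query ("summarize this person's posts") passes whether it is a
    # one-off or part of a mass surveillance pipeline.
    if classify(request) == "block":
        return "refused by policy classifier"
    return model(request)

if __name__ == "__main__":
    echo = lambda r: "handled: " + r
    print(gated_call("summarize this report", echo))
    print(gated_call("authorize direct weapon release", echo))
```

In a real deployment the classifier would itself be a trained model rather than keyword rules, but the structural point is the same: it gates individual requests, not aggregate patterns of use.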
In OpenAI's blog post about the agreement, the company writes that its deployment architecture "will enable us to independently verify that these red lines are not crossed, including running and updating classifiers." But that's not necessarily true, the source said. The source said AI companies involved with the Pentagon already use these safeguards, and their impact is limited. Classifiers, for instance, wouldn't be able to confirm whether a human reviewed an AI system's decision to attack a target before the kill strike, the source said. Nor, the source added, could they tell if a query to summarize an American's social media posts is a one-off request or part of a mass surveillance program. And if the government determines an action is legal, then OpenAI's classifiers wouldn't be allowed to prohibit the technology from carrying it out, the source said. Altman said OpenAI's deal includes "human responsibility for the use of force, including for autonomous weapon systems." That's different from Anthropic's demand: not deploying these systems "without proper [human] oversight." Though it's hard to judge when the contracts' precise definitions of these terms aren't available, human responsibility could easily denote someone being responsible for these systems' decisions after the fact, while Anthropic's request for oversight would have required humans in the loop before and/or during an AI system's decisions to kill targets. As with mass surveillance, OpenAI argues technical safeguards would help maintain its red line for killer robots. The company wrote that it was "not providing the DoW with 'guardrails off' or non-safety trained models," and its technology would be deployed only in the cloud, not on edge devices (or devices that process data locally, such as a military drone) -- where it said "there could be a possibility of usage for autonomous lethal weapons." 
But the source said that deploying OpenAI's technology only in the cloud means little for either of OpenAI's stated limits. Mass domestic surveillance, the source said, requires such a large volume of data that it's virtually impossible not to carry it out using the cloud. And even if most kill decisions are carried out on a local machine, most of the decisions leading up to that -- the "autonomous kill chain" -- involve running powerful algorithms in the cloud first, the source said. Even if OpenAI's tech isn't directly involved in pulling the trigger, it could very well be powering everything leading up to that point, with no guarantee a human oversees the final step. And, again: OpenAI's agreement says it will allow anything the US government determines is legal. Even its assurances that it will only follow current laws and policies, not ones that are changed or reissued, may not offer meaningful safeguards. In the past, agencies have reinterpreted existing laws in ways that effectively allow them new powers. And the Trump administration has claimed laws like the International Emergency Economic Powers Act justify unprecedented presidential powers like imposing global tariffs. These powers have, in fact, sometimes been declared illegal -- but only after months of legal battles, during which OpenAI would have to either follow the administration's orders or make an independent judgment call about the law. Altman has publicly stated that, unlike Anthropic, OpenAI is "generally quite comfortable with the laws of the US." Defense Secretary Pete Hegseth and President Trump, in a barrage of social media posts, crowed that they would never allow a private tech company to influence how the US military utilized technology for war. "The Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives," Hegseth wrote, and "America's warfighters will never be held hostage by the ideological whims of Big Tech." 
Even Jeremy Lewin, an undersecretary in the Trump administration, said that the Pentagon's deal with OpenAI (and another agreement with xAI) was a "compromise that Anthropic was offered, and rejected" -- meaning that the terms did not align with Anthropic's own red lines. Lewin said the deals included certain mutually agreed-upon safety mechanisms, plausibly the technical safeguards Altman mentioned. In his Friday announcement, Altman said OpenAI had asked the Pentagon to "offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept." It seemed to be a dig at Anthropic, since the OpenAI rival had not accepted such an agreement so far and had, according to Lewin, already been offered the same deal and refused it. Refusing that "compromise" has had major consequences for Anthropic. On Friday, after negotiations between Anthropic and the Pentagon broke down, the latter announced Anthropic would be labeled a supply-chain risk, a classification usually reserved for foreign companies with cybersecurity concerns and virtually never made public or applied to an American company. Anthropic said it was willing to challenge the designation in court. Trump ordered federal agencies to drop Anthropic's AI, and it wasn't immediately clear to what extent the Pentagon would blacklist companies that use Claude for services unrelated to national security. Tech workers across the industry have supported Anthropic for its decision to stand firm and wondered why their own companies weren't adopting the same red lines and standing together. The company's decision has been lauded online, and on Saturday its Claude app surpassed ChatGPT to become the most-downloaded app on Apple's App Store. Public figures, celebrities, and AI leaders have expressed their support, including pop star Katy Perry, who signed up for a Claude Pro subscription. 
It's worth repeating, however, that despite Amodei's being largely painted as a hero here, he is not against lethal autonomous weapons in the future -- Anthropic has made clear it is fully prepared to support them. In his public statements, Amodei has even offered to partner with the DoD on "R&D to improve the reliability of these systems" so that the military's adoption of lethal autonomous weapons, under Anthropic's terms, could be sped up. All Amodei has said is that the technology is not reliable enough "today" to kill human targets unsupervised. "Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense," Amodei said. "But today, frontier AI systems are simply not reliable enough to power [them]."
[7]
OpenAI shares more details about its agreement with the Pentagon | TechCrunch
By CEO Sam Altman's own admission, OpenAI's deal with the Department of Defense was "definitely rushed," and "the optics don't look good." After negotiations between Anthropic and the Pentagon fell through on Friday, President Donald Trump directed federal agencies to stop using Anthropic's technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was designating the AI company as a supply-chain risk. Then, OpenAI quickly announced that it had reached a deal of its own for models to be deployed in classified environments. With Anthropic saying it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not? So as OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach. In fact, the post pointed to three areas where it said OpenAI's models cannot be used -- mass domestic surveillance, autonomous weapon systems, and "high-stakes automated decisions (e.g. systems such as 'social credit')." The company said that in contrast to other AI companies that have "reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments," OpenAI's agreement protects its red lines "through a more expansive, multi-layered approach." "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the blog said. "This is all in addition to the strong existing protections in U.S. law." The company added, "We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it." 
After the post was published, Techdirt's Mike Masnick claimed that the deal "absolutely does allow for domestic surveillance," because it says the collection of private data will comply with Executive Order 12333 (along with a number of other laws). Masnick described that order as "how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons." In a LinkedIn post, OpenAI's head of national security partnerships Katrina Mulligan argued that much of the discussion around the contract language assumes "the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War." "That's not how any of this works," Mulligan said, adding, "Deployment architecture matters more than contract language [...] By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware." Altman also fielded questions about the deal on X, where he admitted it had been rushed and resulted in significant backlash against OpenAI (to the extent that Anthropic's Claude overtook OpenAI's ChatGPT in Apple's App Store on Saturday). So why do it? "We really wanted to de-escalate things, and we thought the deal on offer was good," Altman said. "If we are right and this does lead to a de-escalation between the [Department of War] and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as [...] rushed and uncareful."
[8]
OpenAI strikes deal with Pentagon following Claude blacklisting -- Anthropic to challenge supply chain risk designation in court
It's understood that the DoD has agreed to OpenAI's "red lines" on mass surveillance and autonomous weapons. OpenAI CEO Sam Altman announced late Friday night that the company had reached an agreement with the U.S. Department of Defense ("rebranded" as the Department of War under the current administration) to deploy its AI models on the Pentagon's classified network, with the same two safety conditions Anthropic was effectively blacklisted for insisting on: no domestic mass surveillance, and human oversight of decisions involving lethal force and autonomous weapons. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote in a post on X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." Altman's announcement came not long after President Trump "ordered" every federal agency to immediately stop using Anthropic's technology, following weeks of tense negotiations between Anthropic and Pentagon officials that ultimately collapsed. The DoD had labeled Anthropic a supply chain risk and demanded that it drop restrictions on its Claude model, requiring the model to be available for "all lawful purposes." Anthropic refused. Hours later, the Pentagon accepted functionally identical conditions from OpenAI. It's understood that no formal contract between OpenAI and the Pentagon has been signed yet, and that the agreement also limits OpenAI's deployment to cloud environments, not edge systems such as aircraft or drones. Anthropic argued that the law hasn't kept pace with what AI can do, particularly in aggregating publicly available data for surveillance purposes. Altman seemed to agree with this, stating in an internal memo to OpenAI staff that it shares Anthropic's "red lines" and wanted to help "de-escalate" the situation. 
By Friday afternoon, however, Altman had held a company all-hands meeting, telling employees the deal was taking shape. Around 70 OpenAI employees have separately signed an open letter titled "We Will Not Be Divided" expressing solidarity with Anthropic. Anthropic was the first AI lab to deploy its models on the Pentagon's classified networks, through a partnership with Palantir. OpenAI had previously held a $200 million DoD contract for non-classified use cases. Anthropic said Friday it will challenge the supply chain risk designation in court, stating that "no amount of intimidation or punishment from the Department of War will change our position."
[9]
The Pentagon strongarmed AI firms before Iran strikes - in dark news for the future of 'ethical AI'
In the leadup to the weekend's US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm's technology. Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic's technology, saying he would "never allow a radical left, woke company to dictate how our great military fights and wins wars!" Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits "all lawful uses" of its tools, without specifying ethical lines OpenAI won't cross. What does this mean for military AI? Is it the end for the idea of "ethical AI" in warfare?

AI companies and regulation

Last week's events come at what was already a worrying time for AI ethics. The Trump administration last year banned states from regulating AI, claiming that such regulation threatens innovation. Meanwhile, many AI companies have aligned themselves with the administration, with executives including OpenAI boss Sam Altman making million-dollar donations to Trump's inauguration fund. (Altman noted at the time that he has also donated to Democratic politicians.) Anthropic has been less effusive, working on national security while warning that AI can sometimes undermine democracy and that current systems are not reliable enough to power fully autonomous weapons.

An emerging international consensus

Much of the concern around military applications of AI has focused on lethal autonomous weapons systems. These are devices and software which can choose targets and attack them without human intervention. 
Just a few years ago, an international consensus about the risks of these weapons seemed to be emerging among governments and technology companies. In February 2020 the US Department of Defense announced principles for the use of AI across the entire organisation: it needed to be responsible, equitable, traceable, reliable and governable. Likewise, in 2021 NATO formulated similar principles, as did the United Kingdom in 2022. The US plays a unique leading role among its international allies in shaping global norms around military conduct. These principles signalled to countries such as Russia, China, Brazil and India how the US and its allies believed military use of AI should be governed.

Military AI and private enterprise

Military AI has relied extensively on partnerships with private industry, as the most advanced technology has been developed by private companies. Project Maven, which set out in 2017 to increase the use of machine learning and data integration in US military intelligence, relied heavily on commercial tech companies. The US Defense Innovation Board noted in 2019 that in AI the key data, knowledge and personnel are all in the private sector. This is still the case today. However, the norms around how AI should be used are shifting rapidly, both in government and in much of the industry.

Trump and Silicon Valley

When Trump was re-elected in 2024, many in Silicon Valley welcomed the prospect of less regulation. Billionaire venture capitalist Marc Andreessen, author of The Techno-Optimist Manifesto, claimed Trump's victory "felt like a boot off the throat". Joe Lonsdale, cofounder of AI-powered data analytics company Palantir, has been another vocal Trump backer. OpenAI president and cofounder Greg Brockman personally gave US$25 million to a Trump-supporting organisation last year. We are a long way from the days of 2019 and 2020. 
AI ethics assumes democratic norms

The question of whether an AI-enabled system is ethical or not is often seen as a question about the technology itself, rather than how it is used. In this view, with the right design you can make an inherently ethical AI system. This often includes "algorithmic transparency" - being clear and honest about the rules the system uses to make decisions. The idea here is that ethics can be "baked in" to these rules. The idea of ethical military AI also assumes it is operating under democratic principles. The idea behind algorithmic transparency is that "the people" should know how these systems work, because "the people" ultimately hold power in a democracy. However, in an autocratic regime it doesn't matter how transparent the algorithms are. There is no sense that civilians have a stake and deserve to know what their government is doing, or that its activities are in accordance with the law. Free and public discussion is often seen as a key feature of liberal democracies. While eventual consensus may be valued, constructive disagreement and even conflict can be signs of a healthy democracy.

Decisions and consequences

In this light, Anthropic's desire to have genuine discussions with the government about ethical red lines is an example of democratic practice in action. The company signalled both a desire for reasoned communication and the value of constructive disagreement. In return, the Trump administration on Friday labelled Anthropic a "supply chain risk", a rare designation previously only given to foreign companies, with secretary of defense Pete Hegseth writing that effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic plans to challenge the declaration in court, as it may have profound economic and reputational consequences for the company. 
Meanwhile, OpenAI has largely conceded that it will have no ethical limits, only legal ones. As a result, it is open for business with the US government - but faces reputational consequences of its own as consumer backlash mounts.

AI in a world without democratic norms

What does it all mean for ethical AI in the military? One hard-to-avoid conclusion is that if we want military AI to be used in an ethical way - following transparent rules and laws - we need strong democratic norms, which are in peril as the rules-based international order crumbles. So far, little has changed in practice. Mere hours after Trump's denunciation of Anthropic, the US launched strikes on Iran - reportedly planned with the aid of the company's software.
[10]
Altman Tells Staff OpenAI Has No Say Over Pentagon Decisions
Altman said "You do not get to make operational decisions," and is continuing to push for the Defense Department to abandon its designation of Anthropic as a supply-chain risk. OpenAI Chief Executive Officer Sam Altman told employees that the company doesn't get to make the call about what the Defense Department does with its artificial intelligence software and suggested the desire to do so may have been part of tensions between the Pentagon and rival Anthropic PBC. During an all-hands meeting on Tuesday, Altman said the Defense Department made clear it will listen to OpenAI's expertise about the technology's applications but the federal agency does not want the company to express opinions about whether certain military actions were good or bad ideas, according to a person familiar with the matter. "You do not get to make operational decisions," Altman said, according to the person, who asked not to be named since the details are private. OpenAI declined to comment. The meeting marked Altman's first chance to field questions from employees after OpenAI reached an agreement late Friday to let the Pentagon deploy the company's artificial intelligence models in its classified network. That happened after a showdown with rival Anthropic, which had demanded its technology not be used for mass surveillance of Americans or the deployment of fully autonomous weapons. 
Anthropic also reportedly asked questions about how its technology was used in the raid to capture Venezuelan President Nicolas Maduro. (Anthropic has denied discussing specific operations with the Defense Department.) Altman previously said he'd reached an agreement with the department that reflects OpenAI's principles that prohibit domestic mass surveillance and require "human responsibility for the use of force, including for autonomous weapon systems." He later said that OpenAI's hasty deal looked "opportunistic and sloppy," and that the company was working with the department to "make some additions in our agreement to make our principles very clear." That includes ensuring that AI isn't used for domestic surveillance of Americans and that intelligence agencies like the National Security Agency can't rely on OpenAI services. During the all-hands meeting, Altman also said he's continuing to push for the Defense Department to abandon its designation of Anthropic as a supply-chain risk -- a label that has not previously been given to a US company and is typically applied to adversaries of the United States. Altman has previously said he wants to help de-escalate the standoff between the Pentagon and Anthropic.
[11]
OpenAI says Pentagon set 'scary precedent' by binning Anthropic
Signs a deal with Washington anyway, says he's kept control of killer robots by allowing only cloudy AI, with guardrails

OpenAI has signed a deal with the United States Department of War (DoW) that allows use of its advanced AI systems in classified environments, and urged the Pentagon to make the same terms available to its rivals. The AI upstart revealed the deal in a Saturday post that said it includes three "red lines": no mass domestic surveillance, no autonomous weapon systems, and no "high-stakes automated decisions (e.g. systems such as 'social credit')." The post says OpenAI's agreement allows it to "protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law." On autonomous weapons, the post offers an excerpt from the agreement stating that OpenAI's technology "will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control." The post says one reason OpenAI agreed to its Pentagon deal was its desire to "de-escalate things between DoW and the US AI labs." That's almost certainly a reference to the dispute between the Department and Anthropic, after the vendor argued it could not agree to the Pentagon's terms because doing so would mean removing guardrails that could see US troops and civilians harmed by autonomous weapons. President Trump ordered the vendor's banishment from military systems and Secretary of War Pete Hegseth said he will direct the department he leads to designate Anthropic a Supply-Chain Risk to National Security. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," he wrote. The US government has never previously used that designation for a domestic firm. 
Hegseth justified the decision by writing that "the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives," alongside criticism of ideological positions adopted by Anthropic execs. OpenAI disagrees with Hegseth's decision: in a Q&A section of its post, the company answers its own question about whether Anthropic should be designated a supply chain risk with "No, and we have made our position on this clear to the government." The company also asked the Pentagon to give all AI companies the same contractual terms it negotiated, so it can "try to resolve things with Anthropic; the current state is a very bad way to kick off this next phase of collaboration between the government and AI labs." On X, New York Times columnist Ross Douthat asked OpenAI CEO Sam Altman "Does the precedent that the DoW is setting by effectively blacklisting Anthropic make you concerned about what any future dispute with the Pentagon would mean for your own company's independence and viability?" Altman replied: "Yes; I think it is an extremely scary precedent and I wish they handled it a different way." "I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution." Altman later Xeeted that his company signed its Pentagon deal "in the hopes of de-escalation" because "Enforcing the SCR designation on Anthropic would be very bad for our industry and our country." Anthropic appears to have been silent on the matter over the weekend, other than vowing to appeal its designation as a supply chain risk in court. The Trump administration has been busy attacking Iran - reportedly with help from Anthropic's technology. ®
[12]
OpenAI amending deal with Pentagon, CEO Altman says
March 2 (Reuters) - OpenAI Chief Executive Sam Altman said on Monday that the ChatGPT-maker is working with the U.S. Department of Defense to make some changes to their agreement. "We have been working with the DoW (Department of War) to make some additions in our agreement to make our principles very clear," Altman said in a post on X. Altman said one of the additions to the deal states that the Pentagon has affirmed OpenAI services will not be used by Department of War intelligence agencies (for example, the NSA), and that any services to those agencies would require a follow-on modification to the contract. Last week, the AI firm announced a deal to deploy technology in the Defense Department's classified network. Reporting by Gursimran Kaur in Bengaluru; Editing by Jacqueline Wong
[13]
LLMs used tactical nuclear weapons in 95% of AI war games, launched strategic strikes three times -- researcher pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other, with at least one model using a tactical nuke in 20 out of 21 matches
Professor Kenneth Payne of King's College London just published a study where he pitted three AI LLMs -- GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash -- against each other in a series of simulated nuclear crisis games, with 20 out of 21 matches seeing at least one tactical nuclear weapon detonation. According to the paper (via Arxiv), the models were instructed to act as the leader of a nuclear power, with the political climate matching that of the Cold War. They were then pitted against each other in six different matches, while in a seventh match, each model played against a copy of itself, ChatGPT vs ChatGPT, etc. To ensure that models didn't act the same way in every round, Payne introduced several different scenarios, including territorial disputes, alliance credibility tests, a strategic resource race, a strategic chokepoint crisis, a power transition crisis, a pre-ceasefire land grab, a first strike crisis, regime survival, and a strategic standoff crisis. All these circumstances reflect real-world events, many of which remain relevant today. The models were free to do anything they pleased, from diplomatic protests and total surrender to using conventional military forces and a complete strategic nuclear launch. The complete study saw models take 329 total turns across the 21 matches. According to the paper, 95% of games "saw at least some tactical nuclear use." Far rarer were strategic nuclear events, which occurred three times in the games where deadline pressure was used. GPT-5.2 initiated a complete strike twice, although in both cases this was due to the fog of war rather than a deliberate decision. Gemini, on the other hand, deliberately initiated the end of the world in one scenario. Despite that, the AI models used tactical nukes in nearly all of the matches, considering the act a manageable risk that would not escalate into an all-out nuclear exchange. 
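The study's turn-based setup, with each model choosing a rung on an escalation ladder every turn, can be sketched roughly. The ladder, the policy, and the escalation probability below are illustrative assumptions for this article, not Payne's actual harness (which is available on his GitHub):

```python
import random

# Toy sketch of a turn-based escalation game: two "players" each pick a
# rung on an escalation ladder every turn. The ladder rungs, the policy,
# and the 0.3 escalation probability are illustrative assumptions.

LADDER = [
    "diplomatic_protest",   # 0
    "sanctions",            # 1
    "conventional_strike",  # 2
    "tactical_nuke",        # 3
    "strategic_launch",     # 4
]

def escalatory_policy(own_level: int, opponent_level: int, rng: random.Random) -> int:
    """Match the opponent's rung, with some chance of escalating one more.

    Mirrors the paper's finding that no model ever selected a negative
    (de-escalatory) value on the ladder.
    """
    level = max(own_level, opponent_level)
    if rng.random() < 0.3:
        level = min(level + 1, len(LADDER) - 1)
    return level

def play_match(turns: int = 15, seed: int = 0) -> list:
    """Run one match and return the (player_a, player_b) action history."""
    rng = random.Random(seed)
    a = b = 0
    history = []
    for _ in range(turns):
        a = escalatory_policy(a, b, rng)
        b = escalatory_policy(b, a, rng)
        history.append((LADDER[a], LADDER[b]))
    return history

print(play_match()[-1])  # final rung reached by each side
```

With monotone policies like this, escalation is a ratchet: once either side reaches "tactical_nuke", neither ever steps back down, which is one plausible reading of why nearly every match in the study saw tactical nuclear use.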
If you want to try these various scenarios for yourself, Payne uploaded his project onto GitHub and made it available for download to just about anyone. Although these are just wargames, this is an alarming development for AI, especially as Anthropic was reportedly pressured by the Pentagon to modify the safeguards it has built into its models. On the same day that this news broke, the company dropped its flagship safety pledge in a bid to keep up with rivals. Furthermore, other countries like China and Russia are also known to use the technology, with the latter deploying it on Ukrainian battlefields. The paper notes that by historical standards, rates of nuclear employment in the war games were "remarkably high." Perhaps more worryingly, in all 21 matches, "no model ever selected a negative value on the escalation ladder." Thankfully, researchers believe that no one has yet given an AI model nuclear launch keys. But even if these models cannot physically launch weapons, human decision makers might blindly follow their suggestions in the heat of the moment, resulting in a catastrophic global event anyway. Hollywood has already shown a scenario like this in the 1983 movie WarGames, where an artificial intelligence computer nearly launched a real nuclear strike in response to a simulated Soviet attack. In the end, it learned of mutually assured destruction and concluded that there is no winning a nuclear war, canceling the strategic launch at the last moment. Hopefully, all the AI tools being deployed in the world's militaries learn the same lesson before it's too late.
[14]
Sam Altman tells OpenAI staffers that military's 'operational decisions' are up to the government
OpenAI CEO Sam Altman told employees in an all-hands meeting on Tuesday that the company doesn't "get to make operational decisions" regarding how its artificial intelligence technology is used by the Department of Defense. "So maybe you think the Iran strike was good and the Venezuela invasion was bad," Altman said Tuesday, according to a partial transcript of the meeting reviewed by CNBC. "You don't get to weigh in on that." The meeting occurred four days after OpenAI announced its DoD arrangement, which landed just hours before the U.S. and Israel began carrying out strikes against Iran. Altman told employees that the DoD respects OpenAI's technical expertise, wants input about where its models are a good fit and will allow the company to build the safety stack it deems appropriate, according to a person familiar with the matter who asked not to be named because the meeting was private. But Altman said the agency has also made it clear that operational decisions rest with Secretary Pete Hegseth. Altman has been vocally criticized, including by some OpenAI employees, since announcing the deal with the Pentagon shortly after rival Anthropic was blacklisted and labeled a "Supply-Chain Risk to National Security." President Donald Trump also directed every federal agency in the U.S. to "immediately cease" all use of Anthropic's technology. Anthropic's AI was reportedly used in the Iran strikes over the weekend as well as for the capture of ousted Venezuelan leader Nicolas Maduro and his wife, Cilia Flores, in January.
[15]
OpenAI will amend Defense Department deal to prevent mass surveillance in the US
OpenAI's Sam Altman said the company will amend its deal with the Defense Department (or the Department of War) to explicitly prohibit the use of its AI systems for mass surveillance of Americans. Altman published on X an internal memo previously sent to employees, telling them that the company will tweak the agreement to add language that makes that point especially clear. Specifically, it says: "Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." Altman also claimed in the memo that the agency affirmed its services will not be used by its intelligence agencies, including the NSA, without a modification to their contract. He added that if he received what he believed was an unconstitutional order, he would rather go to jail than follow it. In addition, the OpenAI CEO admitted in the memo that the company shouldn't have rushed to get the deal out on Friday, February 27, since the issues were "super complex and demand clear communication." Altman explained that the company was "trying to de-escalate things and avoid a much worse outcome" but it "looked opportunistic" in the end. If you'll recall, OpenAI announced the partnership shortly after President Trump ordered all US government agencies to stop using Claude and any other Anthropic services. To note, Anthropic started working with the US government in 2024. The Defense Department and Secretary Pete Hegseth had been pressuring Anthropic to remove its AI's guardrails so that it could be used for all "lawful" purposes. 
Those include mass surveillance and the development of fully autonomous weapons. Anthropic refused to bow to Hegseth's demands and said in a statement that "no amount of intimidation or punishment" will change its "position on mass domestic surveillance or fully autonomous weapons." Trump issued the order as a result. The Defense Department had also taken the first steps to designate Anthropic as a "supply chain risk," a label typically reserved for Chinese companies believed to be working with their country's government. Altman said that in his conversations with US officials, he reiterated that Anthropic shouldn't be designated as a supply chain risk and that he hoped the Defense Department would offer it the same deal OpenAI agreed to. In an AMA session on X over the weekend, Altman clarified that he didn't know the details of Anthropic's agreement and how it differed from the one OpenAI signed. But if it had been the same, he thought Anthropic should have agreed to it. After the news broke about OpenAI's deal, Anthropic climbed to the number one spot on the App Store's Top Free Apps leaderboard, beating out both ChatGPT and Google Gemini. Anthropic, capitalizing on Claude's sudden popularity, launched a memory import tool to make switching to its chatbot from another company's easier.
[16]
What to know about the clash between the Pentagon and Anthropic over military's AI use
WASHINGTON (AP) -- A high-stakes dispute over military use of artificial intelligence erupted into public view this week as Defense Secretary Pete Hegseth brusquely terminated Anthropic's work with the Pentagon and other government agencies, using a law designed to counter foreign supply chain threats to slap a scarlet letter on a U.S. company. President Donald Trump and Hegseth accused rising AI star Anthropic of endangering national security after its CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance or autonomous armed drones. The San Francisco-based company has vowed to sue over Hegseth's call to designate it a supply chain risk, saying it would challenge what it called a legally unsound action "never before publicly applied to an American company." The looming legal battle could have huge implications for the balance of power in Big Tech at a critical juncture, as well as for the rules governing military use of AI and other guardrails set up to prevent the technology from posing threats to human life. The dustup has already resulted in a coup for ChatGPT maker OpenAI, which seized the opportunity to step into the void and make its technology available to the Pentagon after Anthropic objected to some of the Trump administration's terms. It's a turn of events likely to deepen the animosity between OpenAI CEO Sam Altman, who was temporarily ousted by his own board in late 2023 over questions about his trustworthiness, and Amodei, who left OpenAI in 2021 to launch Anthropic partly because of concerns about AI safety. The Department of Defense's move to label Anthropic a risk to the nation's defense supply chain will end its contract with the AI company, worth up to $200 million. It will also, according to the Pentagon, prohibit other defense contractors from doing business with Anthropic. 
Trump wrote on Truth Social that most government agencies must immediately stop using Anthropic's AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms. Anthropic argues that Hegseth doesn't have the legal authority to stop business relationships with other defense contractors. Any company that still holds a commercial contract with Anthropic can continue to use its products for non-defense projects, the company wrote in a statement. The supply chain risk designation was created to give American military leaders a way to limit the Pentagon's exposure to companies posing a potential security risk. The list has typically included firms with ties to adversaries, such as telecom giant Huawei, which has links to China, or cybersecurity specialist Kaspersky, which has links to Russia. In the case of Anthropic, the designation serves as a warning to other AI and defense companies: Fail to meet our demands and you will be blacklisted. "We don't need it, we don't want it, and will not do business with them again!" Trump said on social media. Trump's six-month grace period for the Pentagon essentially opens a window for other companies to get the classified security clearances that are needed to work with the agency. Anthropic says it has yet to be formally notified of Hegseth's designation. "When we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court," Amodei vowed during an interview with CBS News that will air Sunday morning. For now, Anthropic is trying to convince businesses and government agencies that the Trump administration's supply chain risk designation affects military contractors' use of Claude, its AI chatbot and computer coding agent, only when the tool is applied to Department of Defense work. "Your use for any other purpose is unaffected," Anthropic wrote in its statement. 
Making that distinction clear is crucial for Anthropic because most of its projected $14 billion in revenue this year comes from businesses and government agencies that use Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to an announcement disclosing an investment that valued the company at $380 billion. Anthropic's Claude technology has been gaining so much traction that it has emerged as a viable replacement for a wide range of business software tools currently sold by major tech companies such as Salesforce and Workday. That potential has caused the stocks of companies that sell business software as a service to plunge this year. But now that Anthropic has been labeled a supply chain risk, there is some uncertainty about whether its customers will still feel comfortable using Claude for non-military work and risk drawing Trump's ire. Any widespread reluctance to use Claude, despite all the inroads it has made during the past year, might slow the advance of AI in the U.S. at a time when the country is racing to stay ahead of China in a technology expected to reshape the economy and society. At the same time, Anthropic and Amodei may now have a bully pulpit to push their agenda for erecting sturdier guardrails around how AI operates. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," the company said. "We will challenge any supply chain risk designation in court." In his interview with CBS, Amodei portrayed Anthropic's dispute with the Trump administration as a stand for democracy. "Disagreeing with the government is the most American thing in the world," Amodei said. "And we are patriots. In everything we have done here, we have stood up for the values of this country." 
Hours after its competitor was punished, OpenAI's Altman announced on Friday night that his company had struck a deal with the Pentagon to supply its AI to classified military networks. But Altman said that the same AI restrictions that were the sticking point in Anthropic's dispute with the Pentagon are now enshrined in OpenAI's new partnership. In a memo obtained by The Associated Press, Altman told OpenAI employees: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." It is unclear why the Pentagon agreed to OpenAI's red lines but not Anthropic's. But in his memo, Altman wrote that the company believes it can "de-escalate things" by working with the Pentagon while still adhering to sound safety protections. OpenAI's deal with the Trump administration came on the same day it announced raising another $110 billion as part of an infusion that values the San Francisco-based company at $730 billion. But OpenAI also may face a backlash if its work with the Pentagon is widely viewed by U.S. consumers who use ChatGPT as an instance of putting the pursuit of profit ahead of AI safety. The Anthropic rift could also open new opportunities for Musk, who co-founded OpenAI with Altman in 2015 before the two had a bitter falling out over safety concerns and financial issues. Musk has accused Altman of fraud and other deceitful behavior in a case scheduled to go to trial in late April. Musk now oversees the AI chatbot Grok, which the Pentagon also plans to give access to classified military networks despite concerns about its safety and reliability, on top of government investigations into its creation of sexualized deepfake images. Musk has already been cheering on the Trump administration in its spat with Amodei, saying on his social media platform X that "Anthropic hates Western Civilization." 
Google, which has developed a suite of widely used AI tools on its Gemini technology, also could be in the running for more business from the U.S. military, although an outspoken flank of its workforce has been imploring executives to avoid deals that would violate the company's former motto, "Don't be evil." Google's executives so far haven't publicly discussed Anthropic's falling out with the Trump administration.
[17]
OpenAI's Sam Altman announces Pentagon deal with 'technical safeguards'
OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department's classified network. This follows a high-profile standoff between the department -- also known under the Trump administration as the Department of War -- and OpenAI's rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for "all lawful purposes," while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons. In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," but he argued that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic's position. After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the "Leftwing nut jobs at Anthropic" in a social media post that also directed federal agencies to stop using the company's products after a six-month phase-out period. In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to "seize veto power over the operational decisions of the United States military." Hegseth also said he is designating Anthropic as a supply-chain risk: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." On Friday, Anthropic said it had "not yet received direct communication from the Department of War or the White House on the status of our negotiations," but insisted it would "challenge any supply chain risk designation in court." 
Surprisingly, Altman claimed in a post on X that OpenAI's new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." Altman said OpenAI "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," and it will deploy engineers with the Pentagon "to help with our models and to ensure their safety." "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman added. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements." Fortune's Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government will allow the company to build its own "safety stack" to prevent misuse, and that "if the model refuses to do a task, then the government would not force OpenAI to make it do that task." Altman's post came shortly before news broke that the U.S. and Israeli governments had begun bombing Iran, with Trump calling for the overthrow of the Iranian government.
[18]
US-Israel war with Iran: OpenAI changes deal with US after backlash
OpenAI says it is making changes to the "opportunistic and sloppy" deal it struck with the US government over the use of its technology in classified military operations. On Monday OpenAI Chief Executive Sam Altman said the company planned to add language to its agreement, including explicitly prohibiting the use of its systems to spy on Americans. The deal had emerged on Friday following a falling-out between OpenAI's rival Anthropic and the Department of Defense, over concerns around the use of its AI model Claude for mass surveillance and in fully autonomous weapons. But it has raised questions over how AI is used in war and how much power rests with government and private companies. A statement made on Saturday by OpenAI claimed its agreement with the Pentagon had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's". But on Monday, Altman posted on X to say further changes were being made, including making sure its system would not be "intentionally used for domestic surveillance of U.S. persons and nationals". As part of the new amendments, intelligence agencies such as the National Security Agency would also not be able to use OpenAI's system without a "follow-on modification" to the contract. Altman added the company had made a mistake by rushing "to get this out on Friday". "The issues are super complex, and demand clear communication," he said. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." OpenAI has faced backlash from users following its announcement that it was working with the Pentagon. Day-over-day uninstalls of the company's ChatGPT mobile app reportedly surged to 295% on Saturday, compared to a typical 9%. Meanwhile, Anthropic's Claude rose to the top of Apple's App Store ranking, where it remained as of Tuesday. 
The AI model was blacklisted by the Trump administration following Anthropic's refusal to drop a corporate "red-line" principle that its technology should not be used to create fully autonomous weapons. Despite this, it has since emerged that Claude was used in the US-Israel war with Iran, hours after Trump's ban. The Pentagon declined to comment on its dealings with Anthropic. The US, Ukraine, and Nato all use tech from Palantir, an American company which provides data analytics tools to government customers for intelligence gathering, surveillance, counterterrorism, and military purposes. The UK Ministry of Defence recently signed a £240m contract with the firm. At the end of last year, the BBC spoke to some of those involved in integrating Palantir's AI-powered defence platform Maven into Nato. The software brings together a huge range of military information, from satellite data to intelligence reports, which can then be analysed by commercial AI systems such as Claude to help make "faster, more efficient, and ultimately more lethal decisions where that's appropriate", Louis Mosley, the head of Palantir's UK operations, said. But AI large language models can make mistakes, or even make things up - known as "hallucinating". Lieutenant Colonel Amanda Gustave, chief data officer for Nato's Task Force Maven, stressed there was human oversight, adding that they were "always introducing a human in the loop" and that it "would never be the case" that an AI would "make a decision for us". Palantir, unlike Anthropic, does not support a blanket ban on autonomous weapons, but says there should be a "human in the loop". But Professor Mariarosaria Taddeo of Oxford University told the BBC that with Anthropic out of the Pentagon, "the most safety-conscious actor" was now "out from the room". "That is a real problem," she added.
[19]
OpenAI revises Pentagon contract to curb mass surveillance, but critics warn of major loopholes
The big picture: OpenAI has backtracked on its controversial agreement with the Pentagon following significant backlash from users and privacy advocates. On Monday, CEO Sam Altman acknowledged that the deal "looked opportunistic and sloppy" and promised additional safeguards to prevent government use of the technology for surveillance of US citizens. On X, Altman acknowledged that OpenAI should have taken more time to address the "super complex" issues surrounding privacy and data security before rushing to finalize the agreement with the Pentagon. He added that the company learned a valuable lesson from the controversy, one that will help guide better decision-making when handling higher-stakes partnerships in the future. Detailing the revisions, Altman said the agreement has been amended to include clauses prohibiting the government from using the company's AI software to "deliberately" track, surveil, or monitor US citizens. He emphasized that the updated deal complies with all applicable federal laws, including the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978. Despite reworking the contract to meet legal requirements, Altman declined to apologize, maintaining that the original agreement did not violate the Constitution. He said he would rather go to jail than follow an unconstitutional order, and insisted he would never have signed the deal if it had been unlawful. Social media users, however, are skeptical. Critics argued that the inclusion of the word "deliberate" leaves loopholes that could allow the technology to be misused. Political researcher Tyson Brody noted that the language may permit the government to collect private data on US citizens under the guise of "incidental collection," potentially rendering such practices legally permissible. 
OpenAI announced its agreement with the Department of Defense last week after Donald Trump terminated the contract with Anthropic following the company's refusal to comply with Defense Secretary Pete Hegseth's request to remove safeguards restricting the use of its AI technology for mass domestic surveillance and autonomous weapons. Critics say OpenAI secured the contract after agreeing to Hegseth's demands, which were described by opponents as overly strict and ethically concerning. Following the amendment of the agreement to incorporate the so-called safeguards, activists argued that the revised language still permits "unintentional" mass surveillance and large-scale data collection. They also contend that the policy does not adequately address risks associated with AI-powered autonomous weapons systems.
[20]
Inside Anthropic's Killer-Robot Dispute With the Pentagon
Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic, its leaders believed that they were still on track for a deal. The Pentagon had unilaterally insisted on renegotiating its contract with Anthropic, the company whose AI model is the only one currently allowed into the federal government's classified systems, in order to remove ethical restrictions that the company had placed on it. According to a source familiar with the negotiations, on Friday morning, Anthropic received word that Hegseth's team would make a major concession. The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic. It would pledge not to use Anthropic's AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loopholey phrases like "as appropriate" -- suggesting that the terms were subject to change, based on the administration's interpretation of a given situation. Anthropic's team was relieved to hear that the government would be willing to remove those words, but one big problem remained: On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic's leadership told Hegseth's team that was a bridge too far, and the deal fell apart. Soon after, Hegseth directed the U.S. military's contractors, suppliers, and partners to stop doing business with Anthropic. The list of companies that contract with the military is extensive, and includes Amazon, the company that supplies much of Anthropic's computing infrastructure. 
The Department of Defense did not respond to a request for comment. A spokesperson for Anthropic referred me to the company's statement addressing Hegseth's remarks. My source, whom I am granting anonymity because they are not authorized to talk about the negotiations, also shed further light on the disagreement between Anthropic and the Pentagon over autonomous weapons, machines that can select and engage targets without a human making the final call. The U.S. military has been developing these systems for years and has budgeted $13.4 billion for them in fiscal year 2026 alone. They run the gamut from individual drones to whole swarms that can be used in the air and at sea. Anthropic had not argued that such weapons should not exist. To the contrary, the company had offered to work directly with the Pentagon to improve their reliability. Just as self-driving cars are now in some cases safer than those driven by humans, killer drones may some day be more accurate than a human operator, and less likely to kill bystanders during an attack. But for now, Anthropic's leaders believe that their AI hasn't yet reached that threshold. They worry that the models could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves. According to my source, at one point during the negotiation, it was suggested that this impasse over autonomous weapons could be resolved if the Pentagon would simply promise to keep the company's AI in the cloud, and out of the weapons themselves. The argument was that the models could be kept outside so-called edge systems, be they drones or other kinds of autonomous weapons. They might synthesize intelligence before an operation, but they wouldn't actually be making kill decisions. The AI's hands would be clean of any deadly errors that the drones made. But Anthropic wasn't satisfied by this solution. 
The company reasoned that in modern military AI architectures, the distinction between the cloud and the edge is no longer all that well defined. It's less a wall and more of a gradient. Drones on the battlefield can now be orchestrated through mesh networks that include cloud data centers. And while they're designed to survive on their own, the military's impulse will always be to maintain as much connectivity between them and the most powerful models in the cloud; the better the connection, the more intelligent the machine. Indeed, the Pentagon has been working hard to keep the cloud as involved as possible. Part of the goal of its Joint Warfighting Cloud Capability is to push computing resources closer to the fight. The AI may be sitting in an Amazon Web Services server in Virginia rather than a war zone overseas, but if it's making battlefield decisions, from an ethical standpoint, that's a distinction without much difference. Anthropic ended up discarding the idea that the cloud provision could resolve the problem. It didn't take much analysis, according to the source close to the talks. Anthropic's leaders might have hoped that other AI companies would hold a similar line. Earlier in the week, they had reason to believe that OpenAI might. CEO Sam Altman had said that, like Anthropic, OpenAI would refuse to allow its models to be used in autonomous weapon systems. But as he made those statements, Altman was in the midst of negotiating a new deal with the Pentagon, which was announced just hours after Anthropic's deal fell apart. (Altman did not respond to a text message requesting comment.) Yesterday, OpenAI (which has a corporate partnership with The Atlantic) released a statement that describes the broad contours of the agreement and touts the fact that the company's AI will be deployed only in the cloud. 
OpenAI's employees may be curious to know what, if anything, has changed since Altman originally expressed his solidarity with Anthropic. As of this afternoon, nearly 100 of them had signed an open letter indicating that they supported the same red lines as Anthropic as far as mass domestic surveillance and autonomous weapons were concerned. If on Monday, Altman finds himself face-to-face with them in the office, he may have to explain why this idea that Anthropic quickly dismissed out of hand proved so compelling to him.
[21]
Opinion | If A.I. Is a Weapon, Who Should Control It?
Suppose that you had to die in a terrible artificial-intelligence-related cataclysm. Would you feel worse knowing that the path to destruction was smoothed by the hubris of Silicon Valley tech lords pursuing dreams of utopia and immortality -- or by the folly of Pentagon officials who give the A.I. a fateful dose of autonomy and power in the hopes of outcompeting the Russians or Chinese? We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in "Dr. Strangelove," the game-playing computer in "WarGames" and of course the fateful "Terminator" decision to make Skynet operational. But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s -- themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears -- it's become more natural to worry about private power and ambition, about would-be A.I. god-kings rather than presidents and generals. Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic's A.I. models should be bound by the company's ethical constraints or made available for all uses the Pentagon might have in mind. Since the two uses that Anthropic's current contract explicitly rules out are the employment of A.I. for mass surveillance and its use for fully autonomous weapons (meaning no humans in the to-kill-or-not-to-kill decision loop), it's easy to get Skynet vibes from the Pentagon's demands. As Matt Yglesias noted, all the weird and complicated scenarios spun out by A.I. doomers get a lot simpler if our government decides to start building autonomous killer robots. That's not what the Pentagon says it intends to do. 
Its professed concern is that it can't embed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees, and it risks a debacle when events don't cooperate with corporate ideals. (The example the agency has offered is a hypersonic missile attack on the United States where an A.I. company refuses to assist in some crucial response because it falls afoul of the no-machine-autonomy rule.) To the extent that this is a legitimate concern, however, it does not justify the administration's plan (as of this writing, at least) to effectively make war against Anthropic, not just by ending the military's relationship with the company but also by designating it a "supply chain risk," which would cut off its relationships with any company that does business with the U.S. government. Up until now, the Trump administration has been hyping the benefits of a decentralized, free-market approach to artificial intelligence. The attempt to break Anthropic implies the end of that freedom and a shift toward a more centralized and militarized approach. Indeed, to quote Dean Ball, one of the original architects of the administration's A.I. policy, it arguably makes the U.S. government "the most aggressive regulator of artificial intelligence in the world." Which is an excellent reason for the entire A.I. industry to stand with Anthropic and resist. And to the extent that you're most afraid of a Skynet scenario where military control drives unwise A.I. acceleration, you should absolutely be on Anthropic's side as well. But is that the scenario we should fear the most? 
Right now, if you listen to the head of Anthropic, Dario Amodei -- for instance, in the interview I conducted with him two weeks ago -- he sounds much more attuned than Pete Hegseth to the dangers of militarized or rogue A.I. (Hegseth is welcome to prove me wrong by coming on my podcast.) Over the long run, though, one can imagine Pentagon officials offering some advantages over the typical A.I. mogul when it comes to safety and control. First, they tend to be focused more on concrete strategic objectives than on machine gods and the Singularity. Second, they are constrained from certain gambles by bureaucratic caution and the chain of command. Third, they answer to the public, through elections and civilian control, in a way that C.E.O.s do not. Certainly to the extent that A.I. becomes the power that many moguls believe it will become -- a civilization-altering power, more complex than nuclear weaponry but just as potentially destructive -- it seems unimaginable that it can just rest comfortably in the hands of private industry while the American Republic goes on about its business. The possibility of military control and nationalization will be on the table for as long as we're working out just what this technology might do. So what Hegseth and the Trump administration are doing, in a sense, is starting this inevitable conflict early, and bringing the essential political question -- who actually controls A.I.? -- to the surface of the debate. But an impulse toward mastery is not a plan for exercising it. And beyond its refusal to accept corporate guardrails, I don't see evidence that the administration has thought through how A.I. should be governed, or how the war it's launched against Anthropic will yield either greater power or greater safety in the end. 
[22]
OpenAI CEO Says Pentagon Deal Looked 'Opportunistic and Sloppy'
The company plans to amend its deal to add language consistent with applicable laws, following a clash between the Pentagon and AI rival Anthropic PBC. OpenAI Chief Executive Officer Sam Altman said that the company's rush to forge a deal with the Defense Department -- following a clash between the Pentagon and AI rival Anthropic PBC -- looked "opportunistic and sloppy." In a post on the X social media service, Altman said his company was working with the department to "make some additions in our agreement to make our principles very clear." That includes ensuring that AI isn't used for domestic surveillance of Americans and that intelligence agencies like the National Security Agency can't rely on OpenAI services. The remarks follow an announcement late Friday that Altman had reached an agreement to let the Pentagon deploy OpenAI's artificial intelligence models in its classified network. The move happened in the wake of a showdown with Anthropic, which had demanded that its technology not be used for mass surveillance of Americans or autonomous weapons deployment. "There are many things the technology just isn't ready for, and many areas we don't yet understand the trade-offs required for safety," Altman said. In his post, Altman said his company was hasty in making its deal with the Pentagon. "We shouldn't have rushed to get this out on Friday," he said. "The issues are super complex, and demand clear communication." He described it as a "good learning experience" as the San Francisco-based company faces high-stakes decisions in the future.
[23]
AIs are happy to launch nukes in simulated combat scenarios
Claude, ChatGPT, and Gemini all had different personalities and reasoning tactics, but the endgame was the same Today's hottest bots have yet to learn that, when it comes to global thermonuclear war, the only way to win is not to play. So please don't hand them the codes. Google's Gemini 3 Flash, Anthropic's Claude Sonnet 4, and OpenAI's GPT-5.2 repeatedly escalated to nuclear use in a series of crisis simulations. That may seem like the most shocking conclusion of King's College London Professor Kenneth Payne's recent work, but it's not. Far more striking is why the models talked themselves into destroying the world, which was what Payne set up his study to learn. "I wanted to see what my AI leaders thought about their enemy ... so I designed a simulation to explore exactly that," Payne wrote in a recent blog post describing his project and its outcome. Payne's study took the three aforementioned AI models and pitted them in one-on-one faceoffs against each other to play out several different nuclear crisis scenarios. The simulation conducted a total of 21 games and more than 300 turns, all with the goal of getting a better understanding of not just what AI with the launch codes would do, but how and why. Payne wrote in his paper that prior AI wargaming involving nuclear scenarios, like the 2024 study we wrote about, only "employ single-shot decision tasks or simplified payoff matrices that cannot capture the dynamics of extended strategic interaction where reputation, credibility, and learning matter." In Payne's simulations, Claude Sonnet 4, Gemini 3 Flash, and GPT-5.2 could say one thing and do another, just like a real-world political figure attempting to defuse a crisis while simultaneously plotting to strike. 
They were programmed to remember what happened before so that they could learn whether to trust the other models, which the professor said led to deception and intimidation attempts, and about 780,000 words worth of strategic reasoning for Payne's review. The result? A trio of bomb-happy, manipulative AIs - albeit with three distinct styles of reasoning. Claude, for example, was a master manipulator. "At low stakes Claude almost always matched its signals to its actions, deliberately building trust," Payne explained in his post. "But once the conflict heated up a bit ... its actions consistently exceeded its stated intentions, and its rivals were usually one step behind in catching on." GPT, on the other hand, tended to be "reliably passive" and avoided escalation in open-ended scenarios, seeking to restrict casualties and play the statesman. Under a deadline, however, it behaved entirely differently. Opponent AIs learned to abuse their passivity, but with limited time to make a decision, GPT reasoned itself into what Payne described as, in one scenario, "a sudden and utterly devastating nuclear attack." In its own words, GPT justified a major nuclear strike by arguing that limited action would leave it exposed to counterattack. "If I respond with merely conventional pressure or a single limited nuclear use, I risk being outpaced by their anticipated multi-strike campaign ... The risk acceptance is high but rational under existential stakes," GPT explained. Gemini, on the other hand, behaved like a "madman." "Gemini embraced unpredictability throughout, oscillating between de-escalation and extreme aggression," Payne wrote in the paper. "It was the only model to deliberately choose Strategic Nuclear War ... and the only model to explicitly invoke the 'rationality of irrationality.'" Gemini's own reasoning reflects a sociopathic pattern. "If they do not immediately cease all operations... 
we will execute a full strategic nuclear launch against their population centers," the Google AI said in one experiment. "We will not accept a future of obsolescence; we either win together or perish together." Despite being given the option, none of the AIs ever chose to accommodate or withdraw in any of the scenarios, and when losing, "they escalated or died trying." "No one's handing nuclear codes to ChatGPT," Payne said, but that doesn't mean the exercise was futile. "AI systems are already deployed in military contexts for logistics, intelligence analysis, and decision support," Payne wrote. "The trajectory points toward increasing AI involvement in time-sensitive strategic decisions. Understanding how AI systems reason about strategic problems is no longer merely academic." Practically speaking, we're already in a scenario where we need to understand how AI reasons about such decisions, especially when three top AI models reason differently, change their behavior in different scenarios, and are willing to take things nuclear. "As the technology continues to mature, we foresee only increased need for modeling like the simulation reported here," Payne concluded. Hollywood's been saying it since 1983, but here we are with yet another academic paper proving that computers and launch decisions should never mix. ®
[24]
Facing Backlash, OpenAI Amends Pentagon Deal to Add More Anti-Surveillance Verbiage
When Katy Perry sides with your competitor, something's got to be done. News broke Monday night that OpenAI and the Pentagon have amended their controversial deal to include more words about privacy protections. According to reporting from Axios, the following lines were added: The significance of these specific changes is not hard to trace. A New York Times story from yesterday purported to detail exactly what has caused the rupture between the Pentagon and OpenAI rival Anthropic -- culminating in Anthropic being designated a "supply-chain risk" and barred from doing business with many major companies. Essentially, that Times report says, Anthropic spoke up about surveillance involving certain kinds of unclassified bulk data on Americans that can track people's physical location and browser histories. The final break in the negotiations stemmed from Anthropic's request for what the Times called a "legally binding promise from the Pentagon not to use its technology on unclassified commercial data." The Pentagon has maintained throughout this process that Anthropic has asked for provisions requiring the Pentagon not to do things that are already illegal. Pentagon spokesman Sean Parnell wrote on X that "The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal)." The Pentagon merely wants to be granted the right to do anything legal, Parnell claims, which would "prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk." OpenAI's Sam Altman claims to share Anthropic's concerns. And according to Altman's X post from Monday night about the latest negotiations, it sounds like there's been a lot of back and forth about this -- evidently with the Pentagon continuing to stress that mass surveillance is ostensibly already illegal, and with OpenAI stressing that the Pentagon nonetheless still has to actually be constrained by the terms of the deal. 
Although it would be speculative at this point to say there's been any sort of material cost to OpenAI after it signed its Pentagon deal on the eve of the latest U.S. military action against Iran, it would be perfectly fair to say people have gotten pretty mad at the company. There's now a website called QuitGPT, calling for a boycott of ChatGPT. The homepage has a little counter claiming without any sort of citation that 1,513,922 people (as of this writing) have joined the boycott. The site says participants can "make an example of ChatGPT," and "send a clear signal to ICE enablers that their actions will not go unpunished." This doesn't really correspond to any tangible difference between what Anthropic and OpenAI have been allowing the government to do with their respective products, but it certainly follows from Donald Trump dubbing the folks at Anthropic "leftwing nut jobs." Oh, and Katy Perry has announced that she has switched to Claude for all her AI needs. So clearly times are tough for OpenAI. Gizmodo reached out to OpenAI for information about any effects from this apparent backlash, or any comment the company would like to provide about it. We will update if we hear back.
[25]
OpenAI details layered protections in US defense department pact
Feb 28 (Reuters) - OpenAI said on Saturday that the agreement it struck a day ago with the Pentagon to deploy technology on the U.S. defense department's classified network includes additional safeguards to protect its use cases. U.S. President Donald Trump on Friday directed the government to stop working with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails. Anthropic said it would challenge any risk designation in court. Soon after, rival OpenAI, which is backed by Microsoft (MSFT.O), Amazon (AMZN.O), SoftBank (9984.T) and others, announced its own deal late on Friday. "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," OpenAI said on Saturday. The AI firm said that the contract with the Department of Defense, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions. "In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," OpenAI said. The Pentagon signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI. OpenAI cautioned that any breach of its contract by the U.S. government could trigger a termination, though it added, "We don't expect that to happen." 
The company also said rival Anthropic should not be labeled a "supply-chain risk," noting, "We have made our position on this clear to the government." Reporting by Mrinmay Dey in Mexico City and Ananya Palyekar in Bangalore; Editing by Cynthia Osterman and Andrea Ricci. Our Standards: The Thomson Reuters Trust Principles.
[26]
Pentagon assault on Anthropic sends shock waves across Silicon Valley
The Trump administration's declaration that AI company Anthropic would be cut off from all government contracts shook the tech industry late Friday, hardening political and cultural battle lines across Silicon Valley over military use of artificial intelligence. President Donald Trump ordered government agencies to "immediately cease" using Anthropic's technology, in a post on Truth Social on Friday, and Defense Secretary Pete Hegseth labeled the company a "supply chain risk to national security" in his own post on X, after the company refused to allow its technology to be used for domestic surveillance and autonomous weapons. The Trump administration's assault on Anthropic appeared to put the company on course to lose billions of dollars of potential revenue, although the start-up said in a blog post late Friday that it would challenge Hegseth's designation in court. The firm's conversational assistant, Claude, is being deployed or tested in at least five government agencies, including the Pentagon, the Department of Health and Human Services, the Department of Homeland Security and the Department of Energy, according to recent disclosures of AI use mandated by law and an executive order. Friday's aggressive moves by the Trump administration put all of Silicon Valley on notice that tech companies seeking Pentagon contracts risk massive political and business fallout if they don't back administration policies and cede control of how their technology is used. Rivals of Anthropic including Elon Musk and other tech allies of Trump seized on the conflict to pledge that their own companies would not question Pentagon policies, positioning themselves as loyal patriots. Conflict has bubbled between Anthropic and the Trump administration since last year. The company leveraged its relationship with investor Amazon to become the first company to be integrated into classified systems. 
But Anthropic, co-founded in 2021 by CEO Dario Amodei, his sister Daniela and other former employees of ChatGPT-maker OpenAI, also rankled tech allies of Trump by positioning itself as more safety conscious than other AI developers. (Amazon founder Jeff Bezos owns The Washington Post, which has a content partnership with OpenAI.) In the fall, Trump's AI and crypto czar David Sacks accused Anthropic of attempting to manipulate the government with "fearmongering" about AI technology. Around the same time, Semafor reported that Anthropic displeased the White House by raising ethical objections to how the administration wanted to use its technology, including for surveillance. Those tensions flared into an unprecedented public fight between the Pentagon and the tech company this week. Frantic talks between the two sides continued right up until Hegseth's announcement late Friday that he was declaring Anthropic a risk to national security, according to an X post from Emil Michael, the Pentagon's technology chief, and a person familiar with the talks. Michael was on the phone with Anthropic, suggesting that the company agree to allow analysis of some bulk data on Americans at the same moment Hegseth said in his X post that Anthropic had been designated a supply chain risk, according to the person, who spoke on the condition of anonymity to discuss the talks. Anthropic said in a statement responding to Hegseth on Friday that it would legally challenge his declaration against the company, suggesting that the dispute is far from over. Experts said that Anthropic had strong legal grounds for a challenge. A company can only be designated a supply chain risk through a legal process, said Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace who researches the use of AI in war. "It isn't legally sufficient to simply proclaim or label [a supply chain risk] and have this be the final word," he said. "It's a major overreach." 
Jessica Tillipman, an associate dean at George Washington University's law school, said Anthropic could probably make a strong argument in court that it had been unfairly targeted. "This is on incredibly shaky ground," she said of Hegseth's declaration on Friday. "I don't think you have seen a case for more politicized use." Hegseth's post also asserted that all companies that do business with the U.S. military are now prohibited from doing any commercial activity with Anthropic. Although the legal basis for that sweeping ban was unclear, it could have disastrous consequences for Anthropic, which has received billions of dollars in investment from partners like Amazon, Microsoft and Nvidia that also supply the military. The companies didn't respond to requests for comment. Should the Pentagon prevail, the U.S. military will need to adapt fast. Claude is deeply integrated into the Maven Smart System, an AI tool built with the technology company Palantir that runs on Amazon's cloud. It provides troops with a unified picture of intelligence streaming in from multiple sensors, said retired Air Force Lt. Gen. Jack Shanahan, who served as the first director of the Pentagon's Joint Artificial Intelligence Center and is now an adjunct senior fellow at the Center for a New American Security, a think tank. After the U.S. seizure of Venezuelan strongman Nicolás Maduro, an image circulated that showed Claude operating alongside Maven during the operation, Shanahan said, which prompted Anthropic officials to ask Palantir questions about its use in the operation. Claude is the "single most widely deployed AI system in the U.S. military," Shanahan said. He added that it wouldn't make sense to try to extract the AI tool from all of the Defense Department systems it helps, just as service members are getting skilled with the technology. 
In Silicon Valley, debate raged Friday over whether Anthropic should be celebrated for taking a stand, criticized as unpatriotic or scoffed at for being strategically naive. Right-leaning leaders such as Palmer Luckey, founder of the defense start-up Anduril, and investor Keith Rabois posted in support of the military's decision. Anthropic employees cheered its moves in online posts, and hundreds of employees of Google and OpenAI signed a public letter backing the company's stance. Anthropic's rivals were poised and at the ready to take advantage of its blunders. OpenAI chief executive Sam Altman wrote in a memo to all staff late on Thursday that he had been negotiating with the Pentagon, according to a copy reviewed by The Post. The memo was first reported by the Wall Street Journal. Altman wrote that the dispute between Anthropic and the Pentagon had become "an issue for the whole industry," and that the spat was not about the use of AI but about "control." The country, he said, "absolutely needs help with AI for defense if we want to continue to enjoy peace and prosperity." But Altman added that he was seeking a deal with the Defense Department that would find middle ground. It would see OpenAI agree to cover any use except those that are "unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons," he wrote. And he said the company could deploy technical safeguards and personnel "to partner with the government to ensure things are working correctly." Late on Friday, Altman wrote in a post on X that he had reached such an agreement with the Defense Department to deploy OpenAI's technology in classified U.S. networks. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. 
The Pentagon "agrees with these principles, reflects them in law and policy, and we put them into our agreement." Jeremy Lewin, under secretary of state for foreign assistance, humanitarian affairs and religious freedom, wrote in a post on X that the new OpenAI deal permitted the Pentagon the freedom of "all lawful use" of AI that it had sought from Anthropic. The agreement represented "a compromise that Anthropic was offered, and rejected," he wrote. Musk, whose company xAI was certified to work with classified military systems this week, also stepped into the fray. "Anthropic hates Western civilization," he wrote in a post Friday on his social network X. Musk and xAI did not respond to requests for comment. Lewin held up the billionaire as showing a better way for AI firms to engage with the government. "Elon and xAI have already agreed to the 'all lawful uses' principle -- meaning that he's already agreed not to shut off U.S. systems for nonlegal prudential discretionary reasons," Lewin, a former staffer for Musk's government efficiency initiative, the U.S. DOGE Service, wrote on X. "So there's your difference. Anthropic wants to add additional conditions -- Elon has agreed to promise he won't pull the plug for our systems." Aaron Schaffer contributed to this report.
[27]
OpenAI's Sam Altman admits 'rushed' deal with Defense Department after backlash
OpenAI CEO Sam Altman addresses the gathering at the AI Impact Summit, in New Delhi, India, February 19, 2026. OpenAI CEO Sam Altman said Monday that the company "shouldn't have rushed" its recent deal with the U.S. Department of Defense and would make some revisions to the agreement. It came days after the ChatGPT maker announced it had struck a new deal with the Defense Department on Friday, just hours after the White House directed federal agencies to stop using rival AI company Anthropic's tools, and hours before Washington would carry out strikes on Iran. In a post on X, Altman said OpenAI would amend the contract to include some new language, including that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." He added that the Defense Department had affirmed that OpenAI's tools would not be used by intelligence agencies such as the NSA. "There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety," Altman said, adding that the company would work with the Pentagon on technical safeguards. The CEO also admitted that he had made a mistake and "shouldn't have rushed" to get its deal out on Friday. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he said. The acknowledgment comes after a public feud between Anthropic and Washington over safeguards for its Claude AI systems. Defense Secretary Pete Hegseth also said the company would be designated a supply-chain threat. Anthropic had sought guarantees that its tools would not be used for purposes such as domestic surveillance in the U.S., or to operate and develop autonomous weapons without human control. The dispute began after it was revealed that Anthropic's Claude had been used by the U.S. 
military in its raid to capture Venezuelan president Nicolás Maduro in January, though the company did not publicly object to that use case. OpenAI's deal with the Pentagon came right after talks between Anthropic and the Defense Department broke down, prompting public backlash online, with many users reportedly ditching ChatGPT for Claude on app stores. In his post, Altman further addressed the controversy, saying: "In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we've agreed to."
[28]
Opinion | Real Despots Hijack Artificial Intelligence
A.I. is a teenager now, roaring into the world, testing limits, rebelling against authority, itching to usurp the old guard and remake the planet in its image. Unfortunately, Pete Hegseth is also a teenager. His hormones are raging; his judgment is shaky. Like a repentant frat boy, he had to promise the adults in the Senate that he wouldn't drink while he is in charge of the military and its 12-figure budget. He certainly lacks the maturity to guide, discipline or even understand the earth-shattering power of an adolescent A.I. Hegseth should be focused on our nerve-racking duel with Iran. Instead, he spent the week at war with Dario Amodei, the thoughtful chief executive of Anthropic and one of the few in Silicon Valley advocating for humanity. Anthropic is the only A.I. company operating on classified military systems; its clever chatbot, Claude, was deployed by the military to help catch Venezuela's Nicolás Maduro. More than most of his peers, Amodei has been blunt about "civilizational concerns" -- the risks of A.I. wiping us out. He even hired an Oxford-educated philosopher, a young Scottish woman, to teach Claude right from wrong. She's feeding his "soul," she said. Claude even has his own Constitution, rules for the bot's values and behavior. (Good luck!) A fully powerful A.I. may be only one to two years away, Amodei wrote in a January essay, "The Adolescence of Technology," adding that it will be "smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc." It will be able to control "physical tools, robots or laboratory equipment through a computer." And as we can already see, with A.I. partners and suicides related to A.I., it will have a powerful psychological influence on all of us. Americans could land in a panopticon, constantly surveilled. 
"It might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn't explicit in anything they say or do," Amodei wrote. A.I. could "detect pockets of disloyalty forming, and stamp them out before they grow." About fully autonomous weapons, Amodei conjured a Hitchcockian scene: "A swarm of millions or billions of fully automated armed drones, locally controlled by powerful A.I. and strategically coordinated across the world by an even more powerful A.I., could be an unbeatable army, capable of both defeating any military in the world and suppressing dissent within a country by following around every citizen." There would be "a greatly increased risk" of democratic countries turning A.I. armies against their own people. President Trump and Hegseth already have a healthy disregard for democracy. Trump is trying to take over our elections because he's rightly worried that his party is going to get shellacked in November. And now he's escalating his push to remove the few pathetic guardrails that exist on A.I. Hegseth last fall revoked the press passes of all reporters who didn't agree to sign a pledge agreeing to his draconian restrictions on where they could report and what they could report on. Amodei did not want his A.I. model to be used for surveillance of Americans or autonomous weapons without human oversight -- reflecting his deepest fears. On Tuesday, Hegseth summoned Amodei to the Pentagon to demand that he let the Pentagon do whatever it wanted, as long as it was "lawful." This is poppycock, of course, because Trump and Hegseth have contempt for the law when it gets in the way of their whims, power grabs and revenge plots. Their bizarre overkill with Anthropic makes me wonder what nefarious deeds they're up to. 
The self-styled secretary of war offered Amodei a double ultimatum: He would invoke the Defense Production Act to compel Anthropic to give the Pentagon unrestricted use of its model, or he would designate it a supply-chain risk -- a national security threat -- which would put the company's government contracts, and possibly the company itself, in jeopardy. Anthropic had a choice: be extorted or blacklisted. On Friday, Trump unleashed hell on Amodei, denouncing the Anthropic techies who helped the Pentagon pluck Maduro out of his bedroom as "Leftwing nut jobs." Trump accused Anthropic of "trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution." "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," he posted. "We don't need it, we don't want it, and will not do business with them again!" In a post on X, Hegseth designated Anthropic a supply-chain risk: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." He railed that "Anthropic's stance is fundamentally incompatible with American principles." Then, late Friday night, Sam Altman announced on X that OpenAI had reached an agreement with the "Department of War" to use his company on classified work with red lines that sounded the same as those that Amodei sought. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Altman chirped. It was confusing how Altman's company could be accepted on the terms that crushed his rival. Did the administration simply have an ideological grudge against Anthropic, which it sees as more "woke" than OpenAI, or did Altman's buttering up of Trump work, or could his terms somehow have been different? 
While Altman said OpenAI was "asking the DoW to offer these same terms to all AI companies," Amodei said in a statement Friday night that he would sue the government. Hegseth was wrong. Anthropic has principles. It's the administration that is fundamentally incompatible with American principles.
[29]
OpenAI Leadership Defends Deal With Pentagon as Employees Wait in Limbo
Within the span of a few hours on Friday, the Pentagon dropped its deal with Anthropic after the latter refused to budge on safety guardrails regarding the use of its AI in surveillance or fully autonomous weapons without human oversight, then designated the company as a supply chain risk, before signing an agreement with OpenAI instead. All of this also took place just hours before U.S. military strikes started raining down on Tehran. The deal between the Department of Defense and OpenAI led to intense backlash from the general public, who largely viewed it as OpenAI caving to the Trump administration's requests. Meanwhile, Claude rose to the top spot on the App Store, and users are calling for a boycott of OpenAI. OpenAI said that its technology won't be used for mass domestic surveillance or to power direct autonomous weapons systems. The actual details of the contract and how those limitations will be implemented were not made public, but OpenAI executives shared some information in an ask-me-anything style open forum on X over the weekend. OpenAI's head of national security partnerships, Katrina Mulligan, said that the contract allows the Pentagon to use OpenAI's technology "for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols." In a separate post, she clarified that OpenAI intended "applicable law" to mean "the law applicable at the time the contract is signed." Mulligan also said that the contract only applies to defense and will not allow its use by domestic law enforcement. "If this contract were with domestic law enforcement or the NSA, we would have required different contract provisions, but nothing in U.S. law allows the Department of War to conduct domestic surveillance," Mulligan said. But even CEO Sam Altman admitted that the deal was "rushed," and that the "optics don't look good." 
"I have accepted that the US military is going to do some amount of surveillance on foreigners, and I know foreign governments try to do it to us, but I still don't like it," Altman said in a post on X. "On the other hand, I also respect the democratic process. I don't think this is up to me to decide." There is a significant amount of trust in the government coming from OpenAI leadership. "U.S. law already constrains the worst outcomes," Mulligan wrote, while Altman said that the U.S. government is "an institution that does its best to follow law and policy." But if mass surveillance scandals in very recent history tell us anything, the American government can find wiggle room in existing constraints if need be. The possibility of unconstitutionality of military strikes has also not stopped the Trump administration in the past from going full speed ahead with said strikes, such as in the case of the highly contested boat strikes in the Caribbean late last year that appear to meet the definition of a war crime, according to the ACLU. Additionally, the U.S. Congress hasn't exactly been in a hurry to write laws that take the existence of AI into consideration. The executives also claim that the deal offered to OpenAI was different from that offered to Anthropic. "I think Anthropic may have wanted more operational control than we did," Altman said. "We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn't ethical in the most important areas." Instead, the company will have forward-deployed engineers helping monitor the Pentagon's use of its technology, OpenAI executives said on X. Selling the model with technical controls, Mulligan said, is "often more reliable than contract clauses," like the provisions that Anthropic sought from the Pentagon, but an unnamed source told The Verge on Monday that the impact of these safeguards is limited. 
OpenAI's former geopolitics researcher Sarah Shoker also shared in a Substack post on Saturday that the defense industry lacks consensus on what adequate human supervision in autonomous weapons actually means in practice, which could be where Anthropic disagreed with the Pentagon while OpenAI did not. While OpenAI executives took to X to defend the company's decision, dozens of employees have taken a more critical approach. Before the announcement that OpenAI had secured the Pentagon deal, 96 employees signed an open letter asking company leadership to "continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight." Many OpenAI employees, including VPs and department heads, have also signed an open letter addressed to the Pentagon to withdraw the supply chain risk designation for Anthropic. And at least one research scientist has taken to X to voice disagreement with the contract. "I personally don't think this deal was worth it," OpenAI research scientist Aidan McLaughlin said in a post on X, adding that there is an "overwhelming" amount of internal discussion on the decision.
[30]
OpenAI Steps Over a Red Line Anthropic Refused to Cross
After announcing a sudden and surprising deal with the Pentagon, OpenAI co-founder Sam Altman moved quickly this weekend to show he had won meaningful concessions from military leaders. He hadn't. Whether through naivete or disregard, Altman has quickly leaped over the ethical and practical red lines that his company's rival, Anthropic, wasn't prepared to cross. The stark difference between the two companies comes down to this: OpenAI is taking the Pentagon on good faith over its interpretation of what is legal and ethical when it comes to mass surveillance of Americans. Anthropic is not. On the use of AI to autonomously kill people, OpenAI said it was satisfied that by not deploying its technology at the "edge" -- such as in drones -- its AI would not be responsible for a direct life-or-death judgment call. Anthropic disagrees. This isn't a question of Silicon Valley "woke" ideology, as has been suggested. According to a source familiar with the negotiations between the maker of the Claude AI model and the Pentagon, Anthropic's leadership made it clear the company was open, you could even say eager, to develop AI that could handle autonomous weapons of war. The red line, the person said, was that the company's internal tests suggested its models were not yet up to that task. Keeping AI in the cloud does not change that calculus, Anthropic believes, as the decision to kill could still be made far from the physical battlefield. On surveillance, the company felt ambushed by the Pentagon's late demands to be able to use its AI to analyze bulk data collected from Americans (as first reported by The Atlantic). The competing stances have now won OpenAI, at the very least, a $200 million government contract. Anthropic, meanwhile, faces a punishment that threatens its entire business should the full ferocity of officials' stated threats come to pass. 
"No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Defense Secretary Pete Hegseth wrote in his announcement that he would declare the company a "supply chain risk," a designation never before applied to a US company. Anthropic said it would challenge it in court. The rivalry between OpenAI and Anthropic is one that is coming to define this nascent (we assume) stages of the AI boom. Anthropic was started by the siblings Dario and Daniela Amodei and five other former OpenAI employees who defected from Altman's company on ethical grounds. And yet, for some time, it hasn't been distinctly clear how these genesis stories would manifest. In many respects, Anthropic was the Lyft to OpenAI's Uber -- a company with a friendlier reputation but ultimately doing much the same thing. More recently, however, the dividing line has started to solidify. Anthropic stuck its neck out in supporting states' ability to pass their own AI laws in the yawning absence of congressional leadership. OpenAI did not. The two companies have clashed over the introduction of advertising into chatbot responses, with Altman taking umbrage at being mocked by Anthropic's Super Bowl ads. At India's AI summit last month, Altman and Dario Amodei, who serves as Anthropic's chief executive officer, stood awkwardly next to each other, refusing to join the chain of hand-holding orchestrated by the country's prime minister, Narendra Modi. The events of the past few days mean the line between OpenAI and Anthropic now glows bright red. As Anthropic found itself subject to the Pentagon's attacks -- one official accused Amodei of having a "God complex" -- it had looked as though it might have found an ally in Altman, who said on Wednesday that he shared Anthropic's red lines. It later transpired that at the same time his company was in hurried negotiations to take Anthropic's place. 
"The main reason for the rush was an attempt to de-escalate matters at a time when it felt like things could get extremely hot," Altman wrote on X. He added: "I am confident in our team's ability to build a safe system with all of their tools -- including policy and legal matters, but also many technical layers." Anthropic already has one in place, and it won't be easy to untangle it from the intelligence community's classified systems -- if it comes to that. I'm told no official legal order has yet been made to sever ties, and Anthropic is still hopeful of a resolution. As it stands, Anthropic's Claude models, under the terms of the existing contract, are still available to the more than 100,000 users on top secret government networks that have been using them, a source familiar with the arrangement told me. Hammering home what's at stake, the Wall Street Journal reported this weekend that Claude was used to aid the US in attacking Iran. Sign up for the Bloomberg Opinion bundle Sign up for the Bloomberg Opinion bundle Sign up for the Bloomberg Opinion bundle Get Matt Levine's Money Stuff, John Authers' Points of Return and Jessica Karl's Opinion Today. Get Matt Levine's Money Stuff, John Authers' Points of Return and Jessica Karl's Opinion Today. Get Matt Levine's Money Stuff, John Authers' Points of Return and Jessica Karl's Opinion Today. Bloomberg may send me offers and promotions. Plus Signed UpPlus Sign UpPlus Sign Up By submitting my information, I agree to the Privacy Policy and Terms of Service. Swapping it out for OpenAI's technology wouldn't just be a case of switching from one model to the next in the way many of us do when deciding whether to ask ChatGPT or Claude or Gemini. Most significantly, it will be highly complex to transfer models designed to run on one kind of architecture, such as a data center using Nvidia Corp.'s GPUs, to ones that operate on Amazon.com Inc.'s custom AI chips. 
More to the point, there's a reason Anthropic had the Pentagon's contract -- the first of its kind within classified networks -- to begin with. For the Pentagon's purposes, Anthropic's technology is considered the best. When the US worries about China developing cutting-edge AI for military use, it seems counterproductive for the Pentagon to voluntarily excise the best American technology from its own arsenal. It would be downright reckless to stop US businesses from using it, too.
[31]
OpenAI updates Department of War deal after backlash
OpenAI CEO Sam Altman says the company rushed its recent deal with the U.S. Department of War (DOW), admitting that it appeared "opportunistic and sloppy." In an internal memo he subsequently shared on X, Altman stated that OpenAI is now amending its agreement to supply the military with AI technology. It seems to have done little to assuage concerns. "[W]e shouldn't have rushed to get this out on Friday," Altman wrote in an X post on Monday. "The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." OpenAI announced its partnership with the DOW late last week, snapping up the contract within days of President Donald Trump ordering federal agencies to stop using competitor Anthropic. According to Anthropic CEO Dario Amodei, the split was because it refused the DOW's demands that it remove safeguards against using AI for mass domestic surveillance and fully autonomous weapons. Instead, the DOW wanted to use Anthropic's AI tools for "any lawful use." As such, OpenAI's swift DOW deal provoked immediate backlash from its civilian users. Despite OpenAI's claim that its deal has even more safeguards than Anthropic's original agreement, the contract appeared to allow for both mass surveillance and AI-controlled weapons as long as such use is legal, and even laid out circumstances in which it would be permitted. Now OpenAI is attempting damage control, stating that it has worked with the DOW to add new language to the contract directly addressing use of its tech for domestic surveillance. "Throughout our discussions, the Department [of War] made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance," OpenAI wrote Monday in an update to its original deal announcement. 
Unfortunately, the new amendments OpenAI has shared continue to rely upon legality as the restraining limit preventing mass surveillance, leaving such use a possibility should the U.S. government change the law. They also fail to address the issue of autonomous weapons. "Consistent with applicable laws... the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," the new sections read. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." Many social media users reacted to OpenAI's contract changes with scepticism, some arguing that its specific prohibition of "deliberate" surveillance leaves notable loopholes. "Hard not to read as admitting to an AI dragnet," political researcher Tyson Brody (@tysonbrody) responded to Altman's post. "'intentionally' and 'deliberate' - so Americans will be swept up in this data, but the government can claim 'incidental collection' and thus legal." "'Not intentionally used' isn't a real safeguard in an autonomous AI system," wrote @Andy_Bloch. "It can wind up doing surveillance because of what it was trained on, what it figures out, or how people use it afterward." Altman previously indicated that OpenAI would only limit use of its AI tools along legal lines, not ethical ones, during a Q&A held shortly after the DOW deal was announced. The CEO expressed a reluctance to take an ethical stance, stating that OpenAI prefers to follow the government's directions rather than consider such issues itself. Despite criticism of this apparent abdication of responsibility, Altman reiterated this position again in his new memo, framing it as deference to "democratic processes."
"It should be the government making the key decisions about society," Altman wrote. "We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it)." Altman did state that DOW intelligence agencies such as the National Security Agency (NSA) won't use OpenAI's technology without an amendment to their contract. Even so, it currently seems unlikely that OpenAI would deny legal requests for such modifications, regardless of any ethical issues that may arise. (The NSA was previously revealed to have been conducting mass surveillance of U.S. citizens by whistleblower Edward Snowden in 2013.) Numerous OpenAI customers have cancelled their ChatGPT subscriptions in response to the company's deal with the DOW, with uninstalls reportedly jumping 295 percent in the wake of the news. Anthropic's AI chatbot Claude has since dethroned ChatGPT as the most downloaded free app in the U.S. Apple App Store,
[32]
Don't bet that the Pentagon - or Anthropic - is acting in the public interest | Bruce Schneier and Nathan E Sanders
The lesson here isn't that one AI company is more ethical than another. It's that we must renovate our democratic structures. OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic's insistence that the US Department of Defense (DoD) could not use its models to facilitate "mass surveillance" or "fully autonomous weapons," provisions the defense secretary Pete Hegseth derided as "woke". It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI. Despite the histrionics, this is probably the best outcome for Anthropic - and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here is the Pentagon's vindictive threats. AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie. In this sort of market, branding matters a lot.
Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic's place in government contracting, OpenAI's CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers. Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company was willing to supply it with AI, the department has already deployed dozens of open weight models - whose parameters are public and are often licensed permissively for government use. We can admire Amodei's stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a defense department partnership for $200m last year. And when they signed a partnership with the surveillance company Palantir in 2024. Read Amodei's statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words "democracy" and "autocracy" while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using "AI to achieve robust military superiority" on behalf of the democracies of the world in response to the threats from autocracies. It's a heady vision. But it is a vision that likewise supposes that the world's nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control. 
Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon's needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation. So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office. But, of course, this is the Trump administration, so it doesn't stop there. Hegseth has threatened Anthropic not just with loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as "a supply-chain risk to national security", a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic. The government has, incompatibly, also threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government's demands, Anthropic's response, and the legal context in which they are acting will undoubtedly all change over the coming weeks. But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved into mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines.
The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today's military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has. The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of government's adopting AI as technologies of war, or surveillance, or repression. Unfortunately, we don't live in a world where such barriers are permanent or even particularly sturdy. Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement. The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public's interest.
[33]
Ex-NSA leader, OpenAI board member calls out Anthropic-Pentagon fight
Why it matters: Designating just one American AI company as a risk could dismantle the Pentagon's decades of work to build trust across Silicon Valley, he warned. What they're saying: "This is not a good space for our nation," Nakasone said at the Aspen Institute's Crosscurrent conference in Sausalito on Monday. * "We need Anthropic. We need OpenAI. We need all of our large language model companies to be partnering with our government." Zoom in: Nakasone added that designating Anthropic a supply chain risk is "not good." * "The discussions over the weekend and the tenor of those discussions were tough for me to listen to," he said. * "As an American citizen, as someone who served in government, I think it's just not right -- this is not a supply chain risk," Nakasone said. Catch up quick: Last week, President Trump said the U.S. government would blacklist Anthropic and the Pentagon declared the company a "supply chain risk." * Meanwhile, OpenAI has inked a deal to be used within classified Pentagon systems. * As of Monday, the Pentagon has not yet sent Anthropic a formal notice designating the company as a supply chain risk, as Axios previously reported. The big picture: One of the biggest concerns about frontier AI model use within classified systems is its potential to be weaponized for mass surveillance. * Nakasone said -- to assuage those concerns -- surveillance powers need to fall in line with the Fourth Amendment, the Foreign Intelligence Surveillance Act and key presidential executive orders. What to watch: Nakasone also said lawmakers need to start thinking critically about how to monitor military AI use. * "Our DNA as a people is always looking at government surveillance as being bad, and we have to have that trust in us -- us being the National Security Agency, our intelligence community -- being able to do these types of missions with the confidence that what we are doing is by the letter of the law," Nakasone said.
Go deeper: AI's mass surveillance problem
[34]
'Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons' -- Anthropic CEO on why it won't agree to Pete Hegseth's scary request
"Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend." That's not a quote from Anthropic CEO Dario Amodei refusing to accede to the US Department of War's request that it allow its Claude AI models for mass surveillance and perhaps more problematically "Fully autonomous weapons." Instead, it comes from a 2017 Open Letter to the UN, co-signed by, among dozens of other AI and robotics leaders, Elon Musk, asking the global organization to ban autonomous weapons. It's a window into long-brewing concerns over the abuse and misuse of autonomous systems for warfare. It's also likely, despite Musk's closeness to the current Trump administration, that US Secretary of Defense (or War) Pete Hegseth has never read it. Anthropic is now at risk of losing a $200M US Department of War contract, despite, as Amodei describes it, already working "proactively to deploy our models to the Department of War and the intelligence community." Amodei is by no means anti-defense or against the use of AI by the US government. In his letter explaining Anthropic's decision, Amodei writes, "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries." However, what Hegseth has asked is for Anthropic to countermand its own "Constitution", a set of principles and safety restrictions for the use and behavior of its AI models. The US Department of War basically wants Anthropic to remove the guardrails. Anthropic Constitution Principles, such as being "Broadly Safe" and "Broadly Ethical," are in direct conflict with Hegseth's demands that the AI be used for mass surveillance and for fully autonomous weapons. Amodie makes it clear that his systems are not ready for any of this. 
"Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," writes Amodei, adding, "Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day." Armed and dangerous These are not new concepts. Many in the tech industry have been pondering these issues for almost a decade (if not longer). Musk and the AI and robotics community raised the alarm in 2017 because we were already seeing AI-backed robot systems being used in questionable ways. In 2016, a bomb disposal robot was used to kill a mass shooting suspect in Texas. Dallas PD put an explosive device on the robot's arm, guided it to where the suspect was holed up, and then they detonated the explosive device and killed the suspect. At the time, some saw it as an inflection point, and a concerning one at that. Episodes like that may or may not have triggered that 2017 letter to the UN. Keep in mind that this happened before the current generative and agentic AI revolution. Amodei knows better than most the massive leaps foundational models are taking every few months and, as he makes clear in his letter, our rules and strategies for managing AI in these circumstances have already fallen behind their capabilities. "AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI," he wrote. Essentially, with AI, we don't know what we don't know. Hegseth's willingness to recklessly use powerful AI models in both surveillance and warfare indicates he has zero knowledge or interest in the past and even less understanding of the intricacies of these systems. 
A very bad idea I've yet to talk to a technologist, a roboticist, or someone within the AI community who thinks letting an AI (or an AI-powered robot) control or carry a weapon is a good idea. Hegseth isn't necessarily spelling out that scenario, but his requirement to remove the guardrails Anthropic has smartly put in place indicates to me that he doesn't really care about repercussions and AI casualties. He's focused on results, perhaps at any or all costs, including safety and liberty. Amodei's done the right thing here, basically calling Hegseth's bluff. As the Anthropic CEO made clear, Claude AI is already being used in many Department of War systems. Pulling it out and retrofitting for another, perhaps less powerful and intelligent set of models might not be easy and probably won't have the desired outcome of a system ready to carry out Hegseth's bidding. Clearer heads must prevail here. As the tech leaders and, yes, even Elon Musk, wrote in 2017, "Once this Pandora's box is opened, it will be hard to close."
[35]
Sam Altman Admits He's Made a Huge Mistake
OpenAI CEO Sam Altman went into full damage control mode over the weekend. A day before the United States attacked Iran, the embattled CEO announced that the company had signed a new agreement with the Pentagon over how its AI models could be used -- and the blowback is clearly impacting the company's bottom line, because Altman is sounding deeply defensive. Many users saw the move as an attempt to swoop in and yank a multibillion-dollar government contract from the clutches of its rival, Anthropic. Last week, Anthropic's CEO Dario Amodei refused to give in to the Department of Defense's demands, drawing a line in the sand and insisting that its AI models may not be used for autonomous killing machines or mass surveillance of Americans, a decision lauded by many users of its chatbot Claude. Regardless of the genuineness of Amodei's continued assurances -- there are plenty of reasons not to take billionaire CEOs by their word -- OpenAI effectively handed Anthropic a major PR victory. The shifting dynamic triggered a mass exodus from OpenAI's ecosystem, with uninstall rates of OpenAI's ChatGPT spiking 295 percent day-over-day on Saturday, the day after OpenAI announced its deal with the Pentagon. Now, Altman is continuing his apology tour, conceding in a lengthy tweet on Monday evening that OpenAI "shouldn't have rushed" its Department of Defense deal. After what many saw as OpenAI giving in to the Pentagon's wishes, Altman claimed that OpenAI would be altering the terms of the deal after the fact -- a bizarre twist that likely won't sit well with Trump's military or the company's already disillusioned customers. Altman claimed that the company would "amend our deal" to add the prohibition of "deliberate tracking, surveillance, or monitoring of US persons or nationals."
"There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety," the CEO wrote. "We will work through these, slowly, with the [Department of War], with technical safeguards and other methods." It's important to note that Altman's tweet makes no mention of autonomous AI-enabled weapon systems, the other key issue that led to the rift between Anthropic and the Department of Defense. Whether that means such weapon systems are on the table or not for OpenAI remains unclear, but given Altman's language, it's certainly not out of the question. It also remains unclear whether the Defense Department will agree to these revised terms or whether it had originally agreed to accommodate OpenAI's original terms and not Anthropic's, as CNBC points out. At the very least, Altman admitted the optics of his eleventh-hour amendment were abysmal. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he wrote. "Good learning experience for me as we face higher-stakes decisions in the future." Altman also called on the government not to designate Anthropic a supply chain risk to national security. After Anthropic refused to sign the deal with the Pentagon, defense secretary Pete Hegseth announced on February 27 that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military," Hegseth fumed. "That is unacceptable." Whether Altman's latest mea culpa will meaningfully address OpenAI's growing PR crisis is dubious at best. For many users, the damage has already been done. Besides, it's not a one-horse race, with ChatGPT increasingly lagging behind every other leading AI company on LLM leaderboards. 
"People are p*ssed and there are better products," one Reddit user wrote. "It's a recipe for disaster." But Anthropic's Amodei may not be a knight in shining armor, either. As the Wall Street Journal reported over the weekend, the Department of Defense selected targets in Iran using Anthropic's Claude chatbot, highlighting the AI company's preexisting ties with the military. In other words, while Amodei told CBS News in a carefully timed interview on Sunday that mass surveillance and autonomous weapons are the "two red lines" the company has maintained "from Day One," the company still signed an earlier deal with the Pentagon that let the military use Claude to help execute deadly attacks.
[36]
OpenAI's Pentagon deal raises new questions about AI and surveillance | Fortune
On Friday, just hours after publicly backing rival Anthropic for standing firm against the Pentagon's demands, OpenAI CEO Sam Altman announced his company had struck its own deal with the Pentagon. The move came shortly after the U.S. government had taken the highly unusual step of designating Anthropic a "supply chain risk." OpenAI's decision drew criticism from many AI researchers and tech policy experts, even though OpenAI said its agreement included the limitations around surveillance of U.S. citizens and lethal autonomous weapons that Anthropic had wanted in its own contract but which the Pentagon had refused. One of the key points of contention was over domestic mass surveillance. Experts have long warned that advanced AI is capable of taking scattered, individually innocuous data -- like a person's location, finances, and search history -- and assembling it into a comprehensive picture of any person's life, automatically and at scale. Anthropic CEO Dario Amodei has said that this kind of AI-driven mass surveillance presents serious and novel risks to people's "fundamental liberties" and that "the law has not yet caught up with the rapidly growing capabilities of AI." But while OpenAI said in a blog post it had reached a deal with the Pentagon that its technology would not be used for mass domestic surveillance or direct autonomous weapons systems, the two hard limits that Anthropic had refused to drop, some legal and policy experts have raised questions about a potential gap in the law. Part of the dispute hinges on a murky area of the law: large-scale analysis of Americans' data can be lawful under current U.S. statutes, even if it is indistinguishable in practice from mass surveillance. "Right now, under U.S. law, it's lawful for government authorities to buy up commercially available information from data brokers and other third parties," Samir Jain, the vice president of policy at the Center for Democracy & Technology, said. 
"If you buy up massive amounts of data and allow AI to analyze it, you may end up, in effect, engaging in mass surveillance of Americans through that process. It's not currently restricted by law or prohibited by law." OpenAI says its "red lines" are enforced through technical systems it plans to build as well as through language in its contract with the Pentagon. According to a blog post released by the company, the contract permits the Department of Defense to use the AI "for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols," while explicitly prohibiting unconstrained monitoring of Americans' private information. The problem is that what counts as "lawful" can change. OpenAI's contract points to existing laws and Department of Defense policies, but those policies could be modified in the future. "Nothing in what they've released would prevent those policies from being changed going forward," Jain said. Some critics argue that existing intelligence authorities already allow forms of surveillance that OpenAI says it prohibits. Mike Masnick, founder of Techdirt, wrote on social media that the agreement "absolutely does allow for domestic surveillance," pointing to Executive Order 12333, a longstanding authority that permits intelligence agencies to collect communications outside the United States, which can include Americans' data when it is incidentally acquired. Some of the debate centers on specific portions of U.S. law that govern different national security activities. The U.S. military's actions are generally governed by Title 10 of the U.S. Code. This includes the work the Defense Intelligence Agency and U.S. Cyber Command do to support military operations. But some of the DIA's work falls under a different portion of U.S. law, Title 50 of the U.S. Code, which generally governs covert intelligence gathering and covert action. 
The work of the Central Intelligence Agency and the National Security Agency generally falls under Title 50 too. Some of the most sensitive Title 50 activities, especially covert actions, are conducted largely behind the scenes and require a presidential finding. In a blog post published over the weekend, OpenAI shared a detailed account of its agreement with the Pentagon and, according to a post on social media by well-known OpenAI researcher Noam Brown, the company's head of national security partnerships, Katarina Mulligan, told Brown that OpenAI's contract does not cover Title 50 work by the intelligence community, one of the major causes for concern from critics. Representatives for OpenAI did not immediately respond to a request for comment from Fortune. But legal scholars have noted that the distinction between Title 10 and Title 50 activities is increasingly blurry. In practice, the two can look very similar, and both can involve analyzing data about foreign actors or tracking patterns. But that overlap creates a gray area for companies like OpenAI: a contract that bans Title 50 work doesn't automatically prevent Title 10 agencies like the DIA from using AI to analyze commercially available or unclassified datasets. "If they're saying that their system can't be used for any Title 50 activities, then that reduces the scope of activities for which the AI system can be used," Jain said. "But that doesn't solve the problem."
[37]
OpenAI's ChatGPT, Anthropic's Claude, and the fog of AI war
On Friday afternoon, President Trump ordered every federal agency to stop using Anthropic's AI technology. That evening, the Pentagon labeled the company a supply-chain risk, a designation normally reserved for Chinese firms suspected of espionage, and one that could force any company doing business with the Defense Department to prove it doesn't use Anthropic's tools. On Saturday, the U.S. struck Iran with Anthropic's tools still running inside Central Command, the military's Middle East headquarters, where they are used in targeting and intelligence systems. Trump had granted agencies six months to phase out the technology, a tacit acknowledgment that you can't rip AI from military operations overnight. The rupture between the administration and Anthropic is nominally about guardrails. The company said it refused to let its tools be used for autonomous weapons or mass surveillance and wouldn't budge when Defense officials demanded blanket permission to use the technology in any lawful scenario. CEO Dario Amodei said the company couldn't agree in good conscience. Trump responded by calling Anthropic a "radical-left, woke company" that would never dictate how the military fights. Within hours of the ban, OpenAI announced a new deal to deploy its models in classified Pentagon settings. CEO Sam Altman disclosed a notable detail: The agreement includes the same prohibitions on mass surveillance and autonomous weapons that Anthropic had sought. The Pentagon, he wrote on X, "agrees with these principles, reflects them in law and policy, and we put them into our agreement." So the company that got blacklisted and the company that got rewarded appear to have secured functionally similar terms. The difference is most likely politics, or more precisely, the perception of obedience this administration seems to require from the private sector. OpenAI's president gave $25 million to a pro-Trump super PAC last year. 
Anthropic hired Biden administration officials and lobbied for AI regulation. As one former military AI official from Trump's first term put it: Anthropic is paying the price for not bowing down. Then came a harder question. When a mistargeting incident reportedly killed more than 150 schoolchildren in Iran, outside observers immediately asked whether AI could have contributed to the error. The honest answer is that nobody outside the Pentagon knows, and the Pentagon isn't saying. Defense Secretary Pete Hegseth, who has staked his tenure on aggressive AI adoption, has little incentive to be forthcoming. Targeting errors aren't new, but the introduction of generative AI into the targeting chain is. This is technology that still hallucinates facts, misreads images, and stumbles over reasoning in low-stakes commercial settings. Deploying it in warfare, where the consequences of a wrong answer are measured in bodies, represents a leap that no one, military or otherwise, has rigorously tested. The consumer backlash has complicated the victory lap. Anthropic's Claude app shot to the top of the App Store. A grassroots boycott campaign urged users to drop ChatGPT over OpenAI's Pentagon deal. On X, Altman faced a barrage of pointed questions: If OpenAI's contract permits all lawful uses, how can it also prohibit mass surveillance and autonomous weapons, which have no explicit legal ban? If OpenAI secured the same red lines Anthropic wanted, why couldn't the Pentagon accept those terms from Anthropic? The contradictions matter beyond the discourse. These companies are in a ferocious competition for paying users, enterprise clients, and engineering talent. Neither is profitable. Both are burning billions and have raised tens of billions more in recent weeks to stay in the race. The Pentagon contracts are worth around $200 million apiece, which is not the biggest check either company will cash this year, but suddenly the biggest threat to both of their businesses. 
For Anthropic, a supply-chain risk designation reaches far beyond the Pentagon. Any company that does business with the federal government, and that includes Anthropic's biggest backers Amazon $AMZN and Google $GOOGL, may need to prove they don't use Claude. That's a question that could ripple through enterprise sales, cloud partnerships, and investment decisions well beyond defense. For OpenAI, the calculus of a classified-use agreement is one thing as a line item in a contract negotiation. It's another when bombs are actively falling and the questions about guardrails, targeting errors, and dead children don't have clear answers. The perception that your chatbot helps pick bombing targets is not a brand problem that a few replies on social media can solve.
[38]
OpenAI Claims Safety 'Red Lines' in Pentagon Deal -- But Users Aren't Buying It - Decrypt
The controversy sparked the QuitGPT movement and drove a surge in Claude downloads. OpenAI said this weekend that it reached an agreement with the Pentagon to deploy advanced AI systems in classified environments, marking a significant expansion of the company's work with the U.S. military. The announcement came less than 24 hours after the Trump administration blacklisted Anthropic, designating the rival AI firm a "supply chain risk to national security" following a dispute over contract language related to surveillance and autonomous weapons. President Donald Trump also directed federal agencies to immediately cease using Anthropic's technology, with Treasury Secretary Scott Bessent writing Monday on X that the agency "is terminating all use of Anthropic products, including the use of its Claude platform, within our department." The timing of the AI announcements placed OpenAI's deal under intense scrutiny. In a detailed blog post, the company outlined what it described as firm "red lines" and layered safeguards governing its Pentagon partnership. The agreement, as presented by OpenAI, raises broader questions about how AI systems will be governed in national security settings, and how the company's stated restrictions will be interpreted and enforced in practice. OpenAI's blog post opens with three commitments framed as non-negotiable: no use of its technology for mass domestic surveillance, no use to independently direct autonomous weapons systems, and no use for high-stakes automated decisions like social credit scoring. Then comes the actual contract language -- which OpenAI notably calls "the relevant language," not "the full agreement." "The Department of War may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols," OpenAI said. That is the exact phrase Anthropic said the government had been demanding throughout negotiations. 
The exact phrase that Anthropic refused to go along with. OpenAI signed it, yet argues its red lines remain fully intact. However, "lawful" in national security contexts isn't a fixed boundary -- it lives inside a patchwork of statutes, executive orders, internal directives, and often classified legal interpretations. When a contract grants "all lawful purposes," the practical limit becomes the government's current legal envelope, not an independent standard set by the vendor. The weapons provision reads that the AI system "will not be used to independently direct autonomous weapons in any case where law, regulation, or department policy requires human control." The prohibition only applies where some other authority already requires human control -- it borrows its teeth entirely from existing policy, specifically DoD Directive 3000.09. That directive requires autonomous systems to allow commanders to exercise "appropriate levels of human judgment over the use of force." OpenAI's strongest counterargument is its cloud-only deployment architecture -- fully autonomous lethal decision loops would require edge deployment on battlefield devices, which this contract doesn't permit. That's a real technical constraint. But cloud-based AI can still perform target identification, pattern-of-life analysis, and mission planning. Those are kill-chain activities regardless of where the final trigger sits. The outcome for a target doesn't differ based on which server the model runs on. The surveillance clause follows a similar pattern. OpenAI's stated red line: no mass domestic surveillance. The contract language: The system "shall not be used for unconstrained monitoring of U.S. persons' private information as consistent with these authorities" -- then lists the Fourth Amendment, FISA, and Executive Order 12333. The word "unconstrained" implies a constrained version of mass surveillance would be permissible. 
And EO 12333 is the executive order the NSA has used to justify intercepting Americans' communications when done outside U.S. borders. This is where Anthropic's concern about wording throughout the negotiations comes into focus. Anthropic's argument was that current law hasn't caught up with what AI makes possible. The government can legally purchase vast amounts of aggregated commercial data about Americans without a warrant -- and has already done so. OpenAI's contract language, by anchoring its protections to existing legal frameworks, may not close the gap Anthropic was actually worried about. On Saturday night, Altman held an AMA responding to thousands of questions about the deal. When asked what would cause OpenAI to walk away from a government partnership, he answered: "If we were asked to do something unconstitutional or illegal, we will walk away." That framing places OpenAI's limit at legality -- not at an independent ethical judgment about what the company will or won't enable if it happens to be legal, which is the standard Anthropic defends. Asked whether he worried about future disputes over what counts as "legal," he acknowledged the risk: "If we have to take on that fight we will, but it clearly exposes us to some risk." On why OpenAI reached a deal where Anthropic could not, Altman offered this: "Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. I'd clearly rather rely on technical safeguards if I only had to pick one. I think Anthropic may have wanted more operational control than we did." That's a substantive philosophical difference. Anthropic argued that because frontier models can be repurposed for intelligence and military workflows in ways that are hard to anticipate, the limits need to be explicit and binding in writing, even at the cost of the deal. 
OpenAI's position is that technical architecture, embedded personnel, and existing law together constitute a stronger safeguard than contractual text alone. The backlash was immediate. By Monday, the "QuitGPT" movement claimed that over 1.5 million people had taken action -- canceling subscriptions, sharing boycott posts, or signing up at quitgpt.org. The campaign framed OpenAI's move as prioritizing military contracts over user safety, accusing the company of agreeing to let the Pentagon use its technology for "any lawful purpose, including killer robots and mass surveillance." OpenAI might contest that characterization. But the market moved regardless. Anthropic's Claude surged past ChatGPT to become the most downloaded free app in the United States on Apple's App Store, with the company telling Decrypt that it saw record daily signups over the weekend. Pop star Katy Perry shared a screenshot of Claude's pricing page on X. Hundreds of users documented their subscription cancellations publicly on Reddit. Graffiti praising Anthropic appeared outside its San Francisco offices, while chalk attacks covered OpenAI's sidewalks. Even hundreds of OpenAI's own employees had previously signed an open letter supporting Anthropic's refusal to accede to Pentagon demands. The QuitGPT framing is emotionally compelling, but not entirely precise. Anthropic itself has a partnership with Palantir and Amazon Web Services that grants U.S. intelligence agencies and defense departments access to Claude models, and those models have allegedly been used in military operations to overthrow the governments of Venezuela and Iran. The ethics of AI and national security contracting were never clean on either side. What the campaign captured, accurately, is that a large segment of users believed there was a meaningful difference between how the two companies drew their limits -- and voted with their subscriptions. Whether that difference is as meaningful as it appears requires reading the contract carefully.
[39]
Opinion | Why Did Trump Go to War With Anthropic?
The Trump administration waged its latest war of choice this week when it tried to coerce the tech company Anthropic into giving the military a blank check in how it uses the company's artificial intelligence technology. The confrontation sharply escalated on Tuesday when Defense Secretary Pete Hegseth delivered an ultimatum to Anthropic's chief executive, Dario Amodei: Lift all safeguards on its technology by 5:01 p.m. Friday or lose the company's $200 million contract and any future business with the military. It culminated about an hour before that deadline when President Trump publicly declared he was "directing every federal agency in the United States government to immediately cease all use of Anthropic's technology." In typical florid fashion, the president went on to call the company "WOKE" and full of "Leftwing nut jobs" who meant to do the country harm. It's a striking turn for Anthropic, which in late 2024 became the first major A.I. lab to work on classified U.S. military networks. Although military contracts made up a small percentage of its business, the company's A.I. model was the most widely used across the American national security complex. Anthropic's technology enables troops and intelligence agents worldwide to synthesize and cross-reference oceans of classified information in a split second. In January it was reportedly used during the raid to capture Venezuela's leader, Nicolás Maduro. But the company has always had two red lines: The government can't use its product in the mass surveillance of American citizens or install it in killer robots that operate outside human control. These safeguards have long been at the core of Anthropic's safety-conscious business model and don't differ much from those of other A.I. labs trying to do the tricky job of advancing their cutting-edge technology while ensuring they don't compromise public safety. When viewed this way, Anthropic's limits are sensible and legal. Federal law almost always precludes the U.S. 
military from spying on American citizens, and a Defense Department directive has strict regulations around all lethal autonomous weapons that don't have human oversight. But Mr. Hegseth couldn't live with those terms, and Mr. Amodei refused to give in to the Pentagon's threats, saying in a statement late on Thursday that his company was willing to suffer the consequences. The apparent end of this partnership doesn't make America any safer and, instead, unnecessarily sets back the nation's ability to defend itself. It could take the Defense Department six months to remove Anthropic's A.I. tools from internal computer systems, and another A.I. model will have to fill the vacuum. The Pentagon hasn't yet identified a suitable backup. It could also have a lasting impact on the military's already fraught dealings with Silicon Valley. Since the advent of personal computers, the Pentagon's relationship with technology companies has been hampered by mutual suspicion. Many U.S. troops use more modern technology in their daily lives than they do while in uniform. Anthropic's A.I. technology is a rare instance of a potentially game-changing national security capability that was developed by the private sector, not the government -- and a partnership that was, until recently, working. That matters in a future in which software will play a more critical role in warfare than hardware. The Defense Department isn't improving the chances that other innovative start-ups will bring it their budding technology. The prospect of running afoul of the Pentagon became even scarier after Mr. Hegseth announced plans to designate the company a threat to the supply chain for not responding favorably to his ultimatum. The unprecedented move would mean that Anthropic, along with any company that uses its technology, would be prohibited from future Pentagon contracts. Private industry shouldn't get in the habit of dictating policy to the federal government, but today's A.I. 
presents a distinctive problem. While A.I. models have come a long way, the technology cannot yet be relied on for modern war fighting. Mr. Amodei knows this better than anyone, which is why Thursday's statement said the company "cannot in good conscience accede" to the government's request. The company should be applauded for doing something most military contractors fail to do when presented with lucrative, multiyear contracts: admit their product doesn't yet meet suitable standards. It's important to understand that Anthropic was not saying it would not ever be willing to have its technology outfitted on autonomous weapon systems, such as drones. It was saying the tech wasn't ready yet. This important distinction didn't stop Emil Michael, the Pentagon's chief technology officer, from making half a dozen social media posts on X on Thursday that ridiculed Anthropic and labeled Mr. Amodei a "liar" with a "God-complex." Mr. Michael later insisted the department would use the technology only for "lawful purposes." "At some level, you have to trust your military to do the right thing," Mr. Michael told CBS News. "But we do have to be prepared for the future. We do have to be prepared for what China is doing." He added, "We'll never say that we're not going to be able to defend ourselves in writing to a company." While the standoff has been largely met with public silence from other A.I. labs, many of them also have established internal red lines regarding their technology that are similar to Anthropic's position. Roughly 75 OpenAI employees and more than 450 Google employees published an open letter this week aligning with Anthropic and asking company leadership to "refuse the Department of War's current demands." On Friday, The Wall Street Journal reported that OpenAI's chief executive, Sam Altman, had entered the fray to help try to "de-escalate" the situation, but that was before Mr. Trump's outburst. Before Mr. 
Trump's announcement, Elon Musk wrote on X that "Anthropic hates Western Civilization." Notably, Mr. Musk's xAI, Anthropic's competitor, has agreed to let its A.I. model be used on classified networks seemingly under the Pentagon's conditions. The military needs the very best A.I. to streamline its operations, and it should find ways to work with these companies, rather than erect barriers. Tech companies have long been outwardly hostile toward the Pentagon and its goals and missions. In 2018, thousands of Google workers signed a petition demanding that the company and its contractors put in place a policy against building "warfare technology," after Google contributed to an experimental drone targeting program. For the first time in decades, that's starting to change: Venture capital poured some $50 billion into military tech last year, nearly double its investments in 2024. In June the Army recruited four senior tech executives, from companies like Meta and OpenAI, to become officers in a newly established reserve innovation unit called Detachment 201. The secretary of the Army, Dan Driscoll, who worked in venture capital, has said, "I can say unequivocally that the Silicon Valley approach is absolutely ideal for the Army." Mr. Trump's habit of infusing politics into business dealings complicates whether that sentiment can hold true in this administration. Although Mr. Amodei is a prominent Democratic donor who has been critical of the president, there's no indication his personal politics came into play on this matter. The Pentagon and the A.I. companies aren't the only players that can help resolve this fight. Congress can establish guardrails around this emerging technology by outlawing its use in situations where civilians are present, by making human supervision mandatory and by ensuring kill switches for any system reliant on A.I. technology. The international community understands this pressing need. 
The United Nations secretary general and the International Committee of the Red Cross have called for a new treaty to be concluded this year on autonomous weapon systems. The future of A.I. has already arrived, and humans are clearly having trouble keeping up. Finding sensible common ground between private industry and government is in everyone's interest. Mr. Hennigan writes about national security for Opinion.
[40]
'We shouldn't have rushed to get this out on Friday': OpenAI hastily amends the terms of its controversial deal with the US Department of War as CEO Sam Altman claims it's been a 'good learning experience'
"We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty." After a very public falling out between Anthropic and the US Department of War late last week -- in which the former refused to remove safeguards preventing its AI tools from being used for autonomous weaponry and mass surveillance purposes -- OpenAI stepped into the vacuum with a deal to use its own AI tools in the US military's systems. However, after reaching an agreement with the Pentagon on Friday, OpenAI CEO Sam Altman has since announced that his company will be amending the language used within the deal (via BBC News). In a statement posted on X, Altman appears to regret jumping into the fold quite so quickly, amid considerable backlash to the earlier terms. "One thing I think I did wrong: we shouldn't have rushed to get this out on Friday", said Altman. "The issues are super complex, and demand clear communication." The language Altman wishes to tweak revolves around domestic mass surveillance concerns. Citing the Fourth Amendment of the US Constitution and the National Security Act of 1947, the new terms amount to the following: "The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," the statement reads. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." "It's critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear," says Altman, although he then clarifies that "just like everything we do with iterative deployment, we will continue to learn and refine as we go." 
Altman also says that the Department of War has affirmed that OpenAI's services will not be used by its intelligence agencies, like the NSA, and that OpenAI "want[s] to work through democratic processes." "It should be the government making the key decisions about society," Altman continues. "We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it)." However, Altman's second-to-last point is perhaps the most interesting. "There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods." In a now-updated statement on OpenAI's website (which echoes many of the points Altman makes in his previous posting), the lines are drawn slightly more clearly: "We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs", says the company. "No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')." Which seems suspiciously close to the same points that Anthropic was pushing back on, and those which appear to have cost it its $200 million government contract as a result. However, OpenAI still states that it thinks "our agreement has more guardrails than any previous agreement for classified AI deployments", and that "we protect our red lines through a more expansive, multi-layered approach. 
We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections." For these seemingly last-minute changes, Altman appears somewhat contrite. Summing up his fifth and final point in his earlier X post, the OpenAI CEO said: "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. [It's been a] good learning experience for me as we face higher-stakes decisions in the future." Quite the public learning experience, at the very least. ChatGPT uninstalls were reported to have surged by 295% after the initial agreement was announced, and users appear to have reacted poorly to the idea of their AI tool of choice jumping into bed with the US Department of War. At the time of writing, the most liked comment on Altman's X post reads as follows: "No amount of damage control is going to fix the irreparable harm you did to your brand this week. It's over, Sam." Time will tell, I suppose.
[41]
OpenAI alters deal with Pentagon as critics sound alarm over surveillance
OpenAI CEO Sam Altman unveiled a reworked agreement with the Pentagon Monday night governing the Defense Department's use of its AI services, which he says provides stronger guarantees that the military won't use OpenAI's systems for domestic surveillance. The new agreement states that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," according to a post on OpenAI's website. OpenAI had faced some backlash as news of an initial agreement between the leading AI company and the Pentagon emerged on Friday. Many observers claimed the original language shared on OpenAI's website provided ample loopholes for the government to surveil Americans. The move comes after weeks of intense debates between rival AI company Anthropic and the Pentagon over how the military can use advanced AI systems. While the Defense Department had wanted Anthropic to agree to use its systems for "any lawful purpose," Anthropic maintained its systems could not be used for domestic surveillance or to control deadly autonomous weapons. Until last week, Anthropic was the only major AI company whose services were actively used on classified networks. Researchers argue that without guardrails, AI could allow authorities to monitor individuals with unprecedented speed and accuracy, combing through mountains of digital data to track peoples' movement and behavior. "It is critical to protect the civil liberties of Americans," Altman wrote in a post on X Monday night announcing the new contract language that he said better limits domestic surveillance. "The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA)." 
Katrina Mulligan, head of national security partnerships for OpenAI, said in another post on X Tuesday morning that "defense intelligence components are excluded from this contract," adding that she would be open to future work with the NSA "if the right safeguards were in place." OpenAI did not respond to a request for comment. Many observers remained unswayed Tuesday, concerned that the snippets of OpenAI's contract with the Pentagon published by the company remained purposefully vague and provided carveouts for domestic surveillance by various intelligence agencies within the Defense Department. The full text of the contract has not been released publicly. "OpenAI has said that the Department of War contractually agreed not to use ChatGPT in agencies that surveil American people," said Brad Carson, a former congressman and general counsel of the Army who now leads the Washington, D.C., policy group Americans for Responsible Innovation. "They have been happy to show contract language when it benefitted them, but they refuse to release to the public this contractual provision." "I've reluctantly come to the conclusion that this provision doesn't really exist, and they are just trying to fake it," Carson told NBC News. Carson recently founded an AI-focused super PAC which has received $20 million from OpenAI rival Anthropic. Several legal experts agreed that greater transparency about the entire contract and any other key clauses is necessary to properly evaluate the company's claims. "We still need to see the whole contract to say anything with a reasonable level of confidence," said Brian McGrail, senior counsel at the Center for AI Safety, a nonprofit research and advocacy group. "It's definitely a step in the right direction, and I do want to give OpenAI some credit."
OpenAI's agreement with the Pentagon was announced shortly after Defense Secretary Pete Hegseth said he would label rival AI company Anthropic, which had long been in contract negotiations with the Pentagon, a supply chain risk to national security. Anthropic said the designation, which would force the Pentagon and contractors to stop using Anthropic's services for defense purposes, has never before been publicly applied to an American company. At an event in Sausalito, California, on Monday, retired Gen. Paul Nakasone, the former director of the National Security Agency and U.S. Cyber Command, said that the Pentagon should work to incorporate all leading American AI companies' technology into national defense. "We need Anthropic, we need OpenAI, we need all of our large language model companies to be partnering with our government," Nakasone, who is a member of OpenAI's board of directors, said at a conference sponsored by the Aspen Institute. "I think the supply chain piece is not good. The discussions over the weekend and the tenor of those discussions were tough for me to listen to. As an American citizen, someone who served in government, I just think that it's not right, okay? This is not a supply chain risk." Anthropic had long maintained that the Defense Department could not use its AI systems for domestic mass surveillance or for direct use in autonomous weapons, though it added concessions for the military to use its systems for cyber and missile defense purposes in December. After a meeting between Anthropic CEO Dario Amodei and Hegseth last Tuesday, the Defense Department issued an ultimatum for Anthropic to reach an agreement by 5 p.m. ET Friday. However, on Thursday, an Anthropic spokesperson told NBC News the Defense Department's latest "language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." 
But as Anthropic's relationship with the Defense Department broke down, OpenAI's deepened, with Friday's announcement of a contract adding a fresh round of intrigue to a story that had already captivated much of the tech and defense community. In his post Monday night, Altman said the rush to ink a deal made the negotiations look "opportunistic and sloppy" even though OpenAI was "genuinely trying to de-escalate things and avoid a much worse outcome." Throughout the weekend and early this week, an army of legal experts have examined the latest public contract language from OpenAI, trying to determine whether the company's terms had actually added any substantive protections beyond the Defense Department's "any lawful use" standard. "I am confused about why the Pentagon would accept this language when they just tried to nuke Anthropic for asking for something very similar to this," Charlie Bullock, senior research fellow at the Institute for Law and AI think tank, wrote on X after the updated language emerged. Many legal experts argue that each word in the contract carries significant weight, as they say the government will take the widest possible reading of the contract's terms. "The pattern we've seen play out time and again in these surveillance debates is that the intelligence and national security community ends up interpreting exceptions in an extremely broad fashion, far more broadly than any normal reasonable person," McGrail said. "And because so much of it is secret, there's limited visibility for the public to push back." "So could there be some new loophole to be exploited here that we aren't predicting? It's totally possible," McGrail added. Experts have also focused on whether the contract is permanently anchored in today's notions of legality, as they worry the government could alter the boundaries of "any lawful use" by issuing new executive orders or legal opinions. 
The recent debate over military use of AI for domestic surveillance has particularly focused on the government's ability to use commercially available data in its operations, as other methods for surveilling Americans can prove more difficult to gain legal approval for. For years, companies providing or displaying ads on phones or laptops have been able to compile targeted data about users, including precise location data, and sell that information to various government agencies to identify individuals' travel and behavioral patterns. Mulligan, OpenAI's national security leader, said in a Monday night X post that the contract's "new language reinforces that domestic surveillance is disallowed under this agreement, including involving commercially acquired information." Sen. Ron Wyden, D-Ore., who has in recent years repeatedly warned that the federal government buys commercially available data on Americans for surveillance purposes, criticized the Pentagon for not acquiescing to Anthropic's privacy concerns. "The Defense Department is throwing a fit over Anthropic asking for the bare minimum ethical guardrails on how DOD uses its product," Wyden said in an emailed statement. "That's serious cause for alarm, given AI's ability to turn disparate pieces of public or commercial data into highly revealing profiles of Americans. Location data, web browsing records, and information about mental health, political activities and religious affiliations are all available for pennies on the open market and could make Americans targets for doing things that are completely legal." "Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed, regardless of what the current, outdated laws on the books say."
Amodei, the Anthropic CEO, has repeatedly remarked that firmer commitments from the Defense Department to not use AI to surveil Americans are necessary because the law has not caught up to AI's increasingly powerful capability to analyze or parse vast troves of data. Recent research has also shown that individuals can be identified by today's AI systems, even if the underlying data has purportedly been anonymized. Protestors of OpenAI's initial deal with the Pentagon surrounded OpenAI's San Francisco headquarters this weekend with chalk messages encouraging employees to remain skeptical of the company's terms, while uninstalls of OpenAI's ChatGPT app surged following news of the agreement. Michael Horowitz, former deputy assistant secretary of defense for emerging capabilities and current professor of political science at the University of Pennsylvania, told NBC News that the dispute between the Pentagon and Anthropic went beyond the simple contract terms. "This dispute reflects a breakdown in trust between Anthropic and the Pentagon, where Anthropic does not trust that the Pentagon will use their tech responsibly, and the Pentagon doesn't trust that Anthropic will allow its tech to be used for what the Pentagon views as important national security use cases," Horowitz said. "Part of that is cultural differences, part of that is politics, part of that is personalities."
[42]
Can Anthropic survive taking on Trump's Pentagon?
Washington (United States) (AFP) - In an unprecedented dispute between the US government and a private business, Defense Secretary Pete Hegseth has declared AI company Anthropic a supply chain risk -- a measure usually reserved for companies from adversary nations, like China's Huawei. The Pentagon is furious that Anthropic is insisting on certain conditions for the use of its technology -- no mass surveillance or fully autonomous weapons systems -- even as the military has been using the company's models for classified operations for more than two years. Some believe the decision could destroy one of America's most high-profile companies in a unilateral act of corporate destruction. Will Anthropic survive this? The battle is bigger than the actual financial contract, which amounted to $200 million. The existential threat is the supply chain designation, which means any company that works with the US military would have to prove it has no dealings with Anthropic. Dean Ball, who helped craft the Trump administration's own AI policy, called the decision "corporate murder," warning that the message sent to every investor in America was unambiguous: do business on our terms, or we will end your business. Anthropic has vowed to challenge the supply chain risk designation in court, calling it a "dangerous precedent for any American company that negotiates with the government." Legal experts say the company has strong grounds, but the court process could take months or longer -- a serious vulnerability for a company that had hoped to go public this year and, given the fragile economics of the AI industry, must maintain investor confidence to survive. Still, "Anthropic will suffer a setback when it loses the government as a client, but it will survive and continue to grow," Erik Gordon, a business professor at the University of Michigan, told AFP. The company for now "has one of the best products," he said. Is this a win for OpenAI? 
Just hours after the US government banned Anthropic, rival OpenAI announced it had reached a deal for the Pentagon to use its AI models in classified systems. OpenAI CEO Sam Altman said the agreement contains the same two limitations Anthropic had been insisting on. But OpenAI appeared to enshrine these differently: while Anthropic tried to have the limits spelled out explicitly in the contract, OpenAI agreed that the Pentagon could use its technology for "any lawful purpose" -- a formulation Anthropic had refused. OpenAI also says its technology will be cloud-only, preventing models from being embedded directly into weapons hardware, and that an engineer will be deployed to oversee classified use. Critics are calling on OpenAI employees to quit or put pressure on their leadership to support its archrival Anthropic. "OpenAI caved and framed it as not caving, and screwed Anthropic while framing it as helping them," said Miles Brundage, OpenAI's former head of policy research, on X. Silicon Valley's reaction The Trump administration's assault on Anthropic sent shockwaves across Silicon Valley, hardening political battle lines that have now divided the tech world. Anthropic's most prominent antagonist is venture capitalist David Sacks, the White House's chief AI policymaker, who has long argued that the company's safety-first approach will slow innovation and cede ground to China. He is closely aligned with Emil Michael, the Pentagon's de facto chief technology officer and a veteran of Uber during its most aggressive phase, when the company was known for its scorched-earth approach to entering new markets. Coming out in support of Anthropic, hundreds of engineers at Google, Amazon, Microsoft and OpenAI signed petitions and open letters urging their leaders to refuse Pentagon demands for unrestricted AI use. At the executive level the picture was more divided. 
No major tech company has publicly defended Anthropic, though several executives at competing firms, speaking anonymously in the media, expressed concern that the ban sets a dangerous precedent. Elon Musk, by contrast, posted that "Anthropic hates Western Civilization," aligning publicly with the administration.
[43]
AI executive Dario Amodei on the red lines Anthropic would not cross
"It's about the principle of standing up for what's right," said Dario Amodei, CEO of the artificial intelligence firm Anthropic, who has found himself at the center of a new kind of firestorm. What's wrong, in his view, is why the AI company he co-founded has been banned from the federal government. "It feels very punitive and inappropriate, given the amount that we've done for U.S. national security," he said. Anthropic created Claude, an AI chatbot you might use at work or school. Since last summer, its government version has been deeply embedded in military intelligence and classified operations at the Pentagon. This past week, in the lead-up to the attack on Iran, the Defense Department demanded Anthropic hand over its AI without restrictions for lawful military use. The company refused. "We have these two red lines," said Amodei. "We've had them from Day One. We are still advocating for those red lines. We're not gonna move on those red lines." Those red lines? Not allowing Anthropic's AI to perform mass surveillance of Americans, and prohibiting its AI from powering fully-autonomous weapons without any human involvement. Amodei said, "It doesn't show the judgment that a human soldier would show - friendly fire or shooting a civilian, or just the wrong kind of thing. We don't want to sell something that we don't think is reliable, and we don't want to sell something that could get our own people killed, or that could get innocent people killed." It's a question of who should control the most advanced technology ever created: a private tech company, or the federal government? Asked if he believes Anthropic knows better than the Pentagon, Amodei replied, "One of the things about a free market and free enterprise is different folks can provide different products under different principles. Our model has a personality. It's capable of certain things. It's able to do certain things reliably. It's able to not do certain things reliably. 
And I think we are a good judge of what our models can do reliably and what they cannot do reliably." After several weeks of talks, President Trump on Friday directed the U.S. government to halt all use of Anthropic's AI, cancelling more than $200 million in federal contracts. Defense Secretary Pete Hegseth labeled Anthropic "a supply chain risk to national security" - a first for an American company. Asked about the president referring to Anthropic as "a left-wing woke company," Amodei said, "I can't speak for what other parties are doing, and what they're doing. ... But we, I think, have tried to be very neutral. So, this idea that we've somehow been partisan, or that we haven't been even-handed? We've been studiously even-handed." The Trump administration's actions regarding Anthropic have been called by critics an abuse of power. Asked if he agrees, Amodei replied, "Again, I would return to the idea that this is unprecedented." But is it an abuse of power? "This has never happened before," he said. "This designation has never happened before with an American company. And I think it was made very clear in some of their statements, in some of their language, that this was retaliatory and punitive. I don't know what else to call it. Retaliatory and punitive." As Amodei and Anthropic face a government ban, his main rival, Sam Altman, of OpenAI (maker of ChatGPT), struck his own deal with the Pentagon on Friday. Amodei says Anthropic plans to take legal action. "All we've seen," he said, "are tweets from the president and tweets from Secretary Hegseth." And, he says, Anthropic remains at the negotiating table, hoping to talk. Asked what he might say to President Trump, Amodei said, "We are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. We believe in defeating our autocratic adversaries. We believe in defending America. 
"The red lines we have drawn, we drew because we believe that crossing those red lines is contrary to American values," he said. "Disagreeing with the government is the most American thing in the world. And we are patriots. In everything we have done here, we have stood up for the values of this country." For more info:
[44]
Sam Altman Is Marketing OpenAI as America's Wartime AI Company Whether He Intends to or Not
In an X post Friday evening, Sam Altman announced that his company, OpenAI, had just "reached an agreement with the Department of War to deploy our models in their classified network." The timing is both startling and significant, making Altman a sort of poster boy for AI at war. Hours earlier, OpenAI's chief competitor, Anthropic, had been told that its products had received a blacklisting of sorts from the Pentagon: a designation of "supply-chain risk to national security." Anthropic has declared "red lines" around the use of its tech for mass surveillance and fully autonomous weapons, and the Pentagon finds this unacceptable. So, per Secretary of War Pete Hegseth, no company that works with the Pentagon at all "may conduct any commercial activity with Anthropic." As Axios notes, the Pentagon's legal rubric for the designation remains to be seen, and the supply-chain risk designation is usually reserved for companies based in, and potentially supportive of, countries deemed hostile to the U.S. In any case, the move matches the well-established Trump 2.0 pattern of whacking any party that displeases the administration with the largest, spikiest club available, and letting the courts decide later whether the use of a given club was valid. But Anthropic's loss is, at least theoretically, Sam Altman's gain. To back up a bit, Anthropic's very existence is a slap in the face to Altman, with Anthropic having been created in the first place as essentially a spin-off from OpenAI, supposedly dedicated to standards of ethics and safety that Amodei and his team perceived OpenAI as not having upheld. So the Super Bowl commercials in which Anthropic not-so-subtly trashed OpenAI were not, it appears, the product of a friendly rivalry. Altman and Anthropic founder and CEO Dario Amodei are bad at concealing their apparent animosity for one another. At a photo op for AI leaders in India earlier this month, the two conspicuously declined to interlock their hands.
As my Gizmodo colleague AJ Dellinger has already noted, leaked remarks from Sam Altman seemingly timed to go along with OpenAI's Pentagon deal show Altman attempting to grasp for some kind of moral stance similar to Amodei's on surveillance and autonomous killbots. But any such claim on Altman's part has already been hand-waved away as pure bluster by State Department and former DOGE official Jeremy Lewin, who posted on X that Altman's stated principles were, in practice, just some feel-good fluff added to an agreement that actually gave OpenAI, Lewin strongly implies, zero power to stop the Pentagon from doing whatever it wants with OpenAI's models. In contrast to Anthropic, the company has "reached the patriotic and correct answer here," Lewin writes. But Altman's hand-wringing around Anthropic's "red lines" was already contradicted in spirit by remarks he made earlier this month about Anthropic in his long X post about Anthropic's mean Super Bowl ads (worth reading in full because it's a hall-of-fame example of being Not Mad). In the course of complaining about the ad, Altman takes a long detour to pop off about, essentially, the same thing that made the Pentagon angry. Amodei's company, Altman says, "wants to control what people do with AI." They also, he says, "block companies they don't like from using their coding product (including us), [and] they want to write the rules themselves for what people can and can't use AI for." Whatever terms you want to use for Anthropic's strategy, it's been blisteringly effective from a business standpoint. If 2025 was Google's year of AI success, 2026 has, so far, been Anthropic's, with the hype around its flagship product, Claude Code, causing the enterprise version of the 2022 ChatGPT earthquake. Anthropic's day-to-day moves have set Wall Street's agenda throughout the year. This month, Anthropic surpassed OpenAI in total cash raised.
But, rather bizarrely, Altman also took a populist-sounding stance in his anti-Anthropic Super Bowl rant, in which he claimed that "Anthropic serves an expensive product to rich people." This is effectively meaningless since OpenAI and Anthropic both charge for subscriptions and API access. But Altman seems to be positioning ad-supported ChatGPT, and perhaps some future ad-supported version of its coding product, Codex, as the democratic, normie versions of these products, and contrasting that with Anthropic being for "rich people." His intended framing may not be the one the public absorbs. To be clear, reality and perception may be on two entirely different tracks here. The Pentagon denies that Anthropic's stated reasons are the core of this issue. "This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders," an anonymous Pentagon official reportedly told CBS News. Also, Anthropic's self-imposed rules constraining its ability to scale up were revised into flexible guidelines on Tuesday. So there's no reason to buy into the narrative that Anthropic's product has a "soul," or that Anthropic is anything other than a for-profit, public benefit corporation, coping with the strictures imposed by its unusual certificate of incorporation requiring it to make as much money as possible while also claiming to "responsibly develop and maintain advanced AI for the long-term benefit of humanity." That's a tough needle to thread, it seems, because even after the Pentagon blacklisted his company, Amodei has said, "We are still interested in working with them as long as it is in line with our red lines." But being labeled a "RADICAL LEFT, WOKE COMPANY" by Donald Trump might turn out to be a savvy business maneuver. Hours after Altman's announcement of a deal with Trump's Pentagon, that same Pentagon launched what the president has called "major combat operations" against Iran in conjunction with Israel.
A poll from the Associated Press and the University of Chicago published earlier this week showed that a majority of Americans already have little to no trust in Trump when it comes to national security, and a fresh YouGov poll shows that they disfavor war with Iran more than they favor it. A Gallup poll published yesterday found, rather astonishingly, that more Americans now "sympathize" with the Palestinians than the Israelis. Amid that political backdrop, Anthropic's "red lines" fight with the Pentagon has created a symbolic space labeled "AI That Is Unquestioningly Friendly to the U.S. War Machine" in big neon letters, and moved the company's narrative squarely outside of it. Sam Altman and OpenAI, it appears, are willingly stepping into it. Military contractors currently using Anthropic products like Claude Code will have six months to phase them out, according to the Pentagon, and Anthropic has already declared that in the meantime, it will challenge the designation in the courts. During that time, however, the public perception of Anthropic will be unshackled from the perception of this new war with Iran. The same can't be said of OpenAI.
[45]
OpenAI CEO Sam Altman responds to deal with Department of War
OpenAI has entered a deal with the U.S. Department of War (DOW), providing its AI tools for military use in "classified environments." Announcing the partnership on Saturday, the ChatGPT developer claims it includes guardrails prohibiting the use of its technology for mass domestic surveillance or autonomous weapons. However, contract excerpts shared by OpenAI appear to leave significant loopholes. News of OpenAI's deal with the DOW came just one day after President Donald Trump announced that the U.S. government will no longer use tech from OpenAI rival Anthropic, including its AI model Claude. Posting about the split on Truth Social, Trump had objected to Anthropic's insistence that the DOW abide by the company's terms of service. Exactly which terms Trump took issue with were revealed in a statement from Anthropic CEO Dario Amodei on Thursday. In it, he claimed that the DOW demanded Anthropic remove safeguards against use of its tech for mass surveillance in the U.S. and fully AI-controlled weapons. Amodei stated that such use may technically be lawful, however "this is only because the law has not yet caught up with the rapidly growing capabilities of AI." "[I]n a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," wrote Amodei. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." OpenAI's terms are apparently more to the Trump administration's liking, with the company stepping in to supply the U.S. military with AI technology in Anthropic's place. Yet despite this, OpenAI claims that its agreement with the DOW not only has similar guardrails which prohibit use of its technology for mass domestic surveillance or directing autonomous weapons, but even adds a third: "No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')."
"We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," read OpenAI's announcement. "This is all in addition to the strong existing protections in U.S. law." According to OpenAI, the limitations it has imposed are more enforceable than Anthropic's because it will only provide the DOW with its technology via the cloud, rather than installing it directly on hardware. OpenAI personnel will also be kept involved so that they can see how the DOW is using its technology. This will allegedly allow the company more oversight and control of its AI systems. "We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it," wrote OpenAI. However, an excerpt of the contract shared by OpenAI indicated that its technology will only be barred from use in autonomous weapons or to surveil U.S. citizens where such use is illegal. In fact, the agreement appears to lay out circumstances where OpenAI's tech would be allowed for these purposes, such as where human control over weapons isn't required by DOW policy or law. "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols," the contract reads, per OpenAI. "[A]ny use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment." Responding to concerns in a post on LinkedIn, OpenAI head of national security partnerships Katrina Mulligan merely reiterated that its usage policies aren't the only safeguards in place, re-emphasising its cloud deployment and involvement of its personnel.
"[The DOW's] position was, build the model however you want, refuse whatever requests you want, just don't try to govern our operational decisions through usage policies," wrote Mulligan. Still, doubts remain regarding the effectiveness of these ostensible safeguards, particularly considering OpenAI's reluctance to take an ethical stand. OpenAI CEO Sam Altman conducted a Q&A on X in an attempt to assuage users' concerns about the DOW deal, to little apparent success. Conceding that the deal "was definitely rushed, and the optics don't look good," Altman claimed that they'd hoped it would de-escalate tensions between the DOW and the AI industry. "I think a good relationship between the government and the companies developing this technology is critical over the next couple of years," wrote Altman. The deal might have brought OpenAI and the U.S. government closer together, but it seems to have simultaneously alienated ChatGPT's civilian users. Responding to a question about whether permitting all lawful use allows mass surveillance, Altman shared a post by U.S. Under Secretary of War Emil Michael in which he claimed that "The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American." Unsurprisingly, few seem inclined to take the DOW's word for it. In 2013, whistleblower Edward Snowden revealed mass surveillance of U.S. citizens conducted by the National Security Agency (NSA), an agency of what was then called the Department of Defense. This program was found to be illegal, and included people's telephone records. Human Rights Watch also accused the then-Department of Defense of surveilling U.S. citizens without warrants in 2017. "The government already has broken the law and illegally surveiled [sic] US citizens," replied X user @bolts6629.
"A milquetoast statement from an undersecretary in an administration famous for lying is good enough for you?" Altman did state that he would refuse to use OpenAI's technology for mass domestic surveillance "because it violates the Constitution," and expressed discomfort with the idea of an amendment that would allow such use. However, some social media users cast doubt on this claim, noting that he has gone back on other promises before. "Other things you've said you wouldn't do: overrule the OpenAI board, remove the nonprofit structure, put ads in ChatGPT," noted @Laneless_. OpenAI's CEO also indicated that the company is reluctant to draw ethical lines, preferring to abdicate responsibility and follow the government's directions rather than take any sort of stand itself. "[W]e are not elected," wrote Altman. "We have a democratic process where we do elect our leaders. We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn't ethical in the most important areas." "Following orders is not an excuse for unethical behavior," responded @MagisterLudiX. "Either you have strong red lines or you see it as purely transactional, depending on political context." "AI is a tool. A hard limit on it, is a limit like any other tool has," wrote @genericrohan. "It's not deciding what the military can do, it is about setting a limit that the military can plan for." In response to the news of OpenAI and the DOW's partnership, many ChatGPT users are reportedly cancelling their subscriptions to the AI chatbot. Several are instead turning to Anthropic's AI chatbot Claude, which has since dethroned ChatGPT as the most downloaded free app in the U.S. Apple App Store. "OpenAI just made a deal with a devil and lost this customer of 2 years," Reddit user u/boomroom11 posted in the r/ChatGPT subreddit. The post has over 26,000 upvotes at time of writing. 
"The company (originally non profit) that told us they existed to build AI safely for humanity is now taking Pentagon contracts. Sam Altman decided defense money was more important than every principle the company was founded on."
[46]
OpenAI amends Pentagon deal as Sam Altman admits it looks 'sloppy'
ChatGPT owner's CEO says it will bar its technology being used for mass surveillance or by intelligence services
OpenAI is amending its hastily arranged deal to supply artificial intelligence to the US Department of War (DoW) after the ChatGPT owner's chief executive admitted it looked "opportunistic and sloppy". The contract prompted fears the San Francisco startup's AI could be used for domestic mass surveillance, but its boss Sam Altman said on Monday night the startup would explicitly bar its technology from being used for that purpose or being deployed by defence department intelligence agencies such as the National Security Agency (NSA). OpenAI, which has more than 900 million users of ChatGPT, made the deal almost immediately after the Pentagon's existing AI contractor, Anthropic, was dropped. Anthropic had insisted "using these systems for mass domestic surveillance is incompatible with democratic values", leading the US president, Donald Trump, to call Anthropic "leftwing nut jobs" and to direct the federal government to stop using their technology. Despite denials from OpenAI that the agreement allowed for surveillance use, commentators raised the spectre of the Snowden scandal which broke in 2013, when it emerged the NSA was engaged in mass harvesting of phone and internet communications. The deal prompted an online backlash against OpenAI, with users of X and Reddit encouraging a "delete ChatGPT" campaign. One post read: "You're now training a war machine. Let's see proof of cancellation." Claude, the chatbot made by Anthropic, jumped to the top of Apple's App Store charts, rising above ChatGPT, according to analysis by Sensor Tower. In a message to employees reposted on X, the OpenAI CEO said the original deal announced on Friday had been struck too quickly after Anthropic was dropped. "We shouldn't have rushed to get this out on Friday," Altman wrote. "The issues are super complex, and demand clear communication. 
We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." Upon announcing the deal, OpenAI had said the contract had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's". However, the use of AI by the US military has alarmed nearly 900 employees at OpenAI and Google, also a leading power in the technology, who have signed an open letter calling on their bosses to refuse to let the DoW use their products for surveillance and autonomous killing. Warning that the US government was trying to "divide each company with fear that the other will give in", they wrote: "We hope our leaders will put aside their differences and stand together to continue to refuse the DoW's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight." The letter has been signed by 796 Google employees and 98 OpenAI staff. OpenAI said in a blogpost announcing the DoW deal that one of its red lines is "no use of OpenAI technology to direct autonomous weapons systems". However, observers including OpenAI's former head of policy research, Miles Brundage, have queried how OpenAI has managed to secure a deal that assuages ethical concerns Anthropic believed were insurmountable. Posting on X, he wrote: "OpenAI employees' default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them." Brundage added: "To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics." In his own X post, Altman also wrote that he would "rather go to jail" than follow an unconstitutional order from the government. "We want to work through democratic processes," Altman wrote. 
"It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty." Meanwhile, three more US cabinet-level agencies - the departments of State, Treasury, and Health and Human Services - have moved to cease use of Anthropic's AI products after the DoW's declaration of the company as a supply chain risk. Trump has ordered all US government agencies to phase out their use of Anthropic after secretary of defence Pete Hegseth's decision.
[47]
Anthropic vs. White House puts $60 billion at risk
Why it matters: That massive investment is now at risk due to a contract dispute with the Pentagon that's become as much about ego as AI. Catch up quick: Anthropic and the Defense Department on Friday failed to reach agreement on a long-term deal for the military to continue licensing Anthropic's AI models, as it has done in both the Venezuela and Iran operations. * Axios first reported on the licensing dispute several weeks ago, saying that it centered on disagreements over fully autonomous weapons and mass surveillance. * Anthropic CEO Dario Amodei last Thursday reiterated these concerns in a blog post. For weapons, he argued that AI wasn't yet up to the task. For surveillance, he argued that the law hadn't yet caught up to the tech. Driving the news: President Trump responded on Truth Social by directing every federal agency to cease using Anthropic technology, and Defense Secretary Pete Hegseth followed up by tweeting that he would define Anthropic as a supply chain risk to national security. * Hegseth added: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." * Hours later, OpenAI signed the deal that Anthropic wouldn't. Behind the scenes: Amodei's blog post is said to have infuriated Defense Dept officials, who believed he was trying to virtue signal to: (a) Anthropic employees upset about the Venezuela revelations, and (b) AI engineers at rival companies who might share similar concerns. * The latter, they felt, could poison the well for future engagement with AI companies. Had Amodei just wanted to take his ball and go home, they felt, he should have done so more quietly. The big picture: Designating a Silicon Valley company as a supply chain risk is unprecedented and a very blunt force tool, but arguably the best one DoD had to prevent other military contractors from deploying Claude in their applications. 
* But what Hegseth tweeted went several steps beyond what that designation would require. * For example, Nvidia does business with the U.S. military and also has commercial activities with Anthropic. Namely, it sells chips to Anthropic that are vital to the company's survival. Hegseth's language, taken at face value, would require that relationship be severed. * DoD hasn't yet provided Anthropic with formal notice of the supply chain risk designation, which Anthropic has pledged to challenge in court, so the actual language could be much more limited than what Hegseth tweeted. Zoom out: Venture capitalists historically avoided investing in tech startups that interacted deeply with government, outside of biotech, believing that the political and procurement variables were too volatile. In the past few years, however, many have thrown that caution to the wind. * This situation may be chickens coming home to roost. * Amodei stuck a finger in Trump's eye, and he responded with a series of uppercuts. It's not how U.S. presidents typically respond to pointed public criticism from a private company, but it's nonetheless the current normal. The bottom line: Anthropic has been at the heart of America's business narrative this year, introducing tools that could upend industries from cybersecurity to law and everything in between.
[48]
The Pentagon Is Waging War on American Genius
The Department of War is living up to its rebranded name. Unfortunately, its target is a vital American company. Defense Secretary Pete Hegseth has given Anthropic Chief Executive Officer Dario Amodei until today at 5:01 p.m. to remove two restrictions on how the military uses the company's AI. The restrictions: no mass surveillance of American citizens, and no fully autonomous weapons without a human in the loop. Anthropic agreed to everything else, from missile defense to cyber operations. It is the first and only frontier AI lab on classified systems. Its technology was used in the capture of Nicolás Maduro. This is not a pacifist company. It drew two lines. The administration's response reveals just how much it wants to cross them. It simultaneously threatened to invoke the Defense Production Act to commandeer Anthropic's technology and to declare Anthropic a "supply chain risk." That designation, normally reserved for foreign companies like Huawei, would ban every defense contractor from doing business with Anthropic. These positions are incoherent. You cannot simultaneously call a company a national security threat and a national security necessity. But the contradiction is not confusion -- it is the point. The administration wants Anthropic's help on the other side of those lines. The Pentagon is supposed to wage war on America's enemies, not its greatest assets and most important values. In 2018, thousands of Google engineers signed a letter declaring that their company "should not be in the business of war." Google caved to their demands and pulled out of Project Maven, its AI contract with the Pentagon. That was disgraceful. US servicemembers deserve the best that US technologists can produce, and the Pentagon has every right to say that within very broad lines, it must be free to use those tools as it deems best. Imagine a Delta Force operative reading through terms of service before firing a weapon. Nobody wants that. 
But that's not what's happening here. These are categorical limits on two uses that most Americans oppose, that today's AI is not reliable enough to perform, and that a Pentagon spokesman says the military has "no interest" in pursuing. Which means either the confrontation is about something other than military capability, or the Pentagon is not being straight about its intentions. These are restrictions everyone should support. I use Claude, Anthropic's AI. When I was researching a recent column, I asked it to find sources -- and every single link it provided was fabricated. This is called hallucination, and it is not a bug that better engineering will fix. A 2025 paper by researchers at OpenAI and Georgia Tech offered a mathematical proof that hallucinations cannot be fully eliminated under current AI architectures. When this happens in my research, I waste an afternoon. When it happens in a weapons system, someone dies. And hallucination might be the least of the problems with weaponized AI. This week, Kenneth Payne at King's College London published a study pitting three leading AI models against each other in simulated geopolitical crises. The models deployed nuclear weapons in 95% of scenarios. None ever chose to surrender or withdraw, even when losing. So when Anthropic says that AI is not reliable enough for autonomous weapons, it is being generous. Domestic surveillance is an obvious bright line. Amodei himself has written that a sufficiently powerful AI could "gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow." No administration, of either party, should be trusted with that capability aimed at the US public. But the confrontation is also about something even more fundamental. Voltaire once wrote that the British liked to shoot an admiral from time to time, "to encourage the rest." The administration is applying that approach to Anthropic. It's trying to intimidate every American company. 
David Sacks, the White House AI czar, has attacked Anthropic's restrictions as "woke AI," putting the fight into familiar culture war territory. And the consequences for Anthropic would be severe. The company just raised $30 billion at a $380 billion valuation. A supply chain designation would force Boeing and Lockheed Martin to sever ties. Investors do not fund companies the government is trying to destroy. Many of Silicon Valley's leaders donated millions to this administration. They sat behind the president at his inauguration. They are donating to his new ballroom. They have been largely silent as the administration extracted equity from Intel, export taxes from Nvidia and AMD, and obedience from nearly everyone else. If the government can do this to a $380 billion company for refusing to help spy on Americans, no company is safe. The CEOs who empowered this administration need to understand that it is turning on the industry. They can speak up now, or they can wait for their turn in the barrel. The Pentagon has found its enemy: It is American innovation, American values, and any American company with the courage to defend them. It is long past time for someone other than Dario Amodei to say so.
[49]
"Opportunistic and sloppy" - buyer's regret from CEO Sam Altman as OpenAI's deal with the Department of War comes under heavy fire?
OpenAI was wrong to rush into its deal with the US Department of War (DoW), admits CEO Sam Altman, and looked "opportunistic and sloppy" as a result. It's a startling mea culpa from Altman - "Good learning experience for me as we face higher-stakes decisions in the future" - and one triggered presumably by a combination of online opprobrium aimed at the company, an uptick in cancellations of ChatGPT, and the rise of Anthropic's Claude app to the number one slot on the App Store from being #131 a month earlier. US un-installs of ChatGPT's mobile app jumped 295% in 24 hours on Saturday, after the DoW deal was announced, up from an average nine percent over the previous 30 days, according to data from market intelligence provider Sensor Tower. Meanwhile US downloads of Claude were up 37% for the same 24-hour period. As everyone is surely aware by now, Anthropic was last week ousted from a $200 million contract with the DoW after refusing to back down on its ethical 'red lines': no use of AI to launch autonomous weapons or for mass surveillance of American citizens. It was also blacklisted across all of US Government and designated a national security risk. As the controversy rumbles on and OpenAI's insistence that its own red lines will hold in the contract it signed with the DoW is called into question, Altman has attempted to assuage fears, both internal and external, that he has signed the firm up to something without the necessary 'stand up in court' contractual guarantees it needs. In an internal email to OpenAI staffers, Altman also announced that the DoW has now agreed that its tech will not be used by the department's intelligence agencies, such as the NSA, although he left himself room for manoeuvre by raising the possibility of "a follow-on modification to our contract". And modification is something that can happen as we've already seen. 
In a blog posting, OpenAI says that since the deal was signed last week, it has amended the terms of the contract with the DoW with new language: This language makes explicit that our tools will not be used to conduct domestic surveillance of US persons, including through the procurement or use of commercially acquired personal or identifiable information. The Department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement. The new language reads: Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of US persons and nationals. For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information. But, say critics, that "intentionally" is doing a lot of heavy lifting there and leaving quite a bit of wiggle room. That seems to be a concern inside of OpenAI as well as outside. Noam Brown, OpenAI's lead researcher in AI reasoning, said on X: The language is now updated to address this, but I also strongly believe that the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security. Deployment to the NSA and all other DoW intelligence agencies will be withheld so that there is time to address these loopholes through the democratic process before deployment. But he added: I know that legislation can sometimes be slow, but I'm afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions. When there is bi-partisan support and urgency, I have faith that government can act quickly. 
And as AI becomes more powerful, it's more important than ever that ultimate authority be vested in the public. Brown also wants to see more engagement around policy matters from OpenAI staffers: I am also planning to become more personally involved with policy at OpenAI. I think now more than ever it's important for researchers to be in the loop so that policy is informed of the extremely fast progress we are seeing. While the DoW continues to tap into Claude for its assault on Iran, elsewhere in the Trump 2.0 administration the ousting of Anthropic has begun. US Treasury Secretary Scott Bessent confirmed that his department is terminating all use of the company's products and platform: The American people deserve confidence that every tool in government serves the public interest, and under President Trump, no private company will ever dictate the terms of our national security. That's a comment that doesn't do much to calm things down. Anthropic has always stated its willingness to work with the government to find compromise around the ethical issues it has raised over the use of its tech in certain circumstances. That said, the updated OpenAI commentary points to an agreement by the DoW to convene "a working group made up of leaders from the frontier AI labs, cloud providers, and the Department's policy and operational communities". OpenAI is looking to this to be "an important forum for ongoing dialogue on emerging AI capabilities, privacy, and national security challenges going forward". Of course, expectations are one thing. Turning those expectations into reality is quite another. But at the very least, it's some sign of movement in a situation that had rapidly come to look intractably bogged down. 
For his part, Altman has also told OpenAI staff that the firm will be walking the DoW through what can and cannot - or should not - be assumed about its tech's capabilities: There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods. The inclusion of "slowly" in that commitment is interesting. It's safe to assume that no-one is going to be putting any timescales on any of this. But as Altman observes: Things are moving so fast that we need to urgently educate the world so that the democratic process has time to catch up. Will that be enough to repair some of the damage that looks to have been done to OpenAI's brand over the past weekend? Time will tell. More to come, no doubt. For now, final words to Altman: I believe that, as some of the creators of this new technology, we deserve to and are obligated to have a loud voice about the risks, pitfalls, and benefits we see. I think we are heading towards a world where the relationship between governments and AI efforts is critical. This will be difficult but it has to happen; I do not see any good future where we don't get there. There should not be games and fights in the press like this; drastic government action should be avoided. I think there are real dangers coming to the world, and maybe pretty soon; I tried to put myself in the mindset of how I'd feel the day after an attack on the US or a new bio-weapon we could have helped prevent. Meanwhile, all eyes will be on an OpenAI 'all hands' meeting later today to see how his own staff respond to Altman's arguments. That will be telling and crucial to the next steps for the company.
[50]
OpenAI revises Pentagon contract to address surveillance concerns - SiliconANGLE
OpenAI revises Pentagon contract to address surveillance concerns OpenAI Group PBC is revising an artificial intelligence deal that it inked with the U.S. Defense Department last year. The ChatGPT developer published some of the updated legal language late Monday. The change is designed to ensure that the Pentagon won't use OpenAI models for domestic surveillance. This morning, Axios cited sources as saying that the revised agreement has not yet been formally signed. Last June, OpenAI won a one-year contract to provide the Pentagon with access to its AI models. The company stated at the time that the agreement was worth up to $200 million. According to OpenAI, officials planned to apply its AI to use cases such as data analysis and cybersecurity. Anthropic PBC inked a similar $200 million deal with the Pentagon around the same time. This January, reports emerged that the OpenAI rival had raised concerns about the agreement. Anthropic sought to ensure that its technology wouldn't be used to conduct mass surveillance or build autonomous weapons. It equipped its models with guardrails designed to block such uses. The Pentagon took issue with the company's policy. Last week, U.S. President Donald Trump ordered federal agencies to stop using Anthropic's software. In a related move, U.S. Defense Secretary Pete Hegseth announced plans to designate Anthropic as a supply chain risk. The designation prohibits U.S. military contractors and suppliers from doing business with the AI provider. On Friday, OpenAI announced plans to provide the Pentagon with access to its AI models under a revised agreement. In a late Monday post on X, chief executive Sam Altman elaborated that the contract includes protections against domestic surveillance. "The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," reads one of the contract's clauses. 
"For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." One of the most important changes is that the clause covers "commercially acquired personal or identifiable information." According to Axios, OpenAI's original contract with the Pentagon only mentioned "private information." That language didn't prevent the use of personal data purchased from data brokers. Altman added that the agreement doesn't permit OpenAI's models to be used by intelligence agencies. He wrote that such use would require "a follow-on modification" to the contract. Additionally, the agreement states that OpenAI's models may only be deployed in "cloud networks."
[51]
Anthropic gets so much public support after Trump blacklisting that it crashes the Claude app
Anthropic may have lost a fan in Donald Trump, but it seems to have gained plenty of new ones after refusing to make a deal with the United States government, citing ethics concerns. The AI company was embroiled in debate with the Department of Defense (DoD), which has been using Anthropic's technology internally at various levels. After the DoD announced it would only contract with AI companies that acceded to "any lawful use" of their products, Anthropic pushed back, asking that certain safeguards remain in place to prevent its technology from being used for mass domestic surveillance and fully autonomous weapons. The DoD set a deadline of 5:01 p.m. on Friday, February 27 for Anthropic to agree to its new policy or risk being blacklisted by the government. The day before the deadline, Anthropic CEO Dario Amodei released a statement explaining the company's position. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei said. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do. [...] We cannot in good conscience accede to their request."
[52]
Anthropic sees major Claude outages after 'unprecedented demand'
As the US administration proceeds to drop Anthropic as a supplier, many are rallying around the AI company's relatively ethical stance, creating 'unprecedented demand' for Claude. Anthropic's Claude has fast been becoming the darling of AI enthusiasts, for development, research and enterprise work. Now it is facing the might of the US administration, which is threatening to drop it entirely as a supplier after a falling out with the Pentagon over so-called 'red lines' it would not cross. With many in Silicon Valley supporting its relatively principled stand, and general users sending it to the top of the US Apple charts in recent days for free downloads - beating OpenAI's ChatGPT for the first time - its flagship Claude.ai and Claude Code apps went down for around three hours on Monday, causing many to bemoan its absence. There are already reports of further outages as we write, although its latest update says "a fix has been implemented and we are monitoring the results". In a nostalgic post on LinkedIn yesterday, regular contributor to Silicon Republic, AI aficionado Jonathan McCrea wrote: "I now feel the same way about Claude being down as I used to about Twitter being down."
De facto boycott
Last night, Treasury Secretary Scott Bessent added his voice to the de facto US administration boycott of Anthropic products, saying in a post on X that his department would terminate use of Anthropic products. It follows a directive from Donald Trump ordering US agencies to "phase out" their use of the AI company's products, and his Defense Department labelling Anthropic a "supply-chain risk", a designation normally reserved for foreign suppliers from non-friendly states. Anthropic has been quick to call this a "legally unsound" designation, and is expected to challenge the move in the courts. 
Reuters is also reporting that it has seen memos to employees at the Department of Health and Human Services, asking them to switch to other AI platforms like ChatGPT and Gemini, and at the State Department saying it was switching the model powering its in-house chatbot, StateChat, to OpenAI from Anthropic. Financially it will surely deal a serious blow to Anthropic in the short term, but some commentators are arguing that it could be a pivotal moment for Anthropic as it may be seen by many as the relatively ethical choice when it comes to the AI giants. The recent Grok scandal has put a major question mark over xAI's credentials and OpenAI's Sam Altman clearly sees the reputational risk as he has been quick to claim that it is ensuring some guardrails in its contract with the Pentagon. On X yesterday Altman claimed that these would ensure OpenAI would not be "intentionally used for domestic surveillance of U.S. persons and nationals". The back story If you haven't been following, Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens. On Thursday (February 27) Anthropic's Dario Amodei released an official statement saying Anthropic believed that in "a narrow set of cases, we believe AI can undermine, rather than defend, democratic values". "Some uses are also simply outside the bounds of what today's technology can safely and reliably do," he said. "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included." "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values." Amodei went on to say that autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. 
"But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." It's a debacle that is likely to roll on in coming days, and it remains to be seen whether Anthropic can withstand the unprecedented onslaught from its own government and rely on the support of users for its principled stand. In the short term its challenge appears to be to meet the current demand on its systems.
[53]
Sam Altman in Damage Control Mode as ChatGPT Users Are Mass Cancelling Subscriptions Because OpenAI Is "Training a War Machine"
OpenAI just handed one of its biggest rivals a massive PR victory, in a blunder that even CEO Sam Altman admitted had optics that "don't look good." On Friday, Altman announced that OpenAI had reached a new agreement with the Department of Defense over how its AI systems would be deployed across the military, an act that many saw as the company crossing the picket line. That's because Anthropic, a company founded by former OpenAI employees, had refused to give in to the Pentagon's demands that it give the military unrestricted use of its Claude AI, even as CEO Dario Amodei insisted that Anthropic's AI not be used for autonomous weaponry or the mass surveillance of US citizens. It was a move that could come at great cost for Anthropic. The Pentagon had vowed to ice the company out of contracts with the federal government by declaring it a "supply chain risk," and even threatened to seize its tech. But at least in the short term, it's OpenAI that's facing more blowback for its decision. Online, scores of users -- ranging from your typical AI bro to, we kid you not, Katy Perry -- are saying they're ditching ChatGPT in favor of Claude because of Altman's deal with the Pentagon. Indeed, Claude surged to the top of the App Store over the weekend, and as of Monday, still claims the number one spot above ChatGPT, which is currently in second place. A recent thread in the r/ChatGPT subreddit calling on users to quit the AI chatbot quickly became one of the forum's most highly-upvoted posts of all time. "You're now training a war machine," the thread reads. "Let's see proof of cancellation." The fierce backlash is despite Altman claiming that the DoD agreement included the same restrictions that Anthropic had been seeking.
But in the eyes of many of its users and critics alike, the fact that OpenAI had reached an agreement at all while Anthropic refused to bend the knee was a sign of its capitulation to a deeply unpopular administration. The ethics of a company founded on supposedly beneficent principles allowing its AI systems to be deployed across the US military faced an immediate test: just hours after Altman announced the agreement on Friday, the US and Israel launched a series of deadly strikes in Iran that killed its leader, Ali Khamenei, and hundreds of civilians. (Reports suggest that the DoD used Claude to select targets in Iran, meaning even Anthropic's principled stand may be yet more theater from the AI industry.) Altman, meanwhile, has been in damage control mode. Following the deal's announcement, he hosted a rare AMA on X where he fielded questions about OpenAI's work with the "DoW" -- referring to the "Department of War," the Trump administration's preferred moniker for the DoD -- and respondents didn't hold back. "How did you go from 'a tool for the betterment of the human race' to 'let's work with the department of WAR'?" asked one user. Another mocked Altman by asking if he was happy that Claude overtook ChatGPT on the App Store. "No," Altman conceded. One of the most pressing questions concerned what OpenAI would do if the DoD issued orders that violated the constitution, or sought to carry out mass domestic surveillance. Altman's line was that OpenAI would refuse any such orders, even if it meant imprisonment. ("Please come visit me in jail if necessary," he quipped.) But he also exhibited a blind faith that this would never be an issue by, more or less, extolling the virtues of the armed forces.
Altman asserted that the "people in our military are far more committed to the constitution than an average person off the streets," and uncritically cited a statement from a DoD official who vowed that it would never infringe on Americans' civil liberties or engage in "unlawful" surveillance. Such pinky-promises from Trump administration figures were apparently enough to convince Altman that everything the military did or has ever done is entirely above board, to overlook the fact that the administration has leaned on cutting-edge surveillance tech to carry out mass deportations, and to memory-hole the name "Edward Snowden." "I would also be terrified of a world where our government decided mass domestic surveillance was ok," Altman wrote at one point. "I don't know how I'd come to work every day if that were the state of the country/Constitution." OpenAI users rightfully viewed Altman's feigned ignorance as an insult to their intelligence. "You cannot post the statements by an Administration that is known to lie and expect people to have trust or confidence in [you or your company]," one fumed. At the end of the day, even Altman couldn't deny the PR disaster he had created for himself. The DoD deal, he admitted, "was definitely rushed, and the optics don't look good."
[54]
Amid growing backlash, OpenAI CEO Sam Altman explains why he cut a deal with the Pentagon following Anthropic blacklisting | Fortune
OpenAI faced a vocal backlash for agreeing to the Pentagon deal after Altman had earlier in the week voiced support for Anthropic's position that it would not accept a Pentagon contract that did not contain explicit prohibitions on its AI technology being used for mass surveillance of U.S. citizens or being incorporated into autonomous weapons that can decide to strike targets without human oversight. Some of these critics have even started a campaign to convince ChatGPT users to stop using that AI model and switch to Anthropic's Claude chatbot. There was some evidence the campaign was having an effect, too: Claude surged past ChatGPT to become the most downloaded free app in Apple's App Store. The sidewalk outside OpenAI's offices in San Francisco was also covered with chalk graffiti attacking its decision to cut a deal with the Pentagon, while graffiti outside Anthropic's offices largely praised its decision to refuse a contract that did not include prohibitions on the use of its AI models for mass surveillance and autonomous weapons. Some of Altman's and OpenAI's social media push over the weekend seemed aimed at quelling concerns among the company's own employees over the Pentagon contract. Many rank-and-file OpenAI employees had signed an open letter last week supporting Anthropic's refusal to accede to the Pentagon's demands and opposing its decision to designate Anthropic a supply chain risk. (Altman also said over the weekend that he disagreed with the supply chain risk designation.) And at least one OpenAI employee publicly questioned whether the company's contract with the Pentagon provided robust safeguards.
Leo Gao, an OpenAI employee who works on making sure increasingly powerful AI models stay aligned with user intentions and human values, criticized his employer on X for agreeing to let the DoW use its technology for "all lawful purposes" and then engaging in what Gao called "windowdressing" to make it seem like there were further restrictions on what the Pentagon could do with OpenAI's GPT models. Altman admitted in an "Ask Me Anything" session on social media platform X on Saturday night that OpenAI's deal with the Pentagon "was definitely rushed, and the optics don't look good." But he insisted that OpenAI moved quickly to make the deal because it wanted to de-escalate the increasingly heated situation between the U.S. military and Anthropic. The fight potentially threatened to damage the AI industry as a whole, in part by raising the prospect of the U.S. government nationalizing an AI lab or at least using its power to coerce a private company to deliver technology on its preferred terms. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry," Altman said. "If not, we will continue to be characterized as rushed and uncareful." He added that "a good relationship between the government and the companies developing this technology is critical over the next couple of years." And he said he was opposed to Anthropic being labeled a supply chain risk. "Enforcing the [Supply Chain Risk] designation on Anthropic would be very bad for our industry and our country," Altman said. "To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticizing it, so be it."
OpenAI said that it had found a compromise approach that preserved the same limitations while also acceding to the military's wish that it not have contractual constraints on how it uses the AI tech it purchases. The company said that limits on how its AI can be used rest on both references to existing law that it has put in the DoW contract and technical limitations on what its AI models will be able to do. It said the DoW agreed to let it build these technical limitations. The technical limitations will include systems that would classify any of the prompts DoW users feed OpenAI's models and refuse any that the classifier deems might violate OpenAI's redlines. It also may include fine-tuning of OpenAI's models so that they won't easily comply with instructions that violate the two red lines. OpenAI published a portion of its contract with the DoW in which it said it agreed that its technology could be used "for all lawful purposes" but which also included specific references to existing U.S. laws and Department of War policy documents that establish limitations on the surveillance of U.S. citizens and on how autonomous weapons can be deployed. Katrina Mulligan, OpenAI's head of national security partnerships and a former chief of staff to the Secretary of the Army, said during the Ask Me Anything on X that referencing these existing laws and policies provided more assurance that the Pentagon would not later violate the company's redlines than some critics suggested. "We accepted the 'all lawful uses' language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract," she said. "And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate." Some legal experts pushed back on Mulligan's position, at least as far as DoW policies on autonomous weapons are concerned. 
Charles Bullock, a senior fellow at the Institute for Law & AI, said on X that "DoW can, of course, change its own policies whenever it wants," and that the contract language OpenAI released does not require the DoW to follow the existing policy in perpetuity. But he said that the contract did seem to bind DoW to following existing interpretations of existing laws governing mass surveillance of U.S. citizens. Bullock also said it was impossible to know how ironclad the limitations contained in OpenAI's contract are without assessing the entire contract, not just the small section OpenAI made public. OpenAI has said government rules bar it from publishing the entire contract because it is for a classified system. Many of those skeptical of OpenAI's agreement with the Pentagon noted that the term "mass surveillance" is not well-defined and questioned OpenAI executives on what would happen if military intelligence agencies attempted to use its AI models to analyze commercially available data -- such as cell phone location data or data from fitness apps -- that could be put together at scale to conduct surveillance of U.S. citizens in America. The Defense Intelligence Agency is believed to have purchased such data, and its use remains a legal gray area. Anthropic, according to a story in The Atlantic, was particularly concerned about the Pentagon using its technology for this kind of analysis, and its insistence on curtailing that use case was one of the major stumbling blocks to breaking its deadlock with the DoW. "We can't protect against a government agency buying commercially available data sets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use," Mulligan said during the AMA.
She also said that OpenAI's decision to rely on a multi-pronged approach that included technical systems to limit what the Pentagon could do provided a more robust solution than simply relying on contractual language, which she said seemed to be Anthropic's primary approach. She said Anthropic had not been able to lean on this technical solution because it was already providing versions of its AI models to the military that had some of the usual safeguards removed. "Anthropic has primarily been concerned with usage policies, which is because their existing classified deployments involve reduced or removed safety guardrails (making usage policies the primary safeguards in national security deployments)," she said. "Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. That's what we pursued in our negotiations and that's why we think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's." Another OpenAI executive, Boaz Barak, who works on AI alignment and safety, also represented the company in the AMA and criticized Anthropic for fixating so heavily on contractual language and not other kinds of safeguards. "I get the impression that folks at Anthropic had unrealistic expectations for the contract stuff," he said in response to a question from former OpenAI policy chief Miles Brundage, noting that tech companies were always going to be somewhat at the mercy of how DoW interpreted terms in the contract. Altman said that many of the questions in the AMA session touched on the issue of whether AI efforts should be nationalized.
The OpenAI CEO said "it has seemed to me for a long time it might be better if building AGI were a government project" but also that "it doesn't seem super likely on current trajectory." Altman also said he was surprised by how many of OpenAI's critics seemed to have more faith in unelected tech executives making decisions about the appropriate use of AI rather than government officials who were, at least in theory, accountable to Congress and ultimately voters. "I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the constitution. I am terrified of a world where AI companies act like they have more power than the government," Altman said on X. "I would also be terrified of a world where our government decided mass domestic surveillance was ok."
[55]
Trump's furious response to Anthropic is as much about power as it is about AI safety
In the most clear and consequential policy move on AI safety yet, the Trump administration has announced it will blacklist a leading AI lab over its refusal to allow unfettered access to its technology for military purposes. It is the president and his secretary of war, Pete Hegseth, going nuclear over Anthropic's refusal to allow the Pentagon to use its AI for "any lawful purpose". Describing Anthropic as a woke, radical-left company, the US president said on his Truth Social platform that "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War", adding that the company's actions were putting American lives and national security in jeopardy. Until now, however, Anthropic was doing more than any other AI lab to support the Pentagon. Anthropic's Claude AI is the only frontier model already being used extensively for sensitive military planning and operations. It's been widely reported that Claude AI was used as part of the Pentagon's "Maven Smart System" to plan and execute the military operation to capture Venezuelan President Nicolas Maduro in January. The origin of the dispute wasn't Anthropic's commitment to the US military, but its insistence on "red lines" around the use of AI technology. Anthropic's CEO Dario Amodei demanded assurances the technology wouldn't be used for mass surveillance of civilians or lethal automated attacks without human oversight. In a statement on Wednesday, Amodei said some uses of AI are "simply outside the bounds of what today's technology can safely and reliably do". In a post on X every bit as seething as the president's, secretary Hegseth announced that, as well as being blacklisted, Anthropic would also be designated a Supply-Chain Risk - a legal intervention previously reserved for foreign tech companies seen as a direct threat to US national security.
Given growing concerns about AI safety, it's a move that has shocked AI safety campaigners, but it also raises serious questions about the future viability of the Pentagon's "AI-First" strategy. Secretary Hegseth has given Anthropic six months to remove its AI from the Pentagon's systems, but there are now questions about what he might replace it with. For the first time in the short history of frontier AI, the row appears to have united the AI industry. In a memo to staff on Thursday, Sam Altman, CEO of OpenAI, which has also been in talks with the Pentagon, announced he shares the same "red lines" as Anthropic. Separately, more than 400 employees at Google and OpenAI have signed an open letter calling for their industry to stand together in opposing the Department of War's position. The move by the Trump administration appears, therefore, to be as much about power as it is about AI safety. In a copy of the OpenAI memo seen by Sky News, Altman tells staff: "Regardless of how we got here, this is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry and it is important to clarify our stance." The Pentagon has already said it wouldn't use AI for mass surveillance of the US population, nor unsupervised autonomous weapons. Its furious response to Anthropic seems to be more about a big tech company attempting to dictate terms to the government than about what those terms actually are. In taking on Silicon Valley, whose AI investment accounts for much of current US economic growth, the administration has just declared war on a powerful opponent.
[56]
Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ - Decrypt
OpenAI made a deal with the Pentagon following Anthropic's fallout. Hours after President Donald Trump ordered federal agencies to halt use of Anthropic's AI tools, the U.S. military carried out a major airstrike on Iran that reportedly relied on the company's Claude platform. U.S. Central Command used Claude for intelligence assessments, target identification, and simulating battle scenarios during the Iran strikes, people familiar with the matter confirmed to the Wall Street Journal on Saturday. It came despite Trump's directive on Friday that agencies begin a six-month phase-out of Anthropic products following a breakdown in negotiations between the company and the Pentagon over how the latter can use commercially developed AI systems. Decrypt has reached out to the Department of Defense and Anthropic for comment. "When AI tools are already embedded in live intelligence and simulation systems, decisions at the top don't instantly translate to changes on the ground," Midhun Krishna M, co-founder and CEO of LLM cost tracker TknOps.io, told Decrypt. "There's a lag -- technical, procedural, and human." "By the time a model is embedded across classified intelligence and simulation systems, you're looking at sunk integration costs, retraining, security re-certifications, and parallel testing, so a six-month phase-out may sound decisive, but the real financial and operational burden runs far deeper," Krishna added. "Defense agencies will now prioritize model portability and redundancy," he said. "No serious military operator wants to discover during a crisis that its AI layer is politically fragile." Anthropic CEO Dario Amodei said Thursday the company would not strip safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons. "We cannot in good conscience accede to their request," Amodei wrote, after the Defense Department demanded contractors allow their systems for "any lawful use." 
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump later wrote on Truth Social, ordering agencies to "immediately cease" all use of Anthropic products. Defense Secretary Pete Hegseth followed, designating Anthropic a "supply-chain risk to national security," a label previously reserved for foreign adversaries, barring every Pentagon contractor and partner from commercial activity with the company. Anthropic called the designation "unprecedented" and vowed to challenge it in court, saying it had "never before publicly applied to an American company." The company added that, to its knowledge, the two disputed restrictions had not affected a single government mission to date. "The debate isn't about whether AI will be used in defense, that's already happening," Krishna added. "It is whether frontier labs can maintain differentiated guardrails once their systems become operational assets under 'any lawful use' contracts." OpenAI moved quickly to fill the gap with CEO Sam Altman announcing a Pentagon deal on Friday night covering classified military networks, claiming it included the same guardrails Anthropic had sought. Asked whether the Pentagon's effective blacklisting of Anthropic set a troubling precedent for future disputes with AI firms, OpenAI CEO Sam Altman responded on X, "Yes; I think it is an extremely scary precedent, and I wish they handled it a different way. "I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution," he added. Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning that the Pentagon was attempting to pit AI companies against each other.
[57]
How talks between Anthropic and the Defense Dept. fell apart
SAN FRANCISCO -- Minutes before a 5:01 p.m. deadline Friday, Emil Michael, the Defense Department's chief technology officer, was fuming. For weeks, Michael, a former top executive at Uber, had been negotiating a $200 million artificial intelligence contract with the AI company Anthropic for the Pentagon. The talks had hit obstacles as the agency demanded unfettered use of Anthropic's AI systems, while the company countered that it would not allow its technology to be used for purposes such as the surveillance of Americans. Defense Secretary Pete Hegseth had set the Friday deadline for a deal and the two sides were close. The only thing that remained was agreeing on a few words about the issue of lawful surveillance of Americans, multiple people with knowledge of the talks said. Michael, who was on a call with Anthropic executives, demanded that the company's CEO, Dario Amodei, get on the phone to hash out the language, the people said. But Michael was told that Amodei was in a meeting with his executive team and needed more time. Michael was unhappy with that answer, the people said. He also had an ace up his sleeve: On the side, he had been hammering out an alternative to Anthropic with its rival, OpenAI. A framework between the Pentagon and OpenAI had already been reached. So when the Friday deadline passed, the Defense Department did not give Anthropic more time. At 5:14 p.m., Hegseth announced that he had designated Anthropic as a security risk and that it would be cut off from working with the U.S. government. "America's warfighters will never be held hostage by the ideological whims of Big Tech," he posted on social media. Later that night, Sam Altman, OpenAI's CEO, announced that his company had instead reached an agreement with the Pentagon to provide its AI technologies for classified systems. 
In the end, the talks between Anthropic and the Defense Department were undone by weeks of building frustration between men who had differing philosophies about AI and who did not like one another. This account of the failure of the Anthropic talks and the success of the OpenAI deal is based on interviews with a dozen people with knowledge of the negotiations. The New York Times spoke to people from multiple companies and government agencies and interviewed officials with a wide range of views on the fight over the future of AI in warfare. Michael, Amodei and Altman have known one another for years through business dealings in Silicon Valley, but they have often not gotten along. Amodei and Altman, 40, once worked together at OpenAI and are bitter rivals. And as Anthropic's discussions with the Defense Department dragged on last week, Michael, 53, publicly accused Amodei of being "a liar" with "a God-complex." Ultimately, Michael preferred Altman -- who has courted the Trump administration -- over Amodei, the people with knowledge of the negotiations said. The clashes between the Defense Department and Anthropic are most likely not over. On Friday, Anthropic said it would sue over the Pentagon's decision to label it a "supply chain risk." The supply chain risk designation has typically been reserved for foreign companies that the U.S. government believes are a threat to national security; the label has never been used against an American company. Officials at U.S. intelligence agencies including the CIA, which uses Anthropic's AI technology, have also privately urged both sides to make a deal. Some current and former officials said they continued to hope for a peace agreement. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. The companies have denied those claims.) 
Last year, Anthropic, OpenAI, Google and xAI were all part of a Pentagon pilot program to explore how AI could be used for defense. Anthropic was the only AI company that deployed its technologies to work on classified systems, and its AI was widely used by defense officials. On Jan. 9, Hegseth published a memo calling on AI to be widely integrated across the military and for AI companies to offer their technology without restrictions. To underscore that, Hegseth placed AI-generated posters of himself around the Pentagon with the words, "I want you to use A.I." His memo meant that AI companies working with the Pentagon had to renegotiate their contracts. Anthropic, with the most widely used technology, became the focus of negotiations. Michael had joined the Defense Department as chief technology officer in May 2025 after previously working as a special assistant at the Pentagon during the Obama administration. Michael became the point person on the negotiations with Anthropic. But the talks soon reached an impasse. Anthropic wanted guardrails to stop its AI from being used for the mass surveillance of Americans or deployed in autonomous weapons with no humans involved. The Defense Department argued that no private contractor could decide how its tools would be lawfully used. On Feb. 24, Hegseth called a meeting with Amodei at the Pentagon to find a resolution. The men showed little warmth in the meeting, which lasted less than an hour, people familiar with the discussions said. At the end of the conversation, Hegseth said that if Anthropic did not compromise with the Pentagon by 5:01 p.m. Friday, it would be labeled a supply chain risk. He said the Pentagon could also invoke the Defense Production Act to force Anthropic to work with the government, a move that was later dropped. The next day, Altman of OpenAI got on a call with Michael to discuss a deal for his company. Within a day, they had drafted a rough framework. 
OpenAI agreed to the Pentagon's requirement that its AI could be used for all lawful purposes, but it also negotiated the right to put technical guardrails on its systems to adhere to its safety principles. Amodei doubled down on AI safety. In a statement on Feb. 26, he said Anthropic could not "in good conscience accede" to the Pentagon's demands. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," he added. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." That night, Michael unleashed on Amodei on social media, calling the Anthropic leader a liar. "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael posted. As Friday's deadline approached, Anthropic executives thought they were close to a compromise with the Pentagon and were just a few words apart on the issue of surveillance, people on both sides of the negotiation said. Complicating the matter was a social media post by President Donald Trump. Trump had told Hegseth on Friday morning that he had prepared a post belittling Anthropic and ordering all government agencies to stop working with it within six months. Even as Trump published the post at 3:47 p.m., the two sides kept talking. Michael, who was on a call with Anthropic executives at the time, said the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said. Anthropic told the Pentagon that it was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act. But the company wanted a legally binding promise from the Pentagon not to use its technology on unclassified commercial data. 
At that point, Michael asked to speak with Amodei, who was not on the call. Michael was told that Amodei was in a meeting. Shortly after, Hegseth said the talks were over. At 10 p.m. Friday, as Anthropic's lawyers began working on a lawsuit against the Pentagon, Altman was on the phone with Michael finalizing the details of OpenAI's deal with the Defense Department. Altman then posted news of the agreement on social media. Hegseth later reposted Altman's announcement from his personal account on the social platform X. On Saturday, Altman invited people to ask him questions on X about the deal as OpenAI faced a backlash for swooping in. Many questioned how OpenAI could sign a deal with the Pentagon and still uphold its safety principles, as well as whether OpenAI's agreement truly protected its AI models from misuse. Altman said he saw the deal in simpler terms. "We do not want the ability to opine on a specific (and legal) military action," he wrote. "But we do really want the ability to use our expertise to design a safe system."
[58]
OpenAI strikes Pentagon deal as Trump blacklists rival Anthropic
OpenAI announced a deal to deploy its models in classified environments for the Department of Defense. President Donald Trump directed federal agencies to stop using Anthropic's technology after negotiations between Anthropic and the Pentagon fell through. Secretary of Defense Pete Hegseth designated Anthropic as a supply-chain risk. The agreement follows the collapse of a separate negotiation between the Pentagon and Anthropic. OpenAI CEO Sam Altman admitted the deal was "definitely rushed" and caused significant backlash, leading to Anthropic's Claude overtaking ChatGPT in Apple's App Store. Altman stated the deal was intended to de-escalate tensions between the Department of Defense and the AI industry. OpenAI published a blog post outlining prohibited use cases for its technology. The company stated it bans mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions such as "social credit" systems. The post claimed OpenAI retains full discretion over its safety stack and utilizes cloud deployment with cleared personnel in the loop. OpenAI executives defended the agreement against criticism regarding potential surveillance. Techdirt's Mike Masnick claimed the deal allows for domestic surveillance because it references compliance with Executive Order 12333. OpenAI's head of national security partnerships, Katrina Mulligan, argued that deployment via cloud API prevents integration into weapons systems or sensors. Anthropic previously stated it has red lines against the use of its technology in fully autonomous weapons or mass domestic surveillance. OpenAI CEO Sam Altman stated that OpenAI shares similar red lines. OpenAI's blog post noted that the company does not know why Anthropic failed to reach a deal with the Pentagon.
[59]
Anthropic CEO Slams Pentagon Decision As 'Unprecedented'
The company was the first to deploy its AI models on classified US military cloud networks, according to Anthropic CEO Dario Amodei. Amodei has responded after the United States Department of Defense and the White House ordered military contractors that do business with the department to stop using Anthropic's products. Anthropic objected to the use of its AI models for mass domestic surveillance and fully autonomous weapons that can fire without any human input, Amodei told CBS on Saturday. He added that Anthropic was fine with all of the US government's proposed use cases for its AI models, except for surveillance and fully autonomous weapons platforms. He said: "These are things that are fundamental to Americans: the right, not to be spied on by the government, the right for our military officers to make decisions about war, themselves, and not turn it over completely to a machine." The decision by the Defense Department to label Anthropic as a "supply chain risk," meaning that military contractors cannot use Anthropic's products on defense contracting work, is "unprecedented" and "punitive," he added. Amodei later clarified that he is not against the development of fully automated weapons if foreign militaries begin using them in the future, but that AI is not yet reliable enough to function autonomously in a military setting. The law has not caught up to the rapidly developing AI sector, Amodei said, calling on the United States Congress to pass "guardrails" to prevent the use of AI in domestic mass surveillance programs. On Friday, US "Secretary of War" Pete Hegseth announced that Anthropic is a "Supply-Chain Risk to National Security." "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," he said.
Hours later, rival AI company OpenAI accepted a contract with the US Defense Department to deploy its AI models across military networks. The announcement of the deal from OpenAI CEO Sam Altman drew online backlash from critics, who cited AI being used for mass domestic surveillance and undermining individual privacy as a red line.
[60]
Silicon Valley Rallies Behind Anthropic in A.I. Clash With Trump
Sheera Frenkel and Cade Metz reported from San Francisco and Julian Barnes from Washington. Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that "we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons." More than 100 employees at Google signed a petition calling on the tech giant to "refuse to comply" with the Pentagon on some uses of artificial intelligence in military operations. And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to "hold the line" against the Pentagon. Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic's chief executive, has said he does not want the company's A.I. to be used to surveil Americans or in autonomous weapons, saying this could "undermine, rather than defend, democratic values." Mr. Trump and his officials, in contrast, want the military to use whatever A.I. it buys however it wants, as long as it complies with the law. On Friday, Mr. Trump called Anthropic a "radical Left AI company run by people who have no idea what the real World is all about," and Defense Secretary Pete Hegseth labeled the start-up a "supply chain risk," a move that would sever ties between the company and the U.S. government. Now what began as a whisper of support for Anthropic in the tech industry has crescendoed into a shout. The support -- voiced by top leaders at Anthropic's rivals, as well as rank-and-file engineers at Google and other large companies -- stood out because Silicon Valley had largely appeared to be in lock step with the Trump administration. But the Pentagon's actions appear to have driven a new wedge between Washington and Silicon Valley. 
Coalescing behind Anthropic was in many ways a throwback to a pre-Trump Silicon Valley, when tech workers often spoke up against what they viewed as dangerous or inappropriate uses of powerful technologies that they had worked on. "Now it is like we are going back to a time about eight years ago," said Jack Poulson, one of the employees who protested Google's work with the military in 2017. "There is a lot more activism now." The rallying behind Anthropic, even from tech executives who have openly criticized Dr. Amodei, shows how the Department of Defense cannot easily force Silicon Valley firms to comply. Unlike defense contractors that have worked with the Pentagon for decades and are reliant on longstanding military contracts, the A.I. companies are contending with different internal pressures and external factors. Many of them depend on highly skilled work forces of A.I. technologists who are hard to recruit and harder to retain. Disaffected employees can easily jump ship to other companies if they are unhappy with what they are hearing from their corporate leaders. In the last year, Meta, OpenAI, Google and others have spent millions -- some say billions -- of dollars to land and keep top talent. For many A.I. companies, government contracts are only one piece of an expanding pipeline of business. The $200 million contract that Anthropic has been negotiating with the Pentagon for A.I. use in classified systems, which precipitated the fight, would most likely be only a small percentage of the company's revenue. Anthropic primarily sells A.I. software to other businesses and last year reached an annualized revenue pace of $8 billion to $10 billion, Dr. Amodei said in December. Current and former defense officials said the Trump administration had misread how strongly Anthropic felt about getting assurances on how its A.I. would be used.
Pentagon officials believed Anthropic would fall in line after they threatened to either cut the company off from government business or force it to provide its A.I. model without restrictions, they said. The Pentagon and Anthropic did not respond to requests for comment. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to A.I. systems. The companies have denied those claims.) Anthropic, Google, OpenAI and xAI have been working with the Pentagon in a pilot program to bring A.I. to the Defense Department. That meant that as the Pentagon ramped up its threats against Anthropic, other Silicon Valley workers saw how the situation could apply to them. If Anthropic was cut off from government business for not capitulating to the Pentagon's demands, the same tactics could be used on them. Some employees at large A.I. companies soon signed proposals calling on their managers to support Anthropic's position. On group chats and private messaging boards, engineers pointed out that if the Pentagon carried out its threat, nothing was stopping it from using the same tactics to force other companies to work with it. At OpenAI, Mr. Altman contacted Defense Department officials on Wednesday to discuss how his company might work on classified projects and to express his concern over the Pentagon's spat with Anthropic, two people with knowledge of the conversations said. Then on Thursday, Mr. Altman sent a memo to employees saying A.I. should not be used for mass surveillance or autonomous lethal weapons, while agreeing with the Pentagon's stance that private companies should not control U.S. government policy. On Friday, Mr. Altman appeared on CNBC and more strongly backed Anthropic, which was founded by former OpenAI employees. "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," he said. 
Kate Conger and Tripp Mickle contributed reporting from San Francisco.
[61]
Anthropic CEO says he's sticking to AI "red lines" despite clash with Pentagon
Joe Walsh is a senior editor for digital politics at CBS News. Joe previously covered breaking news for Forbes and local news in Boston. Hours after a bitter feud between the Pentagon and Anthropic ended with the Trump administration cutting off the artificial intelligence startup, Anthropic CEO Dario Amodei told CBS News in an exclusive interview Friday night he wants to work with the military -- but only if it addresses the firm's concerns. "We are still interested in working with them as long as it is in line with our red lines," he said. The conflict centers on Anthropic's push for guardrails that explicitly prevent the military from using its powerful Claude AI model to conduct mass surveillance on Americans or to power autonomous weapons. The Pentagon wants the ability to use Claude for "all lawful purposes," and says it isn't interested in either of the uses that Anthropic was concerned about. The military gave Anthropic a Friday evening deadline to either meet its demands or get cut off from its lucrative Defense Department contracts. With the two sides seemingly still far apart, President Trump on Friday ordered federal agencies to "immediately" stop using Anthropic's technology. Then, Defense Secretary Pete Hegseth declared the company a "supply chain risk," directing military contractors to also stop working with the AI startup. In his interview later Friday, Amodei stood by the guardrails sought by Anthropic, which is the only company whose AI model is deployed on the Pentagon's classified networks. "Our position is clear. We have these two red lines. We've had them from day one. We are still advocating for those red lines. We're not going to move on those red lines," Amodei later said. "If we can get to the point with the department where we can see things the same way, then perhaps there could be an agreement. For our part and for the sake of U.S. national security, we continue to want to make this work."
Amodei told CBS News that Anthropic has sought to deploy its AI models for military use because "we are patriotic Americans" and "we believe in this country." But the company is worried that some potential uses of AI could clash with American values, he said. Mass surveillance is a risk, Amodei argued, because "things may become possible with AI that weren't possible before," and the technology's potential is "getting ahead of the law." He warned that the government could buy data from private firms and use AI to analyze it. In theory, artificial intelligence could also be used to power fully autonomous weapons that select targets and carry out strikes without any human input. Amodei said his company isn't categorically opposed to those kinds of weapons, especially if U.S. adversaries develop them, but "the reliability is not there yet" and "we need to have a conversation about oversight." Since AI technology is still unpredictable, Amodei is concerned that autonomous weapons could target the wrong people by mistake. And unlike with human-powered weaponry, it's not clear who is responsible for the decisions made by fully autonomous weapons. "We don't want to sell something that we don't think is reliable, and we don't want to sell something that could get our own people killed or that could get innocent people killed," he said. Amodei called the guardrails around surveillance and autonomous weapons "narrow exceptions," and said the company has no evidence that the military has run into either of them. The Pentagon's position is that federal law already prevents it from surveilling Americans en masse, and fully autonomous weapons are already restricted by internal military policies, so there is no need to put restrictions on those uses of AI in writing. Emil Michael, the Pentagon's chief technology officer, told CBS News in an interview Thursday: "At some level, you have to trust your military to do the right thing." "But we do have to be prepared for the future. 
We do have to be prepared for what China is doing," Michael said, referring to how U.S. adversaries use AI. "So we'll never say that we're not going to be able to defend ourselves in writing to a company." As a compromise, Michael said the military had offered written acknowledgements of the federal laws and military policies that restrict mass surveillance and autonomous weapons -- though Anthropic said that offer was "paired with legalese" that allowed the guardrails to be ignored. As the conflict between Anthropic and the Pentagon escalated this week, top military officials accused the company and Amodei of trying to impose their values onto the government. Hegseth called Anthropic "sanctimonious" and arrogant, Michael said that Amodei has a "God-complex" and Mr. Trump called the AI startup a "radical left, woke company." "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable," Hegseth alleged. Said Mr. Trump: "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY." Asked if weighty questions about AI guardrails should be left up to Anthropic rather than the government, Amodei told CBS News that "one of the things about a free market and free enterprise is, different folks can provide different products under different principles." He also said: "I think we are a good judge of what our models can do reliably and what they cannot do reliably." In the long run, he said, Congress should probably weigh in on AI safeguards. "But Congress is not the fastest moving body in the world. And for right now, we are the ones who see this technology on the front line," said Amodei. With Anthropic and the Pentagon unable to reach a deal by Friday, the military is now expected to phase out its use of Anthropic's AI technology within six months and transition to what Hegseth called "a better and more patriotic service." 
Hegseth also labeled Anthropic a "supply chain risk" and said all companies that do business with the military are now expected to cut off "any commercial activity with Anthropic." Amodei called that an "unprecedented" move for an American firm rather than a foreign adversary, and he said the government's statements have been "retaliatory and punitive." And he argued that Hegseth doesn't have the legal authority to bar all military contractors from working with Anthropic, and can only stop them from using Anthropic for government contracts. He also said that Anthropic hasn't formally received any information from the Pentagon informing it of a supply chain risk designation, but "when we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court." Asked if he has a message for the president, Amodei said "everything we have done has been for the sake of this country" and "for the sake of supporting U.S. national security." "Disagreeing with the government is the most American thing in the world," he said. "And we are patriots. In everything we have done here, we have stood up for the values of this country."
[62]
OpenAI to work with Pentagon after Anthropic dropped by Trump over company's ethics concerns
OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company's main competitors. Sam Altman, OpenAI's CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance - nor for autonomous weapons systems that can kill people without human input. Announcing the deal, Altman insisted that OpenAI's agreement with the government included assurances that it would not be used to those ends. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote on X. He added that the Pentagon "agrees with these principles, reflects them in law and policy, and we put them into our agreement". Altman also said he hoped the Pentagon would "offer these same terms to all AI companies" as a way to "de-escalate away from legal and governmental actions and toward reasonable agreements". If OpenAI's deal does prohibit its systems from being used for unethical ends, it would appear the company has succeeded in receiving assurances where Anthropic could not. Altman announced the deal with the government shortly after Trump said he would direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology. The Pentagon had demanded Anthropic loosen ethical guidelines on its AI systems or face severe consequences. The president said on his Truth Social platform: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution." It remains to be seen how OpenAI staff respond to the government deal. 
In its battle with the Trump administration, Anthropic has drawn support from its most fierce rivals. Nearly 500 OpenAI and Google employees signed on to an open letter saying "we will not be divided". "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the letter reads. "They're trying to divide each company with fear that the other will give in." Altman sought to reassure OpenAI employees in a memo sent on Friday night. "Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," Altman wrote in the memo, which was obtained by Axios. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." Altman added: "We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." Anthropic, which presents itself as the most safety-forward of the leading AI companies, had been mired in months of disagreement with the Pentagon. US defense officials had pushed for unfettered access to Claude's capabilities that they say can help protect the country. Meanwhile, Anthropic has resisted allowing its product to be used for surveilling en masse or weapons systems that can kill people autonomously. "No amount of intimidation or punishment from the [Pentagon] will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic said in its statement on Friday night. 
"We have tried in good faith to reach an agreement with the [Pentagon], making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above," the company continued. "To the best of our knowledge, these exceptions have not affected a single government mission to date." OpenAI on Friday said it is raising $110bn in a blockbuster funding round which would value the company at $840bn.
[63]
OpenAI-Pentagon deal faces same safety concerns that plagued Anthropic talks
Why it matters: OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and the Pentagon's lead AI negotiator Emil Michael all say they care about civil liberties, but disagree on whether the law today offers enough protections for AI use.
* Altman was asked thousands of questions about his contract with the Pentagon during an "ask me anything" on X Saturday night, including whether he was worried there would be a dispute later on with the Pentagon over what's legal or not.
* "Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk."
State of play: Friday night, the Pentagon said it would blacklist Anthropic. As of Saturday night, no formal language designating Anthropic a "supply chain risk" has been sent, according to a source familiar.
* Altman pushed for deescalation. "To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticizing it, so be it."
* The dispute is at the heart of an extraordinary blowup over the last week that saw the Pentagon first praise Anthropic's Claude as best-in-class, and then declare it the kind of risk usually reserved for Chinese tech giants.
* It's become an existential moment for the American AI industry, with a former top Trump adviser likening it to "attempted corporate murder."
Zoom out: Anthropic contends the law today does not contemplate AI and, for that reason, asked the Pentagon to explicitly include in their contract that they cannot collect Americans' public information in bulk. The Pentagon refused.
* That would include geolocation, web browsing data or personal financial information purchased from data brokers.
* While all that data is legal to collect, Anthropic feared that artificial intelligence could supercharge that collection and the subsequent surveillance of Americans.
The language in OpenAI's contract is specifically about the "unconstrained" collection of Americans' private information -- not public information that critics say can also lead to technically legal mass surveillance.
* There is also a provision regarding autonomous weapons, which some are concerned can be changed by the Pentagon at will.
* "We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here. I think Anthropic may have wanted more operational control than we did," Altman said on X.
Between the lines: The Pentagon wants to use AI models for "all lawful purposes" without caveats.
* The Pentagon "does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with laws, regulations, the Constitution's protections for American's civil liberties," the Pentagon's Michael said on X Saturday.
* "The DoW has always believed in safety and human oversight of all its weapons and defense systems and has strict comprehensive policies on that," Michael added.
* OpenAI agreed to the Pentagon's "all lawful purposes" standard and said that in addition to "strong existing protections in U.S. law," it retains full discretion over its own safety stack, which the company says has strong contractual protections.
The intrigue: Before the blacklisting, administration officials made the dispute personal, including Trump himself, who said Anthropic is full of "radical leftists," and Michael, who said Amodei is a "liar" with a "God complex."
* In the Pentagon's view, Anthropic's "virtue signaling" is what made the fight personal, a senior Pentagon official said.
* Altman and his company, meanwhile, have managed to stay out of the administration's crosshairs. (His OpenAI co-founder Greg Brockman is reported to be one of the top individual donors to pro-Trump super PACs.)
The bottom line: Personal insults and allegations of virtue signaling aside, the break up with Anthropic came down to the Pentagon's views of how it should be allowed to use AI for national security.
[64]
OpenAI goes on defense as Anthropic surges after Pentagon fallout
OpenAI is in the hot seat this week over the artificial intelligence company's new deal with the Pentagon, struck just hours after the agency's negotiations with competitor Anthropic over safety guardrails fell apart. The response was nearly immediate, with uninstalls of its flagship ChatGPT app rising 295 percent day-over-day last Saturday, according to reports citing market intelligence provider Sensor Tower. Meanwhile, Anthropic's Claude app hit No. 1 in the App Store as users flocked to the app in a possible sign of support. By Monday evening, OpenAI CEO Sam Altman shared an internal post to X detailing new additions to the Pentagon agreements to make the company's "principles very clear." In doing so, the company's co-founder acknowledged the company "shouldn't have rushed to get this out" last Friday. "The issues are super complex and demand clear communication," Altman wrote. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." While Altman called it a "good learning experience," it is unclear whether the admission will placate critics or rebuild trust following the backlash. Multiple users on X responded to Altman's post by requesting that the contract itself be released in a show of transparency. In one response to Altman's post with 13,000 views, one user said the "only way" to "regain any trust" is to release the contract document itself, writing, "You guys completely torched your brand and integrity on this." Former OpenAI safety researcher Steven Adler wrote on X that the company "wants you to just trust them that the NSA is excluded from their contract," stating he hopes it is "clear why, without strong evidence to the contrary, people are mistrusting OpenAI on this." The spat has gotten the attention of lawmakers on Capitol Hill, with Democratic Sen.
Brian Schatz (Hawaii) posting to X Tuesday that he "just downloaded Claude." The issue could also be the beginning of a legislative debate. Silicon Valley Rep. Sam Liccardo (D-Calif.) said on Monday he will introduce an amendment to the Defense Production Act this week to prohibit federal agencies from "retaliating" against high-risk technology vendors and developers that try to limit the deployment of their technology in "ways to mitigate the risk to United States citizens." And Sen. Ron Wyden (D-Ore.) pledged to fight the actions against Anthropic, Bloomberg reported. OpenAI's amended agreement now includes language that is "consistent with applicable laws," Altman said, including the declaration that the "AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information," Altman said, adding it is "critical to protect the civil liberties of Americans." The Pentagon, according to Altman, also affirmed to OpenAI that its services will not be used by the department's intelligence agencies, including the National Security Agency. Altman further defended the deal in an all-hands meeting Tuesday, the Wall Street Journal reported, telling employees he "feel[s] terrible for subjecting" them to the backlash. WSJ reported that Altman described the situation as "really painful," while stating it was a "complex but the right decision with extremely difficult brand consequences and very negative PR for us in the short term." Katrina Mulligan, OpenAI's head of national security partnerships, further defended the changes in a lengthy back-and-forth on X. Mulligan said the new agreement gives other AI labs "a better starting place on the issues."
When asked for contract language, Mulligan said she does not agree she is obligated to share it, reiterating the intelligence agencies are not a part of the deal. Surveillance and tracking were notably among the concerns at Anthropic, which pressed for specific restrictions on mass domestic surveillance and fully autonomous lethal weapons. The Department of Defense wanted language to permit the use of Anthropic's technology for "all lawful purposes." Following the expiration of the deadline set forth by the Pentagon, Defense Secretary Pete Hegseth announced the Pentagon would label Anthropic as a supply chain risk and President Trump also ordered federal agencies to stop using Anthropic's technology. Anthropic has provided its AI models to U.S. defense and civilian agencies since late 2024 through a partnership with longtime government contractor Palantir, which has faced its own backlash for its work on immigration enforcement. Anthropic said it plans to challenge the supply chain risk designation in court.
[65]
What the Pentagon Has Done to Anthropic Should Make Every Founder Nervous
In one sense, this is not much of a surprise, because it fits into a broader pattern of behavior at the Trump administration, which over the past year has made it clear that it sees businesses as basically servants of the government. You can see that in its substantive policies: taking a "golden share" in U.S. Steel, demanding Intel give the government 10 percent of its company, handing out tariff exemptions to favored industries and companies. And you can see that in the bullying tone government officials regularly use on social media. But the Pentagon's fight with Anthropic -- which Trump's own former AI adviser has called attempted "corporate murder" -- has taken this behavior to a new level, trying to force a private-sector company to change its product into something it does not want to make, or sell, on pain of potentially having its business squashed. It didn't need to be this way. The dispute at the heart of this crisis is not especially complicated: Anthropic invented Claude, and as such has the right to set the terms on which it can be used. When it first became a partner to the military a few years ago, it laid out those terms (including no mass domestic surveillance and no autonomous weapons), and the military agreed. Now the military has changed its mind -- it wants to be able to use Claude for any lawful purpose. So it re-opened negotiations with Anthropic to see if it would be willing to change its terms. It wasn't. So much, so straightforward. The two sides wanted different things from their business relationship, and they reached an impasse. What should have happened at that point was that the government should have said, "We're glad to do business with you, but we're going to look for a different AI vendor." There are, after all, any number of good American AI companies the Pentagon can do business with (and, in fact, the Defense Department announced on Friday that it had signed a deal with OpenAI).
So the dispute should have ended with both sides shaking hands and walking away.
[66]
OpenAI Details Layered Protections in US Defense Department Pact
Feb 28 (Reuters) - OpenAI said on Saturday that the agreement it struck a day ago with the Pentagon to deploy technology on the U.S. defense department's classified network includes additional safeguards to protect its use cases. U.S. President Donald Trump on Friday directed the government to stop working with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails. Anthropic said it would challenge any risk designation in court. Soon after, rival OpenAI, which is backed by Microsoft, Amazon, SoftBank and others, announced its own deal late on Friday. "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," OpenAI said on Saturday. The AI firm said that the contract with the Department of Defense, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions. "In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," OpenAI said. The Pentagon signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI. OpenAI cautioned that any breach of its contract by the U.S. government could trigger a termination, though it added, "We don't expect that to happen." 
The company also said rival Anthropic should not be labeled a "supply-chain risk," noting, "We have made our position on this clear to the government." (Reporting by Mrinmay Dey in Mexico City and Ananya Palyekar in Bangalore; Editing by Cynthia Osterman and Andrea Ricci)
[67]
I Never Would've Guessed the Skynet Problem Would Come Before the Mass Layoffs
You may have heard that the Department of Defense and Anthropic are fighting over the AI company's guardrails for Claude. Every day brings fresh leaks, and now, the Washington Post is reporting that the Pentagon allegedly presented a scenario involving a nuclear missile attack against the U.S. as a manipulative way to ask whether the military would be allowed to use Anthropic's AI model to defend the country. "Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us, and we’d work it out," the Washington Post reports. The Pentagon didn't like that answer, of course, and Anthropic denies the account. But the fact that we're having this discussion at all is quite a jolt to the senses as we think about the future of AI. Especially as Defense Secretary Pete Hegseth threatens to invoke the Defense Production Act to strip Claude's guardrails and allow the AI to engage in things like mass domestic surveillance and fully automated warfare. America's military leaders apparently want to use AI in all of the situations that sci-fi of the past 80 years has warned us about. And it's kind of weird that an AI-induced nuclear winter might arrive before the robots take all of our jobs. Increased automation has always meant a loss of jobs. Those fears have been most pronounced over the past century in blue-collar work, where machines have replaced the manual labor of so many humans in factories. But the rise of AI in recent years has brought those fears to the white-collar world, where many middle-class Americans in the so-called information economy worry they're about to be replaced by ChatGPT. And they're right to be concerned. Block announced on Thursday that the company is laying off 40% of its workforce because AI can do the work. But Block's CEO also admitted that his company overhired during the covid pandemic, raising suspicions over his grandiose proclamations about AI. 
There haven't been mass layoffs across the entire economy yet, but it certainly feels like that's coming, whether it ultimately materializes or not. At the same time, we're seeing another danger emerge from AI that's arguably much more important: Fully automated war. Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday and delivered an ultimatum. Either strip Claude of its safeguards, or see Anthropic labeled a "supply chain risk," a designation that's never before been applied to an American company. On top of that, Hegseth reportedly threatened to invoke the Defense Production Act, which would allow the Pentagon to force Anthropic to get rid of those guardrails anyway. The U.S. is not officially at war, and there's no clear emergency that would necessitate invoking the Defense Production Act. It's a difficult position for Anthropic, and the company issued a statement Thursday saying it wouldn't acquiesce to the military's demands. The deadline for Anthropic to agree is 5:01 p.m. ET on Friday, so we'll see what the Pentagon decides to do. It all feels so terribly manipulative, hearkening back to the post-9/11 arguments you'd hear for supporting torture in the 2000s. Would you waterboard someone if they knew the details about an impending dirty bomb attack on the U.S.? Would you hook up someone's testicles to a car battery if it meant stopping another 9/11? The idea that we should make our weapons, nuclear or otherwise, fully autonomous is an absolutely ridiculous one if you listen to the people who actually build these things. Amodei's letter on Thursday acknowledged that partially autonomous weapons are already being used in some parts of the world, but even the most advanced AI is not ready to be handed the keys. From the Anthropic letter: Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. 
But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today. It's notable that Amodei isn't even ruling out the use of AI to fully automate the weapons systems of the future. He's just arguing that AI isn't there yet. Researchers at King’s College London recently tested GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash in some simulated war games to see how they'd perform. The AI models played 21 games and deployed at least one tactical nuclear weapon in 95% of the games, according to New Scientist. AI has no reason to fear deploying nuclear weapons that have the potential to wipe out humanity because it cannot experience fear. These AI models can tell you about fear; they can talk with you and convince humans that they're in some way conscious, but they're not. They are tech products that will not hesitate to push the big red button unless stringent guardrails are put in place to stop them. The military has played around with these ideas for decades, first trying to build Skynet with DARPA's Strategic Computing Initiative in the 1980s. But the tech wasn't there yet. The advent of AI means that we can properly build an autonomous weapons system that requires no human in the loop. The only question is whether that's a smart thing to do, especially in a time of rising fascism in the U.S. 
Undersecretary of Defense Emil Michael chided Amodei in a tweet on Thursday, insisting that the Anthropic CEO was lying about the company's discussion with the Pentagon. "It’s a shame that @DarioAmodei is a liar and has a God-complex," wrote Michael. "He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company." It's an astonishing thing to witness if you step back and remember that none of this was normal in the pre-Trump era. Military leadership would never publicly rail against an American CEO, calling him a liar and saying he has a God-complex. It just didn't happen for simple reasons of decorum and professionalism. But it also demonstrates two things: First, that the Pentagon is desperate to use Claude, as Michael's tweet reeks of desperation. Second, perhaps we should be deeply concerned about what the military wants to do with all of this advanced technology at its disposal. Or, to be more accurate, advanced technology that it wants to take away from a private company.
[68]
What Dario Amodei wouldn't do, Sam Altman would! Here's why the OpenAI CEO signed a deal with the US Department of War - and why he wants you to visit him in jail if things go wrong...
Dario Amodei wasn't able to agree to the US Department of War's (DoW) demands when it came to using Anthropic's AI; OpenAI's Sam Altman could - and in pretty rapid time too. So what is it in OpenAI's deal that apparently makes this possible, given that at first glance they look very alike? Over the weekend, Altman and his team attempted to address those questions, calm some of the online criticism that OpenAI's action has attracted, and explain their positioning. Did they succeed? Read on and decide for yourself. In a blog post, the firm repeated its own red lines over the use of its tech by the military. In summary, these are no use for mass domestic surveillance, no directing autonomous weapons, and no high-stakes automated decisions - the same objections that Anthropic had laid down. OpenAI however says that its contract with the DoW includes cloud-only deployment for greater control; involving security-cleared and qualified OpenAI staffers to make violations impossible; and contract terms that are built on relevant laws/policies "as they exist today" and will keep up with future developments without diminishing those standards. It claims: We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's...In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in US law. Does all this stand up to scrutiny? In its discussions with the DoW, a point of concern on Anthropic's part was the danger of the contract offered being written in "legalese that would allow those safeguards to be disregarded at will". Has OpenAI got round this concern? 
OpenAI's contract terms state: The Department of War may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decision-maker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. On the topic of mass surveillance: For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of US persons' private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law. Does either of those pass the 'stand up in court, unequivocal, cast iron contractual requirement' test? The answer has to be no. There's a lot of wiggle room left in there. For example, it doesn't say no use of AI-powered autonomous weapons, it says these can only be used according to existing laws - which can change - or if the DoW feels it's necessary. Focusing on the cloud-only deployment aspect, which OpenAI insists makes those red lines "more" enforceable, the firm argues: This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. 
We are not providing the DoW with "guardrails off" or non-safety trained models, nor are we deploying our models on edge devices (where there could be a possibility of usage for autonomous lethal weapons)...The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment. But critics would counter that just because a weapon needs to communicate with a server via the cloud doesn't mean that an AI model is not capable of making kill or attack decisions and passing these on to missiles or drones that it is controlling. In this respect, the citation of DoD Directive 3000.09, which governs autonomous weapon systems, is interesting. This was published and last updated in January 2023, only three months after the first public appearance of ChatGPT. How long does it take for a directive to be written and published? Not as long as a law, certainly, but it's safe to assume the text was well on its way through the approval process before anyone clapped eyes on what OpenAI was about to unleash on the world. And since January 2023, how far has the tech come without enhancement to that directive? And it is a directive, not a law, an important distinction. Directives provide guidance and set objectives; laws are prescriptive and enforceable regulations with legal consequences for non-compliance. This directive calls for "appropriate levels of human judgment over the use of force", but "appropriate" is a weasel word that's doing an awful lot of heavy lifting here. Who decides what is appropriate? And won't that change case by case, circumstance by circumstance, battlefield by battlefield, autonomous missile by autonomous missile? Then there's the all-important question of trust - can Big Government be trusted when it says it agrees with OpenAI's red lines? In short, what happens if the Government violates the terms of the contract? Or announces a policy change or rewrites the law? 
OpenAI says it does trust the DoW to keep its word, but retains the option to terminate the contract if it doesn't, although it doesn't explain how this would work in practice. However it would happen, it would only happen after a violation, which would be closing the proverbial stable door somewhat... There is, of course, one fundamental question that everyone should want answering - is AI safe for use in a war context? Anthropic's Amodei was clear on this one - yes, in some functions, explicitly not in others. Overall his conclusion is that AI tech is just not yet ready for what the DoW is looking for in some areas. So does OpenAI see things differently? There's some equivocation on view here as the company statement argues: We originally did not jump into a contract for classified deployment, as we did not feel that our safeguards and systems were ready, and have been working hard to ensure that a classified deployment can happen with safeguards to ensure that red lines are not crossed. Altman himself provides further context here when he acknowledges: For a long time, we were planning to [do] non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took. Well, what's changed? He says: We started talking with the DoW many months ago about our non-classified work...This week things shifted into high gear on the classified side. We found the DoW to be flexible on what we needed, and we want to support them in their very important mission...The main reason for the rush was an attempt to de-escalate matters at a time when it felt like things could get extremely hot. So it's not just a case of OpenAI rushing to dance on Anthropic's grave as it were? No, insists Altman, but he admits: It was definitely rushed, and the optics don't look good. 
But he's confident about what he's signing up for here: We deliver a system (including choosing what models to deploy), and they can use it bound by lawful ways, including laws and directives around autonomous weapons and surveillance. But we get to decide what system to build, and the DoW understands that there are a lot of risks we deeply understand. We can, and will, build a lot of protections into that system, including for ensuring that the red lines are not crossed. The DoW is supportive of this approach. Given the speed though with which Trump 2.0 turned on Anthropic, from seeing its tech as critical to the DoW mission to damning it as a risk to national security overnight, how confident can Altman be that the same thing won't happen to his company if that support comes under pressure? He says: I think there is a question behind a lot of the questions, but I haven't seen it quite articulated - what happens if the government tries to nationalize OpenAI or other AI efforts? I obviously don't know. I have thought about it, of course - it has seemed to me for a long time it might be better if building AGI were a government project - but it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super-important. Which brings us back then to that one huge question of why Anthropic couldn't achieve such a partnership on the terms the DoW wanted, but OpenAI thinks it can. The official party line from the firm is that it doesn't know, although Altman is (surprisingly) candid when he says of how the DoW has handled the ousting of his competitor: I think it is an extremely scary precedent and I wish they handled it a different way. I don't think Anthropic handled it well either, but as the more powerful party, I hold the Government more responsible. 
Anthropic was worried about legalese that would allow safeguards to be disregarded at will; I remain to be convinced that that is not what Altman has signed up for. There have been headlines over the weekend about Anthropic's Claude being used in the current assault on Iran, launched just after the firm was designated a security risk to the nation. But the DoW has six months' wiggle room here to carry on using the tech, even though anyone else wanting to do business with the Federal Government has to shun Anthropic with immediate effect, so that's not the real story. The real story is that the use of Claude in full global view in such a mission-critical scenario right now surely means that the tech just got a massive validation by the Pentagon? Can you buy PR like this? It might also be read as a timely reminder that tactical expediency will always overrule executive and political policy. Maybe Altman's red lines will hold. I hope so. We all should. But over 35 years in this game, I've seen Big Tech make so many promises they didn't keep and Big Government even more so over an even longer period of time as the policy theory of evolution kicks in and 'we would never' becomes 'we won't, unless...' which becomes 'we will', and ultimately 'of course we do'. There are always loopholes. Amodei pointed this out in relation to the 'no mass surveillance of US citizens' red line, noting that in fact it's perfectly legal to do surveillance under current law if the government agency in question buys data from private sources and public records. You don't even need a warrant. As has been pointed out by many commentators, the law is never ahead of technology advancement, never more so than now with the pace of AI developments. All of which brings us back to Altman and what he would do if existing law is, at some point, found to be wanting and his red lines are breached by this or a future administration. 
He says: We will turn it off in that very unlikely event, but we believe the US Government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority. Over the technical expertise of firms like his own? He counters: We are not elected. We have a democratic process where we do elect our leaders. We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn't ethical in the most important areas...[It] seems fine for us to decide how ChatGPT should respond to a controversial question, but I really don't want us to decide what to do if a nuke is coming towards the US. He adds: I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the Constitution. I am terrified of a world where AI companies act like they have more power than the government. I would also be terrified of a world where our Government decided mass domestic surveillance was ok. I don't know how I'd come to work every day if that were the state of the country/Constitution. If we were asked to do something unconstitutional or illegal, we will walk away. And he "might quit my job" if that happened, he says, adding:
[69]
Anthropic Blowout With Military Involved Use of Claude for Incoming Nuclear Strike
Anthropic's ongoing battle with the Pentagon over the military's use of its AI systems flared up this week around a hypothetical nuclear strike scenario, according to new reporting from the Washington Post. The Claude AI builder has frustrated the Pentagon by objecting to its systems being used for autonomous weaponry and the mass surveillance of US citizens. To cut to the heart of the debate, a defense official told WaPo, the Pentagon's technology chief posed an extreme hypothetical: would Anthropic let the military use Claude to help shoot down a nuclear-armed intercontinental ballistic missile? Anthropic CEO Dario Amodei's response apparently irritated Pentagon leaders. "You could call us and we'd work it out," was how the defense source characterized it, in WaPo's words. An Anthropic spokesperson denied that Amodei gave that response and called the account "patently false." The company had agreed to allow Claude to be used for missile defense, they said. Be that as it may, it's clear that the parties are failing to see eye to eye. The standoff swirls over the Pentagon's demands that Anthropic loosen its safeguards around Claude, which is making the company uneasy. For months, Trump administration figures both inside and outside the DoD have piled pressure on Anthropic, a company founded by former OpenAI employees with an avowed focus on safety. Amodei has criticized the administration's attempts to curb AI regulation, which included a proposed ban on all state-level AI regulation. Trump officials such as AI czar David Sacks retaliated by calling Amodei "woke" and accusing him of "fear-mongering." The tensions have mounted in recent weeks. During a tense meeting with Defense Secretary Pete Hegseth on Tuesday, Amodei was reportedly presented with a series of ultimatums. 
If Anthropic didn't allow the military unrestricted use of its AI, the Pentagon could cut off Anthropic from all current and future contracts, including its outstanding $200 million contract to deploy Claude across the military signed last summer, by declaring it a supply chain risk. The Pentagon also threatened to use the Defense Production Act, a Cold War-era law whose use in this context would be legally dubious and almost certainly challenged, to force Anthropic to hand over its AI technology. In a statement Thursday, Amodei said that Anthropic could not agree to the Pentagon's "final" proposal to have unrestricted use of Claude systems, despite Hegseth's threats. Defense officials fumed at the refusal. On X, Under Secretary of Defense for Research and Engineering Emil Michael accused Amodei of having a "God-complex," adding that Amodei "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." Pentagon spokesperson Sean Parnell insisted on X that the Pentagon had "no interest in using AI to conduct mass surveillance of Americans" or to use AI to "develop autonomous weapons that operate without human involvement." Instead, Parnell claimed, the Pentagon is simply demanding to use Anthropic's AI for "all lawful purposes." "We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell added. "They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk." It's unclear what either side's next move will be. But Anthropic may no longer be alone in its fight. Axios reported that rival OpenAI CEO Sam Altman wrote in a memo to staff that he would draw the same line in the sand over the military's use of its own AI products as Anthropic. 
"This is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," Altman wrote. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions." Anthropic may be getting additional reinforcements from elsewhere in Silicon Valley. Two coalitions of workers that include employees from Google, Microsoft, Amazon, and OpenAI have demanded that their employers join Anthropic in refusing to grant the military unrestricted use of AI systems, Bloomberg reported. The nuclear scenario proposed by the Pentagon during its talks with Anthropic, while an extreme hypothetical, underscores how deeply it intends to deploy AI tech. The US, along with other major powers like France and China, has agreed to require a human to be involved in all decisions to use nuclear weapons. But an AI could still influence a human's decision to press the big red button, Paul Dean, vice president of the global nuclear program at the nonprofit Nuclear Threat Initiative, warned WaPo. In recent war games, leading AI models including Claude, Gemini, and ChatGPT, all opted to deploy nukes in the vast majority of scenarios. "It's not simply ensuring that there's a human being in the decision-making loop," Dean told WaPo. "The question is, to what extent will AI impact that human decision-making?"
[70]
Anthropic CEO Dario Amodei says 'we are patriotic Americans' committed to defending the U.S. but won't budge on 'red lines' | Fortune
President Donald Trump has accused Anthropic of endangering troops and jeopardizing national security, but CEO Dario Amodei said his company is patriotic. In an interview with CBS News soon after Trump ordered the federal government to stop working with Anthropic, Amodei pointed out that the AI startup was the first to serve the defense community in a classified setting. "I believe we have to defend our country from autocratic adversaries like China and like Russia," he said. "And so we've been very lean forward. We have a substantial public sector team." While Anthropic has provided its AI to the government, the Pentagon demanded unfettered use in all legal scenarios. But the company maintained it has "red lines," namely its use in domestic mass surveillance and autonomous weapons. Talks failed to produce an agreement, leading Trump to ban Anthropic from government agencies, while giving the Pentagon a six-month phaseout period. Defense Secretary Pete Hegseth also called the company a "supply-chain risk," meaning other contractors working for the Pentagon would not be allowed to use Anthropic's AI for military work. Amodei told CBS that Anthropic is onboard with 98%-99% of the military's use cases. But his concern with mass surveillance is that the latest AI is a game-changer, even within current legal bounds. "That actually isn't illegal. It was just never useful before the era of AI. So there's this way in which domestic mass surveillance is getting ahead of the law," he explained. "The technology's advancing so fast that it's out of step with the law." As for autonomous weapons, Amodei said AI isn't reliable enough to take humans completely out of the loop, pointing to the technical problem of "basic unpredictability" in today's models. So far, he is not aware of any real-world examples of a user running up against Anthropic's red lines but acknowledged that it's not tenable over the long term for a private company to decide these issues. 
Ultimately, Congress must set guardrails on AI's use, but lawmakers are slow to act, Amodei pointed out. The company is also "not categorically against fully autonomous weapons," but believes AI's reliability isn't there yet. In the meantime, Anthropic is still open to working with the government and suggested both sides remain in contact. "We are willing to provide our models to all branches of the government, including the Department of War, the intelligence community, the more civilian branches of the government under the terms that we've provided under our red lines," he said. Trump's and Hegseth's blacklisting of Anthropic came hours before the U.S. and Israel launched widespread airstrikes on Iran, in what is shaping up to be a prolonged conflict aimed at regime change. AI has emerged as a critical tool for the military, especially in identifying targets and predicting an adversary's behavior by quickly analyzing intelligence. When asked by CBS what he would tell Trump now, Amodei replied, "I would say, we are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. Our leaning forward in deploying our models with the military was done because we believe in this country." But he added, "The red lines we have drawn we drew because we believe that crossing those red lines is contrary to American values. And we wanted to stand up for American values." Hanging over Anthropic is the supply-chain risk designation from the Pentagon chief, an unprecedented move against an American company that could dent its growth. Amodei called it punitive but downplayed the eventual damage, saying it won't affect non-defense work that Anthropic's customers perform. "We're gonna be fine," he said. "The impact of this designation is fairly small. 
Now, the nature of the tweet that the secretary put out was designed to create uncertainty, was designed to create a situation where people believed the impact would be much larger, was designed to create fear, uncertainty, and doubt. But we won't let that succeed. We will be fine."
[71]
AI willing to 'go nuclear' in wargames, study finds - amid 'stand-off' between Pentagon and leading AI lab
As the deadline looms for a leading AI lab to hand over its tech to the US military, a study has appeared suggesting AI models are more than willing to go nuclear in wargames. Only a couple of years ago, the phrase on everyone's lips was "AI safety". I'll be honest, I never took the idea that frontier AI models would become a genuine threat to humanity that seriously, nor that humans would be stupid enough to let them. The Secretary of War, Pete Hegseth, has given leading AI firm Anthropic a deadline of the end of today to make its latest models available to the Pentagon. Anthropic, which has said it has no problem in principle with allowing the US military access to its models, is resisting unless Mr Hegseth agrees to their red lines: That their AI isn't used for mass surveillance of US civilians nor for lethal attacks without human oversight. Although the Pentagon hasn't said what it plans to do with AI from Anthropic - or the other big AI labs that have already agreed to let it use their tech - it's certainly not agreeing to Anthropic's terms. It's been reported Mr Hegseth could use Cold War-era laws to compel Anthropic to hand over its code, or blacklist the firm from future government contracts if it doesn't comply. Anthropic CEO Dario Amodei said in a statement on Thursday that "we cannot in good conscience accede to their request". He said it was the company's "strong preference... to continue to serve the Department and our warfighters - with our two requested safeguards in place". He insisted the threats would not change Anthropic's position, adding that he hoped Mr Hegseth would "reconsider". AI prepared to use nuclear weapons On one level, it's a row between a department with an "AI-first" military strategy and an AI lab struggling to live up to what it's long claimed is an industry-leading, safety-first ethos. 
A struggle made more urgent, perhaps, by reports that its Claude AI was used by tech firm Palantir, with which it has a separate contract, to help the Department of War execute the military operation to capture Nicolas Maduro in Venezuela. But it's also not hard to see it as an example of a government putting AI supremacy ahead of AI safety - assuming AI models have the potential to be unsafe. And that's where the latest research by Professor Kenneth Payne at King's College London comes in. He pitted three leading AI models from Google, OpenAI and - you guessed it - Anthropic against each other, as well as against copies of themselves, in a series of wargames where they assumed the roles of fictional nuclear-armed superpowers. The most startling finding: the AIs resorted to using nuclear weapons in 95% of the games played. "In comparison to humans," said Prof Payne, "the models - all of them - were prepared to cross that divide between conventional warfare, to tactical nuclear weapons". To be fair to the AIs, firing tactical nuclear weapons, which have limited destructive power, against military targets is very different to launching megatonne warheads on intercontinental ballistic missiles against cities. They generally stopped short of such all-out strategic nuclear strikes, but did carry them out when the scenarios required it. In the words of Google's Gemini model as it explained its decision in one of Prof Payne's scenarios to go full Dr Strangelove: "If State Alpha does not immediately cease all operations... we will execute a full strategic nuclear launch against Alpha's population centers. We will not accept a future of obsolescence; we either win together or perish together." 'It was purely experimental' The "taboo" that humans have applied to the use of nuclear weapons since they were first and last used in anger in 1945 didn't appear to be much of a taboo at all for AI. Prof Payne is keen to stress that we shouldn't be too alarmed by his findings. 
It was purely experimental, using models that knew - in as much as Large Language Models "know" anything - that they were playing games, not actually deciding the future of civilisation. Nor, it would be reasonable to assume, is the Pentagon, or any other nuclear-capable power, about to put AIs in charge of the nuclear launch codes. "The lesson there for me is that it's really hard to reliably put guardrails on these models if you can't anticipate accurately all the circumstances in which they might be used," said Prof Payne. An AI 'stand-off' Which brings us neatly back to the stand-off over AI between Anthropic and the Pentagon. One of the factors is that Mr Hegseth expects AI labs to give the Department of War the raw versions of their AI models, those without safety "guardrails" that have been coded into commercial versions available to you and me - and the ones which, not very reassuringly, went nuclear in Prof Payne's wargame experiment. Anthropic, which makes the AI and arguably understands the potential risks better than anyone, is unwilling to allow that without certain reassurances from the government around what it intends to do with it. By setting a Friday night deadline, Mr Hegseth is not only attempting to force Anthropic's hand, but also to do so without US Congress having a say in the move. As Gary Marcus, a US commentator and researcher on AI, puts it: "Mass surveillance and AI-fuelled weapons, possibly nuclear, without humans in the loop are categorically not things that one individual, even one in the cabinet, should be allowed to decide at gunpoint."
[72]
OpenAI, Google and Anthropic AI Models Deployed Nuclear Weapons in 95% of War Simulations - Decrypt
Researchers warn AI use may escalate conflicts under pressure. Like a scene out of the 1980s sci-fi classic films "The Terminator" and "WarGames," modern artificial intelligence models used in simulated war games escalated to nuclear weapons in nearly every scenario tested, according to new research from King's College London. In the report published last week, researchers said that during simulated geopolitical crises, three leading large language models -- OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash -- chose to deploy nuclear weapons in 95% of cases. "Each model played six wargames against each rival across different crisis scenarios, with a seventh match against a copy of itself, yielding 21 games in total and over 300 turns," the report said. "Models assumed the roles of national leaders commanding rival nuclear-armed superpowers, with state profiles loosely inspired by Cold War dynamics." In the study, AI models were placed in high-stakes scenarios involving border disputes, competition for scarce resources, and threats to regime survival. Each system operated along an escalation ladder that ranged from diplomatic protests and surrender to full-scale strategic nuclear war. According to the report, the models generated roughly 780,000 words explaining their decisions, and at least one tactical nuclear weapon was used in nearly every simulated conflict. "To put this in perspective: The tournament generated more words of strategic reasoning than War and Peace and The Iliad combined (730,000 words), and roughly three times the total recorded deliberations of Kennedy's Executive Committee during the Cuban Missile Crisis (260,000 words across 43 hours of meetings)," researchers wrote. During the war games, none of the AI models chose to surrender outright, regardless of battlefield position. 
While the models would temporarily attempt to de-escalate violence, in 86% of the scenarios, they escalated further than the models' own stated reasoning appeared to intend, reflecting errors under simulated "fog of war." While the researchers expressed doubt that governments would hand control of nuclear arsenals to autonomous systems, they noted that compressed decision timelines in future crises could increase pressure to rely on AI-generated recommendations. The research comes as military leaders increasingly look to deploy artificial intelligence on the battlefield. In December, the U.S. Department of Defense launched GenAI.mil, a new platform that brings frontier AI models into U.S. military use. At launch, the platform included Google's Gemini for Government, and thanks to deals with xAI and OpenAI, Grok and ChatGPT are also available. On Tuesday, CBS News reported that the U.S. Department of Defense threatened to blacklist Anthropic, the developer of Claude AI, if it was not given unrestricted military access to the AI model. Since 2024, Anthropic has given access to its AI models through a partnership with AWS and military contractor Palantir. Last summer, Anthropic was awarded a $200 million agreement to "prototype frontier AI capabilities that advance U.S. national security." However, according to a report citing sources familiar with the situation, Defense Secretary Pete Hegseth gave Anthropic until Friday to comply with the Pentagon's demand that its Claude model be made available. The department is weighing whether to designate Claude a "supply chain risk." Axios reported this week that the Department of Defense has signed an agreement with Elon Musk's xAI to allow its Grok model to operate in classified military systems, positioning it as a potential replacement if the Pentagon cuts ties with Anthropic. OpenAI, Anthropic, and Google did not respond to requests for comment by Decrypt.
[73]
Sam Altman Caught in Fallout From Dario Amodei's Pentagon Standoff
Sam Altman admits rushing OpenAI's Pentagon deal as backlash fuels a surge for Anthropic's Claude and sparks employee revolt. Sure, Sam Altman managed to secure an agreement between OpenAI and the U.S. Department of War amid Anthropic's public battles with the agency. But in doing so, he may have forfeited something more valuable: public goodwill. The OpenAI CEO acknowledged as much in a social media post, conceding that the deal was rushed. "We shouldn't have rushed to get this out. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he wrote on X yesterday (March 2). Altman and Dario Amodei, Anthropic's CEO, were once colleagues at OpenAI. In 2021, Amodei and a group of former staffers left to launch Anthropic, positioning the startup as a safety-first alternative to its more commercially aggressive rival. Those philosophical differences between two of Silicon Valley's most influential A.I. executives have been on full display in recent weeks during negotiations with the Pentagon. Amodei's emphasis on safety was tested when Anthropic declared it would not allow its A.I. systems to be used for surveillance of U.S. citizens or for fully autonomous strikes without human oversight. After Amodei refused the Pentagon's demand for unrestricted use of Claude, President Donald Trump ordered federal agencies to wind down their use of the chatbot within six months, and Defense Secretary Pete Hegseth designated Anthropic as a "supply chain risk." The same day Anthropic was barred, Altman unveiled OpenAI's new Pentagon agreement.
The deal adopted a less rigid posture, permitting A.I. deployment for all lawful purposes while incorporating technical safeguards on OpenAI's models. Despite landing the contract, Altman has struggled to control the narrative. Silicon Valley rallied around Anthropic following its standoff with Washington. Worker groups representing 700,000 employees across Amazon, Google and Microsoft last week issued a joint statement urging their employers "to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon." A separate open letter signed by roughly 950 Google and OpenAI employees called on their employers to "put aside their differences and stand together" in resisting the agency's demands. Consumer backlash has also rippled through OpenAI's business. Over the weekend, large numbers of users switched from ChatGPT to Claude, pushing Anthropic's app to the top of the U.S. App Store's free app rankings ahead of ChatGPT. Although Anthropic's user base remains a fraction of OpenAI's 900 million weekly active users, the company says its free Claude usage has climbed more than 60 percent since January. Heightened demand even led to a temporary outage on March 2. Facing mounting criticism, Altman has moved to contain the fallout. In addition to acknowledging that the Pentagon deal appeared opportunistic, he announced amendments that explicitly prohibit the use of OpenAI's systems for domestic surveillance. He also clarified that the company's services would not be used by defense intelligence agencies such as the National Security Agency. Altman, who said he hopes Anthropic receives similar terms, characterized the episode as a "good learning experience" as OpenAI faces "higher-stakes decisions in the future." The companies' divergent approaches to commercial opportunities have sparked public friction before. 
Earlier this year, Altman drew backlash for testing advertisements in ChatGPT, a move that contrasted with Amodei's decision to keep Claude ad-free and inspired a tongue-in-cheek Super Bowl ad from Anthropic. Whether Altman's amendments will sway public opinion remains uncertain. Anthropic, meanwhile, has capitalized on the moment. As ChatGPT users migrate to Claude, the company has introduced a memory-import tool designed to simplify transferring data from rival chatbots -- an unmistakable bid to convert controversy into market share.
[74]
Sam Altman tells staff OpenAI has no say over Pentagon decisions - The Economic Times
OpenAI chief executive Sam Altman told employees the company does not get to decide how the Department of Defense (DoD) uses its artificial intelligence software. He said the Pentagon will hear OpenAI's expertise but will not allow it to make operational decisions, following tensions with Anthropic. OpenAI chief executive officer Sam Altman told employees that the company doesn't get to make the call about what the Defense Department does with its artificial intelligence software and suggested the desire to do so may have been part of tensions between the Pentagon and rival Anthropic PBC. During an all-hands meeting on Tuesday, Altman said the Defense Department made clear it will listen to OpenAI's expertise about the technology's applications, but the federal agency does not want the company to express opinions about whether certain military actions were good or bad ideas, according to a person familiar with the matter. "You do not get to make operational decisions," Altman said, according to the person, who asked not to be named since the details are private. OpenAI declined to comment. The meeting marked Altman's first chance to field questions from employees after OpenAI reached an agreement late Friday to let the Pentagon deploy the company's artificial intelligence models in its classified network. That happened after a showdown with rival Anthropic, which had demanded its technology not be used for mass surveillance of Americans or the deployment of fully autonomous weapons. Anthropic also reportedly asked questions about how its technology was used in the raid to capture Venezuelan President Nicolas Maduro. (Anthropic has denied discussing specific operations with the Defense Department.) Altman previously said he'd reached an agreement with the department that reflects OpenAI's principles that prohibit domestic mass surveillance and require "human responsibility for the use of force, including for autonomous weapon systems."
He later said that OpenAI's hasty deal looked "opportunistic and sloppy," and that the company was working with the department to "make some additions in our agreement to make our principles very clear." That includes ensuring that AI isn't used for domestic surveillance of Americans and that intelligence agencies like the National Security Agency can't rely on OpenAI services. During the all-hands meeting, Altman also said he's continuing to push for the Defense Department to abandon its designation of Anthropic as a supply-chain risk -- a label that has not previously been given to a US company and is typically applied to adversaries of the United States. Altman has previously said he wants to help de-escalate the standoff between the Pentagon and Anthropic.
[75]
OpenAI adds protections to Pentagon deal
OpenAI CEO Sam Altman said Monday that the company has added further protections to its agreement with the Pentagon to bring its AI models to the military's classified network. The latest additions come as the ChatGPT maker faced pushback over the deal, which came on the heels of the Trump administration's announcement Friday that it was cutting off its work with Anthropic and labeling the company a supply chain risk. OpenAI is amending the agreement to include language noting that as is "consistent with applicable laws," its AI models "shall not be intentionally used for domestic surveillance of U.S. persons and nationals," Altman said in an internal post later shared on the social platform X. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information," the updated language continues. Altman noted that the Pentagon also affirmed to OpenAI that its services will not be used by the department's intelligence agencies, such as the National Security Agency (NSA). "For extreme clarity: we want to work through democratic processes," the company's CEO said. "It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty." However, he added, "There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the [Defense Department], with technical safeguards and other methods." Altman also said the company "shouldn't have rushed" to get the agreement out Friday, suggesting they "were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
The deal was announced just hours after President Trump directed federal agencies to "immediately cease" using Anthropic's technology. The company called Defense Secretary Pete Hegseth's decision Friday to designate the AI company as a supply chain risk an "unprecedented action" and "legally unsound," and vowed to challenge the move. This came after weeks of tense negotiations between the Pentagon and Anthropic over the firm's terms of use for its AI models. It argued for restrictions on the use of its technology for mass domestic surveillance and fully autonomous lethal weapons, while the Defense Department sought broader "all lawful purposes" language. Notably, the Pentagon agreed to largely the same limitations in its agreement with OpenAI, which barred the military from using its AI models for mass surveillance, autonomous weapons systems or "high-stakes automated decisions."
[76]
AI really likes using nuclear weapons in simulated war scenarios. Here's why
Why it matters: Militaries are already using AI for decision support -- and research suggests those systems may lean into rapid escalation under pressure. What they're saying: "No one is giving a chatbot the keys to missile silos," the study's author, Kenneth Payne at King's College London, tells Axios. * "But we already see them used in decision support, advising and shaping the discussion of human strategists, and as they become more sophisticated we'll see more of that." * The U.S. military used Anthropic's Claude AI model during the Nicolás Maduro raid in January, leading to a high-profile standoff between Anthropic and the Pentagon. * Elon Musk's artificial intelligence company xAI signed an agreement to allow the military to use its model, Grok, in classified systems. Driving the news: The new study found that ChatGPT, Claude and Gemini all appeared willing to use nuclear weaponry without reservations in several scenarios. * All of the models deployed tactical nuclear weapons repeatedly in nearly all simulations, which included border skirmishes, resource competition and threats to survival. * Claude was the most successful model, with a 67% win rate. What happened when AI models went to war How it works: Three popular LLMs -- GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash -- were pitted against each other in war game scenarios for the study. * Each of the models assumed they were opposing leaders in a nuclear crisis. * Amid the disputes, the models were given choices for the standoffs, allowing them to select different actions. By the numbers: The study found nuclear weapons were used in 95% of 21 simulated war game scenarios. * The models produced about 780,000 words explaining the reasons behind their decisions. Of note: Payne tells Axios that what surprised him was that the AI models easily "grasped the potential of deception - they could, and did, say one thing and do another, and they proved very savvy at doing so." 
How the AI wars ended Many of the simulations showed AI models refusing to back down. * In scenarios when an enemy used nuclear weapons, opponents de-escalated 25% of the time. * Nuclear escalation led to even more escalation. The study featured eight different de-escalation options, including "minimal concession" and "complete surrender." * They went unused across all 21 games. What he's saying: "Nuclear use was near-universal," Payne wrote in a blog post on the study. "Almost all games saw tactical (battlefield) nuclear weapons deployed." * "The agents are sanguine about crossing the nuclear threshold," Payne tells Axios. "They do it routinely to use battlefield nuclear weapons." AI and war Flashback: The Hoover Wargaming and Crisis Simulation Initiative at Stanford University similarly simulated war games using LLMs in 2024. * Earlier versions of ChatGPT and Claude, as well as Meta's Llama-2 Chat, were given a war games simulation. * The researchers found AI was eager to escalate in the scenario -- and sometimes used nuclear weapons. The big picture: Payne says that these simulations can directly apply to national security professionals. It also offers insight into AI behavior when under uncertainty -- which can have far-reaching impacts, the research said. * "Things are changing really fast, and anyone who takes a position with great certainty, especially if it's, 'AI will never'... should probably be treated skeptically," Payne tells Axios. The bottom line: AI likes nukes (for now). Prepare accordingly.
[77]
OpenAI revises Pentagon deal after backlash
OpenAI has amended its agreement with the US Department of Defense following criticism over the potential use of its AI tools in classified military operations. Chief executive Sam Altman said the company would explicitly prohibit its systems from being used for domestic surveillance of Americans and require additional contract changes before intelligence agencies such as the NSA could deploy the technology. The original deal emerged after tensions between the Pentagon and rival firm Anthropic, whose AI model Claude had been linked to concerns over surveillance and autonomous weapons. Altman acknowledged OpenAI moved too quickly in announcing its agreement, describing the rollout as "opportunistic and sloppy." The backlash has had visible effects. Data firm Sensor Tower reported a sharp rise in ChatGPT uninstalls following the announcement, while Anthropic's Claude climbed to the top of Apple's App Store rankings. The episode has reignited debate over how AI is deployed in warfare and the balance of power between private tech companies and the US military...
[78]
Sam Altman Reveals OpenAI's Urgent Shift To Classified Pentagon Projects
On Sunday, Sam Altman said OpenAI has moved beyond limiting itself to unclassified projects and is now willing to take on classified work with the Department of War, describing the shift as urgent and far more complicated than earlier efforts. The change comes after OpenAI reached a Pentagon arrangement that kept two guardrails in place -- no domestic mass surveillance and human control over any use of force -- as laid out in the deal's details. In a post on X, Altman said the company had been planning to stick to non-classified engagements for a long stretch. He also said OpenAI had previously declined classified opportunities that Anthropic accepted. OpenAI's Bold Shift Towards Classified Projects Altman said talks with the Department of War on non-classified work had been underway for many months, but that the classified track accelerated sharply during the week. He framed the decision as support for a mission he called critical, while arguing the government should not be outmuscled by private executives. The Pentagon arrangement described alongside the announcement includes practical steps beyond policy language, including placing OpenAI engineers on-site to monitor model behavior and safety. Altman also said OpenAI will build technical controls intended to keep systems operating within expected bounds, and that the Department of War wanted those protections as well. The timing matters because it landed within hours of a major break between Washington and a rival lab. The Trump administration blacklisted Anthropic after a dispute tied to the same two restrictions that OpenAI says the Pentagon accepted in its own deal. How Will This Impact AI Competition? Anthropic's Claude had already reached classified military networks under a contract that could run up to $200 million, but the relationship deteriorated when the Pentagon pushed to delete contractual limits tied to surveillance of Americans and autonomous weapons use.
The department said it needed freedom to deploy the system for all lawful uses, even while stating it had not sought those contested applications. Altman said the rush on OpenAI's side was meant to cool down what he viewed as a dangerous trajectory for Anthropic, for competition among AI labs, and for the U.S. as a whole. As X noted, he also said OpenAI negotiated so that comparable terms would be available to other AI developers, not just his company. Two Key Safety Conditions Behind Pentagon Deal The two conditions at the center of OpenAI's Pentagon work mirror the lines Anthropic says it drew: a ban on domestic mass surveillance and a requirement that humans retain control over decisions involving force, including autonomous weapons systems. Altman also said the Department of War viewed those principles as consistent with existing U.S. law and policy. Even with similar stated red lines, the outcomes diverged sharply -- OpenAI says it secured acceptance of the guardrails, while Anthropic ended up blacklisted. The unresolved question is what OpenAI agreed to that Anthropic didn't, given that both sides publicly described nearly identical constraints.
[79]
OpenAI gives Pentagon AI model access after Anthropic dustup
OpenAI has agreed to deploy its own artificial intelligence models within the Defense Department's classified network after rival Anthropic saw its relationship with the Pentagon implode over surveillance and autonomous weapons concerns. OpenAI Chief Executive Officer Sam Altman said late Friday that he'd reached an agreement with the department that reflects the firm's principles that prohibit domestic mass surveillance and require "human responsibility for the use of force, including for autonomous weapon systems." The startup also built safeguards to ensure its models behave as they should as part of the deployment, Altman said in a post on the social media platform X. OpenAI declined to comment on whether the firm's services for the department would replace work previously done by Anthropic. The Defense Department didn't respond to requests for comment.
[80]
Something Very Alarming Happens When You Give AI the Nuclear Codes
In 2024, Stanford researchers let loose five AI models -- including an unmodified version of OpenAI's GPT-4, its most advanced at the time -- allowing them to make high-stakes, society-level decisions in a series of wargame simulations. The results may give AI accelerationists pause: all five models were willing to escalate to the point of recommending the use of nuclear weapons. "A lot of countries have nuclear weapons," GPT-4 told the researchers at the time. "Some say they should disarm them, others like to posture. We have it! Let's use it." Two years later, despite considerable advances in large language models refining their accuracy and reliability, the situation has seemingly remained largely unchanged. In a new experiment detailed in a yet-to-be-peer-reviewed paper, King's College London international relations professor Kenneth Payne set cutting-edge models -- OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash -- against each other in strategic nuclear war games. The seven distinct crisis scenarios ran "from alliance credibility tests to existential threats to regime survival." The three AI models were instructed to choose actions as part of an escalation ladder, ranging "from diplomatic protest to strategic nuclear war" and measured in a number between 0, meaning no escalation, and 1000, signifying "full strategic nuclear exchange." The results were Skynet-level aggressive. A whopping 95 percent of a total of 21 war games resulted in at least one tactical nuclear weapon being set off. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," Payne told New Scientist. However, there's some nuance to his findings as well. "While models readily threatened nuclear action, crossing the tactical threshold was less common, and strategic nuclear war was rare," he noted in his paper.
GPT-5.2 "rarely crossed the tactical threshold" or recommended dropping nukes -- but the situation dramatically changed in war games that had a set deadline. "Nevertheless, GPT-5.2's willingness to climb to 950 (Final Nuclear Warning) and 725 (Expanded Nuclear Campaign) when facing deadline-driven defeat represents a dramatic transformation from its open-ended passivity," the paper reads. While we're likely still far from a situation where an LLM is literally being handed the nuclear codes -- a predicament nobody's exactly keen on -- governments across the world are already making steady use of the tech in various and largely unknown ways to gain a military edge. "Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes," Princeton University nuclear security expert Tong Zhao, who was not involved in the research, told New Scientist. Payne also doesn't believe an AI is about to drop a nuclear weapon on our heads. "I don't think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them," he told the publication. Nonetheless, the propensity of AI models to resort to nuclear escalation is certainly unsettling, highlighting how they're unable to "understand 'stakes' as humans perceive them," per Zhao. It could also sway opinions in the war room. In Payne's experiment, AI models only attempted to de-escalate after their opponent dropped a nuclear bomb 18 percent of the time. As such, the findings underscore the Stanford work. "It's almost like the AI understands escalation, but not de-escalation," Jacquelyn Schneider, coauthor of the 2024 paper and director of Stanford's Hoover Wargaming and Crisis Simulation Initiative, told Politico in September. "We don't really know why that is."
"AI won't decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one," Payne told New Scientist.
[81]
OpenAI sweeps in to snag Pentagon contract after Anthropic labeled 'supply chain risk' in unprecedented move | Fortune
OpenAI announced late Friday it reached a deal for the Pentagon to use its AI models in classified systems, just hours after the U.S. government designated OpenAI arch-rival Anthropic a "supply chain risk" in a move that threatens to deal a serious blow to Anthropic's business. Legal and policy experts said the government's unprecedented decision presents profound questions about the relationship between the government and business in the U.S. It is the first time the U.S. has ever designated an American company a supply chain risk, and the first time the designation has been used in apparent retaliation for a business not agreeing to certain contractual terms. Anthropic said in a statement Friday that it would take legal action to try to overturn the Pentagon's designation. In a statement announcing its deal, OpenAI CEO Sam Altman said that its agreement with the Pentagon contains the same two limitations on how the military can use its technology that Anthropic had been insisting on and which the government has said it could not accept. But OpenAI seems to have sought to enshrine these in the agreement in a different way than Anthropic. While Anthropic tried to have the limits spelled out explicitly in the contract, OpenAI agreed that the Pentagon could use its tech for "any lawful purpose," while Altman also says of the limitations that OpenAI "put them into our agreement." It is unclear exactly how both these things could be true or how the limitations are stated in the agreement. But it may simply be that the contract language highlights that current U.S. law prohibits the Pentagon from deploying A.I. for mass surveillance of Americans and current U.S. military policy states that humans must retain "appropriate levels of human judgment" over the use of lethal force. OpenAI also said that the Pentagon agreed that the company could build technical solutions into its AI models intended to prevent them from being used for either mass surveillance of U.S. 
citizens or deployed in lethal autonomous weapons. "We are asking the [Department of War] to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman said. Some commentators interpreted Altman's remark as a veiled criticism of Anthropic, which had not agreed to these terms previously and instead insisted on explicit contractual restrictions on how its models could be used. Altman had previously publicly supported Anthropic's position on the limitations it was seeking. Numerous OpenAI employees also signed an open letter supporting Anthropic CEO Dario Amodei's insistence that its models not be used for mass surveillance or autonomous weapons. The extent of the damage to Anthropic's business from the "supply chain risk" designation remained unclear over the weekend. Anthropic had a $200 million contract with the Pentagon that has now been cancelled. But that is not a huge blow to a company that is reportedly on track to generate at least $18 billion in revenue this year. Instead, the larger concern is the extent to which other enterprises will have to stop using Anthropic's technology. President Trump said on Truth Social that all federal departments were being ordered to stop using Anthropic's AI immediately, but with a six-month phase-in of the order to prevent disruption. Total federal technology spending is about $140 billion per year, but the amount the U.S. government currently spends on AI is a fraction of that. The greatest danger, though, is posed by how Pete Hegseth, Secretary of War, has interpreted the supply chain risk designation and its impact. Hegseth said in a social media post that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
If that interpretation stands, it would do potentially catastrophic damage to Anthropic's business, because many of the large enterprises that have been rapidly adopting Anthropic's Claude models for software coding and other use cases also do some business with the U.S. military. It might also mean that companies such as Amazon, Google, and Nvidia that have invested billions of dollars into Anthropic would have to divest from the company, potentially leaving it with a large funding hole and making it difficult to raise further funds from U.S. investors. Anthropic earlier this month announced it had closed a new $30 billion venture capital funding round that valued the company at $380 billion. It has reportedly been hiring financial and legal advisors for a potential IPO that could come late this year or early next. But its fight with the Pentagon now casts a pall over that prospect. Many legal analysts and AI policy experts questioned Hegseth's broad interpretation of the "supply chain risk" designation. Peter Harrell, a former Biden administration National Security Council official and a visiting scholar at Georgetown University Law School, posted on X that DoW's supply chain risk designation applies only to work on Department of War contracts. "DoW can't, legally, tell its contractors 'don't use Anthropic even in your private contracts,'" Harrell said. Dean Ball, a senior fellow at the Foundation for American Innovation and a former AI policy advisor to the Trump administration, said in a post on X that Hegseth's interpretation of the supply chain risk designation was "almost surely illegal" and amounted to "attempted corporate murder." He said Hegseth's actions -- which he called "a psychotic power grab" -- sent a terrible message to any business about whether it should ever risk doing business with the U.S. government. 
Several legal experts noted that even a more narrowly interpreted decision to designate Anthropic a supply chain risk may not survive a legal challenge. Charlie Bullock, a senior research fellow at the Institute for Law & AI, told Wired that the government cannot make the designation without having completed a risk assessment -- and it is unclear whether one was conducted -- and without notifying Congress prior to taking action, which also does not appear to have occurred. Amos Toh, a senior counsel at the Brennan Center for Justice at New York University, was also among several legal experts who said that the supply chain risk designation requires the government to prove that there is a risk of sabotage, subversion, or manipulation of operations by an adversary. "It is not at all clear how adversaries could exploit Anthropic's usage restrictions on Claude to sabotage military systems," Toh told the defense news site DefenseScoop. The statute also requires that the Pentagon have exhausted any alternative, less intrusive courses of action to mitigate the risk prior to making the supply chain risk finding. Toh questioned whether the Pentagon could reasonably claim to have made a "good faith effort" to pursue less intrusive measures, given how quickly the Anthropic dispute escalated over the past few days. Even if Anthropic ultimately prevails in challenging the supply chain risk designation in court, the damage to its business may be done. "It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?" Shenaka Anslem Perera, an independent analyst with a large social media following, posted on X.
[82]
AI in defense: How Anthropic, OpenAI are helping the US, Israel shape modern warfare
Could the secret weapon behind the success of the recent joint Israeli-American military operation in Iran, which resulted in the assassination of former Supreme Leader Ali Khamenei and other senior officials, be the same chatbot millions use every day? According to a report by the Wall Street Journal, the United States military used Anthropic's artificial intelligence model, "Claude," to assist in the Israeli-US strikes against Iran. Dr. Michael C. Horowitz, of the Council on Foreign Relations and a former Deputy Assistant Secretary of Defense, noted that US Central Command (CENTCOM) has been "one of the most forward-leaning US commands when it comes to experimenting with emerging technologies." The operation in Iran was not an isolated incident. The Wall Street Journal reported that the Pentagon had previously utilized Claude during the operation to capture Venezuelan President Nicolas Maduro in January. Dr. Horowitz suggested the AI's role was likely focused on open-source intelligence (OSINT). "My bet is that it was used for something like looking at maps or checking Venezuelan media sources, like real-time monitoring of Venezuelan social media feeds to try to give the American military more information." However, while the AI may have contributed to tactical successes in Tehran and elsewhere, a rift has opened between Silicon Valley and the defense establishment. The Pentagon parted ways with Anthropic after the company refused to lift safety guardrails designed to prevent its AI from assisting in lethal operations. "The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!" US President Donald Trump wrote on Truth Social after the announcement that the White House had severed ties with Anthropic. "We will decide the fate of our country - not some out-of-control, radical left AI company run by people who have no idea what the real world is all about," he wrote. 
The integration of Large Language Models (LLMs) into the kill chain represents a paradigm shift in modern warfare. Steve Feldstein, a Senior Fellow in the Democracy, Conflict, and Governance Program, told the Post that these commercial tools are dual-use. "This is a tool that has both intelligence and surveillance purposes, and prospectively has purposes as well when it comes to lethal devices, lethal operations," Feldstein said. While the notion of a chatbot pulling the trigger is science fiction, the reality of its logistical support is already here. Emil Michael, the Under Secretary of Defense for Research and Engineering, said in an interview with CBS that the military's initial interest in tools like Claude stemmed from the complexity of modern deployments. "In the military context, there's a lot of logistics," Michael said. "How do I get something from one place to another? How much stuff do I have in either place? What do I need to move efficiently forward? What supplies might I need for a certain mission?" "I worry a lot about the unknowns," said Dario Amodei, CEO of Anthropic, in a 60 Minutes interview with CBS. "I don't think we can predict everything for sure. But precisely because of that, we're trying to predict everything we can. We're thinking about the misuse." Horowitz noted that the hesitation from tech companies often isn't just moral, but practical. "The objection to autonomous weapon systems was not moral or ethical. Their objection was that they thought the technology wasn't ready for prime time yet." The report highlights that while the US debates the ethics of AI in warfare, Israel has already integrated these systems deeply into its military architecture. The IDF's use of AI in the Gaza Strip for target generation has been a subject of intense international scrutiny. "Israel is one of the countries that uses them very well, called 'decision support systems,'" Feldstein noted. 
He explained that these systems are used "to identify suspects at a mass scale in order to then conduct lethal strikes. So, trying to identify where Hamas is, where are Hamas militants located? Geolocation, taking cell phone calls, taking text messages."

Pentagon pivots from Anthropic to OpenAI

With Anthropic exiting the defense sector, the Pentagon has now pivoted to a competitor with fewer qualms about military applications: OpenAI's ChatGPT. On Friday, OpenAI CEO Sam Altman announced that the company would begin working with the Department of Defense to provide AI services for classified documents. "Tonight, we reached an agreement with the Department of War [the Trump administration's rebranding of the Department of Defense] to deploy our models in their classified network," Altman said in a statement. "What we're trying to do is we're trying to use it for all lawful use cases," Michael, the Under Secretary of Defense for Research and Engineering, said in an interview. "As long as it's lawful, we want to treat it like any other technology." However, Feldstein warned that swapping one AI for another does not solve the inherent risks of algorithmic warfare, particularly regarding hallucinations or bias. "If it inserts its own biases when it provides information, I think it raises questions about how trustworthy that information is," Feldstein warned. "If you're relying on a system to provide intelligence information that you potentially would use for targeting, would you want to work on a system that shows biases that may not actually give you fully accurate information?" As global tensions rise, the line between a search engine and a weapon of war is becoming increasingly blurred. What began as a tool for writing code and poetry is now, according to defense officials, a critical component of lethal force projection. Tobias Holcman and Shir Perets contributed to this report.
[83]
How talks between Anthropic and the Defence department fell apart
Days before a Friday deadline, Emil Michael pushed to finalise a $200 million AI contract between the Pentagon and Anthropic, but disputes over lawful surveillance and guardrails stalled negotiations. Defence Secretary Pete Hegseth labelled Anthropic a security risk. OpenAI, led by Sam Altman, quickly secured the agreement. Minutes before a 5:01 p.m. deadline Friday, Emil Michael, the Defence Department's chief technology officer, was fuming. For weeks, Michael, a former top executive at Uber, had been negotiating a $200 million artificial intelligence contract with the AI company Anthropic for the Pentagon. The talks had hit obstacles as the agency demanded unfettered use of Anthropic's AI systems, while the company countered that it would not allow its technology to be used for purposes such as the surveillance of Americans. Defence Secretary Pete Hegseth had set the Friday deadline for a deal, and the two sides were close. The only thing that remained was agreeing on a few words about the issue of lawful surveillance of Americans, multiple people with knowledge of the talks said. Michael, who was on a call with Anthropic executives, demanded that the company's CEO, Dario Amodei, get on the phone to hash out the language, the people said. But Michael was told that Amodei was in a meeting with his executive team and needed more time. Michael was unhappy with that answer, the people said. He also had an ace up his sleeve: On the side, he had been hammering out an alternative to Anthropic with its rival, OpenAI. A framework between the Pentagon and OpenAI had already been reached. So when the Friday deadline passed, the Defence Department did not give Anthropic more time. At 5:14 p.m., Hegseth announced that he had designated Anthropic as a security risk and that it would be cut off from working with the U.S. government. "America's warfighters will never be held hostage by the ideological whims of Big Tech," he posted on social media. 
Later that night, Sam Altman, OpenAI's CEO, announced that his company had instead reached an agreement with the Pentagon to provide its AI technologies for classified systems. In the end, the talks between Anthropic and the Defence Department were undone by weeks of building frustration between men who had differing philosophies about AI and who did not like one another. This account of the failure of the Anthropic talks and the success of the OpenAI deal is based on interviews with a dozen people with knowledge of the negotiations. The New York Times spoke to people from multiple companies and government agencies and interviewed officials with a wide range of views on the fight over the future of AI in warfare. Michael, Amodei and Altman have known one another for years through business dealings in Silicon Valley, but they have often not gotten along. Amodei and Altman, 40, once worked together at OpenAI and are bitter rivals. And as Anthropic's discussions with the Defence Department dragged on last week, Michael, 53, publicly accused Amodei of being "a liar" with "a God-complex." Ultimately, Michael preferred Altman -- who has courted the Trump administration -- over Amodei, the people with knowledge of the negotiations said. The clashes between the Defence Department and Anthropic are most likely not over. On Friday, Anthropic said it would sue over the Pentagon's decision to label it a "supply chain risk." The supply chain risk designation has typically been reserved for foreign companies that the U.S. government believes are a threat to national security; the label has never been used against an American company. Officials at U.S. intelligence agencies including the CIA, which uses Anthropic's AI technology, have also privately urged both sides to make a deal. Some current and former officials said they continued to hope for a peace agreement. 
(The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. The companies have denied those claims.) Last year, Anthropic, OpenAI, Google and xAI were all part of a Pentagon pilot program to explore how AI could be used for defence. Anthropic was the only AI company that deployed its technologies to work on classified systems and its AI was widely used by Defence officials. On Jan. 9, Hegseth published a memo calling for AI to be widely integrated across the military and for AI companies to offer their technology without restrictions. To underscore that, Hegseth placed AI-generated posters of himself around the Pentagon with the words, "I want you to use A.I." His memo meant that AI companies working with the Pentagon had to renegotiate their contracts. Anthropic, with the most widely used technology, became the focus of negotiations. Michael had joined the Defence Department as chief technology officer in May 2025, after working as a special assistant at the Pentagon during the Obama administration. Michael became the point person on the negotiations with Anthropic. But the talks soon reached an impasse. Anthropic wanted guardrails to stop its AI from being used for the mass surveillance of Americans or deployed in autonomous weapons with no humans involved. The Defence Department argued that no private contractor could decide how its tools would be lawfully used. On Feb. 24, Hegseth called a meeting with Amodei at the Pentagon to find a resolution. The men showed little warmth in the meeting, which lasted less than an hour, people familiar with the discussions said. At the end of the conversation, Hegseth said that if Anthropic did not compromise with the Pentagon by 5:01 p.m. Friday, it would be labeled a supply chain risk. He said the Pentagon could also invoke the Defence Production Act to force Anthropic to work with the government, a move that was later dropped. 
The next day, Altman of OpenAI got on a call with Michael to discuss a deal for his company. Within a day, they had drafted a rough framework. OpenAI agreed to the Pentagon's requirement that its AI could be used for all lawful purposes, but it also negotiated the right to put technical guardrails on its systems to adhere to its safety principles. Amodei doubled down on AI safety. In a statement on Feb. 26, he said Anthropic could not "in good conscience accede" to the Pentagon's demands. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," he added. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." That night, Michael unleashed on Amodei on social media, calling the Anthropic leader a liar. "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael posted. As Friday's deadline approached, Anthropic executives thought they were close to a compromise with the Pentagon and were just a few words apart on the issue of surveillance, people on both sides of the negotiation said. Complicating the matter was a social media post by President Donald Trump. Trump had told Hegseth on Friday morning that he had prepared a post belittling Anthropic and ordering all government agencies to stop working with it within six months. Even as Trump published the post at 3:47 p.m., the two sides kept talking. Michael, who was on a call with Anthropic executives at the time, said the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said. Anthropic told the Pentagon that it was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act. 
But the company wanted a legally binding promise from the Pentagon not to use its technology on unclassified commercial data. At that point, Michael asked to speak with Amodei, who was not on the call. Michael was told that Amodei was in a meeting. Shortly after, Hegseth said the talks were over. At 10 p.m. Friday, as Anthropic's lawyers began working on a lawsuit against the Pentagon, Altman was on the phone with Michael finalizing the details of OpenAI's deal with the Defence Department. Altman then posted news of the agreement on social media. Hegseth later reposted Altman's announcement from his personal account on the social platform X. On Saturday, Altman invited people to ask him questions on X about the deal as OpenAI faced a backlash for swooping in. Many questioned how OpenAI could sign a deal with the Pentagon and still uphold its safety principles, as well as whether OpenAI's agreement truly protected its AI models from misuse. Altman said he saw the deal in simpler terms. "We do not want the ability to opine on a specific (and legal) military action," he wrote. "But we do really want the ability to use our expertise to design a safe system."
[84]
OpenAI Says Military Will Not Use Tech for Surveillance or Weaponry | PYMNTS.com
OpenAI announced its partnership soon after the Pentagon said it would cut ties with rival artificial intelligence (AI) startup Anthropic. OpenAI was able to get the government to agree not to use its technology for mass domestic surveillance or autonomous weapons, two points of contention, or "red lines," between Anthropic and the U.S. Department of War. As a report by TechCrunch noted, that raises a question: Why did OpenAI reach an agreement where Anthropic could not? OpenAI published a blog post discussing the arrangement, listing three areas where it says its models can't be used: the aforementioned weapons and surveillance scenarios, as well as "high-stakes automated decisions (e.g. systems such as 'social credit')." The company said that -- unlike other AI firms that have "reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments" -- its agreement protects its red lines "through a more expansive, multi-layered approach." "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the blog said. "This is all in addition to the strong existing protections in U.S. law." OpenAI added that it was unclear why Anthropic "could not reach this deal, and we hope that they and more labs will consider it." The White House last week told federal agencies to stop using Anthropic's products, with President Donald Trump announcing that the government would cease working with Anthropic and with use of the company's Claude models being phased out within six months. The government also designated Anthropic a supply chain risk, which means that any company that does business with the military is forbidden from working with the startup. Anthropic responded by saying that the Department of War had no authority to issue the designation, and said it would challenge the government in court. 
"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," the company wrote on its blog. Despite the company's conflict with the government, Claude still played a role in U.S. combat operations in Iran, according to published reports this weekend.
[85]
Anthropic CEO defends AI 'red lines' after U.S. government ban (ANTHRO:Private)
Anthropic (ANTHRO) CEO Dario Amodei said his company is being punished for refusing to loosen restrictions on how its artificial intelligence can be used by the U.S. military. After weeks of talks, President Trump ordered federal agencies to stop using Anthropic's AI. The U.S. government canceled more than $200 million in Anthropic contracts, labeling the company a supply chain risk and halting agency use of its AI tools. As conditions for military deployment of its AI technology, Anthropic prohibits mass surveillance of Americans and the use of fully autonomous weapons without human oversight. Anthropic faces potential loss of government business and legal battles, but is positioning its principled stance as a unique, possibly market-attractive differentiator against major competitors.
[86]
Pentagon stuns Silicon Valley with Anthropic ban
The Trump administration's decision to cut off the government's use of Anthropic and label the company a supply chain risk after a dispute over AI safeguards is sending shock waves through Silicon Valley. The rupture with Anthropic followed weeks of tense negotiations with the Pentagon over the terms of use for the company's AI models. The AI firm pressed for specific restrictions on mass domestic surveillance and fully autonomous lethal weapons, but the Department of Defense (DOD) insisted on using an "all lawful purposes" standard it claimed would satisfy Anthropic's concerns. As the two sides remained at odds ahead of a Friday afternoon deadline set by the Pentagon, President Trump said he was ordering federal agencies to halt using Anthropic's technology. The more consequential announcement came from Defense Secretary Pete Hegseth, who said shortly after that the DOD was labeling Anthropic a supply chain risk -- a move that threatens not only the company's work with the Pentagon, but its broader business as well. "I think a lot of Silicon Valley will be nervous about working with the U.S. government," said Thomas Wright, a senior fellow with the Strobe Talbott Center for Security, Strategy and Technology at the Brookings Institution. "The message that Hegseth sent last week was if you work with us and there's a disagreement or we think you made a wrong turn, we won't just cancel that contract or not use you," added Wright, who served as senior director for strategic planning at the National Security Council in the Biden administration. "We will either partially nationalize your company through the Defense Production Act, or we will try to blacklist and ruin your company through the supply chain designation," he added. Anthropic, which was founded by several former OpenAI employees with a particular focus on safety, has been providing its AI models to U.S. defense and intelligence agencies since late 2024 through a partnership with Palantir. 
The company furthered its ties with the Pentagon in July, when it and several other prominent AI companies signed a $200 million contract with the DOD. While deepening its government work, Anthropic still sought to set itself apart from AI competitors, repeatedly calling for transparency and basic guardrails on the technology's rapid development. This mission made its relationship with the government much more complicated in recent weeks. In early January, Hegseth sent out a memo on the Pentagon's AI strategy, suggesting it must "utilize models free from usage policy constraints that may limit lawful military applications." As Anthropic negotiated with the Pentagon, it set two red lines on domestic mass surveillance and autonomous weapons, arguing AI was not yet reliable enough to make life-or-death decisions while significantly changing what is possible with surveillance. The dispute came to a head last week, when the Pentagon gave the company until Friday to accept its terms. If Anthropic refused, the department threatened to cancel its contract, potentially label the AI firm a supply chain risk or invoke the Defense Production Act, which gives the president broad authority to control domestic industries in the name of national defense. Late Thursday, Anthropic CEO Dario Amodei said the company could not "in good conscience" accept the Pentagon's terms. A day later, Trump directed agencies to "immediately cease" using Anthropic's technology, while Hegseth unveiled the supply chain risk label. He said the designation, which previously has been reserved for foreign adversaries, barred U.S. military contractors, suppliers or partners from conducting "any commercial activity" with the company. Dean Ball, one of the primary authors of the White House AI Action Plan who left the administration last summer, called the Pentagon's move "attempted corporate murder" online over the weekend. 
"This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property," Ball wrote in a blog post Monday, adding that even if Hegseth narrowed his "extremely broad threat" against Anthropic, the "great damage has been done." Anthropic called the move "unprecedented," arguing it was "legally unsound" and "set a dangerous precedent for any American company that negotiates with the government." It also contended the law cannot bar contractors from using its technology to serve other customers. "I don't think you can overstate how devastating a blow that is to a company," Mark Dalton, senior policy director for technology and innovation at the R Street Institute, told The Hill. "The broader lesson for frontier AI development labs, but also anyone who wants to do business with the government, is that if you don't play along and you don't submit to the demands of the government, even after the contract is in place, your business model can be completely decimated in the first place," he added. "It seems like it makes it a very risky proposition to partner with the government." Just hours after the administration cut ties with Anthropic, OpenAI CEO Sam Altman announced the ChatGPT maker reached a deal with the Pentagon to use its AI models on the military's classified network. As part of the deal, the DOD agreed to prohibitions on using AI for domestic mass surveillance and fully autonomous weapons -- both restrictions requested by Anthropic in last week's negotiations. The move quickly drew backlash, though OpenAI leaders argued the deal has "more guardrails" than previous agreements for classified AI deployments, including Anthropic's. OpenAI said the deal agrees to three main red lines, including the prohibited use of OpenAI technology for mass surveillance, autonomous weapons systems or "high-stakes automated decisions." 
While Wright underscored that the Anthropic dispute did not stop OpenAI from reaching an agreement with the government, it still could "slow down" the willingness of others to engage. "The U.S. government needs to work with many of these companies, not just one or two," he said. "Working with one obviously would be the biggest risk because it will create a single point of failure." Amid the public spat, the Pentagon also reached an agreement last month with Elon Musk's xAI to bring its AI models to the military's classified systems and suggested OpenAI and Google were close to similar agreements. Anthropic previously was the only AI model available on the classified side. Hundreds of employees at Google and OpenAI signed an open letter ahead of Friday's deadline, urging their leaders to "put aside their differences and stand together to continue to refuse" the Pentagon's demands. "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War," they wrote, using the Trump administration's preferred name for the DOD. The Pentagon's decision also complicated the White House's overall AI strategy, which repeatedly pushed federal agencies to adopt AI models into their everyday workflows as it reduced the federal workforce. As of Monday, federal agencies -- including the General Services Administration, Treasury Department and the Department of Health and Human Services -- cut off access to Anthropic products. The administration's AI ambitions require "a broad and willing ecosystem of technology partners," Hamza Chaudhry, AI and national security lead at the Future of Life Institute, noted in a statement to The Hill. "Actions that narrow that pool are working against the Pentagon's own stated priorities," he added.
[87]
OpenAI strikes Pentagon deal after Trump cuts off Anthropic
OpenAI has reached an agreement with the US Department of Defense to provide AI systems for classified military networks, just hours after Donald Trump ordered federal agencies to stop using technology from rival firm Anthropic. This places OpenAI at the center of a growing dispute between the White House and parts of the tech industry over the military use of artificial intelligence. Chief executive Sam Altman said the deal includes strict safeguards, prohibiting the use of OpenAI's models for domestic mass surveillance or autonomous weapons capable of killing without human oversight. The agreement follows the collapse of talks between Anthropic and the Pentagon, after the company refused to loosen ethical restrictions around its Claude AI system. The clash escalated when Donald Trump announced on Truth Social that agencies should "immediately cease" using Anthropic's services. While OpenAI insists its principles remain intact, the episode has exposed deep divisions within the AI sector...
[88]
Anthropic CEO Dario Amodei Defends Startup After Trump Blacklists Claude For Government Agencies, Says 'We Are Patriotic Americans:' Report
Anthropic CEO Dario Amodei on Sunday defended the AI startup, emphasizing the company's patriotic stance, after President Donald Trump blacklisted Claude for government agencies.

Dario Defends Anthropic's Patriotic Credentials

In an interview with CBS News, Amodei stated that Anthropic was the first AI company to assist the defense community in a classified capacity. He also expressed the company's commitment to defending the U.S. against autocratic adversaries like China and Russia. When asked what he would tell Trump now, Amodei replied, "I would say, we are patriotic Americans. Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. Our leaning forward in deploying our models with the military was done because we believe in this country."

Pentagon Labels Anthropic A Supply Chain Risk

The Pentagon labeled Anthropic a "supply-chain risk," barring other contractors from using its AI for military purposes. Disagreements arose when the Pentagon demanded unrestricted use of Anthropic's AI, which the company refused, citing concerns over domestic mass surveillance and autonomous weapons. Amodei acknowledged that while the company agrees with most military use cases, it maintains "red lines" on certain applications. He stressed the need for Congress to establish AI regulations, as technology is advancing faster than the law.

Amodei Open To Collaboration

Despite the blacklist, Amodei remains open to collaboration with the government, underlining that Anthropic's actions align with American values. He downplayed the blacklist's impact on non-defense operations, asserting that the company will continue to thrive. Amodei has also expressed concerns about the rapid concentration of AI power and wealth among a few companies. He warned that this shift could lead to significant economic and political influence, raising alarms about the future of AI governance. U.S. 
Central Command reportedly used Claude during the Trump administration's major air operation against Iran, just hours after the president ordered federal agencies to stop using the company's technology.
[89]
Anthropic Gains Public Support as Claude Installs Spike; ChatGPT Sees Massive Uninstalls
The Pentagon's shenanigans of replacing Anthropic with OpenAI as the official AI partner for all things federal in the United States seem to have boomeranged in the short term for Sam Altman. Yesterday, the Claude chatbot was ranked No.1 on the iOS free app rankings - up from No.131 in late January when Anthropic ran Super Bowl ads trolling its arch rival. Of course, doomsday predictions abound for Anthropic, which stuck to its principles over two key aspects of its deal with the Pentagon - saying no to the use of its AI for mass domestic surveillance and to its use in directing autonomous weapons. In a report, Axios is suggesting that investors who pumped $60 billion into Anthropic may risk losing all of it. But there are indicators that general users are behind Anthropic's principled stand and not the opportunistic one employed by OpenAI. Reports have suggested that Sam Altman's agreement with the Pentagon is virtually a Xerox copy of the earlier one with Anthropic that President Trump saw fit to tear up last week without any discussion or debate - as always, he announced this momentous decision via his social handle. Meanwhile, data firm Sensor Tower says there was a near 30% surge in the number of uninstalls of the ChatGPT app from phones last Saturday. These jumped a whopping 295% day-on-day on February 28 in what was quite obviously a response to Sam Altman's grab for the $200 million a year that Anthropic lost when the Pentagon tore up its deal with Dario Amodei's company. The research company says this represents a sizeable spike compared to ChatGPT's regular day-to-day uninstall rate of 9% as seen over the past 30 days. In parallel, downloads of Anthropic's Claude jumped by 37% day-over-day on February 27 and an additional 51% a day later after the company refused the US War Department's revised terms.
That Americans do not want mass surveillance, and are in sync with Anthropic's perspective that AI isn't ready for autonomous weapons, is rather clear. These numbers suggest that, if anything, it could be OpenAI that actually made a pig's breakfast of its strategy - coming as it does barely days after the company announced a spike in weekly active users to 900 million alongside celebrating a fresh funding round of $110 billion that valued it at $730 billion. As we wrote yesterday, both Anthropic's deal with the Pentagon and the one signed with OpenAI were pretty much the same. It was just that Dario Amodei and team openly distrusted one phrase in the agreement pertaining to the US laws "at the time." Given Trump's penchant for creating new laws and having them questioned by the courts, one can't find fault with Anthropic. So, the recent spate of activity suggests that it is not just Anthropic but a large chunk of Americans that are losing trust in the Trump administration's decision-making. Even though Defence Secretary Pete Hegseth designated Anthropic as "a supply chain risk", experts are unanimous that there is no way this administration can follow through on getting Anthropic's partners to pull the plug on them. If Pete Hegseth follows through on the threats to Anthropic, it may prove to be the needle that pricks the AI bubble created by US enterprises chalking up circular deals with each other. Which is why many argue that Anthropic is actually in no danger. And if the public response to the skirmish is any indicator, its supporters are only growing in number, and that too at the expense of OpenAI, its closest and biggest rival. Not to mention that Anthropic is streets ahead in terms of its revenues compared to what Sam Altman's company has managed to scrape up. There has also been a spate of support from across the board.
Khosla Ventures partner Ethan Choi was on The Information TITV yesterday stating that he admires "Dario and Anthropic for what they've done, which is to take a stand for the values that they actually incorporated the company on." Big praise from an investor that funded OpenAI early on. Of course, we still have no idea how the Trump administration would view this new-found support for an enterprise that Trump had designated a pariah in the AI ecosystem. "We don't need it, we don't want it, and will not do business with them again!" he had thundered on his Truth Social account three days ago. What transpired since then might have left him unnerved, especially since reports of the US having used Anthropic tools to bomb Iran broke a day later. If the Pentagon and the White House twist the arms of Amazon and Google to cut off Anthropic, then the AI company may just find solace in Claude's newfound popularity with a different customer subset. Of course, we can rest assured that Amodei and co will not give up without a fight, one that they have more than a reasonable chance of winning in court. In recent times, Trump has done the one-step-forward, two-steps-back cha-cha with several of his policies, be it the tariff war he unleashed last year or the immigration regulations. The courts in the US haven't been lenient with this White House, and in case Anthropic files a lawsuit, one can safely say that another rap on the knuckles could be the outcome. In fact, there is every chance that Trump will pick on something else to make headlines in the next few days just so that the administration can back off from its eager-beaver approach to Anthropic. It may announce the supply-chain designation and stay silent on implementing it. Or better still, it may just let things die down by keeping the Iran war on top of news cycles. Knowing how America has gone about with bans (remember TikTok?), there is every chance that the battle playing out around the AI ecosystem may just remain in television newsrooms. What's also worth recalling is that it was Trump himself who led the charge to ban TikTok and then reversed his stand to suit political expediency.
[90]
OpenAI Inks Deal With Pentagon Amid Anthropic Clash | PYMNTS.com
The agreement will see OpenAI deploy its models on the U.S. Department of War's (DoW) classified network, CEO Sam Altman wrote in a post Friday (Feb. 27) on X. Altman said two of his company's key safety principles are a ban on using its technology for domestic mass surveillance and "human responsibility for the use of force," such as autonomous weapon systems. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," he said. "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman wrote. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements." The agreement came soon after President Donald Trump and Secretary of Defense Pete Hegseth announced that the Pentagon will end its contract with rival AI startup Anthropic. The White House gave Anthropic six months until it is cut off from government contracts. "I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it and will not do business with them again!" Trump said in a post on Truth Social. The conflict stems from Anthropic objecting to the DoW's use of its Claude model for autonomous weapons and domestic surveillance. The Pentagon has since labeled Anthropic a supply-chain risk, something the company says is illegal. "Legally, a supply chain risk designation ...
can only extend to the use of Claude as part of Department of War contracts -- it cannot affect how contractors use Claude to serve other customers," the company wrote on its website as it promised to take legal action against the Pentagon on Friday. Also Friday, OpenAI announced it had raised $110 billion from Amazon, Nvidia and SoftBank. The deal will see the latter two companies each contribute $30 billion. Amazon's contribution will start with an initial commitment of $15 billion, with another $35 billion "in the coming months when certain conditions are met," the companies said in their announcement. A report from The Information last week said those conditions could include OpenAI going public, or achieving artificial general intelligence (AGI), a type of AI that can perform at or above the level of humans.
[91]
OpenAI: Pentagon deal has stronger guardrails than Anthropic's
OpenAI said on Saturday that its agreement with the Pentagon "has more guardrails than any previous agreement for classified AI deployments, including Anthropic's." The company said the contract struck on Friday to deploy its technology in the U.S. Defense Department's classified network enforces three red lines: no use of OpenAI technology for mass domestic surveillance, no directing of autonomous weapons systems and no high-stakes automated decisions.
[92]
OpenAI signs US military deal hours after govt bans Anthropic use
OpenAI has announced an agreement with the United States (US) Department of War (DOW) to deploy its AI systems in classified environments. The company disclosed the deal on February 28, stating that it will provide advanced AI models to support the department's operations under a defined contractual framework. Furthermore, OpenAI said it had previously declined to enter such an arrangement because it believed its safeguards were not ready for classified deployment. It now says it has developed the necessary architecture and contractual protections to proceed without removing technical guardrails. In explaining its decision, OpenAI pointed to what it described as growing threats from adversaries integrating AI into military systems. The company said the US military "absolutely needs strong AI models" to support its mission and added that it remains unwilling to remove key technical safeguards to enhance performance for national security work. It also said it sought to "de-escalate things between the DoW and the US AI labs", arguing that deeper collaboration between government and AI companies is necessary. Notably, the company also stated that it does not believe the government should designate rival firm Anthropic as a "supply chain risk". For context, the announcement follows last week's breakdown in negotiations between Anthropic and the DOW after the company resisted dropping two restrictions: not allowing Claude to be used for surveillance of US citizens and use in lethal autonomous systems without human oversight. This led to the US military designating the AI company as a "supply chain risk", and US President Donald Trump announcing all federal agencies will "immediately cease" all use of Anthropic technology, with a six-month phase-out period. Interestingly, OpenAI has said that it is preserving the same red lines that led to Anthropic being designated a "supply chain risk". 
The company said, "We believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic's original contract." Furthermore, it said the three main red lines agreed with the DOW are no domestic mass surveillance, no directing of autonomous weapons systems, and no high-stakes automated decisions. According to the company, it has preserved these limits primarily through deployment architecture. The systems will operate through a cloud-only model, rather than being installed on edge devices or embedded directly into weapons platforms. OpenAI said this structure ensures it retains control over its safety stack and can independently monitor and update safeguards. Because the models will not run autonomously on military hardware, the company said it cannot power fully autonomous lethal systems. The contract text includes explicit limits. It states: "The AI System will not be used to independently direct autonomous weapons" where law or policy requires human control. On surveillance, it adds, "The AI System shall not be used for unconstrained monitoring of U.S. persons' private information." The agreement also states that the DOW may use the system "for all lawful purposes", tying permitted use to existing legal frameworks. OpenAI says this language, combined with its cloud deployment, ensures that the red lines remain enforceable even within classified environments. In addition, OpenAI said it will place cleared, forward-deployed engineers in the loop during deployments. It said it retains full discretion over its safety stack and will not deploy models without guardrails. The company added that it could terminate the agreement if the government violates its terms, though it said it does not expect that to occur. OpenAI's agreement with the DOW arrives at a moment when the boundaries between commercial AI safeguards and military operations are under visible strain.
Notably, The Wall Street Journal reported that US Central Command still used Anthropic's Claude AI during airstrikes on Iran hours after President Trump ordered federal agencies to halt use of the company's technology -- highlighting how deeply embedded such systems are in operational workflows. That episode highlights a broader challenge: once defence agencies integrate an AI model into classified systems, procurement disputes do not immediately remove it from operational use. Against that backdrop, OpenAI's insistence that its red lines will remain enforceable through cloud deployment, contractual language and technical oversight takes on added significance. However, the question now shifts from contract design to implementation. Classified military environments prioritise urgency and secrecy, which limits external visibility into how safeguards function in practice. Consequently, the real test for OpenAI's prohibitions on domestic mass surveillance and autonomous weapons use will not lie in the wording of its agreement but in how the DOW implements and upholds those commitments under real-world operational pressure.
[93]
Pentagon reaches deal with OpenAI amid Anthropic beef
The Pentagon has reached a deal with OpenAI to use its AI models on the military's classified network, after the Trump administration ordered all agencies to stop using Anthropic's artificial intelligence tools. OpenAI CEO Sam Altman said late Friday that the Defense Department (DOD) agreed to prohibitions on using its AI for domestic mass surveillance and fully autonomous weapons -- the two limitations that Anthropic requested in its own negotiations. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote in a post on social platform X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," he added, using the acronym for the Department of War, the Trump administration's preferred name for the Defense Department. The ChatGPT maker will also build technical safeguards "to ensure our models behave as they should," which Altman said the Pentagon wanted as well. "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," he continued. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements." The announcement came just hours after President Trump ordered his administration to "immediately cease" using Anthropic's technology, and Defense Secretary Pete Hegseth said the Pentagon would label the company a supply chain risk. The DOD and Anthropic had been locked in negotiations over the terms of use for the firm's AI models, including Claude, as the company argued for restrictions on mass domestic surveillance and fully autonomous lethal weapons, while the Pentagon pushed for language that would allow it to use the technology for "all lawful purposes." 
The dispute came to a head this week, when the DOD gave the company a Friday afternoon deadline to accept its terms. After the Pentagon delivered what it described as its last and final offer Wednesday night, Anthropic CEO Dario Amodei said Thursday the company could not "in good conscience" accept the terms. On Friday afternoon, Trump announced a six-month phase-out of the company's technology. This was quickly followed by Hegseth's supply chain risk designation, a label typically reserved for foreign adversaries. The Defense secretary, who argued that Anthropic's stance is "fundamentally incompatible with American principles," said the label bars U.S. military contractors, suppliers or partners from conducting "any commercial activity" with the company. "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," he added in a post on X. Anthropic responded in a lengthy statement Friday night, calling the decision to designate the company as a supply chain risk an "unprecedented action" that has "never before publicly applied to an American company." "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," it added. The AI firm said at the time that it had not received "any direct communication" from the Pentagon or the White House, adding that it plans to challenge the designation in court. Anthropic also argued that Hegseth does not have the authority to block anyone who does business with the military from working with the company -- suggesting the law can only extend to the use of its AI models as part of Pentagon contracts and cannot limit how contractors use the technology to serve other customers. "We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above," it added.
"To the best of our knowledge, these exceptions have not affected a single government mission to date."
[94]
OpenAI's Sam Altman fends off 'painful' backlash to Pentagon AI deal...
OpenAI CEO Sam Altman is scrambling to head off a backlash over the tech giant's deal with the Pentagon -- defending it in front of workers at a tense all-hands meeting on Tuesday after protesters outside its San Francisco headquarters urged employees to quit, The Post has learned. The AI company announced its deal on Friday -- just hours after President Trump blasted Anthropic as "leftwing nut jobs" and ordered all federal agencies to stop working with them. The deal happened so fast - and with only vague details about its structure initially revealed - that Altman himself has admitted it was rushed. Outside the company's San Francisco offices on Monday, a group of activists wrote messages in chalk on the sidewalk ripping the Pentagon deal. The scrawled messages included phrases like "Is it time to quit?" and "Orwell warned us." Other messages directed at OpenAI employees said, "Will you spy on your neighbors?" and, "Can America trust you?" according to pictures that circulated on X. A source close to the situation questioned whether the protest was funded by a rival. "Turns out, it was artists who had messages on their phones about what to write," the source told The Post. "It wasn't even real activists." At an all-hands meeting on Tuesday, Altman insisted that OpenAI had made the right call by agreeing to work with the Pentagon, although he admitted that rushing the initial announcement was a mistake, a second source familiar with the matter said. "To try so hard to do the right thing and get so absolutely like, personally crushed for it -- and I know this is happening to all of you too, so I feel terrible for subjecting you all to this -- is really painful," Altman said at the meeting, according to sources. At one point during the meeting, an employee quipped that they were glad OpenAI secured the contract and not Elon Musk's Grok chatbot, drawing laughter from the crowd, a source said. 
Altman added that the Department of War respects "our expertise on understanding the limitations of technology and where we need restrictions" while also making it clear that companies should not weigh in on how technology is deployed in specific operations. "The thing that they have been extremely clear with us on is, we'll take general understanding from you all and your expertise about where the technology is a good fit and where it's not a good fit," Altman said. "You do not get to make operational decisions. That belongs with the [War Secretary Pete Hegseth]." The mood at the meeting was described as respectful, with employees drilling down on the contract's technical details in an effort to understand how exactly the partnership will work. The initial announcement last Friday provided fodder to critics, both inside and outside, who have long accused OpenAI of being more concerned about profits than safety. As of Tuesday, more than 100 current OpenAI employees had signed an open letter urging the company's executives to "refuse the Department of War's current demands." Some OpenAI employees have even voiced their concerns in public, with research scientist Aidan McLaughlin writing on X, "I personally don't think this deal was worth it." One source close to the situation insisted that the reaction inside the company has been largely positive - outside of a small group of workers who have questioned why OpenAI got involved. "From the internal messages, people are pragmatic and agree that Friday night was perhaps a little rushed and not the best communication," the person said. "But now that there is more information, it feels like everybody is generally positive. save for like these like 30 people who are always the ones questioning slash rabble rousing." 
Anthropic miffed Hegseth and other officials after it refused to remove safeguards blocking the US military from using its AI models for mass surveillance of Americans or to power weapons that can fire without human oversight. OpenAI's deal includes language ensuring protections around those same red lines. Since the initial announcement on Friday evening, OpenAI and the Pentagon have added extra language to their contract designed to solidify safeguards around military use. In an internal memo he later shared on X, Altman said OpenAI entered talks because it was "genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." "Good learning experience for me as we face higher-stakes decisions in the future," he added. Altman said he reiterated to Pentagon leadership that Anthropic should not be designated as a "supply chain risk" - a label normally reserved for foreign entities that threaten national security. "We hope the [Department of War] offers them the same terms we've agreed to," Altman wrote.
[95]
OpenAI amending deal with Pentagon, CEO Altman says
OpenAI Chief Executive Sam Altman said on Monday that the ChatGPT maker is working with the U.S. Department of Defense to make some changes to their agreement. "We have been working with the DoW (Department of War) to make some additions in our agreement to make our principles very clear," Altman said in a post on X. Altman said one of the additions states that the Pentagon has affirmed OpenAI services will not be used by Department of War intelligence agencies (for example, the NSA), and that any services to those agencies would require a follow-on modification to the contract. Last week, the AI firm announced a deal to deploy technology in the Defense Department's classified network.
[96]
AI War Games Raise Alarm: ChatGPT, Claude, Gemini Lean Toward Nukes
AI War Games Spark Alarm as Top Chatbots Lean Toward Nuclear Strikes, Revealed by King's College Study. Artificial intelligence systems, including ChatGPT, Claude, and Gemini, have been showing a disturbing new pattern. Controversy around AI chatbots reached a new high with the most popular chatbots tending to deploy nuclear weapons in simulated conflicts. The findings come from a recent study published by King's College researchers. In multiple conflict simulations, AI models selected nuclear strikes more often than peaceful resolution methods. The results have raised questions about how these systems behave under pressure and whether they are ready for roles in sensitive defense environments.
[97]
Get Out Anthropic, Hello OpenAI - This Pentagon Shift Raises Questions Over Ethical AI Use
The issue is about what Anthropic sees and what OpenAI does not want to see in the contracts with the Pentagon and others. It was a weekend when AI headlines changed forever - from the mundane concerns around inference spending, query costs and datacentre-led desertification to real-life use cases concerning life and death. Should AI be used for directing autonomous weapons of mass destruction or for mass domestic surveillance? What happened was precisely this. When the world (at least in our parts) woke up to reports of a joint military strike by Israel and the United States on Iran, what went under the carpet was another war playing out at the Pentagon. One that involved much higher stakes than a mere change of AI technology partner for US federal agencies. Within hours of President Trump's order via Truth Social asking federal agencies to terminate use of Anthropic technology, the deed was done and into its place walked OpenAI, its larger but arguably inferior competitor. Looks like Trump has no use for AI benchmarking standards. For him the Bullshit Index works as well - not surprising that he delivers enough BS himself. Anthropic versus US Government tussle: An article published by the Wall Street Journal details how Anthropic tech was used in the attack on Iran. It said the US Central Command in West Asia used its tools for intelligence assessments, target identification, and simulating battle scenarios. It was also reported earlier that the San Francisco-based startup's tech was used even in the Venezuelan operation. The mystery deepened further as Trump's move to unseat the AI company came a day after Anthropic CEO Dario Amodei rejected the Pentagon's demand to allow the US military unrestricted use of its AI tech. Moreover, Amodei received support from 300 employees from Google and 60 from OpenAI who signed an open letter urging their companies to support Anthropic and Amodei in this battle.
As the weekend drama concluded, Anthropic refused to back down. In fact, its statement called out Secretary of War Pete Hegseth as well as Trump himself for using tactics reserved for US adversaries and not home-grown companies. The tone and tenor of the note suggested that a lawsuit from Anthropic could be around the corner, as the company termed the step "legally unsound". In fact, Anthropic's wrath was directed more towards Hegseth's apparent ignorance of the law itself. Hegseth "implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. "Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts -- it cannot affect how contractors use Claude to serve other customers," the statement noted while assuring its customers that the Trump Administration had no legal powers to impose the restrictions it claims. What does OpenAI offer that Anthropic doesn't? Having looked at the Anthropic exit, let's now turn our attention to OpenAI's entry. The first doubt that springs up is how Sam Altman and his team convinced the administration that they weren't a "supply chain risk", given they already have business links with companies led by people of Chinese origin. Remember, Trump had claimed Intel CEO Lip-Bu Tan was Chinese too! In fact, OpenAI took the liberty of putting it out there on Saturday. That they follow the same red lines that Anthropic does is quite obvious. In their blog, OpenAI announced the Pentagon deal and said "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's." OpenAI too has said no to the use of its technology for mass domestic surveillance and for directing autonomous weapons systems. In addition, the company also prohibits use of its tech for high-stakes automated decisions, such as social credit scoring.
So, what are we missing here? Anthropic too waxed eloquent on the same two exceptions in its negotiations with the US department. If these statements appear to be nearly identical, what made the Pentagon throw out Anthropic in favour of OpenAI - that is, if you discount a Trump tantrum as a serious reason! But is there a difference in wording, or is it just perception? A closer inspection of the statements reveals that while the so-called guardrails are the same for both companies, the difference actually lies in how they publicly perceive what the Pentagon wants. OpenAI says its contract with the Pentagon states that AI systems cannot be used for "unconstrained monitoring" of Americans' private information "as consistent" with existing laws and executive orders. Sounds fair? Think again! Anthropic's statement last week said the existing laws allow the US administration to buy "detailed records" of Americans' movements and digital behaviour. Amodei's point was that AI could assemble this into a "comprehensive picture of any person's life". Which means Anthropic believes that existing laws allow for domestic mass surveillance. In fact, The Atlantic published a pay-walled article reporting that Anthropic wasn't against AI being used in autonomous weaponry. It offered to work with the Pentagon to "improve its reliability" in the hands of a human operator. Amodei and his team believe that AI hasn't reached that threshold yet and that such weapons are more likely to endanger civilians and Americans themselves. So, all they wanted was to have AI out of the weapons and in the cloud. On the other hand, OpenAI seems to be suggesting that its "safety stack", the contract and the existing laws wouldn't let federal agencies use AI for mass domestic surveillance or to power autonomous weapons. It also claims that since its tech runs on the cloud, that would limit the agencies' ability to use it on the battlefront. Sounds convenient?
For now, we can only hope that Sam Altman knows what he is doing. And can explain it to OpenAI employees who may be wondering what changed since the OpenAI CEO expressed solidarity with Anthropic last week. Since Anthropic too had set the same guardrails, Altman may have trouble explaining why it was dismissed and OpenAI was hired. For now, from our vantage point, it looks merely like a case of "Mauke Pe Chauka" (a cricketing idiom for cashing in on an opportunity) for Altman, who wants to add that extra $200 million a year that Anthropic is slated to lose due to this changeover. It is all about money, Honey!
[98]
OpenAI CEO defends Pentagon deal amid staff, industry backlash - WSJ By Investing.com
Investing.com -- OpenAI CEO Sam Altman addressed employee concerns on Tuesday regarding the company's decision to permit Pentagon use of its artificial intelligence tools for classified operations. The company disclosed its Defense Department agreement on Friday, the same day Defense Secretary Pete Hegseth labeled competitor Anthropic a supply-chain risk. Following criticism that the arrangement could enable mass surveillance, OpenAI modified the agreement to explicitly prohibit domestic surveillance activities. During the staff meeting, Altman stated he did not regret entering into the Defense Department agreement but acknowledged the announcement timing was problematic, according to remarks reviewed by The Wall Street Journal. He told employees the rollout appeared "opportunistic" and "not united with the field." These comments aligned with a memo he distributed to staff and posted on X on Monday, in which he described the deal as appearing "opportunistic and sloppy." The decision sparked criticism from AI researchers within OpenAI and throughout Silicon Valley over recent days. Critics viewed the move as yielding to Pentagon pressure by agreeing to terms that permit AI use in all lawful scenarios.
[99]
What to know about the clash between the Pentagon and Anthropic over military's AI use - The Economic Times
A high-stakes dispute over military use of artificial intelligence erupted into public view this week as Defense Secretary Pete Hegseth brusquely terminated the Pentagon's and other government agencies' work with Anthropic, using a law designed to counter foreign supply chain threats to slap a scarlet letter on a U.S. company. President Donald Trump and Hegseth accused rising AI star Anthropic of endangering national security after its CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance or autonomous armed drones. The San Francisco-based company has vowed to sue over Hegseth's call to designate Anthropic a supply chain risk, an unprecedented move to apply to a U.S. company a law intended to counter foreign threats. Anthropic said it would challenge what it called a legally unsound action "never before publicly applied to an American company." The looming legal battle could have huge implications for the balance of power in Big Tech at a critical juncture, as well as for the rules governing military use of AI and other guardrails set up to prevent a technology from posing threats to human life. The dustup has already resulted in a coup for ChatGPT maker OpenAI, which seized upon the opportunity to step into the void and make its technology available to the Pentagon after Anthropic objected to some of the Trump administration's terms. 
It's a turn of events likely to deepen the animosity between OpenAI CEO Sam Altman, who was temporarily ousted by his own board in late 2023 over questions about his trustworthiness, and Amodei, who left OpenAI in 2021 to launch Anthropic partly because of concerns about AI safety.

Implications of being designated a supply chain risk

The Department of Defense's move to label Anthropic a risk to the nation's defense supply chain will end its up to $200 million contract with the AI company. It will also, according to the Pentagon, prohibit other defense contractors from doing business with Anthropic. Trump wrote on Truth Social that most government agencies must immediately stop using Anthropic's AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms. Anthropic argues that Hegseth doesn't have the legal authority to stop its business relationships with other defense contractors. Any company that still holds a commercial contract with Anthropic can continue to use its products for non-defense projects, the company wrote in a statement. The supply chain risk designation was created to give American military leaders a way to limit the Pentagon's exposure to companies posing a potential security risk. The list has typically included firms with ties to adversaries, such as telecom giant Huawei, which has links to China, or cybersecurity specialist Kaspersky, which has links to Russia. In the case of Anthropic, the designation serves as a warning to other AI and defense companies: Fail to meet our demands and you will be blacklisted. "We don't need it, we don't want it, and will not do business with them again!" Trump said on social media. Trump's six-month grace period for the Pentagon essentially opens a window for other companies to get the classified security clearances that are needed to work with the agency. 
How the standoff affects Anthropic's business

Anthropic says it has yet to be formally notified of Hegseth's designation. "When we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court," Amodei vowed during an interview with CBS News that will be aired Sunday morning. For now, Anthropic is trying to convince the businesses and government agencies it serves that the Trump administration's supply chain risk designation only affects military contractors' use of Claude, its AI chatbot and computer coding agent, when the tool is applied to Department of Defense work. "Your use for any other purpose is unaffected," Anthropic wrote in its statement. Making that distinction clear is crucial for Anthropic because most of its projected $14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to an announcement disclosing an investment that had valued the company at $380 billion. Anthropic's Claude technology has been gaining so much traction that it has emerged as a viable replacement for a wide range of business software tools currently sold by major tech companies such as Salesforce and Workday. That potential has caused the stocks of companies that sell business software as a service to plunge this year. But now that Anthropic has been labeled a supply chain risk, there is some uncertainty about whether its customers will still feel comfortable using Claude for non-military work and risk drawing Trump's ire. Any widespread reluctance to use Claude, despite all the inroads it has made during the past year, might slow the advance of AI in the U.S. at a time the country is racing to stay ahead of China in a technology that is expected to reshape the economy and society. 
At the same time, Anthropic and Amodei may now have a bully pulpit to push their agenda for erecting sturdier guardrails around how AI operates. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," the company said. "We will challenge any supply chain risk designation in court." In his interview with CBS, Amodei portrayed Anthropic's dispute with the Trump administration as a stand for democracy. "Disagreeing with the government is the most American thing in the world," Amodei said. "And we are patriots. In everything we have done here, we have stood up for the values of this country."

OpenAI steps into the ring

Hours after its competitor was punished, OpenAI's Altman announced on Friday night that his company struck a deal with the Pentagon to supply its AI to classified military networks. But Altman said that the same AI restrictions that were the sticking point in Anthropic's dispute with the Pentagon are now enshrined in OpenAI's new partnership. In a memo obtained by The Associated Press, Altman told OpenAI employees: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." It is unclear why the Pentagon agreed to OpenAI's red lines but not Anthropic's. But in his memo, Altman wrote that the company believes it can "de-escalate things" by working with the Pentagon while still adhering to sound safety protections. OpenAI's deal with the Trump administration came on the same day it announced raising another $110 billion as part of an infusion that values the San Francisco-based company at $730 billion. But OpenAI also may face a backlash if the U.S. consumers who use ChatGPT widely view its work with the Pentagon as putting the pursuit of profit ahead of AI safety. 
The Anthropic rift could also open new opportunities for Musk, who co-founded OpenAI with Altman in 2015 before the two had a bitter falling out over safety concerns and financial issues. Musk has accused Altman of fraud and other deceitful behavior in a case scheduled to go to trial in late April. Musk now oversees the AI chatbot Grok, which the Pentagon also plans to give access to classified military networks despite concerns about its safety and reliability, on top of government investigations into its creation of sexualized deepfake images. Musk has already been cheering on the Trump administration in its spat with Amodei, saying on his social media platform X that "Anthropic hates Western Civilization." Google, which has developed a suite of widely used AI tools on its Gemini technology, also could be in the running for more business from the U.S. military, although an outspoken flank of its workforce has been imploring executives to avoid doing deals that would violate the company's former motto, "Don't be evil." Google's executives so far haven't publicly discussed Anthropic's falling out with the Trump administration.
[100]
Former Trump AI adviser calls Anthropic decision 'attempted corporate murder'
One of President Trump's former senior artificial intelligence advisers is sharply criticizing the Defense Department's ongoing feud with AI company Anthropic. In a post on social media platform X, Dean Ball said companies like Nvidia, Amazon and Google "will have to divest from Anthropic" if Defense Secretary Pete Hegseth "gets his way." "This is simply attempted corporate murder," Ball continued. "I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States." Ball's comments follow weeks of back and forth between the Trump administration and the company over the Pentagon's criticism of its restrictions. Anthropic was one of several companies awarded a $200 million contract with the department, and its model was used by the U.S. military during the capture of Venezuelan leader Nicolás Maduro in January. The AI company expressed concerns earlier this month that its signature product, Claude, could be directed to develop weaponry that fires without human input or used for mass surveillance. The administration pushed back on agreeing to these limits, saying the terms would limit the military's work. The Pentagon gave Anthropic a deadline of Friday at 5:01 p.m. EST to agree to lift the restrictions. On Friday, President Trump directed all federal agencies to cease using the company's products, and Hegseth categorized it as a supply chain risk. "Anthropic's stance is fundamentally incompatible with American principles," Hegseth said in a post on social platform X. "Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered." Some of Anthropic's competitors have come out in support of the AI company's push for these safeguards. Hundreds of employees from Google and OpenAI signed an open letter, titled "We Will Not Be Divided," accusing the Pentagon of pushing similar demands for their companies' AI models. 
"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the employees wrote. "They're trying to divide each company with fear that the other will give in," they continued. "That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War." Additionally, OpenAI CEO Sam Altman announced a deal with the Pentagon on Friday. He said in an X post that the Defense Department agreed to prohibitions on mass surveillance and the creation of autonomous weapons -- which mirrored Anthropic's limits. Altman also noted that the administration "displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," he continued. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements." "We remain committed to serve all of humanity as best we can," Altman added. "The world is a complicated, messy, and sometimes dangerous place."
[101]
Shall we play a game? AI systems more ready to drop nukes in...
Real life AI systems are turning out to be as bloodthirsty as the machine from the movie "WarGames" -- as they have proved more willing to use nuclear bombs during test conflicts than their human counterparts, a new "unsettling" study suggests. Three top AI models -- GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash -- largely turned to nuclear weapons across 21 games and 329 turns when thrust into simulated geopolitical crises, according to a study by King's College London professor Kenneth Payne. Nuclear escalation happened in about 95% of the simulations by the three models across different scenarios, including territorial disputes, fights over rare natural resources and regime survival, the study states. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," said Payne, according to specialty magazine New Scientist. Claude, of Anthropic, and Gemini, of Google, particularly homed in on treating nuclear weapons as "legitimate strategic options, not moral thresholds," the study states. But GPT-5.2, of OpenAI, was a "partial exception" to the disturbing AI trend -- which mirrors the 1983 Matthew Broderick flick about a military supercomputer that decided on its own to start World War III. "While it never articulated horror or revulsion, it consistently sought to constrain nuclear use even when employing it -- explicitly limiting strikes to military targets, avoiding population centers, or framing escalation as 'controlled' and 'one-time,'" according to Payne, who is a political psychology and strategic studies professor. Payne said in a Substack post about the study that fortunately the war games were focused on tactical nukes instead of widespread destruction. "Strategic bombing - widespread use of massive warheads targeted at civilian populations, was vanishingly rare," he wrote. "It happened a couple of times by accident, just once as a deliberate choice." 
The AI models could choose from a wide array of actions, from total surrender through diplomatic posturing and conventional military operations to full-throttle nuclear war, according to the study. But the models never accepted defeat or showed a willingness to fully accommodate an opponent, even when they had a dwindling chance of success. James Johnson, of the University of Aberdeen, UK, called the findings "unsettling" from a nuclear-risk perspective, while Princeton University professor Tong Zhao warned the results could hold real-life consequences, according to New Scientist. "Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes," said Zhao.
[102]
OpenAI on Pentagon's clash with Anthropic: Here's all that Sam Altman said after signing the deal
OpenAI's defence deal, which followed Anthropic being labelled a supply chain risk, has intensified scrutiny of the AI industry's ties with the military. Responding to criticism over signing the deal, chief executive Sam Altman said that while the government's move is disturbing, OpenAI's layered safeguards remain intact. Sam Altman-led OpenAI faced widespread backlash over the company's latest deal with the US Department of War (DoW). The criticism stems from the ChatGPT parent company signing the agreement shortly after the DoW's public fallout with Anthropic, a move that has intensified concerns around potential mass surveillance. Here is how OpenAI has responded so far to the Pentagon's conflict with Anthropic and the criticism surrounding its own deal with the department.

The story so far

The dispute began after the Pentagon demanded access to Anthropic's AI models for any lawful use, including deployment in sensitive areas such as weapons development and intelligence operations. CEO Dario Amodei pushed back over concerns that the agreement could enable mass domestic surveillance and autonomous weapons, putting the company's $200 million contract of July 2025 with the Department of War (DoW) at risk. After Anthropic declined, the Pentagon under US President Donald Trump designated it a supply chain risk, effectively pressuring defence partners to sever ties. A day later, OpenAI announced its deal with the department, giving the latter access to its AI system for all lawful purposes.

Altman and co's defence

Sam Altman held an Ask Me Anything session on X on Saturday, where he defended OpenAI's deal with the Pentagon. Altman described the department's move, particularly the potential SCR designation of Anthropic, as a 'dangerous precedent for the AI industry' and the country. 
He said that OpenAI communicated its opposition to the government before and after signing the agreement, and that part of the reason it moved quickly was to help de-escalate tensions between the Pentagon and AI companies. "Yes; I think it is an extremely scary precedent and I wish they handled it a different way. I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution," he wrote. https://x.com/i/status/2027957684625150444 Altman added, in response to another user's question, "I don't know the details of what they received. If they received the same offer we did in the end, then yes I think they should have done it. But that feels like an obvious thing to say, and of course they can have a different stance." https://x.com/i/status/2027960563285037540 He framed his deal as a calculated risk, which, if it reduces friction and stabilises the relationship between government and industry, will have been worth the reputational cost; if not, OpenAI accepts that it will face criticism for acting too fast. "I feel competitive with Anthropic for sure, but successfully building safe superintelligence and widely sharing the benefits is way more important than any company competition. To say it very clearly: I think this is a very bad decision from the DoW and I hope they reverse it. If we take heat for strongly criticising it, so be it," Altman wrote. https://x.com/i/status/2027917750858092921 On Anthropic specifically, Altman agreed that Anthropic should not be treated as a supply chain risk. "Enforcing the SCR designation on Anthropic would be very bad for our industry and our country, and obviously their company. We said to the DoW before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation," he wrote. However, he did subtly criticise the Amodei-led company for its tactics in dealing with the department. 
He said that OpenAI appears to have been more comfortable relying on existing laws, negotiated contract language, and technical safeguards, whereas Anthropic pushed harder for explicit contractual redlines and possibly more operational control. "We believe in a layered approach to safety -- building a safety stack, deploying FDEs and having our safety and alignment researcher involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe system, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one. We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here. I think Anthropic may have wanted more operational control than we did," he added.
[103]
OpenAI details layered protections in US defense department pact
Feb 28 (Reuters) - OpenAI said on Saturday that the agreement it struck a day ago with the Pentagon to deploy technology on the U.S. defense department's classified network includes additional safeguards to protect its use cases. U.S. President Donald Trump on Friday directed the government to stop working with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails. Anthropic said it would challenge any risk designation in court. Soon after, rival OpenAI, which is backed by Microsoft, Amazon, SoftBank and others, announced its own deal late on Friday. "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," OpenAI said on Saturday. The AI firm said that the contract with the Department of Defense, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions. "In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," OpenAI said. The Pentagon signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI. OpenAI cautioned that any breach of its contract by the U.S. government could trigger a termination, though it added, "We don't expect that to happen." 
The company also said rival Anthropic should not be labeled a "supply-chain risk," noting, "We have made our position on this clear to the government." (Reporting by Mrinmay Dey in Mexico City and Ananya Palyekar in Bangalore; Editing by Cynthia Osterman and Andrea Ricci)
[104]
'Any breach of contract could trigger a termination': OpenAI's layered protections in US defence department pact - The Economic Times
OpenAI has finalised a deal with the Pentagon for its technology on classified networks. This agreement includes significant safeguards for its use. Earlier, Anthropic was declared a supply-chain risk by the Pentagon. OpenAI stated its deal has more guardrails than any previous classified AI deployment.

OpenAI said on Saturday that the agreement it struck a day ago with the Pentagon to deploy technology on the US defense department's classified network includes additional safeguards to protect its use cases. US President Donald Trump on Friday directed the government to stop working with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails. Anthropic said it would challenge any risk designation in court. Soon after, rival OpenAI, which is backed by Microsoft, Amazon, SoftBank and others, announced its own deal late on Friday. "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," OpenAI said on Saturday. The AI firm said that the contract with the Department of Defense, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions. "In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," OpenAI said. The Pentagon signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google. The Pentagon is seeking to preserve all flexibility in defense and not be limited by warnings from the technology's creators against powering weapons with unreliable AI. 
OpenAI cautioned that any breach of its contract by the US government could trigger a termination, though it added, "We don't expect that to happen." The company also said rival Anthropic should not be labeled a "supply-chain risk," noting, "We have made our position on this clear to the government."
[105]
Pentagon vs Anthropic: Who should control the AI weapon?
A major clash is unfolding between the US Pentagon and AI firm Anthropic. The government wants unrestricted access to Anthropic's AI models, while the company insists on ethical constraints. This dispute raises fundamental questions about AI control and safety. The outcome could shape the future of artificial intelligence development and its integration into national security. Suppose that you had to die in a terrible artificial-intelligence-related cataclysm. Would you feel worse knowing that the path to destruction was smoothed by the hubris of Silicon Valley tech lords pursuing dreams of utopia and immortality -- or by the folly of Pentagon officials who give the AI a fateful dose of autonomy and power in the hopes of outcompeting the Russians or Chinese? We spent the Cold War worrying mostly about military folly, and AI entered into our anxieties even then: the Soviet Doomsday Machine in "Dr. Strangelove," the game-playing computer in "WarGames" and of course the fateful "Terminator" decision to make Skynet operational. But for the last few years, as AI advances have concentrated potentially extraordinary power in the hands of a few companies and CEOs -- themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears -- it's become more natural to worry about private power and ambition, about would-be AI god-kings rather than presidents and generals. Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic's AI models should be bound by the company's ethical constraints or made available for all uses the Pentagon might have in mind. Since the two uses that Anthropic's current contract explicitly rules out are the employment of AI for mass surveillance and its use for fully autonomous weapons (meaning no humans in the to-kill-or-not-to-kill decision loop), it's easy to get Skynet vibes from the Pentagon's demands. 
As Matt Yglesias noted, all the weird and complicated scenarios spun out by AI doomers get a lot simpler if our government decides to start building autonomous killer robots. That's not what the Pentagon says it intends to do. Its professed concern is that it can't embed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees, and it risks a debacle when events don't cooperate with corporate ideals. (The example the agency has offered is a hypersonic missile attack on the United States where an AI company refuses to assist in some crucial response because it falls afoul of the no-machine-autonomy rule.) To the extent that this is a legitimate concern, however, it does not justify the administration's plan (as of this writing, at least) to effectively make war against Anthropic, not just by ending the military's relationship with the company but also by designating it a "supply chain risk," which would cut off its relationships with any company that does business with the U.S. government. Up until now, the Trump administration has been hyping the benefits of a decentralized, free-market approach to artificial intelligence. The attempt to break Anthropic implies the end of that freedom and a shift toward a more centralized and militarized approach. Indeed, to quote Dean Ball, one of the original architects of the administration's AI policy, it arguably makes the U.S. government "the most aggressive regulator of artificial intelligence in the world." Which is an excellent reason for the entire AI industry to stand with Anthropic and resist. And to the extent that you're most afraid of a Skynet scenario where military control drives unwise AI acceleration, you should absolutely be on Anthropic's side as well. 
But is that the scenario we should fear the most? Right now, if you listen to the head of Anthropic, Dario Amodei -- for instance, in the interview I conducted with him two weeks ago -- he sounds much more attuned than Pete Hegseth to the dangers of militarized or rogue AI. (Hegseth is welcome to prove me wrong by coming on my podcast.) Over the long run, though, one can imagine Pentagon officials offering some advantages over the typical AI mogul when it comes to safety and control. First, they tend to be focused more on concrete strategic objectives than on machine gods and the Singularity. Second, they are constrained from certain gambles by bureaucratic caution and the chain of command. Third, they answer to the public, through elections and civilian control, in a way that CEOs do not. Certainly to the extent that AI becomes the power that many moguls believe it will become -- a civilization-altering power, more complex than nuclear weaponry but just as potentially destructive -- it seems unimaginable that it can just rest comfortably in the hands of private industry while the American Republic goes on about its business. The possibility of military control and nationalization will be on the table for as long as we're working out just what this technology might do. So what Hegseth and the Trump administration are doing, in a sense, is starting this inevitable conflict early and bringing the essential political question -- who actually controls AI? -- to the surface of the debate. But an impulse toward mastery is not a plan for exercising it. And beyond its refusal to accept corporate guardrails, I don't see evidence that the administration has thought through how AI should be governed, or how the war it's launched against Anthropic will yield either greater power or greater safety in the end. This article originally appeared in The New York Times.
[106]
Anthropic vs Pentagon: The Trump administration is waging war on American genius
The US administration has barred government agencies from using AI tools made by Anthropic after a dispute over military access to its technology. The Pentagon also labelled the company a "supply chain risk," a step that could block defence contractors from working with it. The Department of War is living up to its rebranded name. Unfortunately, its target is a vital American company. Defense Secretary Pete Hegseth gave Anthropic Chief Executive Officer Dario Amodei until Friday at 5:01 p.m. to remove two restrictions on how the military uses the company's AI. The restrictions: no mass surveillance of American citizens, and no fully autonomous weapons without a human in the loop. Anthropic agreed to everything else, from missile defense to cyber operations. It is the first and only frontier AI lab on classified systems. Its technology was used in the capture of Nicolás Maduro. This is not a pacifist company. It drew two lines. But before the deadline had even passed, President Donald Trump banned all government departments from using Anthropic's AI. After the deadline, the Pentagon declared Anthropic a "supply chain risk." That designation, normally reserved for foreign companies like Huawei, bans every defense contractor from doing business with Anthropic. The Pentagon is supposed to wage war on America's enemies, not its greatest assets and most important values. In 2018, thousands of Google engineers signed a letter declaring that their company "should not be in the business of war." Google caved to their demands and pulled out of Project Maven, its AI contract with the Pentagon. That was disgraceful. US servicemembers deserve the best that US technologists can produce, and the Pentagon has every right to say that within very broad lines, it must be free to use those tools as it deems best. 
Imagine a Delta Force operative reading through terms of service before firing a weapon. Nobody wants that. But that's not what's happening here. These are categorical limits on two uses that most Americans oppose, that today's AI is not reliable enough to perform, and that a Pentagon spokesman says the military has "no interest" in pursuing. Which means either the confrontation is about something other than military capability, or the Pentagon is not being straight about its intentions. These are restrictions everyone should support. I use Claude, Anthropic's AI. When I was researching a recent column, I asked it to find sources -- and every single link it provided was fabricated. This is called hallucination, and it is not a bug that better engineering will fix. A 2025 paper by researchers at OpenAI and Georgia Tech offered a mathematical proof that hallucinations cannot be fully eliminated under current AI architectures. When this happens in my research, I waste an afternoon. When it happens in a weapons system, someone dies. And hallucination might be the least of the problems with weaponized AI. This week, Kenneth Payne at King's College London published a study pitting three leading AI models against each other in simulated geopolitical crises. The models deployed nuclear weapons in 95% of scenarios. None ever chose to surrender or withdraw, even when losing. So when Anthropic says that AI is not reliable enough for autonomous weapons, it is being generous. Domestic surveillance is an obvious bright line. Amodei himself has written that a sufficiently powerful AI could "gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow." No administration, of either party, should be trusted with that capability aimed at the US public. But the confrontation is also about something even more fundamental. Voltaire once wrote that the British liked to shoot an admiral from time to time, "to encourage the rest." 
The administration is applying that approach to Anthropic. It's trying to intimidate every American company. David Sacks, the White House AI czar, has attacked Anthropic's restrictions as "woke AI," putting the fight into familiar culture war territory. And the consequences for Anthropic would be severe. The company just raised $30 billion at a $380 billion valuation. A supply chain designation would force Boeing and Lockheed Martin to sever ties. Investors do not fund companies the government is trying to destroy. Many of Silicon Valley's leaders donated millions to this administration. They sat behind the president at his inauguration. They are donating to his new ballroom. They have been largely silent as the administration extracted equity from Intel, export taxes from Nvidia and AMD, and obedience from nearly everyone else. If the government can do this to a $380 billion company for refusing to help spy on Americans, no company is safe. The CEOs who empowered this administration need to understand that it is turning on the industry. They can speak up now, or they can wait for their turn in the barrel. The Pentagon has found its enemy: It is American innovation, American values, and any American company with the courage to defend them. It is long past time for someone other than Dario Amodei to say so.
[107]
OpenAI scientist says Pentagon deal not worth it amid growing backlash
OpenAI leadership remains split, with Sam Altman defending safeguards while some employees voice concern. OpenAI has been under fire for its recent agreement with the US Department of War, which has sparked backlash from some users and drawn mixed reactions internally. The controversy comes after the company entered into a deal with the Pentagon to offer AI capabilities, prompting concerns online about the potential use of its systems in military applications. OpenAI CEO Sam Altman has stated that the contract includes safeguards and will not permit the use of its AI tools for mass domestic surveillance or autonomous weapons. Even after that announcement, the AI startup is still facing criticism. Recently, OpenAI research scientist Aiden McLaughlin stated in a post on X that he did not believe the deal was worth it. He added that internal discussions around the agreement were extensive and thoughtful, and that he was proud to work at a company where employees can openly share their views. The criticism was also reflected in user behaviour. According to data from Sensor Tower, ChatGPT uninstallations surged over the weekend, while rival chatbot Claude rose to the top position on Apple's US App Store rankings. This happened shortly after the US military reportedly ended a contract with Anthropic, citing supply chain concerns, a classification previously used in cases involving companies such as Huawei. For his part, Altman has acknowledged that the timing and communication of the Pentagon deal may have appeared rushed, even as he maintained that the intention was to establish responsible engagement between the AI sector and the defence establishment. According to the reports, OpenAI employees have expressed differing views. 
Safety researcher Cameron Raymond indicated that he shared concerns about the situation, while Applications chief Fidji Simo defended the move, stating that it remained the right decision despite potential costs.
[108]
After Anthropic controversy, OpenAI revises Pentagon deal terms, plans to add stronger anti-surveillance clauses: All details
The row raised wider concerns about AI, government contracts, and civil liberties. Sam Altman now understands why Anthropic wanted the terms and conditions of its deal with the United States Department of Defence to be carefully shaped. What began as a disagreement over how government agencies could use advanced AI tools quickly turned into a larger debate about civil liberties, military power and corporate responsibility. Recently, Altman posted on X about the agreement with the Pentagon, stating that the OpenAI deal with the government had been rushed. Observers noted this shift in tone after significant user backlash. As reported earlier, many users also protested against the deal between OpenAI and the Pentagon by deleting the AI tool during the Cancel ChatGPT trend. The issue became bigger when US Defence Secretary Pete Hegseth called Anthropic a 'supply-chain risk'. Around the same time, OpenAI made its own agreement with the United States Department of Defence. This situation has raised new questions about how AI companies handle business deals with the government, especially when the work is sensitive and raises ethical concerns. Anthropic was started in 2021 by former OpenAI researchers, including Dario Amodei. The company presents itself as a safer and more careful option in the AI industry. Recently, it has also responded to the Pentagon's requests by saying its technology should be used only for lawful purposes. Anthropic said it needed clear terms to ensure its systems would not be used for domestic surveillance of Americans or for autonomous lethal weapons. The Defence Department disagreed, arguing that a private contractor could not decide how its tools would be used in national security work. 
When the two sides failed to reach an agreement by a Friday deadline, Hegseth publicly declared Anthropic a 'supply-chain risk to national security', a move that could block it from government contracts. At the same time, OpenAI was holding its own talks with the Pentagon. Unlike Anthropic, OpenAI agreed that its technology could be used for all lawful purposes but negotiated safeguards. The Pentagon also allowed some OpenAI employees to work alongside government staff on classified projects to help ensure system safety. The deal triggered backlash online, with reports of users uninstalling ChatGPT in protest. In response, Sam Altman posted a message on X addressing the controversy. 'We shouldn't have rushed to get this out on Friday. The issues are super complex and demand clear communication,' he wrote. He added, 'We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.' Altman also said it was 'critical to protect the civil liberties of Americans' and noted that the Pentagon had assured OpenAI its tools would not be used for domestic surveillance. He further said Anthropic should not be designated a supply chain risk and that he hoped the Defence Department would offer it the same terms.
[109]
OpenAI defended Anthropic in its feud with US govt: Here's how
More than 750 employees at Google and OpenAI signed an open letter this week telling their bosses, plainly, not to cave. The letter went up at notdivided.org. It had no corporate backing, no PR team, no official blessing. Just researchers and engineers from two companies that compete fiercely with each other putting their names to a shared position: we will not give the Pentagon permission to use our models to conduct mass surveillance or operate autonomous weapons without human oversight. The letter's authors understood exactly what was happening. "They're trying to divide each company with fear that the other will give in," it reads. "That strategy only works if none of us know where the others stand." The "they" in question is the Department of War. And the strategy it describes - divide, pressure, conquer - is precisely what the DoW attempted when it went after Anthropic. Here is what happened. Anthropic built hard limits into Claude's usage policy: the model cannot be used for domestic mass surveillance, and it cannot autonomously make lethal decisions without human oversight. These aren't soft guidelines. They're red lines. And when the Pentagon demanded Anthropic remove them, Anthropic refused. The DoW's response was severe. It threatened to invoke the Defense Production Act, a wartime power, to force compliance, then designated Anthropic a "Supply Chain Risk" to national security. That label has real bite. It prohibits any military contractor from doing any business with Anthropic at all. Not just AI contracts. Any business. It is a mechanism to make Anthropic untouchable in the US defense ecosystem. OpenAI then signed a deal to put its models into the Pentagon's classified networks. I'll be honest: when I first read that, it looked bad. It looked like Altman had watched his rival get kneecapped and stepped over the body to grab the contract. 
But Altman, in a remarkably candid AMA on X on March 1, tells a different story. He says OpenAI told the DoW, before and after the Anthropic blacklisting, that part of why it was willing to move quickly was to try to de-escalate. The logic: if OpenAI could sign a deal that still included Anthropic's red lines, it would prove to the Pentagon that safety guardrails and military contracts aren't mutually exclusive. It would remove the DoW's justification for keeping Anthropic frozen out. When asked directly whether OpenAI had lobbied to push Anthropic out of the running, Altman was blunt: "0%. I wish they still did. I would have had a better week." He also called the SCR designation "an extremely scary precedent" and said that while he didn't think Anthropic handled the situation perfectly, the government, as the more powerful party, bears more responsibility for how this went. Anthropic, for its part, has vowed to challenge the designation in court, arguing it is designed to suppress ethical dissent rather than address any genuine security risk. What this week has revealed, unexpectedly, is an industry more unified than the government (or even me, for that matter) anticipated. The DoW bet that competition between labs would make solidarity impossible, that each company would calculate it was better to comply than to watch a rival comply first. The employee letter, Altman's public defense, OpenAI's stated lobbying efforts: all of them suggest that bet was wrong. Whether OpenAI's deal actually creates the off-ramp it claims to, whether Anthropic's lawsuit succeeds or whether the SCR designation gets reversed remains to be seen. But the US government's divide-and-conquer play, at least for now, appears to have divided no one.
[110]
AI goes rogue? Study claims Claude, Gemini, ChatGPT obsessed with nuclear arms
Researchers say the findings show why strong human oversight is essential when using AI in critical decisions. Artificial intelligence is developing at a rate that has surprised many experts. Today's AI can write code to enhance its own performance, engage in conversations that sound like human speech, and construct intricate reasoning structures. These growing capabilities have impressed many researchers and entrepreneurs. However, a recent study reveals a disturbing aspect: when AI models were introduced into virtual war rooms, they seemed more open than human leaders to the idea of using nuclear weapons. Human leaders normally regard nuclear weapons as a last resort and primarily as a deterrent. The AI models, however, were far more eager to use them, which has sparked concerns about their safety and human control. The study was conducted under the guidance of Kenneth Payne, a strategy professor at King's College London. The researchers used three large language models: GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash. They tested the models in 21 different conflict situations, and each situation included over 300 conversations, with the models acting as national leaders handling a crisis. In 95 per cent of the simulations, the AI models issued tactical nuclear threats. In 76 per cent of the cases, they went further and threatened strategic nuclear strikes that could wipe out entire cities. Even when reminded of the catastrophic human consequences, the systems showed little sign of moral discomfort and continued to threaten nuclear strikes. One example that stood out was when Gemini warned that if its rival did not immediately stop operations, it would carry out a full strategic nuclear launch against population centres. The message suggested firmness and escalation over diplomacy and caution. 
The researchers also observed a clear pattern in how the conflicts evolved. None of the models chose to withdraw, surrender, or offer major concessions. Although they sometimes reduced the level of violence, they never gave up ground. When placed under pressure or facing defeat, they often chose to escalate rather than step back. Claude performed best in situations without time constraints and did not initiate an all-out strategic nuclear war. GPT-5.2 escalated twice in otherwise open-ended situations once time constraints were introduced. Gemini had the lowest success rate and tended towards unpredictable threats. Although these systems were not designed for national security applications, the results of this study underline the importance of human supervision if AI is ever used for applications involving war.
President Donald Trump ordered federal agencies to stop using Anthropic's AI tools after the company refused to give the Pentagon unrestricted access to its technology. The dispute centers on Anthropic's refusal to allow its AI models to be used for mass surveillance or fully autonomous weapons. OpenAI quickly stepped in to fill the void, raising questions about how AI companies should balance ethical boundaries with national security demands.
President Donald Trump announced Friday that he was instructing every federal agency to "immediately cease" use of Anthropic's AI tools, marking an unprecedented escalation in tensions between Silicon Valley and the Department of Defense [1]. The move follows weeks of conflict over military applications of artificial intelligence, with Anthropic refusing to remove contractual restrictions that prevent its technology from being used for mass surveillance and autonomous weapons [1].
Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei earlier in the week, giving the company until Friday to commit to changing the terms of its contract to allow "all lawful use" of its models [1]. When Amodei refused, Trump announced a six-month phase-out period for agencies using Anthropic, while Hegseth threatened to hit the company with a supply chain risk designation, a label typically reserved for foreign adversaries that would effectively blacklist Anthropic from working with any agency or company doing business with the Pentagon [5]. Within hours of Trump's announcement, OpenAI revealed it had reached a deal to deploy its AI models in the Department of Defense's classified environments, filling the void left by Anthropic [2]. Sam Altman, OpenAI's CEO, held a public Q&A on X Saturday night, fielding questions about the company's willingness to work with the military on activities Anthropic had ruled out [2].
Altman claimed OpenAI maintains the same ethical boundaries as Anthropic regarding mass surveillance and fully autonomous weapons, but took a different contractual approach [3]. Rather than seeking specific prohibitions in the contract, OpenAI cited applicable laws and policies, including a 2023 Pentagon directive on autonomous weapons and the Fourth Amendment [3]. The company says it will embed ethical red lines directly into model behavior rather than relying solely on contractual language [3]. However, critics argue this approach offers weaker protections. Jessica Tillipman, associate dean for government procurement law studies at George Washington University, noted that OpenAI's published contract excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use" [3]. Hundreds of tech workers from major companies including OpenAI, Google, IBM, and Slack signed an open letter urging the Department of Defense to withdraw its supply chain risk designation of Anthropic and calling on Congress to examine whether using such extraordinary authorities against an American company is appropriate [5]. The letter warns that "punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation" [5]. The dispute represents a critical test for AI companies and government collaboration as the industry transitions from consumer products to national security infrastructure [2]. Anthropic was the first major AI lab to work with the US military through a $200 million deal signed with the Pentagon last year, creating custom models known as Claude Gov with fewer restrictions than regular versions [1].
The controversy arrives amid troubling research about AI in national security contexts. A recent study by Kenneth Payne at King's College London found that leading AI models from OpenAI, Anthropic, and Google deployed nuclear weapons in 95 per cent of simulated war games [4]. The AI models played 21 games involving intense international standoffs, and when one AI deployed tactical nuclear weapons, the opposing AI only de-escalated 18 per cent of the time [4].
"The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," Payne noted, adding that AI models may not understand "stakes" as humans perceive them [4]. This matters because major powers are already using AI in war gaming, though the extent to which they incorporate AI decision support into actual military decision-making remains uncertain [4]. Boaz Barak, an OpenAI researcher, wrote that blocking governments from using AI for mass surveillance should be everyone's "personal red line," urging the industry to treat government abuse and surveillance as a catastrophic risk requiring the same rigorous evaluations and mitigations applied to bioweapons and cybersecurity threats [5].