2 Sources
[1]
The 'QuitGPT' movement gets a surge of activity after OpenAI strikes a deal with the Pentagon
* Trump cuts Anthropic; OpenAI struck a DoW deal to tailor AI, igniting public controversy
* Users launch 'QuitGPT' movement, canceling subscriptions over the military deal and perceived quality/politics concerns
* Many recommend switching to Claude, praising Anthropic for keeping safeguards and standing on its values

By Mahnoor Faisal

The last day or two have been very tumultuous in the world of American AI. On Friday, Donald Trump declared that he was cutting the use of Anthropic within governmental agencies. A few hours later, OpenAI announced that it had struck a deal with the Department of War to tailor an AI model to the military's needs without breaking down OpenAI's guardrails. Unfortunately for OpenAI, the move has prompted people to claim they've cancelled their ChatGPT subscriptions, citing concerns over what the company may have planned for the future. As people began organizing a movement to promote the idea, they quickly discovered that others had already made a home for them.

Disgruntled users are joining the 'QuitGPT' movement to put pressure on OpenAI. Yes, 'joining,' not 'starting.' As spotted by Tom's Guide, users are unhappy with OpenAI's dealings with the Pentagon. If you're confused as to why people are cancelling their ChatGPT subscriptions over a deal with the military, it helps to look at why Trump got rid of Anthropic in the first place. Trump claimed that Anthropic disallowed the military from performing specific actions under the company's Terms of Service. On the Anthropic blog, the company explains that two use cases weren't part of its original deal with the Department of War: using Claude to perform mass surveillance, and adopting it to power autonomous weaponry.
However, the company claims that these were the DoW's major pain points: The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal. While Sam Altman claims that OpenAI has also made an agreement with the DoW to not use its AI models for surveillance or weaponry, people aren't taking him at his word. As such, they began calling for people to cancel their ChatGPT subscriptions, only to discover that the movement had already begun before them. People have been expressing their distaste for OpenAI for several reasons. These include a claim that OpenAI president Greg Brockman has political ties, a claim that ChatGPT's quality has fallen over the last few months, and lingering resentment over OpenAI axing support for GPT-4o, a beloved model that people pleaded with the company not to retire. These groups have come together under the 'QuitGPT' banner, and the current recommendation is for everyone to download and use Claude instead, citing that Anthropic stood by its values during this whole mess.
[2]
The 'QuitGPT' movement gains steam as OpenAI's Department of War deal has users saying 'Cancel ChatGPT'
This comes as Anthropic refuses to surveil American citizens. The AI landscape is highly competitive, with several companies fighting for users' attention (and ultimately money). While ChatGPT has become the household name in the AI space (much like Google is to search), the power dynamic could be shifting, with a "Cancel ChatGPT" movement gaining attention. OpenAI's CEO, Sam Altman, posted on X last night that his company has reached an agreement with the United States Department of War "to deploy our models in their classified network." He continued, "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." But users don't seem eager to take his claim at face value, and it's hard to blame them. OpenAI just claimed solidarity with rival Anthropic when it refused to allow its products to be used for "Mass domestic surveillance" or "Fully autonomous weapons." But it's possible this solidarity was just an opportunity for OpenAI to strike its own deal and potentially let the DoW run wild with its tech in ways that could include surveillance of U.S. citizens. In a blog post, Anthropic said, "The Department of War has stated they will only contract with AI companies who accede to 'any lawful use' and remove safeguards in the cases mentioned above." Altman's post implies that OpenAI is okay with the government using its tools, which, under certain segments of the Patriot Act, could quite easily lead to the mass surveillance of U.S. citizens as part of provisions on surveilling foreign citizens. "Cancel your ChatGPT Plus, burn their compute on the way out, and switch to Claude," reads one post from r/ChatGPT. So users are responding in the only way that can actually hurt OpenAI: with their wallets. The "Cancel ChatGPT" movement is spreading and seemingly hitting the massive AI firm in its bank account.
Of course, it's hard to gauge how widespread the cancellations are -- it could be a vocal minority posting to Reddit and X while the bulk of ChatGPT users carry on, blissfully unaware that their data could be used by the Department of War. But while OpenAI is in the internet's crosshairs at the moment (and Anthropic is getting all of the praise), it's worth noting that OpenAI isn't the only one okay with letting its AI services be used for potential surveillance and autonomous weapons. For example, Google removed an explicit ban on the technology from its internal rules last year, leaving Gemini open to such potential uses. Amazon offers only vague "responsible use" language in its documentation. The leaders of the AI race have a lot of power in their hands, and while Altman said, "We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place," it's hard to take him at his word with decisions like these. I don't know about you, but the idea of ChatGPT or any other AI model deciding it's seen me commit a crime when it hallucinates, even with some of the most basic prompts, is rather scary. And the idea that it would control missiles and determine targets is even scarier. Sure, Altman claims, "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," but does that make you feel any better about what's happening here? It sure doesn't help me sleep any better.
OpenAI announced a deal with the Department of War to deploy AI models on classified networks, fueling the QuitGPT movement. Users are canceling ChatGPT subscriptions over concerns about mass domestic surveillance and autonomous weapons, despite CEO Sam Altman's assurances about safeguards. Many are switching to Anthropic's Claude, praising the company for maintaining its AI safety values.
OpenAI CEO Sam Altman announced that his company reached an agreement with the United States Department of War to deploy AI models on classified networks, triggering immediate backlash from users concerned about military applications [1]. The Pentagon deal comes amid a turbulent period for American AI companies, following President Donald Trump's decision to cut Anthropic's use within governmental agencies [1]. Altman posted on X that the Department of War "displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," but users are skeptical about taking his claims at face value [2].
Source: Tom's Guide
The QuitGPT movement has experienced a surge of activity, with disgruntled users canceling ChatGPT subscriptions in response to the military deal [1]. Users expressing their intent to cancel ChatGPT discovered that a movement already existed, uniting various groups concerned about OpenAI's direction [1]. The ethical debate within the AI industry intensified as users questioned whether OpenAI would allow its technology to be used for mass domestic surveillance and autonomous weapons, despite Altman's assurances about technical safeguards [2]. Users are responding with their wallets, though it remains difficult to gauge how widespread the cancellations are beyond vocal communities on Reddit and X [2].
Source: XDA-Developers
The Department of War stated it would only contract with AI companies who agree to "any lawful use" and remove safeguards against mass domestic surveillance and fully autonomous weapons, according to Anthropic's blog [1]. Anthropic refused to allow Claude to be used for these purposes, despite threats from the Department of War to designate the company a "supply chain risk" -- a label typically reserved for US adversaries and never before applied to an American company [1]. Users rallying under the QuitGPT movement are now recommending Claude as an alternative, praising Anthropic for standing by its values during this controversy [1].
OpenAI's agreement potentially allows the government to use its tools in ways that could enable surveillance of U.S. citizens under certain provisions of the Patriot Act related to surveilling foreign citizens [2]. While Sam Altman claims OpenAI has agreed with the Department of War not to use its AI models for surveillance or weaponry, users aren't trusting these assurances [1]. The movement also draws support from users citing other grievances, including claims about OpenAI president Greg Brockman's political ties and complaints that ChatGPT's quality has declined in recent months [1]. OpenAI isn't alone in accepting potential military applications -- Google removed an explicit ban on such technology from its internal rules last year, leaving Gemini open to similar uses, while Amazon offers only vague "responsible use" language in its documentation [2]. The concern extends beyond guardrails and Terms of Service to fundamental questions about AI hallucinations determining criminal activity or controlling missiles and selecting targets [2].
Summarized by Navi