7 Sources
[1]
Anthropic users face a new choice - opt out or share your data for AI training | TechCrunch
Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we've formed some theories of our own.

But first, what's changing: previously, Anthropic didn't use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it said it's extending data retention to five years for those who don't opt out. That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer" or their input was flagged as violating its policies, in which case a user's inputs and outputs might be retained for up to two years.

By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which is how OpenAI similarly protects enterprise customers from data training policies.

So why is this happening? In that post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will "help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations." Users will "also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users." In short, help us help you. But the full truth is probably a little less selfless. Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand.
Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive positioning against rivals like OpenAI and Google.

Beyond the competitive pressures of AI development, the changes would also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers. In June, OpenAI COO Brad Lightcap called this "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

What's alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them. In fairness, everything is moving quickly now, so as the tech changes, privacy policies are bound to change. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies' other news. (You wouldn't think Tuesday's policy changes for Anthropic users were very big news based on where the company placed this update on its press page.) But many users don't realize the guidelines to which they've agreed have changed because the design practically guarantees it. Most ChatGPT users keep clicking on "delete" toggles that aren't technically deleting anything. Meanwhile, Anthropic's implementation of its new policy follows a familiar pattern. How so?
New users will choose their preference during signup, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button with a much tinier toggle switch for training permissions below in smaller print - and automatically set to "On." As observed earlier today by The Verge, the design raises concerns that users might quickly click "Accept" without noticing they're agreeing to data sharing. Meanwhile, the stakes for user awareness couldn't be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden Administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in "surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print." Whether the commission -- now operating with just three of its five commissioners -- still has its eye on these practices today is an open question, one we've put directly to the FTC.
[2]
Anthropic Wants to Use Your Chats With Claude for AI Training: Here's How to Opt Out
Anthropic will soon begin using your chat transcripts to train its popular chatbot, Claude. The announcement came on Thursday as an update to the company's Consumer Terms and Privacy Policy. New users will see an option to "Help improve Claude" that can be toggled on or off as part of the sign-up flow, while existing users will begin to see a notification explaining the change. Users have until Sept. 28 to opt out of the change, as it will be enabled by default. You can still turn the option off in Claude's privacy settings. A representative for Anthropic declined to comment.

Individual users on Claude Free, Pro or Max, including Code sessions from one of those plans, will be affected by this change when it goes into effect in late September. With the option enabled, AI training will only take place on new and resumed chat and coding sessions. Older chats that aren't revisited won't be affected -- at least, for now.

There are exceptions to the updated policy. Claude for Work (Team and Enterprise plans), Claude Gov and Claude for Education will not be affected. This also extends to API use by third parties, including Amazon Bedrock and Google Cloud's Vertex AI. Until the deadline on Sept. 28, users can ignore the notification by closing it or choosing the "not now" option. After the deadline, users must make a choice in order to continue using Claude.

Users who opt in to AI model training will also have their data held for significantly longer than the previous 30-day window. Opted-in users will have their data stored for five years, and, like the updated training policy, the longer retention will only apply to new or resumed chats. Anthropic says the extended data retention will allow the company to identify misuse and detect harmful usage patterns.
If you opted into the new changes by accident or changed your mind for any reason, you can always opt out. Here's how. If you previously opted in and decide to opt out, your new and resumed chats will no longer be used for future AI model training. Any qualifying data will still be included in training that has already started and models that have already been trained, but it won't be used in future training.
[3]
Anthropic will start training its AI models on chat transcripts
In smaller print below that, a few lines say, "Allow the use of your chats and coding sessions to train and improve Anthropic AI models," with a toggle switch next to it. It's automatically set to "On." Presumably, many users will immediately click the large "Accept" button without changing the toggle switch, even if they haven't read it. If you want to opt out, you can toggle the switch to "Off" when you see the pop-up. If you already accepted without realizing it and want to change your decision, navigate to your Settings, then the Privacy tab, then the Privacy Settings section, and, finally, toggle to "Off" under the "Help improve Claude" option. Consumers can change their decision anytime via their privacy settings, but that new decision will only apply to future data -- you can't take back the data that the system has already been trained on.
[4]
Anthropic changing default storage for Claude chats to 5 yrs
Claude creator Anthropic has given customers using its Free, Pro, and Max plans one month to prevent the engine from storing their chats for five years by default and using them for training. A popup will show for existing users, asking if they want to opt out of a new "Help improve Claude" function, and Anthropic will prompt new users with a similar question during app setup. But if you opt in, the data retention window is being extended from 30 days to 1,826 days, give or take leap years. Even if customers opt out, they'll still have their convos stored for 30 days. After September 28, users will no longer see the popup, and will have to tweak these retention settings in Claude's privacy settings.

Any conversations that the user deletes will not be retained, Anthropic said, but if chats are flagged for containing objectionable content, they could be retained for seven years. Discussions about nuclear weapons, for example, would trigger the process. Pro and Max users currently pay $20 and $100 a month respectively to access Claude's AI engine, but that won't buy them out of Anthropic's data grab.

However, the new data collection policy will not affect commercial, educational, or government customers, nor will it affect API use with Amazon Bedrock, Google Cloud's Vertex AI, and other commercial partners. The exceptions for commercial and government contracts are notable. In the latter case, the AI biz is in the running for a potentially lucrative deal with the US General Services Administration to integrate AI into government systems and reduce the nation's reliance on humans to carry the workload of dealing with citizens. A spokesperson declined to expand on the original statement, other than to tell The Register that the "updated retention length will only apply to new or resumed chats and coding sessions, and will allow us to better support model development and safety improvements."
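The 1,826-day figure is simply five calendar years counted from a start date; it lands on exactly 1,826 days whenever a single leap day falls inside the window. A quick sketch of the arithmetic, using the September 28, 2025 opt-out deadline as an illustrative start date (the policy's actual retention clock for any given chat is an assumption here):

```python
from datetime import date

# Illustrative start date: the Sept. 28, 2025 opt-out deadline.
start = date(2025, 9, 28)
# Same calendar date five years later.
end = date(start.year + 5, start.month, start.day)

# Exactly one leap day (Feb. 29, 2028) falls inside this window,
# so the span is 5 * 365 + 1 days.
print((end - start).days)  # -> 1826
```

A window containing two leap days (e.g. starting in early 2024) would come out to 1,827 days instead, which is the "give or take leap years" caveat.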
It has been a busy week for Anthropic, as it launched a new Chrome extension to try and get more people using Claude for search investigations. Anthropic has limited the rollout to 1,000 users, but may enlarge the program once it has sorted out the technical issues. Anthropic also admitted that criminals online are harnessing the AI bot's capacity to help with computer intrusions and remote worker fraud. The California biz says it's blocked one North Korean attempt to turn its AI engine to malevolent ends and is on guard for more people trying to abuse its tech. ®
[5]
Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out
Anthropic announced today that it is changing its Consumer Terms and Privacy Policy, with plans to train its AI chatbot Claude with user data. New users will be able to opt out at signup. Existing users will receive a popup that allows them to opt out of Anthropic using their data for AI training purposes. The popup is labeled "Updates to Consumer Terms and Policies," and when it shows up, unchecking the "You can help improve Claude" toggle will disallow the use of chats. Choosing to accept the policy now will allow all new or resumed chats to be used by Anthropic. Users will need to opt in or opt out by September 28, 2025, to continue using Claude. Opting out can also be done by going to Claude's Settings, selecting the Privacy option, and toggling off "Help improve Claude."

Anthropic says that the new training policy will allow it to deliver "even more capable, useful AI models" and strengthen safeguards against harmful usage like scams and abuse. The updated terms apply to all users on Claude Free, Pro, and Max plans, but not to services under commercial terms like Claude for Work or Claude for Education.

In addition to using chat transcripts to train Claude, Anthropic is extending data retention to five years. So if you opt in to allowing Claude to be trained with your data, Anthropic will keep your information for a five-year period. Deleted conversations will not be used for future model training, and for those who do not opt in to sharing data for training, Anthropic will continue keeping information for 30 days as it does now. Anthropic says that a "combination of tools and automated processes" will be used to filter sensitive data, with no information provided to third parties. Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.
[6]
How to Stop Anthropic From Training Its AI Models on Your Conversations
You should never assume what you say to a chatbot is private. When you interact with one of these tools, the company behind it likely scrapes the data from the session, often using it to train the underlying AI models. Unless you explicitly opt out of this practice, you've probably unwittingly trained many models in your time using AI.

Anthropic, the company behind Claude, has taken a different approach. The company's privacy policy has stated that Anthropic does not collect user inputs or outputs to train Claude, unless you either report the material to the company or opt in to training. While that doesn't mean Anthropic was abstaining from collecting data in general, you could rest easy knowing your conversations weren't feeding future versions of Claude.

That's now changing. As reported by The Verge, Anthropic will now start training its Claude AI models on user data. That means new chats or coding sessions you engage in with Claude will be fed to Anthropic to adjust and improve the models' performance. This will not affect past sessions if you leave them be. However, if you re-engage with a past chat or coding session following the change, Anthropic will scrape any new data generated from the session for its training purposes.

This won't just happen without your permission -- at least, not right away. Anthropic is giving users until Sept. 28 to make a decision. New users will see the option when they set up their accounts, while existing users will see a permission popup when they log in. However, it's reasonable to think that some of us will click through these menus and popups too quickly, and accidentally agree to data collection we might not otherwise mean to.
To Anthropic's credit, the company says it does try to hide sensitive user data through "a combination of tools and automated processes," and that it does not sell your data to third parties. Still, I certainly don't want my conversations with AI to train future models. If you feel the same, here's how to opt out. If you're an existing Claude user, you'll see a popup warning the next time you log into your account. This popup, titled "Updates to Consumer Terms and Policies," explains the new rules, and, by default, opts you into the training. To opt out, make sure the toggle next to "You can help improve Claude" is turned off. (The toggle will be set to the left with an (X), rather than to the right with a checkmark.) Hit "Accept" to lock in your choice. If you've already accepted this popup and aren't sure if you opted in to this data collection, you can still opt out. To check, open Claude and head to Settings > Privacy > Privacy Settings, then make sure the "Help improve Claude" toggle is turned off. Note that this setting will not undo any data that Anthropic has collected since you opted in.
[7]
Anthropic updates Claude AI, gives users control over data-sharing for the first time
Anthropic, the company behind Claude AI, has announced updates to its Consumer Terms and Privacy Policy that give users more control over their data. The new changes let users decide if they want their data to be used to improve Claude and enhance protections against harmful activity, such as scams or abusive content. The updates apply to all users on Claude Free, Pro and Max plans, including when they use Claude Code. However, they do not affect services under Anthropic's Commercial Terms, such as Claude for Work, Claude Gov, Claude for Education, or API use through third-party platforms like Amazon Bedrock and Google Cloud's Vertex AI.

By opting in, users can help Anthropic make Claude safer and more capable. The company says the shared data will help improve systems that detect harmful content, reducing the chances of mistakenly flagging harmless conversations. It will also help future Claude models get better at tasks like coding, analysing information, and reasoning. Users have full control over this setting and can update their preferences at any time. New users will be asked about their choice during the signup process. Existing users will receive a notification prompting them to review the updated terms and make a decision.

Existing users have until September 28, 2025 to accept the new Consumer Terms and decide whether to allow their data to be used. If users accept the policies, they will go into effect immediately. After the September 28 deadline, users will need to make a choice in the model training setting to continue using Claude. This move comes as more AI companies look to balance safety, usability, and privacy while continuing to enhance the capabilities of their models.
Anthropic announces significant changes to its data retention and usage policies for Claude AI users, sparking discussions about privacy, consent, and the future of AI development.
Anthropic, the company behind the popular AI chatbot Claude, has announced significant changes to its data retention and usage policies. Starting September 28, 2025, users of Claude Free, Pro, and Max will face a crucial decision: opt out or allow their conversations to be used for AI model training 1.
The most notable changes include training Claude models on consumer chats and coding sessions, extending data retention from 30 days to five years for those who opt in, and enabling the data-sharing setting by default.
Users have until September 28 to make their decision. The opt-in toggle is set to "On" by default, raising concerns about user awareness 3. To opt out, users can switch the toggle to "Off" before accepting the popup, or later open Claude's Settings, go to the Privacy tab, and turn off "Help improve Claude" under Privacy Settings.
It's important to note that opting out after September 28 will only affect future data collection, not data already used for training 2.
Anthropic frames these changes as beneficial for users, stating they will help improve model safety and enhance skills like coding, analysis, and reasoning 1. However, industry observers suggest the move is driven by competitive pressures and the need for high-quality conversational data to improve Anthropic's position against rivals like OpenAI and Google.
The policy shift also reflects broader industry trends, as AI companies face increasing scrutiny over data retention practices. OpenAI, for instance, is currently battling a court order to retain all consumer ChatGPT conversations indefinitely due to a lawsuit filed by publishers 1.
The changes have sparked discussions about user privacy and meaningful consent in the AI era. Privacy experts argue that the complexity of AI technologies makes it challenging for users to provide informed consent 1.
The Federal Trade Commission has previously warned AI companies about potential enforcement actions for surreptitious changes to terms of service or burying disclosures in fine print. However, it remains unclear whether the FTC will scrutinize these latest policy changes 1.
The new policy does not affect Claude for Work (Team and Enterprise plans), Claude Gov, Claude Education, or API use through partners like Amazon Bedrock and Google Cloud's Vertex AI 2. This exemption is particularly notable as Anthropic competes for government contracts, including a potential deal with the US General Services Administration 4.