Anthropic Shifts Gears: User Data Now Fuels AI Training, Opt-Out Available

Reviewed by Nidhi Govil

Anthropic announces a significant change in its data policy, allowing user conversations to be used for AI model training. Users have until September 28 to opt out, with data retention extended to five years for those who opt in.

Anthropic's New Data Policy: A Shift in AI Training Approach

Anthropic, a leading AI company, has announced a significant change to its data policy, allowing user conversations to be used for training its AI models [1]. This marks a departure from the company's previous stance, which prioritized consumer data privacy by not using chat data for model training [4].

Key Changes and User Options

The new policy affects users of Claude Free, Pro, and Max plans, including those using Claude Code [1]. Users have until September 28 to decide whether they want their conversations used for AI training [2]. Those who opt in will have their data retained for five years, a significant increase from the previous 30-day retention period [1].

Existing users will see a pop-up notification explaining the change, with the option to "Help improve Claude" automatically set to "On" [3]. Users can opt out by toggling this switch off or by navigating to their privacy settings [3].

Implications and Industry Context

Anthropic frames this change as beneficial for users, stating that it will help improve model safety and enhance skills like coding, analysis, and reasoning [1]. However, the move also reflects the competitive pressures in AI development, where access to high-quality conversational data is crucial [1].

This policy shift aligns Anthropic more closely with prevailing industry practice. Other major players have made similar moves; Google, for example, recently renamed "Gemini Apps Activity" to "Keep Activity" and uses user data to improve its services [4].

Privacy Concerns and Regulatory Landscape

The change raises concerns about user privacy and meaningful consent. Privacy experts have long warned that the complexity of AI systems makes it difficult for users to give informed consent [1]. The Federal Trade Commission has previously warned AI companies about potential enforcement actions over unclear or deceptive privacy policies [1].

Enterprise and API Users Unaffected

Notably, business customers using Claude Gov, Claude for Work, Claude for Education, or the API will not be affected by these changes [1]. This approach mirrors OpenAI's practice of exempting enterprise customers from its consumer data-training policies [1].

User Awareness and Control

Anthropic emphasizes that users can change their decision at any time via their privacy settings [3]. However, once data has been used for training, it cannot be retroactively removed from the models [3]. The company also states that it will not sell user data to third parties and uses tools to filter or obfuscate sensitive information [5].
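Anthropic has not disclosed how these filtering tools work. As a rough illustration of the general technique, the sketch below shows pattern-based redaction of common identifiers before text would enter a training corpus; the patterns, placeholder labels, and function names are illustrative assumptions, not Anthropic's implementation.

```python
import re

# Hypothetical patterns for a few common identifier types. This is a minimal
# sketch only: real filtering pipelines rely on far more robust detection
# than simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or (555) 123-4567."
    print(redact(sample))
    # Prints: Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```

Pattern matching of this kind only catches identifiers in formats it anticipates; production filtering systems typically combine it with statistical entity detection.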

As AI continues to evolve rapidly, this policy change highlights the ongoing challenges in balancing technological advancement with user privacy and consent. It underscores the importance of user awareness and the need for clear, transparent communication from AI companies about their data practices.
