Anthropic introduces identity verification for Claude, sparking user privacy concerns

Reviewed by Nidhi Govil

3 Sources

Anthropic has begun rolling out identity verification for its Claude AI chatbot, requiring certain users to submit government-issued photo IDs and live selfies through third-party partner Persona Identities. The move has sparked backlash from privacy-conscious users who migrated to Claude after OpenAI's Pentagon deal, raising questions about data security and whether such measures are necessary for AI chatbot access.


Anthropic Introduces Mandatory ID Verification for Claude Users

Anthropic has started rolling out identity verification requirements for its Claude AI chatbot, marking a significant shift in how users access certain platform capabilities [1]. The company announced that users might encounter verification prompts when accessing specific features, during routine platform integrity checks, or as part of safety and compliance measures [2]. The verification process requires a valid, physical government-issued photo ID, such as a passport, driver's license, or national identity card, along with a live selfie captured via phone or webcam [3].

While Anthropic hasn't publicly specified which capabilities trigger verification, the company states the process typically takes under five minutes to complete [3]. Photocopies, mobile IDs, and student credentials are not accepted under the new policy [2].

User Privacy Concerns Emerge Over Persona Identities Partnership

The decision to implement mandatory ID verification has generated substantial backlash, particularly regarding Anthropic's choice of verification partner. The company selected Persona Identities, the same KYC infrastructure used across financial services, to handle the verification process [2]. This choice has raised privacy concerns due to Persona's investor connections: venture firm Founders Fund, co-founded by Peter Thiel, is one of Persona's major investors [1]. Thiel also co-founded and chairs Palantir, a surveillance company whose customers include the FBI, CIA, and US Immigration and Customs Enforcement [1].

Many users have questioned the necessity of identity verification for AI chatbot access, especially when Anthropic already maintains credit card information for paying subscribers [1]. The timing appears particularly ironic given that millions of users fled OpenAI for Anthropic in February after OpenAI signed a Pentagon deal to deploy AI on classified networks, a contract Anthropic turned down over surveillance fears and concerns about autonomous weapons [2]. Daily signups broke records at the time, with free users up 60% since January [2].

Data Security Promises and Third-Party Storage Risks

Anthropic has emphasized that verification data goes to Persona's servers rather than its own systems [2]. The company states it will not copy or store ID images and selfies, though it retains access to verification records [3]. All data processing occurs with encryption in transit and at rest, and Persona is contractually limited in how it can use the information, which is restricted to identity verification and fraud-detection improvement [1].

Anthropic has promised that biometric data will not be used for model training, will not be shared with third parties for marketing purposes, and will not be sold to advertisers [3]. However, third-party custody of government documents carries inherent risks: an October 2025 breach at Discord exposed roughly 70,000 government IDs that users had submitted for age verification, demonstrating that no platform is immune to data breaches [2].

Broader Implications for AI Platform Accountability

The introduction of identity verification reflects a broader industry shift toward accountability in AI access. Anthropic has cited the need to address policy violations, underage access, and platform abuse, problems that simple email sign-ups fail to prevent [3]. The company has been building toward tighter controls; in December, it announced classifiers to detect users who self-identify as minors, though multiple adult users reported account suspensions and lost project histories due to incorrect flags [2].

Accounts registered from regions Anthropic doesn't formally serve are also subject to bans, a detail that particularly affects Chinese users accessing Claude through intermediaries, since matching a live selfie against a physical government document is difficult to circumvent [2]. While identity verification isn't a perfect solution to AI misuse, its introduction signals that the era of frictionless, consequence-free AI access is ending [3]. Whether users find this reassuring or intrusive will likely depend on their usage patterns and their trust in Anthropic's data-handling commitments.

TheOutpost.ai