3 Sources
[1]
Anthropic will ask Claude users to verify their identities 'for a few use cases'
Anthropic has started rolling out identity verification on Claude "for a few use cases." The company didn't list those use cases in its announcement, but we've asked it for details and will update this post when we hear back. Anthropic says you might see a verification prompt upon "accessing certain capabilities," asking you to verify your identity. You would have to show a valid, physical government-issued photo ID. You'd also have to take a selfie with your phone or computer camera that the system will compare against the ID you present.

The news, as you'd expect, wasn't well received. Many users are questioning the necessity of identity verification to use an AI chatbot, especially if Anthropic already has their credit cards on file as paying subscribers. People are also criticizing Anthropic's decision to use Persona Identities, which also provides age verification services for OpenAI and Roblox. One of Persona's major investors is venture firm Founders Fund, which was co-founded by Peter Thiel, who's also the co-founder and chairman of surveillance company Palantir. Palantir's customers are mostly federal agencies and government offices, including the FBI, the CIA and US Immigration and Customs Enforcement. Most criticisms of the company center on the services it provides those customers, which are mainly used to expand government surveillance through its facial recognition and AI technologies.

In its announcement, Anthropic said that Persona will be the one handling your IDs and selfies, and that it will not copy and store those images. It also said that Persona is "contractually limited" in how it can use your data and that all data passing through its process is "encrypted in transit and at rest." Anthropic emphasized that it will not use your identity data to train its models and that it will not share your data with anyone else.
[2]
You Switched to Claude Over Surveillance Fears. Now It Wants Your Passport - Decrypt
Verification data goes to Persona's servers, not Anthropic's, and won't be used to train models.

Anthropic quietly published identity verification requirements for Claude this week, asking certain users to hand over a government-issued photo ID and a live selfie, something its competitors don't require. "We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures," Anthropic said. "We only use your verification data to confirm who you are and not for any other purposes."

Millions of users fled OpenAI for Anthropic in February after OpenAI signed a deal to deploy AI on Pentagon classified networks -- a contract Anthropic turned down over concerns about mass surveillance and autonomous weapons. Daily signups broke records, and free users were up 60% since January, Anthropic said at the time. The privacy-conscious crowd had found its home. That crowd, it seems, may now have some documents to prepare if it wants to continue using Claude.

The reactions so far have been quite negative, with users pointing out that this is a deliberate decision, not a regulation or mandatory order imposed on Anthropic by a government. According to the help center page, which went live on April 14, Anthropic selected Persona Identities as its verification partner -- the same KYC infrastructure used across financial services -- and requires a physical, undamaged passport, driver's license, or national identity card. Photocopies, mobile IDs, and student credentials don't count. A live selfie may also be required.

The policy isn't universal yet. Verification will trigger when accessing "certain capabilities," during "routine platform integrity checks," or as part of safety and compliance measures. Anthropic hasn't said publicly which features are gated, or what user behavior might prompt a check.
The company did not immediately respond to Decrypt's request for additional details.

On data handling, Anthropic draws a careful line: your ID and selfie go to Persona's servers, not Anthropic's own systems. The company says it is the data controller setting the terms, and that Persona can use the information to verify identity and improve fraud detection. The data is encrypted in transit and at rest, excluded from model training, and won't be shared with third parties for marketing, something Anthropic has been careful to promise since its earliest commercial policies.

Careful promises, though, have a history of meeting careless infrastructure. An October 2025 breach at Discord exposed roughly 70,000 government IDs users had submitted for age verification. Persona is a serious player in this space, but breaches like Discord's have demonstrated repeatedly that no third-party custodian of government documents is immune.

Tighter identity controls also fit a pattern Anthropic has been building toward. In December, the company announced classifiers to detect users who self-identify as minors. Multiple adult users had their accounts suspended anyway, reporting that entire project histories were wiped while they tried to appeal incorrect flags. Accounts registered from regions Anthropic doesn't formally serve are also subject to bans -- a detail that lands hardest on Chinese users accessing Claude through intermediaries, since a live selfie matched against a physical government document is hard to fake your way through.
[3]
Claude announces ID verification: What it means for your account and privacy
Your AI chatbot now wants to see your passport. Anthropic wants Claude users to verify using their ID, so if you use Claude and haven't seen the prompt yet, there's a good chance you will be getting it soon.

AI tools have become deeply embedded in our daily work and lives, and anonymity at scale has the potential to create real problems - abuse, policy violations, underage access - while a simple email sign-up offers almost no friction against any of them.

The verification process is handled through Persona Identities, a third-party identity verification partner. Users are asked to present a valid, government-issued photo ID like a passport, driver's licence, or national identity card, along with a live selfie taken via phone or webcam. According to Anthropic, the entire process typically takes under five minutes. More importantly, your ID images and biometric data are stored by Persona, not on Anthropic's own systems. Anthropic retains access to verification records but does not copy or hold those images independently. All data is encrypted.

This is what I have been thinking about since I saw the announcement: handing a government-issued ID to an AI company feels like a step too far. Anthropic has been explicit about its constraints - verification data will not be used to train models, will not be shared with advertisers, and will not be sold to third parties. Persona is contractually limited to using the data solely for fraud prevention and verification improvement. Whether you trust these commitments is another question entirely, but it is a fair one to ask. There's something uncomfortable about handing a government-issued ID to an organisation whose entire existence is built on consuming, processing, and learning from data.
Anthropic has always appeared to be more principled than most, but "we promise not to misuse it" is not much of a guarantee. I do hope, though, that the data policies are as well intentioned as they are made out to be.

Anthropic says verification will be prompted for certain capabilities, routine platform integrity checks, and other safety and compliance measures. It is also meant to enforce age restrictions, with under-18 access listed as grounds for account suspension.

Identity verification isn't a perfect solution to AI misuse. However, its introduction also means that the era of frictionless, consequence-free AI access is ending. Platforms are trying to introduce accountability - for users and providers alike. Whether that feels reassuring or intrusive probably says something about how you've been using Claude.
Anthropic has begun rolling out identity verification requirements for its Claude AI chatbot, requiring certain users to submit government-issued photo IDs and live selfies through third-party partner Persona Identities. The move has sparked backlash from privacy-conscious users who migrated to Claude after OpenAI's Pentagon deal, raising questions about data security and the necessity of such measures for AI chatbot access.

Anthropic has started rolling out identity verification requirements for its Claude AI chatbot, marking a significant shift in how users access certain platform capabilities [1]. The company announced that users might encounter verification prompts when accessing specific features, during routine platform integrity checks, or as part of safety and compliance measures [2]. The verification process requires a valid, physical government-issued photo ID, such as a passport, driver's license, or national identity card, along with a live selfie captured via phone or webcam [3].

While Anthropic hasn't publicly specified which exact capabilities trigger verification, the company states the process typically takes under five minutes to complete [3]. Photocopies, mobile IDs, and student credentials are not accepted under the new policy [2].

The decision to implement mandatory ID verification has generated substantial backlash, particularly regarding Anthropic's choice of verification partner. The company selected Persona Identities, the same KYC infrastructure used across financial services, to handle the verification process [2]. This choice has raised privacy concerns due to Persona's investor connections: venture firm Founders Fund, co-founded by Peter Thiel, is one of Persona's major investors [1]. Thiel also co-founded and chairs Palantir, a surveillance company whose customers include the FBI, the CIA, and US Immigration and Customs Enforcement [1].

Many users have questioned the necessity of identity verification for AI chatbot access, especially when Anthropic already maintains credit card information for paying subscribers [1]. The timing appears particularly ironic given that millions of users fled OpenAI for Anthropic in February after OpenAI signed a Pentagon deal to deploy AI on classified networks, a contract Anthropic turned down over surveillance fears and concerns about autonomous weapons [2]. Daily signups broke records at the time, with free users up 60% since January [2].

Anthropic has emphasized that verification data goes to Persona's servers rather than its own systems [2]. The company states it will not copy or store ID images and selfies, though it retains access to verification records [3]. All data is encrypted in transit and at rest, and Persona is contractually limited in how it can use the information, restricted to identity verification and fraud detection improvement [1].

Anthropic has promised that biometric data will not be used for model training, will not be shared with third parties for marketing purposes, and will not be sold to advertisers [3]. However, third-party custody of government documents carries inherent risks: an October 2025 breach at Discord exposed roughly 70,000 government IDs that users had submitted for age verification, demonstrating that no platform is immune to data breaches [2].

The introduction of identity verification reflects a broader industry shift toward accountability in AI access. Anthropic has cited the need to address policy violations, underage access, and platform abuse, problems that simple email sign-ups fail to prevent [3]. The company has been building toward tighter controls; in December, it announced classifiers to detect users who self-identify as minors, though multiple adult users reported account suspensions and lost project histories due to incorrect flags [2].

Accounts registered from regions Anthropic doesn't formally serve are also subject to bans, a detail that particularly affects Chinese users accessing Claude through intermediaries, since matching a live selfie against a physical government document is difficult to circumvent [2]. While identity verification isn't a perfect solution to AI misuse, its introduction signals that the era of frictionless, consequence-free AI access is ending [3]. Whether users find this reassuring or intrusive will likely depend on their usage patterns and trust in Anthropic's data handling commitments.

Summarized by Navi