4 Sources
[1]
Anthropic will ask Claude users to verify their identities 'for a few use cases'
Anthropic has started rolling out identity verification on Claude "for a few use cases." The company didn't list out those use cases in its announcement, but we've asked it for details and will update this post when we hear back. Anthropic says you might see a verification prompt upon "accessing certain capabilities," asking you to verify your identity. You would have to show a valid, physical government-issued photo ID. You'd also have to take a selfie with your phone or computer camera that the system will compare against the ID you present. The news, as you'd expect, wasn't well-received. Many users are questioning the necessity of identity verification to be able to use an AI chatbot, especially if Anthropic already has their credit cards on file as paying subscribers. People are also criticizing Anthropic's decision to use Persona Identities, which also provides age verification services for OpenAI and Roblox. One of Persona's major investors is venture firm Founders Fund, which was co-founded by Peter Thiel, who's also the co-founder and chairman of surveillance company Palantir. Palantir's customers are mostly federal agencies and government offices, including the FBI, the CIA and US Immigration and Customs Enforcement. Most criticisms of the company center on the services it provides those customers, as they're mainly used to expand government surveillance using its facial recognition and AI technologies. In its announcement, Anthropic said that Persona will be the one handling your IDs and selfies; Anthropic itself will not copy or store those images. It also said that Persona is "contractually limited" in how it can use your data and that all data passing through its process is "encrypted in transit and at rest." Anthropic emphasized that it will not use your identity data to train its models and that it will not share your data with anyone else.
[2]
Claude wants your passport as identity verification to access 'certain capabilities' -- but promises it isn't using your face to train its models
Anthropic says your verification data will only be used to confirm identity. Users of Anthropic's Claude AI platform will soon need to provide government-issued ID such as a passport or driving licence in order to use the service. The company confirmed users will need to provide ID to access "certain capabilities" within Claude AI tools, but claimed it was part of its "routine platform integrity checks". The move, rolled out in partnership with Persona Identities, has already worried some users, who are concerned their data may be stored and used for other purposes, despite Anthropic denying this is the case. "Being responsible with powerful technology starts with knowing who is using it," a Claude Support post outlining the news explained. "Identity verification helps us prevent abuse, enforce our usage policies, and comply with legal obligations." "We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures." As part of the verification process, Anthropic says users may also be asked to take a live selfie with their phone or webcam to prove their identity, with the company saying the whole process "typically takes under five minutes". Government-issued ID "from most countries" will be accepted, so long as it includes a photo of the user - for example, a passport, driver's licence, state/provincial ID card, or national identity card. Photocopies, screenshots, scans, or photos of a photo will not be accepted, nor will digital or mobile IDs, student IDs, employee badges, library cards, bank cards, or temporary paper IDs. Providing ID is typical for accessing some technology platforms, but given the recent controversies around some of Anthropic's services, users may be concerned about the security of their data. The company says that, as the data controller for the information, it sets the rules for how it is used and how long it is stored. The data will be held on Persona's systems, processed on Anthropic's behalf, meaning the latter can access verification records through Persona's platform when needed, e.g. if it needs to review an appeal, but it does not copy or store those images. Anthropic was also very clear that it is not using the identity data to train its models, noting that "verification data is used solely to confirm who you are and to meet our legal and safety obligations" and that it is asking for the minimum amount of information required to verify a user's identity. "We are not sharing your identity data with anyone else," it added. "Verification data stays between you, Persona, and Anthropic, except where we're legally required to respond to valid legal processes. Your verification data is never shared with third parties for marketing, advertising, or any purpose unrelated to verification and compliance."
[3]
You Switched to Claude Over Surveillance Fears. Now It Wants Your Passport - Decrypt
Verification data goes to Persona's servers, not Anthropic's, and won't be used to train models. Anthropic quietly published identity verification requirements for Claude this week, asking certain users to hand over a government-issued photo ID and a live selfie, something its competitors don't require. "We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures," Anthropic said. "We only use your verification data to confirm who you are and not for any other purposes." Millions of users fled OpenAI for Anthropic in February after OpenAI signed a deal to deploy AI on Pentagon classified networks -- a contract Anthropic turned down over concerns about mass surveillance and autonomous weapons. Daily signups broke records, and free users were up 60% since January, Anthropic said at the time. The privacy-conscious crowd had found its home. That crowd, it seems, may now have some documents to prepare if it wants to continue using Claude. Reactions so far have been quite negative, with users pointing out that this is a deliberate decision by Anthropic, not a regulation or mandatory order imposed on it by a government. According to the help center page, which went live on April 14, Anthropic selected Persona Identities as its verification partner -- the same KYC infrastructure used across financial services -- and requires a physical, undamaged passport, driver's license, or national identity card. Photocopies, mobile IDs, and student credentials don't count. A live selfie may also be required. The policy isn't universal yet. Verification will trigger when accessing "certain capabilities," during "routine platform integrity checks," or as part of safety and compliance measures. Anthropic hasn't said publicly which features are gated, or what user behavior might prompt a check. The company did not immediately respond to Decrypt's request for additional details. On data handling, Anthropic draws a careful line: your ID and selfie go to Persona's servers, not Anthropic's own systems. The company says it is the data controller setting the terms, and that Persona can use the information to verify identity and improve fraud detection. The data is encrypted in transit and at rest, excluded from model training, and won't be shared with third parties for marketing, something Anthropic has been careful to promise since its earliest commercial policies. Careful promises, though, have a history of meeting careless infrastructure. An October 2025 breach at Discord exposed roughly 70,000 government IDs users had submitted for age verification. Persona is a serious player in this space, but repeated incidents have demonstrated that no third-party custodian of government documents is immune. Tighter identity controls also fit a pattern Anthropic has been building toward. In December, the company announced classifiers to detect users who self-identify as minors. Multiple adult users had their accounts suspended anyway, reporting that entire project histories were wiped while they tried to appeal incorrect flags. Accounts registered from regions Anthropic doesn't formally serve are also subject to bans -- a detail that lands hardest on Chinese users accessing Claude through intermediaries, since a live selfie matched against a physical government document is hard to fake your way through.
[4]
Claude announces ID verification: What it means for your account and privacy
Your AI chatbot now wants to see your passport. Anthropic wants Claude users to verify using their ID, so if you use Claude and haven't seen the prompt yet, there's a good chance you will be getting it soon. AI tools have become deeply embedded in our daily work and lives, and anonymity at scale can create real problems - abuse, policy violations, underage access - while a simple email sign-up offers almost no friction against any of them. The verification process is handled through Persona Identities, a third-party identity verification partner. Users are asked to present a valid, government-issued photo ID like a passport, driver's licence, or national identity card, along with a live selfie taken via phone or webcam. According to Anthropic, the entire process typically takes under five minutes. More importantly, your ID images and biometric data are stored by Persona, not on Anthropic's own systems. Anthropic retains access to verification records but does not copy or hold those images independently. All data is encrypted. This is what I have been thinking about since I saw the announcement: handing a government-issued ID to an AI company feels like a step too far. Anthropic has been explicit about its constraints: verification data will not be used to train models, will not be shared with advertisers, and will not be sold to third parties. Persona is contractually limited to using the data solely for fraud prevention and verification improvement. Whether you trust these commitments is another question entirely, but it is a fair one to ask. There's something uncomfortable about handing a government-issued ID to an organisation whose entire existence is built on consuming, processing, and learning from data. Anthropic has always appeared to be more principled than most, but "we promise not to misuse it" is not much of a guarantee. I do hope, though, that the data policies are as well intentioned as they are made out to be. Anthropic says verification will be prompted for certain capabilities, routine platform integrity checks, and other safety and compliance measures. It also enforces age restrictions, with under-18 access listed as grounds for account suspension. Identity verification isn't a perfect solution to AI misuse. However, its introduction also means that the era of frictionless and consequence-free AI access is ending. Platforms are trying to introduce accountability - for users and providers alike. Whether that feels reassuring or intrusive probably says something about how you've been using Claude.
Anthropic has begun rolling out identity verification for Claude AI users, requiring government-issued photo IDs and live selfies for certain capabilities. The process, handled through third-party partner Persona Identities, has sparked user privacy concerns despite assurances that verification data won't be used for model training. The decision marks a shift for a platform that attracted millions of users fleeing OpenAI over surveillance fears.
Anthropic has started rolling out identity verification requirements for its Claude AI platform, asking users to provide a government-issued photo ID and live selfie when accessing certain capabilities [1]. The company announced the change this week, stating that verification prompts may appear during routine platform integrity checks or as part of safety and compliance measures [2]. While Anthropic hasn't publicly specified which features trigger verification, the company emphasizes that the process helps prevent abuse, enforce usage policies, and meet legal obligations [2].
The verification process, which typically takes under five minutes, accepts physical documents from most countries, including passports, driver's licenses, state or provincial ID cards, and national identity cards [2]. Photocopies, screenshots, digital IDs, student credentials, and temporary paper IDs are not accepted [2]. Users must also take a live selfie using their phone or webcam, which the system compares against the submitted ID [1].
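Neither Anthropic nor Persona has published how the selfie-to-ID comparison works, but such checks are conventionally done by embedding both face images with a recognition model and thresholding the similarity of the embeddings. The sketch below is a minimal illustration of that general technique, not Persona's actual implementation; the 512-dimensional embeddings and the 0.6 threshold are assumptions, and the random vectors merely stand in for a real face-recognition model's output.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(id_embedding: np.ndarray, selfie_embedding: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Return True if the two embeddings likely depict the same person.

    The 0.6 threshold is illustrative; real systems tune it to balance
    false accepts against false rejects.
    """
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# In practice the embeddings would come from a face-recognition model
# applied to the ID portrait and the live selfie; random vectors here
# merely stand in so the sketch runs end to end.
rng = np.random.default_rng(0)
id_vec, selfie_vec = rng.normal(size=512), rng.normal(size=512)
print(faces_match(id_vec, selfie_vec))  # random vectors: almost surely False
```

A production system would also run liveness detection on the selfie, which is why the document rules above reject photos of photos and other static copies.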
The announcement has generated significant backlash from users questioning why an AI chatbot requires identity verification, particularly for paying subscribers whose credit cards are already on file [1]. The timing is especially awkward given that millions of users migrated to Claude from OpenAI in February, after OpenAI signed a Pentagon deal to deploy AI on classified networks, a contract Anthropic declined over surveillance fears and concerns about autonomous weapons [3]. Daily signups broke records at the time, with free users up 60% since January as privacy-conscious individuals sought an alternative [3].
Anthropic selected Persona Identities as its verification partner, the same KYC infrastructure used across financial services and by companies like OpenAI and Roblox [1][3]. Criticism intensified when users discovered that Persona counts venture firm Founders Fund, co-founded by Peter Thiel, as a major investor; Thiel also co-founded and chairs Palantir, a surveillance company serving federal agencies including the FBI, the CIA, and US Immigration and Customs Enforcement [1]. Palantir's services are primarily used to expand government surveillance using facial recognition and AI technologies [1].
Under the arrangement, Persona handles ID images and biometric data on its servers, not Anthropic's systems [3]. Anthropic functions as the data controller, setting the rules for how the information is used and stored, while Persona processes data on Anthropic's behalf [2]. Anthropic can access verification records through Persona's platform when needed, such as when reviewing appeals, but does not copy or store those images [2].
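Anthropic hasn't described its integration, but a controller/processor split like the one above is typically implemented with a hosted verification flow: the platform opens a session with the vendor, the user completes document and selfie capture on the vendor's side, and the platform reads back only a status. The sketch below illustrates that shape; `vendor.example.com`, the endpoint paths, and the field names are hypothetical, not Persona's actual API.

```python
import os
import requests

# Hypothetical endpoints for a hosted identity-verification vendor;
# Persona's real API differs and requires its own credentials and schema.
VENDOR_API_URL = "https://vendor.example.com/api/v1"
API_KEY = os.environ["VENDOR_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def create_verification_session(user_id: str) -> str:
    """Ask the vendor to open a verification session for this user.

    The platform never touches the ID image or selfie: the user completes
    capture on the vendor's hosted flow, and the platform stores only an
    opaque session ID alongside its own account record.
    """
    resp = requests.post(
        f"{VENDOR_API_URL}/sessions",
        headers=HEADERS,
        json={"reference_id": user_id},  # links the session to our account
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]

def verification_status(session_id: str) -> str:
    """Read back only the outcome ("approved", "declined", "pending")."""
    resp = requests.get(
        f"{VENDOR_API_URL}/sessions/{session_id}", headers=HEADERS, timeout=10
    )
    resp.raise_for_status()
    return resp.json()["status"]
```

The design point this shape captures is the one Anthropic describes: the controller keys decisions off a status flag and can look up records on the vendor's platform, while the raw images stay in the processor's custody.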
Anthropic has issued explicit assurances about data handling. The company states that verification data is used solely to confirm identity and meet legal and safety obligations, not for model training [2]. All data passing through the verification process is encrypted in transit and at rest [1]. Anthropic emphasizes it will not share identity data with third parties for marketing, advertising, or any purpose unrelated to verification and compliance, except where legally required to respond to valid legal processes [2]. Persona is contractually limited in how it can use the data, restricted to fraud prevention and verification improvement [3].
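"Encrypted in transit and at rest" conventionally means TLS on the network plus authenticated encryption of stored blobs. As a minimal sketch of the at-rest half, assuming AES-256-GCM and a key that in production would live in a key-management service, here is how a stored ID image might be sealed using Python's `cryptography` package; none of this is drawn from Persona's actual design:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In production this key would live in a KMS/HSM, never next to the data.
key = AESGCM.generate_key(bit_length=256)

def encrypt_document(plaintext: bytes, context: bytes) -> bytes:
    """Encrypt a stored ID image with AES-256-GCM.

    `context` (e.g. a record ID) is bound as associated data, so a
    ciphertext copied onto another record fails to decrypt.
    """
    nonce = os.urandom(12)              # 96-bit nonce, unique per message
    ct = AESGCM(key).encrypt(nonce, plaintext, context)
    return nonce + ct                   # store nonce alongside ciphertext

def decrypt_document(blob: bytes, context: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, context)

blob = encrypt_document(b"<id image bytes>", b"verification-record-42")
assert decrypt_document(blob, b"verification-record-42") == b"<id image bytes>"
```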
However, third-party custody of government documents carries inherent risks. An October 2025 breach at Discord exposed roughly 70,000 government IDs that users had submitted for age verification [3]. While Persona is an established player in this space, the incident demonstrates that no third party is immune to security failures [3].
The move fits a pattern Anthropic has been building toward: greater accountability. In December, the company deployed classifiers to detect users who self-identify as minors, though multiple adult users reported account suspensions and lost project histories while appealing incorrect flags [3]. The verification system also enforces age restrictions, with under-18 access listed as grounds for account suspension [4]. Accounts registered from regions Anthropic doesn't formally serve face bans, a detail that falls hardest on Chinese users accessing Claude through intermediaries, since matching a live selfie against a physical government document is hard to fake [3].

The shift signals that frictionless, consequence-free AI access is ending [4]. AI tools have become deeply embedded in daily work and life, and anonymity at scale creates potential for abuse and policy violations that simple email sign-ups cannot prevent [4]. Whether users find this development reassuring or intrusive likely depends on their usage patterns and their trust in corporate data promises [4]. What remains clear is that platforms are introducing accountability measures for both users and providers, though reactions suggest many question whether "we promise not to misuse it" is a sufficient guarantee when handing government credentials to an organization built on consuming and processing data [4].