2 Sources
[1]
Online age-verification tools spread across U.S. for child safety, but adults are being surveilled
Large volumes of sensitive identity data can become targets for government demands and hackers. But at a more fundamental level, the surveillance strikes at the foundation of the free and open internet, say civil liberties advocates, and last week a court decision in Virginia, citing the First Amendment, agreed.

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms -- including adult content sites, online gaming services, and social media apps -- to block underage users, forcing companies to screen everyone who approaches these digital gates.

"There's a big spectrum," said Joe Kaufman, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. "The regulations are moving in many different directions at once," he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user's device and submitted data is deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

"Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings," Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.
Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints -- often run by specialized identity-verification vendors on behalf of websites -- rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.

Vendors say a challenge is balancing safety with how much friction users will tolerate. "We're in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible," said Rivka Gerwitz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist.

Still, many users perceive mandatory identity checks as invasive. "Having another way to be forced to provide that information is intrusive to people," said Heidi Howard Tandy, a partner at Berger Singerman who specializes in intellectual property and internet law. Some users may attempt workarounds -- including prepaid cards or alternative credentials -- or turn to unauthorized distribution channels. "It's going to cause a piracy situation," she added.

In many implementations, verification vendors -- not the websites themselves -- process and retain the identity information, returning only a pass-fail signal to the platform. Gerwitz Little said Socure does not sell verification data and that in lightweight age-estimation scenarios, where platforms use quick facial analysis or other signals rather than government documentation, the company may store little or no information.
But in fuller identity-verification contexts, such as gaming and fraud prevention that require ID scans, certain adult verification records may be retained to document compliance. She said Socure can keep some adult verification data for up to three years while following applicable privacy and purging rules.

Civil liberties advocates warn that concentrating large volumes of identity data among a small number of verification vendors can create attractive targets for hackers and government demands. Earlier this year, Discord disclosed a data breach that exposed ID images belonging to approximately 70,000 users through a compromised third-party service, highlighting the security risks associated with storing sensitive identity information.

In addition, advocates warn that expanding age-verification systems represent not only a usability challenge but a structural shift in how identity becomes tied to online behavior. Age verification risks tying users' "most sensitive and immutable data" -- names, faces, birthdays, home addresses -- to their online activity, according to Molly Buckley, a legislative analyst at the Electronic Frontier Foundation. "Age verification strikes at the foundation of the free and open internet," she said.

Even when vendors promise to safeguard personal information, users ultimately rely on contractual terms they rarely read or fully understand. "There's language in their terms-of-use policies that says if the information is requested by law enforcement, they'll hand it over. They can't confirm that they will always forever be the only entity who has all of this information. Everyone needs to understand that their baseline information is not something under their control," Tandy said.

As more platforms route age checks through third-party vendors, that concentration of identity data is also creating new legal exposure for the companies that rely on them.
"A company is going to have some of that information passing through their own servers," Tandy said. "And you can't offload that kind of liability to a third party." Companies can distribute risk through contracts and insurance, she said, but they remain responsible for how identity systems interact with their infrastructure. "What you can do is have really good insurance and require really good insurance from the entities that you're contracting with," she said. Tandy also cautioned that retention promises can be more complex than they appear. "If they say they're holding it for three years, that's the minimum amount of time they're holding it for," she said. "I wouldn't feel comfortable trusting a company that says, 'We delete everything one day after three years.' That is not going to happen," she added. Federal and state regulators argue that age-verification laws are primarily a response to documented harms to minors and insist the rules must operate under strict privacy and security safeguards. An FTC spokesperson told CNBC that companies must limit how collected information is used. While age-verification technologies can help parents protect children online, the agency said firms are still bound by existing consumer protection rules governing data minimization, retention, and security. The agency pointed to existing rules requiring firms to retain personal information only as long as reasonably necessary and to safeguard its confidentiality and integrity.
[2]
Amid wave of kids' online safety laws, age-checking tech comes of age
NEW YORK/STOCKHOLM/SYDNEY, March 9 (Reuters) - For years, tech companies successfully resisted pressure from child safety advocates to do more to keep kids off their services, claiming technical limitations would make any attempt to restrict access for teens impractical, overly broad or a security risk. Now, a growing list of governments is concluding those hurdles are not insurmountable, and pushing ahead with aggressive new age-checking requirements for social networks, AI chatbots and porn purveyors alike.

Three months after Australia launched a landmark ban on teen social media accounts, regulators across Europe, in Brazil and in a handful of U.S. states are moving to emulate it. California Governor Gavin Newsom - seen as a likely Democratic candidate for president in 2028 - joined the call last month, while Republican President Donald Trump is also reportedly "taking an interest" in age limits, according to his daughter-in-law.

Spurring them along are escalating concerns over online abuse and teen mental health, and a recent outcry over the spread of AI-generated child sexual images, as well as increased confidence in the capabilities of "age assurance" software that backers say can suss out a person's approximate age using facial analysis, parental approval, ID checks and other digital clues. Recent advancements in artificial intelligence have boosted the effectiveness and slashed the cost of those age-gating tools, according to Reuters interviews with more than a dozen regulators, child safety advocates, independent researchers and vendors who perform the age checks for big tech companies, including TikTok, Facebook owner Meta and OpenAI.

"The age-assurance market has matured a lot in the last couple of years," said Ariel Fox Johnson, a senior adviser to San Francisco-based Common Sense Media, a children's online advocacy group.
She pointed to improving technology, as well as the establishment of trade groups, technical protocols and certification schemes standardizing evaluation of the various tools' effectiveness.

AGE-ASSURANCE MARKET MATURES

Social media companies can now often confidently guess a person's age group using digital breadcrumbs like the year an account was established or the type of content it views, they said, while a burgeoning industry of age assurance vendors like Yoti, k-ID and Persona offer additional layers of checks via automated tools like face scans and machine-based analysis of government IDs. At the app-store level, too, Apple and Alphabet's Google have rolled out tools that allow parents to indicate their child's age range to app developers.

"The tech definitely has gotten better, not just for age verification specifically but for overall identity verification," said Merritt Maxim, a vice president at Massachusetts-based research firm Forrester. "That, in turn, has driven down the average cost of verification, so that where you were using it five years ago only for higher-value types of transactions, now you can use it for pretty much anything without a significant financial impact."

Vendors generally charge well under $1 per check for basic machine-only age assurance tools, though for large volumes the price is often as low as single-digit cents, said industry executives. More costly traditional processes like human confirmation and triangulation of personal data that were standard a decade ago are still available at a premium, but are needed less frequently, the executives said.

Independent evaluations back up executives' descriptions of rapid progress. According to an ongoing study run by the U.S.
National Institute of Standards and Technology (NIST), face-scanning software from firms including Yoti - which performs checks for TikTok and Meta's Facebook, Instagram and Threads - was off in its age estimations by an average of 4.1 years as of initial testing in 2014, while by 2024 that average had dropped to 3.1 years, and is currently 2.5 years.

FACE-SCANNING GAINS PRECISION

UK-based Yoti said the performance of its latest face analysis model due out in April surpasses that of models it submitted for the NIST and Australian studies, with an average error of only 1.04 years for kids in regulators' target age range of 14 to 18. Persona, a San Francisco-based identity verification firm used by OpenAI and Reddit, touts a similar average error of 1.77 years for the 13-to-17-year-old age range.

A report commissioned by the Australian government likewise determined last year that photo-based age estimation products were broadly accurate, although it acknowledged that users within three years of the law's age cutoff of 16 were in a "grey zone where system uncertainty is higher" and recommended they be diverted to "supplementary assurance methods, such as ID-based verification or parental consent."

The systems also struggle more with certain skin types, with grainier imagery captured by older phones and when using privacy-protective "on-device" data processing, which entails performing a check entirely on the person's phone without sending their data out to a cloud server, executives said. For instance, systems using on-device processing are less likely to catch attempts by enterprising youngsters to appear older than they are, said Rick Song, CEO of San Francisco-based Persona. Common tricks used by teens include donning masks, applying heavy makeup or fake facial hair, or scanning the plastic faces of action figures instead of their own, he said.
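The "average error" figures that NIST and the vendors report are mean absolute errors measured in years. A tiny illustration with made-up ages shows how the metric is computed:

```python
# Mean absolute error in years -- the accuracy metric cited for face-based
# age estimation. The ages below are invented purely for illustration.

def mean_absolute_error(true_ages, predicted_ages):
    """Average absolute gap, in years, between true and estimated ages."""
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

true_ages = [15, 16, 17, 21, 34]   # ground-truth ages
predicted = [17, 15, 20, 19, 33]   # model estimates
print(mean_absolute_error(true_ages, predicted))  # 1.8
```

An average error of 2.5 years means estimates near a legal cutoff frequently land on the wrong side of it, which is why the Australian report recommends fallback checks for users in the "grey zone."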
Still, said executives, facial age estimation can provide a digital version of the kind of screening performed daily at bars and liquor stores in the offline world. "If you look young, you can be challenged, and you may have to provide your ID," said Robin Tombs, CEO of London-based Yoti. He added that social media services generally require fewer face scans and ID checks than porn or gambling sites because they already have reams of personal information on their users. This means they can lean more on an age assurance method called "inference" -- involving analysis of online activities, connected financial information and other signals -- to satisfy regulators' requirements.

The 10 social media companies included in Australia's teen ban all declined Reuters requests for data on the effectiveness of their age assurance tools.

EARLY IMPLEMENTATION RESULTS

Australia's internet regulator, the eSafety commissioner, has said it will collect population data for two years to assess the ban's impact and publish first results later this year. Already, companies have locked 4.7 million suspected underage accounts since the law came into effect in December, it said, although industry participants have told Reuters that some of the accounts were likely underage Google accounts that were prevented from logging in to YouTube, regardless of whether they were active.

Meta said it took down about 550,000 Instagram, Facebook and Threads accounts suspected to be underage in the first weeks of the Australian ban. Snapchat said it took down about 415,000. Regulators elsewhere are watching carefully. European Commission President Ursula von der Leyen is set to discuss age verification during an upcoming visit to Canberra, according to a European lawmaker briefed on her agenda.
The United Kingdom, which requires age verification for porn websites and is considering tightening child safety rules for social media and AI chatbots as well, is likewise swapping notes with Australian counterparts.

Early results from the Australian experiment should be taken with a grain of salt, as companies affected by the ban generally were doing the bare minimum to comply with legal requirements, said Iain Corby, the executive director of the Age Verification Providers Association, a trade association that represents about three dozen vendors including Yoti and Persona. In some cases, he added, the social media companies asked AVPA member firms to turn off controls that make the age checks more robust.

"They are extremely worried this is going to be contagious and be a policy that is adopted around the world, so they are not really motivated for it to be a glowing success," he said. "They are testing the regulator's patience to see what they can get away with."

(Reporting by Katie Paul in New York, Supantha Mukherjee in Stockholm and Byron Kaye in Sydney; Editing by Kenneth Li and Matthew Lewis)
New state laws requiring age verification to protect minors are forcing millions of adults through digital checkpoints using AI-powered facial recognition. Half of U.S. states now mandate platforms verify users, while vendors claim technology has improved with costs dropping to single-digit cents per check. Privacy advocates warn the systems create surveillance infrastructure and data security risks.
Roughly half of U.S. states have enacted or are advancing kids' online safety laws that require platforms to block underage users, creating a sweeping transformation in how Americans access digital content [1]. The legislation forces companies operating adult content sites, online gaming services, and social media platforms to implement age verification checkpoints that screen everyone attempting to enter, pulling millions of adult users into mandatory identity checks they never requested. "The regulations are moving in many different directions at once," said Joe Kaufman, global head of privacy at Jumio, one of the largest digital identity-verification platforms, describing the patchwork of state laws that vary in technical demands and compliance expectations [1].

Source: Market Screener
The movement extends beyond U.S. borders. Three months after Australia launched a landmark ban on teen social media accounts, regulators across Europe, in Brazil, and in multiple U.S. states are moving to emulate it [2]. California Governor Gavin Newsom joined the call last month, while Republican President Donald Trump is reportedly "taking an interest" in age limits, according to his daughter-in-law [2]. Social media company Discord announced plans in February to roll out mandatory age verification globally, though the proposal quickly drew backlash from users concerned about submitting selfies or government IDs, leading the company to delay the launch until the second half of this year [1].
Online age-verification systems now rely heavily on artificial intelligence, particularly facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone meets age requirements [1]. Recent advancements in artificial intelligence have boosted effectiveness and slashed costs of age-gating tools, according to interviews with regulators, child safety advocates, independent researchers, and vendors who perform checks for major platforms including TikTok, Facebook owner Meta, and OpenAI [2].

Vendors generally charge well under $1 per check for basic machine-only age assurance technology, though for large volumes the price often drops to single-digit cents [2]. "The tech definitely has gotten better, not just for age verification specifically but for overall identity verification," said Merritt Maxim, a vice president at Forrester, adding that declining costs mean verification can now be used "for pretty much anything without a significant financial impact" [2].
According to an ongoing study by the U.S. National Institute of Standards and Technology, face-scanning software from firms including Yoti showed dramatic improvement, with average age estimation errors dropping from 4.1 years in 2014 to 2.5 years currently [2]. UK-based Yoti, which performs checks for TikTok and Meta's Facebook, Instagram and Threads, claims its latest model due in April achieves an average error of only 1.04 years for the 14-to-18 age range [2].
Civil liberties advocates warn that expanding online age-verification systems represent a structural shift in how identity becomes tied to online behavior, with age verification tying users' "most sensitive and immutable data" -- names, faces, birthdays, home addresses -- to their online activity [1]. "Having another way to be forced to provide that information is intrusive to people," said Heidi Howard Tandy, a partner at Berger Singerman specializing in intellectual property and internet law [1].

While vendors emphasize privacy protections -- Discord said facial analysis would occur on users' devices with submitted data deleted immediately [1] -- data collection practices vary widely. Rivka Gerwitz Little, chief growth officer at identity-verification platform Socure, said the company may store little or no information in lightweight age-estimation scenarios, but in fuller identity verification contexts such as gaming and fraud prevention requiring ID scans, certain adult verification records may be retained for up to three years [1].
Concentrating large volumes of identity data among a small number of verification vendors creates attractive targets for hackers and government demands, advocates warn [1]. Earlier this year, Discord disclosed a data breach that exposed ID images belonging to approximately 70,000 users through a compromised third-party service, highlighting data security risks associated with storing sensitive identity information [1]. Last week, a court decision in Virginia citing the First Amendment sided with concerns about surveillance striking at the foundation of the free and open internet [1].
Vendors acknowledge the challenge of balancing child safety with user friction. "We're in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible," said Gerwitz Little, adding that excessive data collection creates friction that users resist [1]. Many users perceive mandatory identity checks as invasive, potentially driving them toward workarounds including prepaid cards or alternative credentials, or toward unauthorized distribution channels [1]. "It's going to cause a piracy situation," Tandy warned.

Technical limitations persist despite improvements. Systems struggle more with certain skin types, grainier imagery captured by older phones, and when using privacy-protective "on-device" data processing [2]. A report commissioned by the Australian government determined users within three years of age cutoffs remain in a "grey zone where system uncertainty is higher," recommending supplementary methods like ID-based verification or parental consent [2].

As state laws continue proliferating and political momentum builds across party lines, the tension between protecting minors online and preserving adult privacy will likely intensify, with courts, regulators, and technology vendors racing to define acceptable boundaries for digital identity in an increasingly age-gated internet.

Summarized by Navi