5 Sources
[1]
Focus: Amid wave of kids' online safety laws, age-checking tech comes of age
NEW YORK/STOCKHOLM/SYDNEY, March 9 (Reuters) - For years, tech companies successfully resisted pressure from child safety advocates to do more to keep kids off their services, claiming technical limitations would make any attempt to restrict access for teens impractical, overly broad or a security risk. Now, a growing list of governments is concluding those hurdles are not insurmountable, and pushing ahead with aggressive new age-checking requirements for social networks, AI chatbots and porn purveyors alike.

Three months after Australia launched a landmark ban on teen social media accounts, regulators across Europe, in Brazil and in a handful of U.S. states are moving to emulate it. California Governor Gavin Newsom - seen as a likely Democratic candidate for president in 2028 - joined the call last month, while Republican President Donald Trump is also reportedly "taking an interest" in age limits, according to his daughter-in-law.

Spurring them along are escalating concerns over online abuse and teen mental health, and a recent outcry over the spread of AI-generated child sexual images, as well as increased confidence in the capabilities of "age assurance" software that backers say can suss out a person's approximate age using facial analysis, parental approval, ID checks and other digital clues.

Recent advancements in artificial intelligence have boosted the effectiveness and slashed the cost of those age-gating tools, according to Reuters interviews with more than a dozen regulators, child safety advocates, independent researchers and vendors who perform the age checks for big tech companies, including TikTok, Facebook owner Meta (META.O) and OpenAI.

"The age-assurance market has matured a lot in the last couple of years," said Ariel Fox Johnson, a senior adviser to San Francisco-based Common Sense Media, a children's online advocacy group.
She pointed to improving technology, as well as the establishment of trade groups, technical protocols and certification schemes standardizing evaluation of the various tools' effectiveness.

AGE-ASSURANCE MARKET MATURES

Social media companies now can often confidently guess a person's age group using digital breadcrumbs like the year an account was established or the type of content it views, they said, while a burgeoning industry of age assurance vendors like Yoti, k-ID and Persona offers additional layers of checks via automated tools like face scans and machine-based analysis of government IDs. At the app-store level, too, Apple and Alphabet's (GOOGL.O) Google have rolled out tools that allow parents to indicate their child's age range to app developers.

"The tech definitely has gotten better, not just for age verification specifically but for overall identity verification," said Merritt Maxim, a vice president at Massachusetts-based research firm Forrester. "That, in turn, has driven down the average cost of verification, so that where you were using it five years ago only for higher-value types of transactions, now you can use it for pretty much anything without a significant financial impact."

Vendors generally charge well under $1 per check for basic machine-only age assurance tools, though for large volumes the price is often as low as single-digit cents, said industry executives. More costly traditional processes like human confirmation and triangulation of personal data that were standard a decade ago are still available at a premium, but are needed less frequently, the executives said.

Independent evaluations back up executives' descriptions of rapid progress. According to an ongoing study run by the U.S.
National Institute of Standards and Technology (NIST), face-scanning software from firms including Yoti - which performs checks for TikTok and Meta's (META.O) Facebook, Instagram and Threads - was off in its age estimations by an average of 4.1 years as of initial testing in 2014, while by 2024 that average had dropped to 3.1 years, and is currently 2.5 years.

FACE-SCANNING GAINS PRECISION

UK-based Yoti said the performance of its latest face analysis model due out in April surpasses that of models it submitted for the NIST and Australian studies, with an average error of only 1.04 years for kids in regulators' target age range of 14 to 18. Persona, a San Francisco-based identity verification firm used by OpenAI and Reddit (RDDT.N), touts a similar average error of 1.77 years for the 13-to-17-year-old age range.

A report commissioned by the Australian government likewise determined last year that photo-based age estimation products were broadly accurate, although it acknowledged that users within three years of the law's age cutoff of 16 were in a "grey zone where system uncertainty is higher" and recommended they be diverted to "supplementary assurance methods, such as ID-based verification or parental consent."

The systems also struggle more with certain skin types, with grainier imagery captured by older phones and when using privacy-protective "on-device" data processing, which entails performing a check entirely on the person's phone without sending their data out to a cloud server, executives said. For instance, systems using on-device processing are less likely to catch attempts by enterprising youngsters to appear older than they are, said Rick Song, CEO of San Francisco-based Persona. Common tricks used by teens include donning masks, applying heavy makeup or fake facial hair, or scanning the plastic faces of action figures instead of their own, he said.
Still, said executives, facial age estimation can provide a digital version of the kind of screening performed daily at bars and liquor stores in the offline world. "If you look young, you can be challenged, and you may have to provide your ID," said Robin Tombs, CEO of London-based Yoti.

He added that social media services generally require fewer face scans and ID checks than porn or gambling sites because they already have reams of personal information on their users. This means they can lean more on an age assurance method called "inference" -- involving analysis of online activities, connected financial information and other signals -- to satisfy regulators' requirements. The 10 social media companies included in Australia's teen ban all declined Reuters requests for data on the effectiveness of their age assurance tools.

EARLY IMPLEMENTATION RESULTS

Australia's internet regulator, the eSafety commissioner, has said it will collect population data for two years to assess the ban's impact and publish first results later this year. Already, companies have locked 4.7 million suspected underage accounts since the law came into effect in December, it said, although industry participants have told Reuters that some of the accounts were likely underage Google accounts that were prevented from logging in to YouTube, regardless of whether they were active.

Meta said it took down about 550,000 Instagram, Facebook and Threads accounts suspected to be underage in the first weeks of the Australian ban. Snapchat (SNAP.N) said it took down about 415,000.

Regulators elsewhere are watching carefully. European Commission President Ursula von der Leyen is set to discuss age verification during an upcoming visit to Canberra, according to a European lawmaker briefed on her agenda.
The United Kingdom, which requires age verification for porn websites and is considering tightening child safety rules for social media and AI chatbots as well, is likewise swapping notes with Australian counterparts.

Early results from the Australian experiment should be taken with a grain of salt, as companies affected by the ban generally were doing the bare minimum to comply with legal requirements, said Iain Corby, the executive director of the Age Verification Providers Association, a trade association that represents about three dozen vendors including Yoti and Persona. In some cases, he added, the social media companies asked AVPA member firms to turn off controls that make the age checks more robust. "They are extremely worried this is going to be contagious and be a policy that is adopted around the world, so they are not really motivated for it to be a glowing success," he said. "They are testing the regulator's patience to see what they can get away with."

Reporting by Katie Paul in New York, Supantha Mukherjee in Stockholm and Byron Kaye in Sydney; Editing by Kenneth Li and Matthew Lewis
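The accuracy figures cited in the article (average errors of 4.1, 3.1 and 2.5 years) describe mean absolute error, the standard way studies like NIST's summarize how far a model's age estimates land from subjects' true ages. A minimal sketch of how that metric is computed, using invented ages purely for illustration:

```python
def mean_absolute_error(true_ages, estimated_ages):
    """Average of |true age - estimated age| across all test subjects."""
    return sum(abs(t, ) if False else abs(t - e) for t, e in zip(true_ages, estimated_ages)) / len(true_ages)

# Hypothetical test set: actual ages vs. what a face-scanning model returned.
true_ages = [15, 16, 17, 14, 18]
estimated_ages = [17, 15, 19, 16, 17]

print(mean_absolute_error(true_ages, estimated_ages))  # 1.6 (years)
```

Note that an average error of, say, 2.5 years is why the Australian report flags a "grey zone" near the age-16 cutoff: the metric says nothing about any single user, only the typical miss across many.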
[2]
Online age-verification tools spread across U.S. for child safety, but adults are being surveilled
Large volumes of sensitive identity data can become targets for government demands and hackers. But at a more fundamental level, the surveillance strikes at the foundation of the free and open internet, say civil liberties advocates, and last week a court decision in Virginia citing the First Amendment agreed.

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms -- including adult content sites, online gaming services, and social media apps -- to block underage users, forcing companies to screen everyone who approaches these digital gates.

"There's a big spectrum," said Joe Kaufman, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. "The regulations are moving in many different directions at once," he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user's device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

"Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings," Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.
Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints -- often run by specialized identity-verification vendors on behalf of websites -- rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.

Vendors say a challenge is balancing safety with how much friction users will tolerate. "We're in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible," said Rivka Gerwitz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist.

Still, many users perceive mandatory identity checks as invasive. "Having another way to be forced to provide that information is intrusive to people," said Heidi Howard Tandy, a partner at Berger Singerman who specializes in intellectual property and internet law. Some users may attempt workarounds -- including prepaid cards or alternative credentials -- or turn to unauthorized distribution channels. "It's going to cause a piracy situation," she added.

In many implementations, verification vendors -- not the websites themselves -- process and retain the identity information, returning only a pass-fail signal to the platform. Gerwitz Little said Socure does not sell verification data and that in lightweight age-estimation scenarios, where platforms use quick facial analysis or other signals rather than government documentation, the company may store little or no information.
But in fuller identity-verification contexts, such as gaming and fraud prevention that require ID scans, certain adult verification records may be retained to document compliance. She said Socure can keep some adult verification data for up to three years while following applicable privacy and purging rules.

Civil liberties advocates warn that concentrating large volumes of identity data among a small number of verification vendors can create attractive targets for hackers and government demands. Earlier this year, Discord disclosed a data breach that exposed ID images belonging to approximately 70,000 users through a compromised third-party service, highlighting the security risks associated with storing sensitive identity information.

In addition, they warn that expanding age-verification systems represent not only a usability challenge but a structural shift in how identity becomes tied to online behavior. Age verification risks tying users' "most sensitive and immutable data" -- names, faces, birthdays, home addresses -- to their online activity, according to Molly Buckley, a legislative analyst at the Electronic Frontier Foundation. "Age verification strikes at the foundation of the free and open internet," she said.

Even when vendors promise to safeguard personal information, users ultimately rely on contractual terms they rarely read or fully understand. "There's language in their terms-of-use policies that says if the information is requested by law enforcement, they'll hand it over. They can't confirm that they will always forever be the only entity who has all of this information. Everyone needs to understand that their baseline information is not something under their control," Tandy said.

As more platforms route age checks through third-party vendors, that concentration of identity data is also creating new legal exposure for the companies that rely on them.
"A company is going to have some of that information passing through their own servers," Tandy said. "And you can't offload that kind of liability to a third party." Companies can distribute risk through contracts and insurance, she said, but they remain responsible for how identity systems interact with their infrastructure. "What you can do is have really good insurance and require really good insurance from the entities that you're contracting with," she said.

Tandy also cautioned that retention promises can be more complex than they appear. "If they say they're holding it for three years, that's the minimum amount of time they're holding it for," she said. "I wouldn't feel comfortable trusting a company that says, 'We delete everything one day after three years.' That is not going to happen," she added.

Federal and state regulators argue that age-verification laws are primarily a response to documented harms to minors and insist the rules must operate under strict privacy and security safeguards. An FTC spokesperson told CNBC that companies must limit how collected information is used. While age-verification technologies can help parents protect children online, the agency said firms are still bound by existing consumer protection rules governing data minimization, retention, and security. The agency pointed to existing rules requiring firms to retain personal information only as long as reasonably necessary and to safeguard its confidentiality and integrity.
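The data-minimizing pattern described above -- a vendor handling the sensitive inputs and returning only a pass/fail signal plus an opaque reference to the platform -- can be sketched roughly as follows. All class and field names here are hypothetical illustrations, not any vendor's actual API; a real check would analyze a selfie or ID scan rather than a self-reported age.

```python
from dataclasses import dataclass
import uuid

@dataclass
class VerificationResult:
    check_id: str   # opaque reference kept for compliance/audit trails
    passed: bool    # the only substantive signal the platform receives

class AgeVerificationVendor:
    """Hypothetical vendor: sensitive inputs stay on the vendor's side."""
    MIN_AGE = 18

    def verify(self, claimed_age: int) -> VerificationResult:
        # In a real system this step would run facial analysis or an ID
        # check; those inputs are retained (or purged) by the vendor per
        # its policy and are never returned to the calling platform.
        return VerificationResult(
            check_id=str(uuid.uuid4()),
            passed=claimed_age >= self.MIN_AGE,
        )

vendor = AgeVerificationVendor()
result = vendor.verify(claimed_age=21)
print(result.passed)  # True -- the platform sees pass/fail, not identity data
```

The design choice this illustrates is the one the article's sources debate: the platform's exposure shrinks to an ID-free boolean, but the vendor becomes the concentration point for the underlying identity data.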
[3]
Amid Wave of Kids' Online Safety Laws, Age-Checking Tech Comes of Age
[4]
Amid wave of kids' online safety laws, age-checking tech comes of age - The Economic Times
[5]
Amid wave of kids' online safety laws, age-checking tech comes of age
NEW YORK/STOCKHOLM/SYDNEY, March 9 (Reuters) - For years, tech companies successfully resisted pressure from child safety advocates to do more to keep kids off their services, claiming technical limitations would make any attempt to restrict access for teens impractical, overly broad or a security risk. Now, a growing list of governments is concluding those hurdles are not insurmountable, and pushing ahead with aggressive new age-checking requirements for social networks, AI chatbots and porn purveyors alike. Three months after Australia launched a landmark ban on teen social media accounts, regulators across Europe, in Brazil and in a handful of U.S. states are moving to emulate it. California Governor Gavin Newsom - seen as a likely Democratic candidate for president in 2028 - joined the call last month, while Republican President Donald Trump is also reportedly "taking an interest" in age limits, according to his daughter-in-law. Spurring them along are escalating concerns over online abuse and teen mental health, and a recent outcry over the spread of AI-generated child sexual images, as well as increased confidence in the capabilities of "age assurance" software that backers say can suss out a person's approximate age using facial analysis, parental approval, ID checks and other digital clues. Recent advancements in artificial intelligence have boosted the effectiveness and slashed the cost of those age-gating tools, according to Reuters interviews with more than a dozen regulators, child safety advocates, independent researchers and vendors who perform the age checks for big tech companies, including TikTok, Facebook owner Meta and OpenAI. "The age-assurance market has matured a lot in the last couple of years," said Ariel Fox Johnson, a senior adviser to San Francisco-based Common Sense Media, a children's online advocacy group. 
She pointed to improving technology, as well as the establishment of trade groups, technical protocols and certification schemes standardizing evaluation of the various tools' effectiveness.
AGE-ASSURANCE MARKET MATURES
Social media companies now can often confidently guess a person's age group using digital breadcrumbs like the year an account was established or the type of content it views, they said, while a burgeoning industry of age assurance vendors like Yoti, k-ID and Persona offer additional layers of checks via automated tools like face scans and machine-based analysis of government IDs. At the app-store level, too, Apple and Alphabet's Google have rolled out tools that allow parents to indicate their child's age range to app developers. "The tech definitely has gotten better, not just for age verification specifically but for overall identity verification," said Merritt Maxim, a vice president at Massachusetts-based research firm Forrester. "That, in turn, has driven down the average cost of verification, so that where you were using it five years ago only for higher-value types of transactions, now you can use it for pretty much anything without a significant financial impact." Vendors generally charge well under $1 per check for basic machine-only age assurance tools, though for large volumes the price is often as low as single-digit cents, said industry executives. More costly traditional processes like human confirmation and triangulation of personal data that were standard a decade ago are still available at a premium, but are needed less frequently, the executives said. Independent evaluations back up executives' descriptions of rapid progress. According to an ongoing study run by the U.S.
National Institute of Standards and Technology (NIST), face-scanning software from firms including Yoti - which performs checks for TikTok and Meta's Facebook, Instagram and Threads - were off in their age estimations by an average of 4.1 years as of initial testing in 2014, while by 2024 that average had dropped to 3.1 years, and is currently 2.5 years.
FACE-SCANNING GAINS PRECISION
UK-based Yoti said the performance of its latest face analysis model due out in April surpasses that of models it submitted for the NIST and Australian studies, with an average error of only 1.04 years for kids in regulators' target age range of 14 to 18. Persona, a San Francisco-based identity verification firm used by OpenAI and Reddit, touts a similar average error of 1.77 years for the 13-to-17-year-old age range. A report commissioned by the Australian government likewise determined last year that photo-based age estimation products were broadly accurate, although it acknowledged that users within three years of the law's age cutoff of 16 were in a "grey zone where system uncertainty is higher" and recommended they be diverted to "supplementary assurance methods, such as ID-based verification or parental consent." The systems also struggle more with certain skin types, with grainier imagery captured by older phones and when using privacy-protective "on-device" data processing, which entails performing a check entirely on the person's phone without sending their data out to a cloud server, executives said. For instance, systems using on-device processing are less likely to catch attempts by enterprising youngsters to appear older than they are, said Rick Song, CEO of San Francisco-based Persona. Common tricks used by teens include donning masks, applying heavy makeup or fake facial hair, or scanning the plastic faces of action figures instead of their own, he said.
Still, said executives, facial age estimation can provide a digital version of the kind of screening performed daily at bars and liquor stores in the offline world. "If you look young, you can be challenged, and you may have to provide your ID," said Robin Tombs, CEO of London-based Yoti. He added that social media services generally require fewer face scans and ID checks than porn or gambling sites because they already have reams of personal information on their users. This means they can lean more on an age assurance method called "inference" -- involving analysis of online activities, connected financial information and other signals -- to satisfy regulators' requirements. The 10 social media companies included in Australia's teen ban all declined Reuters requests for data on the effectiveness of their age assurance tools.
EARLY IMPLEMENTATION RESULTS
Australia's internet regulator, the eSafety commissioner, has said it will collect population data for two years to assess the ban's impact and publish first results later this year. Already, companies have locked 4.7 million suspected underage accounts since the law came into effect in December, it said, although industry participants have told Reuters that some of the accounts were likely underage Google accounts that were prevented from logging in to YouTube, regardless of whether they were active. Meta said it took down about 550,000 Instagram, Facebook and Threads accounts suspected to be underage in the first weeks of the Australian ban. Snapchat said it took down about 415,000. Regulators elsewhere are watching carefully. European Commission President Ursula von der Leyen is set to discuss age verification during an upcoming visit to Canberra, according to a European lawmaker briefed on her agenda.
The United Kingdom, which requires age verification for porn websites and is considering tightening child safety rules for social media and AI chatbots as well, is likewise swapping notes with Australian counterparts. Early results from the Australian experiment should be taken with a grain of salt, as companies affected by the ban generally were doing the bare minimum to comply with legal requirements, said Iain Corby, the executive director of the Age Verification Providers Association, a trade association that represents about three dozen vendors including Yoti and Persona. In some cases, he added, the social media companies asked AVPA member firms to turn off controls that make the age checks more robust. "They are extremely worried this is going to be contagious and be a policy that is adopted around the world, so they are not really motivated for it to be a glowing success," he said. "They are testing the regulator's patience to see what they can get away with." (Reporting by Katie Paul in New York, Supantha Mukherjee in Stockholm and Byron Kaye in Sydney; Editing by Kenneth Li and Matthew Lewis)
Governments worldwide are implementing stringent age-checking requirements for social networks, AI chatbots, and adult content sites following Australia's landmark teen social media ban. Advanced facial analysis and AI-powered tools now verify ages with average errors as low as 1.04 years, at costs as low as single-digit cents per check. But the rapid expansion raises critical data privacy concerns as millions of adults face mandatory identity verification.
Three months after Australia launched its landmark ban on teen social media accounts, governments across Europe, in Brazil, and in multiple U.S. states are moving to implement similar age-checking requirements [1]. California Governor Gavin Newsom joined the push last month, while President Donald Trump is reportedly taking an interest in age limits [3]. The aggressive new online safety laws target social networks, AI chatbots, and adult content providers alike, marking a fundamental shift in how platforms must verify user ages.
The momentum stems from escalating concerns over online abuse and teen mental health, coupled with a recent outcry over AI-generated child sexual images [4]. For years, tech companies successfully resisted pressure from child safety advocates, claiming technical limitations made restricting teen access impractical or posed security risks. Now regulators are concluding those hurdles are surmountable, driven by increased confidence in the capabilities of age-assurance technology.

Recent advancements in artificial intelligence have dramatically boosted the effectiveness of age-gating tools while slashing their costs, according to interviews with more than a dozen regulators, child safety advocates, independent researchers, and vendors who perform age checks for major platforms including TikTok, Meta, and OpenAI [1]. Ariel Fox Johnson, senior adviser to Common Sense Media, notes the age-assurance market has matured significantly through improving technology and the establishment of trade groups, technical protocols, and certification schemes that standardize evaluation [3].
Social media companies can now confidently estimate age groups using digital breadcrumbs like account establishment dates or content viewing patterns [5]. A burgeoning industry of age-assurance vendors like Yoti, k-ID, and Persona offers additional verification layers via AI-powered tools, including facial analysis and machine-based checks of government IDs. At the app-store level, Apple and Alphabet's Google have rolled out tools allowing parents to indicate their child's age range to developers.

Independent evaluations document rapid progress in facial age estimation. According to an ongoing study by the U.S. National Institute of Standards and Technology (NIST), face-scanning software from firms including Yoti was off in its age estimations by an average of 4.1 years in initial 2014 testing, dropping to 3.1 years by 2024 and currently standing at 2.5 years. UK-based Yoti, which performs checks for TikTok and Meta's Facebook, Instagram, and Threads, reports that its latest face analysis model, due in April, achieves an average error of only 1.04 years for the 14-to-18 age range [3]. Persona, used by OpenAI and Reddit, touts a similar 1.77-year average error for ages 13 to 17.

Merritt Maxim, vice president at research firm Forrester, explains that improved digital identity verification technology has driven down average verification costs, enabling use for virtually any transaction without significant financial impact [4]. Vendors generally charge well under $1 per check for basic machine-only age assurance, with prices often reaching single-digit cents at large volumes [5]. Traditional processes like human confirmation, standard a decade ago, remain available at premium pricing but are needed less frequently.
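The "average error" figures quoted above are mean-absolute-error style metrics: the mean of |estimated age − true age| across a test population. A minimal sketch of how such a score is computed, using made-up sample data rather than any benchmark's real results:

```python
# Illustrative sketch of the mean-absolute-error metric used to score
# age-estimation models. The sample data below is invented; it is not
# NIST's benchmark data or any vendor's real output.

def mean_absolute_error(true_ages, estimated_ages):
    """Average of |estimated - true| across all test subjects, in years."""
    if len(true_ages) != len(estimated_ages):
        raise ValueError("age lists must be the same length")
    total = sum(abs(est - true) for true, est in zip(true_ages, estimated_ages))
    return total / len(true_ages)

# Hypothetical test subjects: (actual age, model's estimate)
samples = [(14, 16.5), (15, 13.0), (16, 17.0), (17, 18.5), (18, 16.0)]
true_ages = [t for t, _ in samples]
estimates = [e for _, e in samples]

print(round(mean_absolute_error(true_ages, estimates), 2))  # prints 1.8
```

On these five hypothetical subjects the score works out to 1.8 years; the published figures cited above are the same kind of average taken over far larger test sets.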
Roughly half of U.S. states have enacted or are advancing laws requiring platforms to block underage users, forcing companies to screen everyone approaching digital gates [2]. Joe Kaufman, global head of privacy at digital identity verification platform Jumio, notes that the patchwork of state laws varies in technical demands and compliance expectations, with regulations moving in many different directions simultaneously.

The expansion pulls millions of adult Americans into mandatory age verification gates, creating backlash from users and criticism from civil liberties advocates who warn that a free and open internet is at stake [2]. Discord announced plans in February to roll out mandatory age verification globally but delayed the launch until the second half of this year after users expressed concerns about submitting selfies or government IDs. In many implementations, verification vendors process and retain identity information, returning only pass-fail signals to platforms. Rivka Gerwitz Little, chief growth officer at identity verification platform Socure, indicated the company may retain adult verification data for up to three years in full identity-verification contexts while following privacy rules [2].
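The pass-fail pattern described above keeps sensitive inputs with the vendor while the platform sees only an over/under decision. A minimal sketch of that division of responsibility, with hypothetical names (AgeCheckVendor, SocialPlatform) that do not correspond to any real vendor's API:

```python
# Hypothetical sketch of the pass-fail age-assurance pattern: the vendor
# handles identity data; the platform only ever receives a boolean outcome.
from dataclasses import dataclass

@dataclass
class AgeCheckResult:
    passed: bool   # True if the user is judged to meet the age cutoff
    method: str    # e.g. "face_estimation" or "id_document"
    # Note: no birth date, ID image, or name is returned to the platform.

class AgeCheckVendor:
    """Stands in for a third-party vendor that processes identity data."""
    def check(self, estimated_age: float, cutoff: int) -> AgeCheckResult:
        # A real vendor would run face analysis or ID checks here; this
        # sketch just compares a pre-computed estimate to the legal cutoff.
        return AgeCheckResult(passed=estimated_age >= cutoff,
                              method="face_estimation")

class SocialPlatform:
    """Stores only the pass/fail outcome, never raw identity data."""
    def __init__(self, vendor: AgeCheckVendor, cutoff: int = 16):
        self.vendor, self.cutoff = vendor, cutoff

    def can_register(self, estimated_age: float) -> bool:
        return self.vendor.check(estimated_age, self.cutoff).passed

platform = SocialPlatform(AgeCheckVendor(), cutoff=16)
print(platform.can_register(19.2))  # True: an adult passes the gate
print(platform.can_register(13.4))  # False: a suspected-underage user is blocked
```

The design choice this illustrates is data minimization: because only the boolean crosses the boundary, a breach of the platform exposes no identity documents, though, as noted below, the vendor itself remains a concentrated target.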
Civil liberties advocates warn that concentrating large volumes of identity data among a few verification vendors creates attractive targets for hackers and government demands. Discord disclosed a data breach earlier this year that exposed ID images of approximately 70,000 users through a compromised third-party service, highlighting the security risks [2]. An Australian government report acknowledged that users within three years of the law's age cutoff of 16 face higher system uncertainty and recommended diverting them to supplementary assurance methods like ID-based verification or parental consent. Systems also struggle more with certain skin types and with grainier imagery from older phones.

The maturation of age-checking technology fundamentally alters the debate around protecting minors online. Platforms can no longer credibly claim technical impossibility when regulators worldwide are mandating implementation. Users should expect more frequent encounters with age gates requiring facial scans, ID uploads, or parental verification across social networks, AI chatbots, gaming services, and adult content sites. The balance between child safety and user friction remains contentious: excessive data collection creates resistance that may drive users toward workarounds or unauthorized distribution channels. As more jurisdictions adopt these requirements, watch for continued tension between protecting minors and preserving digital privacy rights for all users.
Summarized by Navi