3 Sources
[1]
Facebook and Instagram are using AI bone structure analysis to identify photos of kids
Facebook and Instagram have a new way to detect and remove users under 13: AI bone structure analysis. In a blog post on Tuesday, Meta -- Facebook and Instagram's parent company -- says its AI system will scan photos and videos posted to its platforms for "general themes and visual cues," including height and bone structure. "We want to be clear: this is not facial recognition," Meta says in the blog post, adding that it "does not identify the specific person in the image." This system is part of Meta's efforts to keep kids under 13 off its platforms, and will also analyze posts, comments, bios, and captions to search for "contextual clues" that someone might be underage. Meta's AI-powered facial analysis, which is only available in "select" countries including the US ahead of a wider rollout, seems similar to the face-scanning tech offered by age verification services like Yoti and k-ID. Facebook and Instagram will deactivate accounts identified as underage, and the owner will need to verify their age to prevent the account from being deleted. The announcement comes just days after a New Mexico jury found that Meta violated state law by misleading customers about the safety of its platforms and failing to protect children from child predators. Meta must pay $375 million as a result, and may have to implement changes that the company has already threatened to leave the state over. Separately, Meta is expanding the technology it uses on Instagram to automatically identify and place users between 13 and 18 into Teen Accounts. These accounts come with stricter content controls, block messages from strangers, and prevent users under 16 from livestreaming. Instagram rolled out the tech in 2024, and now Facebook will do the same for users in the US, followed by a rollout in the UK and EU in June.
In its announcement, Meta continues to advocate for age verification at the app store and operating system level, an approach that's gaining traction in Congress and some states, including California and Colorado.
[2]
Meta AI will analyze faces of teen users 'but it's not face recognition'
Meta's latest attempt to comply with age verification requirements in Europe, Brazil and the US is to roll out AI-powered tech to analyse the faces of teenage users of Facebook and Instagram. The company says AI analysis will be used to estimate the ages of faces but that it does not amount to face recognition ... Regulators around the world are requiring social media companies to get far better at identifying and blocking users below the age of 13. Additionally, teenagers in the 13-18 range need to be given age-appropriate feeds. The social media network already uses AI to try to pick up clues as to the age of its users. This includes using AI technology to analyze entire profiles for contextual clues -- such as birthday celebrations or mentions of school grades -- to determine if an account likely belongs to someone underage. "We look for these signals across various formats, like posts, comments, bios, and captions, and we're continuing to expand this technology across additional parts of our apps like Instagram Reels, Instagram Live, and Facebook Groups," Meta says. The company is now adding visual analysis of the faces of users: "This technology allows our AI to scan photos and videos for visual clues about a person's age that text might miss. We want to be clear: this is not facial recognition. Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone's general age; it does not identify the specific person in the image." The company has also renewed its call for the legal responsibility for age verification to be passed to app stores rather than individual developers: "While we're investing heavily in our own age assurance technology, we know that no single company can solve this challenge alone. We believe legislation should require app stores to verify age and provide apps and developers with this information so that they can provide age-appropriate experiences, like Teen Accounts."
The company claims that 88% of US parents support this approach.
[3]
Meta turns to AI in age enforcement efforts
Meta announced on Tuesday it will be utilizing artificial intelligence to help remove users under 13 from its platforms as the technology giant continues efforts in the kids safety space amid scrutiny in state courts and Congress. In a blog post, Meta said it is developing advanced AI to "analyze entire profiles for contextual clues" like birthday celebrations or discussions about school grades in posts, bios, comments or captions to determine if a user is likely underage. Should it determine the account may be used by a minor under 13, the company said the profile will be deactivated and the account holder will be required to go through the system's age verification process to stop the account from being deleted. Meta, the parent of Facebook and Instagram, added that it will also integrate visual analysis, giving AI the ability to scan videos or photos for clues about a person's age that comments, posts, or bios may miss. The company emphasized this is not facial recognition. "Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone's general age," the company wrote. "It does not identify the specific person in the image." Some of the advanced features are currently available in select countries, but Meta said it is working toward a broader rollout. Users will also have an easier time reporting suspected underage accounts, and human review teams will be assisted by AI models trained on standard evaluation criteria. "In our testing, this AI-driven review delivers higher accuracy and faster resolutions than human review alone, ensuring that these accounts are addressed with more speed and reliability," the executives wrote. Meta launched its Teen Account program in 2024; these accounts are private by default for users under 18. Teen accounts have to manually accept new followers, and can only be messaged, tagged or mentioned by people they follow.
Since its release, the tech giant has updated features for these accounts. The company said Tuesday it is expanding its new technology that automatically looks for suspected underage Instagram accounts to 27 new countries in the EU and Brazil. Parents will also get notifications this month on Facebook and Instagram on how to check and confirm their children's ages on Meta platforms. As it hosts some age verification checks on its platform, the company has pushed for legislation mandating app stores verify age and share the information with app developers. App store hosts like Apple and Google have advocated that the onus be shared between the stores and app developers, and Congress has yet to come to a consensus on the issue. The changes come as Meta fights off kids safety claims in multiple state courts. A jury in New Mexico determined in a landmark ruling last month that Meta was liable for compromising children's safety online. The company was ordered to pay $375 million in damages for violating New Mexico's Unfair Practices Act, which prohibits unfair, deceptive and misleading business practices across the state. A bench trial kicked off Monday for a judge to review the New Mexico attorney general's requested protections for users under 18. Meta, alongside Google's YouTube, was also found liable by the jury, which determined the companies were negligent in the design or operation of their platforms.
Meta announced AI-powered systems that analyze bone structure and height to identify users under 13 on Facebook and Instagram. The parent company insists this isn't facial recognition, but the technology arrives as Meta faces a $375 million penalty from New Mexico for failing to protect children online.
Meta is rolling out AI-powered systems that scan photos and videos for physical characteristics like bone structure and height to identify underage users on Facebook and Instagram [1]. The parent company announced Tuesday that its AI will analyze visual cues alongside contextual clues found in posts, comments, bios, and captions to detect users under 13 [2]. When the system identifies a suspected underage account, it will be deactivated, and the owner must complete age verification to prevent deletion [3].
Source: The Hill
The company emphasizes this is not facial recognition technology. Meta states the AI looks at general themes and visual cues to estimate someone's general age without identifying specific individuals [1]. The facial analysis technology, currently available in select countries including the US, resembles systems offered by age verification services like Yoti and k-ID [1]. Beyond visual analysis, Meta's AI examines entire profiles for contextual clues that might indicate age, such as birthday celebrations or mentions of school grades, across various formats including Instagram Reels, Instagram Live, and Facebook Groups. The company claims that in testing, this AI-driven review delivers higher accuracy and faster resolutions than human review alone [3].

Meta is also expanding technology that automatically identifies users between 13 and 18 and places them into Teen Accounts. These accounts feature stricter privacy settings, including private-by-default profiles, content controls that block messages from strangers, and restrictions preventing users under 16 from livestreaming [1]. Instagram rolled out these protections in 2024, and Facebook will implement the same for US users, followed by a rollout in the UK and EU in June [1]. The company announced it is expanding this technology to 27 new countries in the EU and Brazil [3].
Source: 9to5Mac
The announcement follows a landmark ruling in which a New Mexico jury found Meta violated state law by misleading customers about platform safety and failing to protect children from child predators [1]. Meta must pay $375 million in damages for violating New Mexico's Unfair Practices Act [3]. The company faces ongoing legal challenges in multiple state courts over online safety concerns, with a bench trial beginning Monday to review protections for users under 18 [3].

Regulators around the world now require social media companies to identify and block users below age 13 while providing age-appropriate feeds for teenagers in the 13-18 range [2]. These regulatory pressures are forcing rapid innovation in age assurance technology.

Despite investing heavily in its own systems, Meta continues advocating for app store age verification, arguing that legislation should require platforms like Apple and Google to verify age and share this information with developers [2]. This approach is gaining traction in Congress and some states, including California and Colorado [1]. Meta claims 88% of US parents support this approach [2]. However, app stores have advocated that responsibility be shared between stores and app developers, and Congress has yet to reach consensus [3]. Parents will receive notifications this month on how to check and confirm their children's ages on Meta platforms [3].