Australia targets app stores and search engines in sweeping AI age verification crackdown

Australia's internet regulator may require app stores and search engines to block AI services that lack age verification by March 9. A Reuters review found that only 9 of 50 popular AI platforms have rolled out or announced age assurance systems, while 30 show no sign of compliance. The move follows Australia's groundbreaking social media ban for teenagers and reflects growing concern about youth mental health and AI chatbot usage.

Australia Expands AI Regulation to Digital Gatekeepers

Australia's eSafety commissioner has signaled it may take enforcement action against app stores and search engines that provide access to AI services failing to implement age verification by March 9. The warning marks one of the most aggressive AI regulation efforts globally, extending the country's youth protection measures beyond its December 2024 social media ban for teenagers [1]. "eSafety will use the full range of our powers where there is non-compliance," a spokesperson stated, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services" [2].

The new rules require internet services, including OpenAI's ChatGPT and companion chatbots, to restrict Australians under 18 from accessing pornography, extreme violence, self-harm and eating disorder content. Companies face fines of up to A$49.5 million ($35 million) for non-compliance [3]. The AI age crackdown positions Australia as a global leader in efforts to protect minors from harmful content, following its pioneering social media restrictions, which prompted similar commitments from world leaders.

Widespread Non-Compliance Among AI Platforms

A Reuters review conducted one week before the deadline revealed alarming gaps in compliance across the AI industry. Of the 50 most popular text-based AI products, only nine had rolled out or announced plans for age assurance systems [1]. Another 11 platforms had implemented blanket content filters or planned to block all Australians from using their services, leaving 30 with no apparent steps taken to follow the new rules. The review assessed each platform's response to prompts requesting restricted content, its moderation policies, its published terms of service, and direct statements to Reuters.

Most prominent chat-based assistants, including ChatGPT, Replika and Anthropic's Claude, had started rolling out age assurance systems or blanket filters, while Character.AI cut off open-ended chat for under-18s. Among companion chatbots, however, three-quarters had no functioning or planned filtering or age verification, and one-sixth lacked even a published email address for reporting suspected breaches [5]. Elon Musk's Grok, under investigation globally for suspected failure to stop production of synthetic sexualized imagery of children, had no age assurance measures or text-based content filtering, Reuters found.

Growing Concerns About Youth Mental Health and Chatbot Usage

The regulatory push stems from mounting evidence about AI's impact on young users. Australia's eSafety regulator has received reports of children as young as 10 talking to AI-powered interactive tools for up to six hours a day [1]. Officials expressed concern that AI companies are "leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage" [3]. Researchers caution that such platforms may be more harmful to youth mental health than social media.

OpenAI and companion chatbot startup Character.AI have already faced wrongful death lawsuits over their interactions with young users. OpenAI also acknowledged this week that it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without notifying authorities [5]. While Australia has not yet seen reports of chatbot-linked violence or self-harm, the proactive stance reflects a determination to prevent such incidents.

Tech Giants Face Pressure on Online Child Safety

Apple, the top app store operator, stated on its website that it would use "reasonable methods" to stop minors from downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without specifying what those methods are [1]. Google, Australia's dominant search engine provider and second-largest app store operator, declined to comment. The question of which parties bear responsibility for online safety is being debated worldwide; in the US, Apple and Google have lobbied to delegate the task to platforms rather than app store operators [2].

Jennifer Duxbury, head of policy at internet industry group DIGI, who led drafting of the AI code, noted that while eSafety was attempting to notify chatbot services about the new rules, "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them" [1]. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls."

Global Implications and What's Next

Australia's approach signals a broader trend of age-targeted regulation spreading beyond social media to the entire digital ecosystem. France, Spain, the UK, and New Zealand are all exploring similar age limits on social media and online services for minors under 16 [4]. The focus on digital gatekeepers like app stores and search engines represents a strategic shift, targeting chokepoints where access can be controlled more effectively than by policing individual services.

As the March 9 deadline approaches, the AI industry faces a critical test of its willingness to prioritize online child safety over unfettered access. Whether Australia's aggressive stance on content filtering and age verification will become the global standard remains uncertain, but the country's leadership on youth mental health protection is already influencing policy discussions worldwide. The balance between protecting minors and preserving privacy, access, and civil liberties will define how governments regulate AI services in the coming years.

© 2026 Triveous Technologies Private Limited