8 Sources
[1]
Exclusive: Australia says it may go after app stores, search engines in AI age crackdown
SYDNEY, Mar 2 (Reuters) - Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week. The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence while researchers caution that such platforms are more harmful to youth mental health than social media. Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting an outpouring of world leaders saying they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology. From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million). "eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services". OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities. 
Australia is yet to experience reports of chatbot-linked violence or self-harm, but the regulator has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said. Top app store operator Apple (AAPL.O) did not respond but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions that are introducing age restrictions, without specifying the methods. A spokesperson for Google (GOOGL.O), Australia's dominant search engine provider and No.2 app store operator, declined to comment. Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them". COMPLIANCE IN THE MINORITY A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters. Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found. 
Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority. Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required. Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said. ($1 = 1.4085 Australian dollars) Reporting by Byron Kaye; Editing by Saad Sayeed
[2]
Australia will consider requiring app stores to block AI services without age verification
Australia's government may take a strict stance on ensuring younger users cannot access AI chatbots. Reuters reports that Australian regulators may require app storefronts to block AI services that do not implement age verification for restricting mature content by March 9. "eSafety will use the full range of our powers where there is non-compliance," a representative for the commissioner said in a statement to the publication. Those paths could include "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services." A review by Reuters found that of 50 leading text-based AI chat services in the region, only nine had introduced or shared plans for age assurance. Eleven services reportedly "had blanket content filters or planned to block all Australians from using their service," according to the report, leaving a large number that had not taken public action a week ahead of the country's deadline. Failure to comply could see AI companies face fines of up to A$49.5 million ($35 million). The question of which parties are responsible for keeping children from accessing potentially harmful content is being debated around the world. In the US, for instance, Apple and Google have been lobbying to have the task delegated to platforms rather than app store operators. The language from the Australian regulators about all stores is hardly definitive at this stage, but given the breadth of its sweeping ban on the use of social media and some highly social digital platforms for citizens under age 16 enacted last year, an aggressive stance seems to align with leaders' priorities.
[3]
Australia may push Apple to block AI apps under age check rules - 9to5Mac
Following last year's introduction of a ban on social media apps for teenagers, Australia is now tightening age-verification requirements for AI apps. Here are the details. Last year, Australia became the first country to ban social media apps for teenagers, in what became the first national effort to protect young people's mental health. The move followed growing global concern about the mental health impact of social media on young users, a debate that has only intensified since the release of books such as The Anxious Generation by Jonathan Haidt. Now, starting March 9, AI platforms, including services from companies such as OpenAI, will have to comply with a series of requirements designed to prevent users under 18 from accessing pornography, extreme violence, self-harm, or eating disorder content. The move also addresses concerns about excessive chatbot use among teens, including fears that emotionally manipulative design features could encourage dependency at a time when the impact of these tools remains unclear: Australia is yet to experience reports of chatbot-linked violence or self-harm, but [eSafety] has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said. As part of the new rules, Reuters reports that app stores and search engines may be required to block access to non-compliant AI services: Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week. When Reuters asked for comment, Apple declined to respond. 
Still, the company has been rolling out age-related safeguards across its platforms to comply with age-restriction laws worldwide, including systems that rely on signals automatically detected by the device. Adoption of these APIs, as well as compliance with local requirements, ultimately remains the responsibility of individual developers. Back to the Australian issue, Reuters also reported that compliance remains limited, with the majority of the 50 most popular text-based AI tools showing no clear steps toward implementing age verification or content filtering ahead of the fast-approaching deadline.
[4]
After social media, app stores and search engines are the next target for age-gating
Even app stores and search engines aren't safe from the age gate. Australia is already the first country to ban social media use for children under 16. And now, it is considering expanding its youth protection rules to target other parts of the internet, including app stores, search engines, and AI services. The country's internet regulator has said that it could press big digital gatekeepers to block access to services that don't implement robust age verification systems. What Australia is proposing In an interview with Reuters, officials from Australia's eSafety watchdog said they may extend age-gating to app stores and search engines that make it easy for minors to access AI tools and other online services without verifying age. One of the immediate focuses is on AI chat services like OpenAI's ChatGPT. These platforms can display content like graphic material, self-harm information, and other areas regulators see as potentially harmful to internet users under 18 years of age. Under their new proposal, age checks would need to be implemented by March 9, or the companies risk facing fines up to A$49.5 million (~US$35 million). Australia's push highlights a broader trend of age-targeted regulation that began with social media and is now spreading to other areas in the digital ecosystem. Apple has seemingly backed this up already, with age checks before allowing users to download certain apps in regions like Brazil, Australia, and Singapore. A global wave of age restrictions In a broader context, governments from across the globe have also been considering similar rules to limit minors' access to social media and online services: France and Spain are moving towards age limits on social media, with minimum ages and verification requirements. In the UK, social media being banned for youth under 16 is also being explored. New Zealand has proposed similar age restriction laws against social media for minors under 16. 
The bottom line The focus on online safety for the youth has clearly moved beyond just social media platforms, with Australia eyeing age checks even at the point of access (app stores, search engines, and AI tools). While regulators are widening the digital safety policy net, it remains to be seen how countries balance online protection with privacy, access, and civil liberties for young internet users worldwide.
[5]
Australia says it may go after app stores, search engines in AI age crackdown - The Economic Times
Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week. The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence while researchers caution that such platforms are more harmful to youth mental health than social media. Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting an outpouring of world leaders saying they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology. From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million). "eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services". 
OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week that it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities. Australia is yet to experience reports of chatbot-linked violence or self-harm, but the regulator has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said. Top app store operator Apple did not respond but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions that are introducing age restrictions, without specifying the methods. A spokesperson for Google, Australia's dominant search engine provider and No.2 app store operator, declined to comment. Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them". Compliance in the minority A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters. 
Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found. Most large chat-based search assistants, such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority. Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required. Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said. ($1 = 1.4085 Australian dollars)
[6]
After Social Media, Australia Vows Crackdown On AI Services Over Age Verification Breaches: Report - Alphabet (NASDAQ:GOOGL)
Australia's internet regulator, eSafety, is reportedly contemplating strict measures against artificial intelligence (AI) services that are not adhering to age verification rules. The regulator's decision follows a review that found over half of these services had not publicly committed to meeting the compliance deadline set for the following week, reported Reuters on Monday. From March 9, internet services in Australia, including AI tools like OpenAI's ChatGPT and other chatbots, will be mandated to prevent Australians under 18 from accessing explicit content, or they could attract fines up to A$49.5 million ($35 million). The commissioner's spokesperson told the publication that eSafety would use its "full range" of powers in case of non-compliance, and added that this could include action against "gatekeeper services such as search engines and app stores that provide key points of access to particular services". This is one of the most severe global efforts to regulate AI companies, which are increasingly facing lawsuits for their inability to curb harmful content. In December, Australia became the first country to bar children under 16 from using major social platforms. Global Scrutiny of AI Platforms Grows Just a month later, Elon Musk's AI chatbot Grok faced a formal investigation by Ireland's privacy regulator over concerns about how it processes personal data and generates sexualized content.
[7]
Australia warns app stores of action for age check non-compliance
Australia's internet regulator has warned that artificial intelligence platforms, search engines and app stores could face action if they fail to stop minors from accessing harmful content through AI tools from March 9, according to a Reuters report. The move expands Australia's online safety push beyond social media to AI services such as OpenAI's ChatGPT and other chatbot platforms. Under the new rules, online services operating in Australia must prevent users under 18 from accessing content related to pornography, extreme violence, self-harm and eating disorders. Companies that fail to comply risk fines of up to A$49.5 million (about $35 million). The eSafety Commissioner said it could also act against "gatekeeper services" that provide access to AI tools. A spokesperson for the regulator said: "eSafety will use the full range of our powers where there is non-compliance", including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services". A Reuters review of the 50 most popular text-based AI products found limited preparedness ahead of the deadline: Among larger AI services, ChatGPT, Replika and Anthropic's Claude have begun rolling out age checks or stronger content filters. Companion chatbot provider Character.AI has restricted open-ended chats for users under 18. However, Reuters found that three-quarters of companion chatbot services had no functioning or announced age verification systems. One-sixth did not even publish an email address to report suspected breaches, another requirement under the code. The regulator's warning signals that enforcement may not stop at AI developers. Major app stores and search operators could also be required to block non-compliant services. Apple, which runs one of the world's largest app stores, recently introduced age verification tools for developers in regions including Australia. 
Through its Declared Age Range API, Apple can identify a user's age group and, from February 24, 2026, block users in Australia from downloading apps rated 18+ unless they confirm they are adults. Apple said it would use reasonable methods to prevent minors from accessing restricted apps, but did not detail the methods. The shift towards app store-level checks has triggered debate globally about who should carry the burden of age verification: platforms themselves or gatekeepers like app stores. Critics argue that users can bypass app stores by accessing services through web browsers, raising enforcement questions. Australia became the first country in December to ban social media access for children under 16, citing mental health risks. The new AI restrictions are part of a broader effort to limit young people's exposure to harmful online content. Although Australia has not reported cases of chatbot-linked violence, the regulator said it has received reports of children as young as 10 spending up to six hours a day on AI chat tools. The eSafety spokesperson said the agency was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage." Globally, AI companies have faced lawsuits related to interactions with minors. OpenAI and Character.AI have been named in wrongful death lawsuits abroad. OpenAI also confirmed this week that it had deactivated a ChatGPT account linked to a teenage mass shooting suspect in Canada months before the attack, without notifying authorities. Australia's move is part of a wider international trend. In the United Kingdom, Prime Minister Keir Starmer has said his government will seek new powers from Parliament to introduce a minimum age for social media access and curb features such as infinite scrolling and autoplay. 
The UK is also examining whether AI chatbot providers should be clearly brought under online safety laws. In India, Union IT Minister Ashwini Vaishnaw recently said the government is in discussions with social media platforms on age-based restrictions and deepfakes. Speaking at the AI Impact Summit in New Delhi, he said, "Right now, we are in conversation regarding deepfakes, regarding age-based restrictions with the various social media platforms on what is the right way to go forward on this." He added: "Any company which operates must operate within the constitutional framework of the country in which it is operating" and said platforms must be mindful that "something which is normal in one country can be prohibited in another country." Several Indian states, including Andhra Pradesh and Goa, are examining whether to introduce age-based limits on social media. The debate has also reached courts, with the Madras High Court urging the Union government to consider stronger online child protection measures. With the March 9 deadline approaching, Australia's approach positions it among the most aggressive regulators globally in extending youth protection laws from social media platforms to AI-powered services.
[8]
Australia says it may go after app stores, search engines in AI age crackdown
SYDNEY, Mar 2 (Reuters) - Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week. The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence while researchers caution that such platforms are more harmful to youth mental health than social media. Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting an outpouring of world leaders saying they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology. From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million). "eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services". OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities. 
Australia is yet to experience reports of chatbot-linked violence or self-harm, but the regulator has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said. Top app store operator Apple did not respond but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions that are introducing age restrictions, without specifying the methods. A spokesperson for Google, Australia's dominant search engine provider and No.2 app store operator, declined to comment. Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them". COMPLIANCE IN THE MINORITY A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters. Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found. 
Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority. Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required. Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said.
Australia's internet regulator may require app stores and search engines to block AI services lacking age verification by March 9. A Reuters review found only 9 of 50 popular AI platforms have implemented age assurance systems, while 30 show no compliance steps. The move follows Australia's groundbreaking social media ban for teenagers and reflects growing concerns about youth mental health and AI chatbot usage.

Australia's eSafety commissioner has signaled it may take enforcement action against app stores and search engines that provide access to AI services failing to implement age verification by March 9. The warning marks one of the most aggressive AI regulation efforts globally, extending the country's youth protection measures beyond its December social media ban for teenagers [1]. "eSafety will use the full range of our powers where there is non-compliance," a spokesperson stated, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services" [2].

The new rules require internet services including OpenAI's ChatGPT and companion chatbots to restrict Australians under 18 from accessing pornography, extreme violence, self-harm and eating disorder content. Companies face fines of up to A$49.5 million ($35 million) for non-compliance [3]. This AI age crackdown positions Australia as a global leader in efforts to protect minors from harmful content, following its pioneering social media restrictions that prompted similar commitments from world leaders.

A Reuters review conducted one week before the deadline revealed alarming gaps in compliance across the AI industry. Of the 50 most popular text-based AI products, only nine had rolled out or announced plans for age assurance systems [1]. Another 11 platforms implemented blanket content filters or planned to block all Australians from using their services, leaving 30 with no apparent steps taken to follow the new rules. The review assessed each platform's responses to prompts requesting restricted content, its moderation policies, published terms of service, and direct statements to Reuters.

Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters, while Character.AI cut off open-ended chat for under-18s. Among companion chatbots, however, three-quarters had no functioning or planned filtering or age verification, and one-sixth lacked even a published email address for reporting suspected breaches [5]. Elon Musk's Grok, under investigation globally for suspected failure to stop production of synthetic sexualized imagery of children, had no age assurance measures or text-based content filtering, Reuters found.

The regulatory push stems from mounting evidence about AI's impact on young users. Australia's eSafety regulator has received reports of children as young as 10 talking to AI-powered interactive tools for up to six hours a day [1]. Officials expressed concern that AI companies are "leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage" [3]. Researchers caution that such platforms may be more harmful to youth mental health than social media.

OpenAI and companion chatbot startup Character.AI have already faced wrongful death lawsuits over their interactions with young users. OpenAI also acknowledged this week that it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without notifying authorities [5]. While Australia has not yet seen reports of chatbot-linked violence or self-harm, the regulator's proactive stance reflects a determination to prevent such incidents.

Apple, the top app store operator, stated on its website that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without specifying those methods [1]. Google, Australia's dominant search engine provider and second-largest app store operator, declined to comment. The question of which parties bear responsibility for online safety is being debated worldwide; in the US, Apple and Google have lobbied to delegate the task to platforms rather than app store operators [2].

Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code, noted that while eSafety was attempting to notify chatbot services about the new rules, "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them" [1]. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls."

Australia's approach signals a broader trend of age-targeted regulation spreading beyond social media to the entire digital ecosystem. France, Spain, the UK and New Zealand are all exploring similar age limits on social media and online services for minors under 16 [4]. The focus on digital gatekeepers like app stores and search engines represents a strategic shift, targeting chokepoints where access can be controlled more effectively than by policing individual services.

As the March 9 deadline approaches, the AI industry faces a critical test of its willingness to prioritize child safety over unfettered access. Whether Australia's aggressive stance on content filtering and age verification will become the global standard remains uncertain, but the country's leadership on youth mental health protection is already influencing policy discussions worldwide. The balance between protecting minors and preserving privacy, access and civil liberties will shape how governments regulate AI services in the coming years.
Summarized by Navi