4 Sources
[1]
Exclusive: Australia says it may go after app stores, search engines in AI age crackdown
SYDNEY, Mar 2 (Reuters) - Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week.

The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence, while researchers caution that such platforms are more harmful to youth mental health than social media.

Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting an outpouring of world leaders saying they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology.

From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million).

"eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services".

OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities.

Australia is yet to experience reports of chatbot-linked violence or self-harm, but the regulator has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said.

Top app store operator Apple (AAPL.O) did not respond but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions that are introducing age restrictions, without specifying the methods. A spokesperson for Google (GOOGL.O), Australia's dominant search engine provider and No.2 app store operator, declined to comment.

Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them".

COMPLIANCE IN THE MINORITY

A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters. Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found.

Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority.

Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required.

Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment.

Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said.

($1 = 1.4085 Australian dollars)

Reporting by Byron Kaye; Editing by Saad Sayeed
[2]
Australia says it may go after app stores, search engines in AI age crackdown - The Economic Times
Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week.

The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence, while researchers caution that such platforms are more harmful to youth mental health than social media.

Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting an outpouring of world leaders saying they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology.

From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million).

"eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services".

OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week that it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities.

Australia is yet to experience reports of chatbot-linked violence or self-harm, but the regulator has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said.

Top app store operator Apple did not respond but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions that are introducing age restrictions, without specifying the methods. A spokesperson for Google, Australia's dominant search engine provider and No.2 app store operator, declined to comment.

Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them".

Compliance in the minority

A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters. Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found.

Most large chat-based search assistants, such as ChatGPT, Replika and Anthropic's Claude, had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority.

Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required.

Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment.

Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said.

($1 = 1.4085 Australian dollars)
[3]
After Social Media, Australia Vows Crackdown On AI Services Over Age Verification Breaches: Report - Alphabet (NASDAQ:GOOGL)
Australia's internet regulator, eSafety, is reportedly contemplating strict measures against artificial intelligence (AI) services that are not adhering to age verification rules. The regulator's decision follows a review that found over half of these services had not publicly committed to meeting the compliance deadline set for the following week, Reuters reported on Monday.

From March 9, internet services in Australia, including AI tools like OpenAI's ChatGPT and other chatbots, will be mandated to prevent Australians under 18 from accessing explicit content, or they could attract fines of up to A$49.5 million ($35 million). The commissioner's spokesperson told the publication that eSafety would use its "full range" of powers in case of non-compliance, and added that this could include action against "gatekeeper services such as search engines and app stores that provide key points of access to particular services".

This is one of the most severe global efforts to regulate AI companies, which are increasingly facing lawsuits for their inability to curb harmful content. In December, Australia became the first country to bar children under 16 from using major social platforms.

Global Scrutiny of AI Platforms Grows

Just a month later, Elon Musk's AI chatbot Grok faced a formal investigation by Ireland's privacy regulator over concerns about how it processes personal data and generates sexualized content.
[4]
Australia says it may go after app stores, search engines in AI age crackdown
SYDNEY, Mar 2 (Reuters) - Australia's internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week.

The warning reflects one of the most aggressive efforts globally to rein in AI companies, which face a growing number of lawsuits for failing to stop - and even encouraging - self-harm or violence, while researchers caution that such platforms are more harmful to youth mental health than social media.

Australia in December became the first country to ban social media for teenagers, citing mental health concerns, prompting an outpouring of world leaders saying they would do the same. The country now says it is spearheading a similar crackdown on AI by putting age restrictions on the content people can access with the technology.

From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million).

"eSafety will use the full range of our powers where there is non-compliance," a spokesperson for the commissioner said, including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services".

OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits over their interactions with young users, while OpenAI acknowledged this week it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without telling the authorities.

Australia is yet to experience reports of chatbot-linked violence or self-harm, but the regulator has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day. eSafety was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage", the spokesperson said.

Top app store operator Apple did not respond but said on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions that are introducing age restrictions, without specifying the methods. A spokesperson for Google, Australia's dominant search engine provider and No.2 app store operator, declined to comment.

Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, said eSafety was trying to notify chatbot services about the new rules but "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them".

COMPLIANCE IN THE MINORITY

A week before Australia's deadline, of the 50 most popular text-based AI products, nine had rolled out or announced plans for age assurance systems, the Reuters review found. The review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters. Another 11 platforms had blanket content filters or planned to block all Australians from using their service, measures that would comply with the new law by keeping restricted content from all users, leaving 30 with no apparent steps taken to follow the new rules, the review found.

Most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters. Chatbot provider Character.AI cut off open-ended chat for under-18s. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi told Reuters they planned to comply without elaborating, while HammerAI said it would block its services from Australia initially to comply with the code. But those were the minority.

Of the companion chatbots, three-quarters had no functioning or planned filtering or age assurance, while one-sixth did not have a published email address to report suspected breaches, which is also required.

Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok's parent company, xAI, did not respond to a request for comment.

Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed," she said.
Australia's internet regulator eSafety warns it may target app stores and search engines to block AI services that fail to verify user ages. A Reuters review found over half of the 50 most popular AI platforms had not publicly committed to complying by the March 9 deadline, which requires platforms to restrict minors from harmful content or face fines of up to A$49.5 million ($35 million).
Australia's internet regulator eSafety has issued a stark warning that it may push app stores and search engines to block artificial intelligence services that fail to verify user ages, marking one of the most aggressive efforts globally to rein in AI companies[1]. The threat comes after a Reuters review found more than half of AI platforms had not made public any steps to comply by the March 9 deadline[1]. The new age restrictions on AI follow Australia's move in December to become the first country to ban social media for teenagers, citing concerns about safeguarding youth mental health.
From March 9, internet services in Australia including search tools like OpenAI's ChatGPT and lesser-known companion chatbots must restrict Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines of up to A$49.5 million ($35 million)[1]. An eSafety spokesperson confirmed the regulator would "use the full range of our powers where there is non-compliance," including "action in respect of gatekeeper services such as app stores and search engines that provide key points of access to particular services"[4]. The warning reflects mounting concerns that AI platforms are more harmful to youth mental health than social media, with researchers cautioning about the unique risks these tools pose[1].
A week before the deadline, of the 50 most popular text-based AI products, only nine had rolled out or announced plans for age assurance systems, according to the Reuters review[1]. Another 11 platforms had blanket content filters or planned to block all Australians from using their service, leaving 30 non-compliant AI services with no apparent steps taken to follow the new rules[4]. While most large chat-based search assistants such as ChatGPT, Replika and Anthropic's Claude had started rolling out age assurance systems or blanket filters, three-quarters of companion chatbots had no functioning or planned filtering or age verification[2]. Elon Musk's chat-based search tool Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualized imagery of children, had no age assurance measures or content filtering, Reuters found.
The regulator has reported being told about children as young as 10 talking to AI-powered interactive tools up to six hours a day, despite Australia not yet experiencing reports of chatbot-linked violence or self-harm[2]. eSafety expressed concern that "AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage," according to the spokesperson. The urgency of these new regulations is underscored by recent wrongful death lawsuits faced by OpenAI and companion chatbot startup Character.AI over their interactions with young users[3]. OpenAI also acknowledged this week that it deactivated the ChatGPT account of a teen mass shooting suspect in Canada months before the attack, without notifying authorities[4].
Top app store operator Apple did not respond to requests for comment but stated on its website last week that it would use "reasonable methods" to stop minors downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without specifying the methods. A spokesperson for Google, Australia's dominant search engine provider and second-largest app store operator, declined to comment[2]. Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, emphasized that "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them"[4]. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls". The approach of blocking AI platforms through app stores and search engines could set a global precedent for how governments protect minors from harmful content in the AI era.