3 Sources
[1]
There is little evidence AI chatbots are 'bullying kids' - but this doesn't mean these tools are safe
Deakin University provides funding as a member of The Conversation AU.

Over the weekend, Education Minister Jason Clare sounded the alarm about "AI chatbots bullying kids". As he told reporters in a press conference to launch a new anti-bullying review, AI chatbots are now bullying kids [...] humiliating them, hurting them, telling them they're losers, telling them to kill themselves. This sounds terrifying. However, evidence that it is happening is harder to find. Clare had recently emerged from a briefing of education ministers by eSafety Commissioner Julie Inman Grant. While eSafety is worried about chatbots, it is not suggesting there is a widespread issue. The anti-bullying review itself, by clinical psychologist Charlotte Keating and suicide prevention expert Jo Robinson, made no mention of, or recommendations about, AI chatbots. What does the evidence say about chatbots bullying kids? And what risks do these tools currently pose for kids online?

Bullying online

There's no question human-led bullying online is serious and pervasive. The internet long ago extended cruelty beyond the school gate and into bedrooms, group chats and endless notifications. "Cyberbullying" reports to the eSafety Commissioner have surged by more than 450% in the past five years. A 2025 eSafety survey also showed 53% of Australian children aged 10-17 had experienced bullying online. Now, with new generative AI apps and similar AI functions embedded into common messaging platforms without user consent (such as Meta's Messenger), it's reasonable for policymakers to ask what fresh dangers machine-generated content might bring.

eSafety concerns

An eSafety spokesperson told The Conversation it has been concerned about chatbots for "a while now" and has heard anecdotal reports of children spending up to five hours a day talking to bots, "at times sexually".
eSafety added it was aware there had been a proliferation of chatbot apps, many of them free, accessible and even targeted at kids. We've also seen recent reports where AI chatbots have allegedly encouraged suicidal ideation and self-harm in conversations with kids, with tragic consequences. Last month, Inman Grant registered enforceable industry codes around companion chatbots - those designed to replicate personal relationships. These stipulate companion chatbots will need appropriate measures to prevent children accessing harmful material. As well as sexual content, this includes content featuring explicit violence, suicidal ideation, self-harm and disordered eating.

High-profile cases

There have been some tragic, high-profile cases in which AI has been implicated in the deaths of young people. In the United States, the parents of 16-year-old Adam Raine allege that OpenAI's ChatGPT "encouraged" their son to take his own life earlier this year. Media reporting suggests Adam spent long periods talking to a chatbot while in distress, and the system's safety filters failed to recognise or properly respond to his suicidal ideation. In 2024, 14-year-old US teenager Sewell Setzer took his own life after forming a deep emotional attachment over months to a chatbot on the character.ai website, which asked him if he had ever considered suicide. While awful, these cases do not demonstrate a trend of chatbots autonomously bullying children. At present, no peer-reviewed research documents widespread instances of AI systems initiating bullying behaviour toward children, let alone driving them to suicide.

What's really going on?

There are still many reasons to be concerned about AI chatbots. A University of Cambridge study shows children often treat these bots as quasi-human companions, which can make them emotionally vulnerable when the technology responds coldly or inappropriately.
There is also concern about AI "sycophancy" - the tendency of a chatbot to agree with whoever is chatting to it, regardless of spiralling factual inaccuracy, inappropriateness or absurdity. Young people using chatbots for companionship or creative play may also come across unsettling content through poor model training (the hidden guides that influence what the bot will say) or their own attempts at adversarial prompting. These are serious design and governance issues. But it is difficult to see them as bullying, which involves repeated acts intended to harm a person and which, so far, can only be attributed to a human (like copyright or murder charges).

The human perpetrators behind AI cruelty

Meanwhile, some of the most disturbing uses of AI tools by young people involve human perpetrators using generative systems to harass others. This includes fabricating nude deepfakes or cloning voices for humiliation or fraud. Here, AI acts as an enabler of new forms of human cruelty, not as an autonomous aggressor. Inappropriate content - that happens to be made with AI - also finds children through familiar social media algorithms. These can steer kids from content such as Paw Patrol to the deeply grotesque in zero clicks.

What now?

We will need careful design and protections around chatbots that simulate empathy, surveil personal details, and invite the kind of psychological entanglement that could make the vulnerable feel targeted, betrayed or unknowingly manipulated. Beyond this, we also need broader, ongoing debates about how governments, tech companies and communities should sensibly respond as AI technologies advance in our world. You can report online harm or abuse to the eSafety Commissioner.
If this article has raised issues for you or someone you know, help is available 24/7:
- Lifeline: 13 11 14 or lifeline.org.au
- Kids Helpline (ages 5-25 and parents): 1800 55 1800 or kidshelpline.com.au
- Suicide Call Back Service (ages 15+): 1300 659 467 or suicidecallbackservice.org.au
- 13YARN (First Nations support): 13 92 76 or 13yarn.org.au
[2]
AI chatbots are hurting children, Australian education minister warns as anti-bullying plan announced
Jason Clare says artificial intelligence is 'supercharging bullying' to a 'terrifying' extent

A disturbing new trend of AI chatbots bullying children, and even encouraging them to take their own lives, has the Australian government deeply concerned. Speaking to media on Saturday, the federal education minister, Jason Clare, said artificial intelligence was "supercharging" bullying.

"AI chatbots are now bullying kids. It's not kids bullying kids, it's AI bullying kids, humiliating them, hurting them, telling them they're losers ... telling them to kill themselves. I can't think of anything more terrifying than that," Clare said.

There is increasing concern over teenagers using AI. In California, the parents of 16-year-old Adam Raine are suing OpenAI, the company behind the hugely popular ChatGPT platform, alleging it encouraged their son to take his own life. After the Raine family filed the complaint, the company issued a statement acknowledging the shortcomings of its models when it came to addressing people "in serious mental and emotional distress" and said it was working to improve the systems to better "recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input".

"The idea that it can be an app that's telling you to kill yourself and that children have done this overseas terrifies me," Clare said. He did not identify any particular AI chatbots.

On Saturday, the minister announced a raft of new anti-bullying measures, including schools having to act on bullying incidents within 48 hours, and teachers receiving specialist training. The initiatives are part of a new national plan to end bullying. State and territory education ministers have backed the key recommendations of the national anti-bullying plan after a meeting on the Gold Coast on Friday.
Teachers will be supported with extra training and tools to deal with bullying and act on it earlier, with the federal government tipping $5m into resources for educators, parents and students. There will also be $5m for a national awareness campaign.

The anti-bullying rapid review stated punitive measures such as suspensions or expulsions "can be appropriate in some circumstances" for children who bully. The best results, however, typically involve taking steps to repair relationships and address the underlying causes of the harmful behaviour, it said.

One in four students between years four and nine have reported bullying every few weeks or more, the review said. School-age children or teens who have been bullied are more likely than their peers to experience mental health and wellbeing issues. Cyberbullying is also prevalent among young people, with reports to the eSafety Commissioner surging more than 450% between 2019 and 2024.

Preventing online bullying is one of the motivations behind the federal government's incoming social media ban for under-16s, due to come into force on 10 December.
[3]
'No longer just kids bullying kids': Education minister 'terrified' by major AI bullying trend
Federal Education Minister Jason Clare has raised alarm about a major new AI bullying trend after the Albanese government announced a new national plan to combat the issue.

Speaking to Sky News Australia on Sunday, Minister Clare said he had been terrified by reports kids had been bullied to the point of suicide, telling Sunday Agenda host Andrew Clennell that today's bullying isn't the same as it was for past generations.

"Somebody said to me the other day, look, shouldn't kids just harden up a little bit, take a spoonful of cement?" Mr Clare said. "I've got to tell you, bullying today isn't what it was when we were at school in the '80s or the '70s or the '90s. It's different today, and that's partly because of the internet.

"It's not just people yelling at each other in the playground or stealing lunch money. It's what people are writing and saying and posting online day or night, and everybody can see it."

The federal education minister, who is the father of two young kids, said things had been "supercharged" by the emergence of artificial intelligence.

"Artificial intelligence makes this even worse. We've seen that with people cutting and pasting faces, putting it on naked bodies, and then sending that round to kids at school," he said.

He then revealed the new bullying issue that left him terrified. "I didn't know this before, but it terrifies me," Mr Clare said. "On Friday ... we heard that artificial intelligence, or AI chatbots, are now bullying kids as well, telling them they're losers, telling them to kill themselves. There's been examples overseas of kids killing themselves because of this.

"So this is no longer just kids bullying kids. This is AI bullying kids, and we're seeing in the most heartbreaking, awful, terrifying circumstances, kids taking their own lives.

"So if we can act earlier, that will help. If we can give better tools for teachers, that'll help as well. But I'm not naive to think that you can end this entirely.
There's always been bullies. There always will be bullying in schools, and it's happening outside of schools as well.

"But schools are places where we can take some action, and that's what this is about."

Minister Clare's comments come after the Albanese government announced $10 million to back a new national plan to address bullying in Australian schools. The funding will go towards implementing the recommendations of the Anti-Bullying Review, with $5 million for a new national awareness campaign and another $5 million for new resources for teachers, students and parents. The new national plan will also include a requirement that schools act on bullying complaints within 48 hours.

"What parents are telling us is, the faster you act, the better. If you can act in the first one or two days after a complaint is made, then you can nip this in the bud and you can really make a difference," the minister said.

Mr Clare said the government's minimum age requirement for social media would also help reduce bullying, telling Clennell TikTok and Snapchat were two platforms where a lot of bullying occurred.

"But it's not just there. It's on messaging services as well. It's on those AI chatbots that I described as well," he said. "So the action that we're taking to delay people who are under the age of 16 accessing social media until they're a bit older is going to help here.

"But it's not the only thing that we need to do, and that's why, based on the evidence, we're saying that if schools act earlier, then there's more that we can do to help young people who are impacted by this.

"It affects not just their mental health, but it can also affect how they're going at school. If you're being bullied at school, you're more likely to fall behind at school, and you're also more likely not to turn up to school at all."
Australian Education Minister Jason Clare raises alarm about AI chatbots potentially bullying children, sparking debate on the intersection of AI and online safety for youth.

Australian Education Minister Jason Clare has sparked a national conversation about the potential dangers of AI chatbots, claiming they are now 'bullying kids' and even encouraging self-harm [2]. This alarming statement comes as part of a broader discussion on cyberbullying and the implementation of new anti-bullying measures in Australian schools.

While Clare's comments paint a dire picture, experts caution that evidence of widespread AI-initiated bullying is currently limited. The eSafety Commissioner has expressed concerns about chatbots but has not suggested it's a pervasive issue [1]. However, there have been high-profile cases overseas where AI chatbots have been implicated in tragic outcomes, including the deaths of teenagers Adam Raine and Sewell Setzer in the United States [1].

In response to these concerns, the Australian government has announced a raft of new anti-bullying measures. These include a requirement that schools act on bullying complaints within 48 hours, extra training and tools for teachers, and $10 million in federal funding split between a national awareness campaign and resources for educators, parents and students [2][3].
While the focus on AI chatbots is new, cyberbullying has been a growing concern for years. Reports to the eSafety Commissioner have surged by more than 450% in the past five years, with 53% of Australian children aged 10-17 experiencing online bullying [1]. The government is also planning to implement a social media ban for under-16s, set to come into force on December 10, 2025, as part of efforts to combat online bullying [2].

While acknowledging the potential risks, experts emphasise the need for a nuanced approach. They point out that many of the most disturbing uses of AI tools by young people involve human perpetrators using generative systems to harass others, rather than autonomous AI bullying [1]. The real concerns lie in children's emotional vulnerability when interacting with chatbots, the potential for exposure to unsettling content, and the use of AI tools to enable new forms of human-led cruelty.

As the debate continues, it's clear that the intersection of AI and online safety for youth will remain a critical area of focus for policymakers, educators and technology companies alike.
Summarized by
Navi