4 Sources
[1]
eSafety boss wants YouTube included in the social media ban. But AI raises even more concerns for kids
Julie Inman Grant, Australia's eSafety Commissioner, today addressed the National Press Club to outline how her office will implement the Social Media Minimum Age Bill when it comes into effect in December this year. The bill, often referred to as a social media ban, prevents under-16s from having social media accounts. But Inman Grant wants Australians to consider the bill a "social media delay" rather than a ban.

When the ban was legislated in November 2024, the federal government carved out an exemption for YouTube, citing the platform's educational purpose. Inman Grant has now advised the government to remove this exemption because of the harm young people can experience on YouTube. But as she has also pointed out, there are new risks for young people that the ban won't address - especially from generative artificial intelligence (AI).

Banning YouTube

According to eSafety's new research, 37% of young people have encountered harmful content on YouTube. This was the highest percentage of any platform. In her speech, Inman Grant argued YouTube had "mastered persuasive design", being adept at using algorithms and recommendations to keep young people scrolling, and that exempting YouTube from the ban simply makes no sense in her eyes. Her advice to Communications Minister Anika Wells, which she delivered last week, is to not exempt YouTube, effectively including that platform in the ban's remit.

Unsurprisingly, YouTube Australia and New Zealand has responded with vigour. In a statement published today, the Google-owned company argues that "eSafety's advice goes against the government's own commitment, its own research on community sentiment, independent research, and the view of key stakeholders in this debate". YouTube denies it is a social media platform and claims the advice that it should be included in the ban is "inconsistent and contradictory".

But given YouTube Shorts looks and feels very similar to TikTok, with shorter vertical videos in an endlessly scrolling feed, exempting YouTube while banning TikTok and Instagram's Reels never appeared logically consistent. It also remains the case that any public YouTube video can be viewed without a YouTube account. The argument that including YouTube in the ban would stop educational uses, then, doesn't carry a lot of weight.

How will the ban work?

Inman Grant took great care to emphasise that the responsibility for making the ban work lies with the technology giants and platforms. Young people who get around the ban, or parents and carers who help them, will not be penalised.

A raft of different tools and technologies to infer the age of users has been explored by the platforms and by other age verification and assurance vendors. Australia's Age Assurance Technology Trial released preliminary findings last week. But these findings really amounted to no more than a press release. No technical details were shared, only high-level statements that the trial showed age-assurance technologies could work. These early findings did reveal that the trial "did not find a single ubiquitous solution that would suit all use cases". This suggests there isn't a single age-assurance tool that's completely reliable. If these tools are going to be one of the main gatekeepers that do or don't allow Australians to access online platforms, complete reliability would be desirable.

Concerns about AI

Quite rightly, Inman Grant opened her speech by flagging the emerging harms that will not actually be addressed by the new legislation.
Generative AI was at the top of the list. Unregulated use of AI companions and bots was of particular concern, with young people forming deep attachments to these tools, sometimes in harmful ways. Generative AI has also made the creation of deepfake images and videos much easier, making it far too easy for young people to be harmed, and to cause real harm to each other.

As a recent report I coauthored from the ARC Centre of Excellence for the Digital Child highlights, there are many pressing issues in how children and young people use and experience generative AI in their everyday lives. For example, despite the tendency of these tools to glitch and fabricate information, they are increasingly being used in place of search engines for basic information gathering, life advice and even mental health support. There are also larger challenges around protecting young people's privacy when using these tools, even when compared to the already privacy-averse social media platforms.

There are many new opportunities with AI, but also many new risks. With generative AI being relatively new, and changing rapidly, more research is urgently needed to find the safest and most appropriate ways for AI to be part of young people's lives.

What happens in December?

Social media users under 16, and their parents and carers, need to prepare for changes in young people's online experiences this December when the ban is due to begin. The exact platforms included in the ban, and the exact mechanisms to gauge the age of Australian users, are still being discussed. The eSafety Commissioner has made her case today to include more platforms, not fewer. Yet Wells has already acknowledged that "social media age-restrictions will not be the end-all-be-all solution for harms experienced by young people online, but they will make a significant impact."

Concerns remain about the ban cutting young people off from community and support, including mental health support. There is clearly work to be done on that front. Nor does the ban explicitly address concerns about cyberbullying, which Inman Grant said has recently "intensified", with messaging applications at this stage still not likely to be included in the list of banned services. It's also clear some young people will find ways to circumvent the ban.

For parents and carers, keeping the door open so young people can discuss their online experiences will be vital to supporting young Australians and keeping them safe.
[2]
Australia not banning YouTube - kids can use parents' logins
Australia's cyber-safety regulator eSafety has advised its government that YouTube is as dangerous as other social networks, opening the door for the video-streaming site to be included in the Land Down Under's plan to stop Big Tech from allowing kids under 16 to sign up for accounts.

Australia's government plans to enforce the scheme from December 10th when, per the requirements of the Online Safety Amendment (Social Media Minimum Age) Act 2024, operators of some social media services "must take reasonable steps to prevent children who have not reached a minimum age from having accounts." The responsible minister can use the Act to designate social media services that must try to stop kids signing up for accounts.

As of last November the government had named TikTok, Facebook, Snapchat, Reddit, Instagram, and X as certainties for regulation, but said that list of services represents a "minimum". The minister at the time, Michelle Rowland, promised not to use the Act's powers to regulate "messaging services, online games, and services that significantly function to support the health and education of users." The government therefore planned to exempt YouTube from regulation, an omission that many found peculiar.

Australia's May 2025 election saw its government re-elected. Prime Minister Anthony Albanese appointed a new minister to oversee the Act - Anika Wells MP - and she sought advice from eSafety about how the law could best be applied. eSafety duly delivered the advice, which included a recommendation that YouTube be regulated under the Act.

Google objected strongly to eSafety's advice, arguing that YouTube contains plenty of content that can support young users and that the law was passed without any indication that the video-streaming site would be regulated. eSafety rejected those arguments.

eSafety Commissioner Julie Inman Grant later delivered a speech in which she reported new research that found 70 percent of children aged 10 to 15 reported encountering "content associated with harm, including exposure to misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting disordered eating." Inman Grant also noted that YouTube has reportedly eased its content moderation efforts and cited such shifts as a reason to revisit which platforms the Act regulates. She also pointed out that AI applications are already putting kids at risk and suggested Australia will need to regulate them, too.

Inman Grant also reminded Australians that the Act means kids will still be able to use YouTube at home or school - so long as they sign in with an account established by an adult. Which means kids can still encounter harmful content. Inman Grant noted the Act could therefore be futile and admitted it "won't solve everything" but "will create some friction in the system" that helps to reduce harms caused by online services.

The Commissioner also noted that Australia's legislation has sparked debate on how to protect children online in other nations. "And, I can assure you, they are all beating down our door to find out just how Australia plans to take this bold regulatory action forward," she said in her speech.

Other nations may also be watching Australia's efforts to determine how tech companies can detect the age of users who try to sign up for accounts, because, as we reported last week, trials have found the tech feasible but deeply flawed. ®
[3]
YouTube should not be exempt from Australia's under-16s social media ban, eSafety commissioner says
Julie Inman Grant has urged the Albanese government to rethink its decision to carve out the video sharing platform from new rules which apply to other apps

YouTube should be included in the ban on under-16s accessing social media, the nation's online safety chief has said as she urges the Albanese government to rethink its decision to carve out the video sharing platform from new rules which apply to apps such as TikTok, Snapchat and Instagram.

The eSafety commissioner, Julie Inman Grant, also recommended the government update its under-16s social media ban to specifically address features like stories, streaks and AI chatbots which can disproportionately pose risk to young people.

The under-16s ban will come into effect in December 2025, despite questions over how designated online platforms would verify users' ages, and the government's own age assurance trial reporting last week that current technology is not "guaranteed to be effective" and face-scanning tools have given incorrect results.

Although then communications minister Michelle Rowland initially indicated YouTube would be part of the ban legislated in December 2024, the regulations specifically exempted the Google-owned video site. Guardian Australia revealed YouTube's global chief executive personally lobbied Rowland for an exemption shortly before she announced the carve out.

But in new advice to the communications minister, Anika Wells, Inman Grant warned that large online platforms were weakening their policies designed to reduce harm, and said YouTube should be included in the social media ban.

"Given the known risks of harm on Youtube, the similarity of its functionality to other online services, and without sufficient evidence demonstrating that Youtube predominately provides beneficial experiences for children under 16, providing a specific carve out for Youtube appears to be inconsistent with the purpose of the Act," Inman Grant wrote in advice to Wells.

YouTube was exempted from the rules after Rowland said it was among online services that helped young people access "education and health support they need" - a deal strongly opposed by other leading social platforms, which called it an "irrational" and "shortsighted" decision.

In her advice to Wells, Inman Grant said an exemption was "not consistent with the purpose of the [social media minimum age] obligation to reduce the risk of harm". Wells announced last week she had asked Inman Grant for advice on the rules around the under-16s ban. Inman Grant's advice was released by Wells' office ahead of a speech by the eSafety commissioner to the National Press Club on Tuesday.

Inman Grant said exempting any particular service could create issues around enforcement, noting "rapidly evolving" technology was leading to a "shifting risk profile" of certain platforms, and that naming any specific platform could quickly become outdated. She said YouTube features - such as infinite scroll, auto-play and algorithmically recommended feeds - which are also present on TikTok and Instagram, were among those meant to be captured by the social media ban.

According to advance speech extracts released by the eSafety Commission, Inman Grant will raise concern about YouTube in her press club speech, referencing a survey of 2,600 children aged 10 to 15 conducted by her office.
"Alarmingly, around 7 in 10 kids said they had encountered harmful content, including exposure to misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting disordered eating," Inman Grant will say. "Children told us that 75% of this harmful content was most recently encountered on social media. YouTube was the most frequently cited platform in our research, with almost 4 in 10 children reporting exposure to harmful content there." "This also comes as the New York Times reported earlier this month that YouTube surreptitiously rolled back its content moderation processes to keep more harmful content on its platform, even when the content violates the company's own policies." Inman Grant will also voice alarm at "platform after platform winding back their trust and safety teams and weakening policies designed to minimise harm, making these platforms ever-more perilous for our children". Inman Grant's advice also recommended the government's rules be significantly amended to better address the harms they set out to curb, including editing the wording of the ban as well as specifically listing the design features - such as endless content feeds, notifications, stories and streaks - which can disproportionately affect children. Meta, TikTok and Snapchat were unhappy with the original decision to exempt YouTube from the legislation. "It is illogical to restrict two platforms while exempting the third," TikTok's director of public policy in Australia and New Zealand, Ella Woods-Joyce, wrote in a submission to a government consultation on the ban. "It would be akin to banning the sale of soft drinks to minors but exempting Coca-Cola."
[4]
YouTube fires back at eSafety commissioner's push for platform's inclusion in under-16s social media ban
Online video hosting service accuses the nation's online safety boss Julie Inman Grant of ignoring parents and teachers

YouTube has criticised calls for it to be included in the under-16s social media ban, accusing the nation's online safety boss of ignoring parents and teachers.

The eSafety commissioner, Julie Inman Grant, has urged the government to rethink its decision to carve out the video sharing platform from the minimum social media age which will apply to apps such as TikTok, Snapchat and Instagram. YouTube has said the government should stick by its draft rules and disregard Inman Grant's advice.

"Today's position from the eSafety Commissioner represents inconsistent and contradictory advice, having previously flagged concerns the ban 'may limit young people's access to critical support'," YouTube's public policy and government relations manager, Rachel Lord, said. "eSafety's advice ignores Australian families, teachers, broad community sentiment and the government's own decision."

Inman Grant's speech to the National Press Club on Tuesday set out more details of the social media age limit - which she referred to as a "delay" rather than a "ban" - to come into force in mid-December. While there are still no details of how social media users would be age checked, she said Australians should expect "a waterfall of tools and techniques", many likely to involve artificial intelligence, such as analysing facial or hand features.

Guardian Australia is aware several social media platforms have privately expressed concern about a lack of information about their obligations under the laws, and raised doubts they would be able to build such age assurance systems with less than six months until the deadline.

Inman Grant indicated age verification would take place on individual platforms, rather than at the device or app store level, adding that many social media sites already used tools to estimate or verify users' ages. She said platforms would need to report their progress to eSafety, and demonstrate they were using tools to verify users and remove children.

However Inman Grant also conceded systems would not be perfect: "We know that companies aren't going to get it right the first time. None of these technologies are foolproof, but again, if they're using them in tandem with one another, they'll have greater levels of success."

"While the social media delay will not solve everything, it will create some friction in the system ... this world-leading legislation seeks to shift the burden of reducing harm away from parents and carers and back on to the companies themselves," Inman Grant said. "We are treating big tech like the extractive industry it has become. Australia is legitimately asking companies to provide the lifejackets and the safe guardrails we expect from almost every other consumer-facing industry."

YouTube had been pledged a carve-out by former communications minister Michelle Rowland, who listed it alongside Google Classroom and online services from ReachOut and Kids Helpline as being exempt from the ban because they help children "get the education and health support they need".

Federal government sources said the communications minister, Anika Wells, would decide within weeks whether to take the commissioner's advice to amend the draft rules. YouTube maintained its service is about video distribution and watching content, not social interactions.
Lord said YouTube had been a leader in building age-appropriate products and responding to threats, and denied it had ever changed policies to negatively impact younger users. YouTube said it removed more than 192,000 videos for violating its hate and abuse policies in the first quarter of 2025 alone, and has designed age-appropriate products specifically for young children.

Lord said the government should not change course on exempting YouTube from the ban. "eSafety's advice goes against the government's own commitment, its own research on community sentiment, independent research, and the view of key stakeholders in this debate."

The shadow communications minister, Melissa McIntosh, said the government must provide more clarity on the looming reforms. "In or out, the government needs to make its position clear on the requirements for social media platforms and families to protect our kids from the vitriol that is so prevalent online," she said. "There are more questions than answers right now, including what verification technology will be required, which platforms are in or out and what constitutes platforms taking reasonable steps to implement social media age minimum standards by 10 December 2025."
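The "waterfall of tools and techniques" Inman Grant describes, used "in tandem with one another", implies a layered design: try the cheapest age signals first, escalate to stronger ones, and only treat a user as over 16 when the combined evidence is confident enough. The Python sketch below is purely illustrative of that layering under assumptions of our own (the AgeSignal interface, the confidence threshold and the fail-closed default are invented here); it is not how any platform or vendor actually implements age assurance.

```python
"""
A minimal, hypothetical sketch of a "waterfall" age-assurance check:
several age-inference signals are tried in tandem, cheapest first, and
access is only granted when the best estimate is confident enough.
The interface, thresholds and fail-closed default are invented for
illustration and do not describe any real platform's implementation.
"""
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AgeEstimate:
    years: float       # estimated age of the user
    confidence: float  # 0.0 (no confidence) to 1.0 (certain)

# A signal inspects whatever user data it has and may return an estimate,
# e.g. self-declared age, account-history heuristics, facial-age estimation.
AgeSignal = Callable[[dict], Optional[AgeeEstimate]] if False else Callable[[dict], Optional[AgeEstimate]]

def waterfall_age_check(user: dict,
                        signals: list[AgeSignal],
                        min_age: float = 16.0,
                        confidence_needed: float = 0.9) -> bool:
    """Return True only if the user appears old enough with high confidence."""
    best: Optional[AgeEstimate] = None
    for signal in signals:
        estimate = signal(user)
        if estimate is None:
            continue  # this signal could not produce an estimate
        if best is None or estimate.confidence > best.confidence:
            best = estimate
        if best.confidence >= confidence_needed:
            break  # confident enough; skip the costlier signals further down
    if best is None:
        return False  # no signal produced an estimate: fail closed
    return best.years >= min_age and best.confidence >= confidence_needed

# Example: a declared age alone is treated as low-confidence, so on its own
# it does not pass the check - echoing "none of these technologies are
# foolproof" and the need to combine tools.
declared_age = lambda u: AgeEstimate(u["declared_age"], 0.3) if "declared_age" in u else None
print(waterfall_age_check({"declared_age": 21}, [declared_age]))  # prints False
```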
Australia's eSafety Commissioner Julie Inman Grant urges the government to include YouTube in the upcoming social media ban for under-16s, citing concerns about harmful content and persuasive design. The recommendation sparks debate among tech companies and raises questions about age verification methods and emerging AI-related risks.
Julie Inman Grant, Australia's eSafety Commissioner, has advised the government to include YouTube in the upcoming Social Media Minimum Age Bill, set to take effect in December 2025. This recommendation challenges the previous exemption granted to YouTube, citing concerns about harmful content and the platform's "persuasive design" 1.
YouTube Australia and New Zealand has strongly objected to this recommendation, arguing that it contradicts the government's own research and community sentiment. The platform maintains that it is not a social media service but primarily a video distribution platform 4.
The implementation of the ban faces significant challenges, particularly in age verification. Australia's Age Assurance Technology Trial has not found a single, universally effective solution for age verification across all platforms 1. Inman Grant suggests that a combination of tools and techniques, likely involving AI, will be used to estimate or verify users' ages 4.
While addressing social media risks, Inman Grant also highlighted new threats posed by generative AI. These include unregulated AI companions, deepfake creation, and the use of AI for information gathering and mental health support. The rapid evolution of AI technology necessitates urgent research to ensure safe integration into young people's lives 1.
The eSafety Commissioner emphasizes that the responsibility for enforcing the ban lies with the technology platforms. Users under 16 and their parents or caregivers will not be penalized for circumventing the ban 2. However, concerns remain about potentially cutting young people off from online communities and support networks 1.
Australia's approach to regulating social media access for minors has sparked international interest. Other nations are closely watching how Australia implements this "bold regulatory action" 2. The outcome of this legislation could influence similar policies worldwide, potentially reshaping how young people interact with social media and online platforms globally.