5 Sources
[1]
Google Faces Demands to Prohibit AI Videos for Kids on YouTube
The advocates are calling for YouTube to halt investment in AI-generated videos for children, citing concerns that time spent watching such content replaces real-world activities key to children's emotional and social development. Alphabet Inc.'s Google is facing demands from child development experts to prohibit videos created with artificial intelligence from being shown or recommended to young viewers across YouTube and YouTube Kids. More than 200 children's specialists, advocacy groups and schools sent a letter to Google Chief Executive Officer Sundar Pichai and YouTube CEO Neal Mohan on Wednesday raising concerns about what they view as a lack of substance in many AI-generated YouTube videos that claim to be educational. In the letter, the advocates also criticized the perceived low quality of kids' content being mass-produced by AI generators, and the rise in creators on Google's YouTube video service that use artificial intelligence to make clips aimed at profiting off the world's youngest and most impressionable viewers. The child safety advocates worry that AI-generated material, some of it referred to as "AI slop," affects kids' attention spans and their ability to separate what's real from what's not. They also argue that time spent looking at a screen is replacing real-world activities that are key to children's emotional and social development. "There is much we don't know about the consequences of AI content for children," the group wrote. "YouTube is participating in this uncontrolled experiment by pushing AI-generated content without research demonstrating its benefits and without acknowledging the child development principles that tell us it's likely mostly harmful." 
The letter was signed by social psychologist Jonathan Haidt, whose bestselling book The Anxious Generation kick-started a global movement to fight youth harm caused by social media and smartphones, as well as by child advocacy groups like Fairplay and the National Alliance to Advance Adolescent Health. The American Federation of Teachers and several schools also signed. Google didn't immediately respond to a request for comment. AI-generated videos have become increasingly popular on YouTube, particularly those targeting toddlers and other youngsters. Some creators have found that outsourcing that work to an AI system makes it much easier and cheaper, and have even started sharing tutorials on how to build a business around spinning up videos for toddlers and babies. Mohan said in January that "managing AI slop" and "ensuring YouTube remains a place where people feel good spending their time" is a top company priority in 2026. But YouTube has also argued that not all content made with AI is "slop," and that when done right, creating with AI can even be positive. YouTube requires creators to label "altered and synthetic content," and has said that its systems and monetization policies are designed to penalize those who mass-produce low quality or spammy content. The advocates argued in the letter that these labels are "unlikely to be understood by the preliterate children who are targets for much of this AI slop." In March, Google announced an investment into Animaj, an AI animation studio focused on making YouTube content for kids, part of an effort to improve the quality of its offerings for young users. One Google executive involved called it "a real blueprint for the future," while child safety advocates criticized Google and Animaj for engaging "babies and toddlers who shouldn't have any screen time at all." They urged YouTube to halt "all investment in the creation of AI-generated videos for children." 
Wednesday's letter arrived at a time when there are other outside efforts to change the way YouTube operates. In March, a landmark jury trial on social media addiction found Google and Meta Platforms Inc. liable for harming a young user with products designed to keep her hooked. Both companies said they would appeal the verdict. Plaintiffs, consumer advocates and lawmakers, however, are now pushing the companies to change some of their most lucrative operational features, including their content algorithms.
[2]
Advocacy groups urge YouTube to protect kids from 'AI slop' videos
Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children. In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube's parent company Google, children's advocacy group Fairplay expresses "serious concern" about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organizations and individual experts such as child psychiatrists and educators. "This 'AI slop' harms children's development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development," the letter reads. "These harms are particularly acute for young children." The letter calls on YouTube to clearly label all AI-generated content and ban any AI-generated content on YouTube Kids. They also propose barring AI-generated videos from being recommended to users under 18 and implementing an option for parents to turn off AI-generated content even if their child searches for it. The letter is signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts like "The Anxious Generation" author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition. Much of this AI-generated content is fast-paced with bright colors, lively music and clickbait titles that work to grab the attention of young viewers, the letter outlines. There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of "brainrot."
Spokesperson Boot Bullwinkle said in a statement that YouTube has "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels." "We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content," Bullwinkle said. "We're always evolving our approach to stay current as the ecosystem evolves." YouTube's current policy regarding AI-generated content requires creators to disclose when content that's "realistic" is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects. YouTube said it is actively working on developing labels for YouTube Kids. In its letter, Fairplay argues that voluntary disclosure policy and what it sees as an "extremely limited" definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labeled as such. They also argue that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children "to fend for themselves or their parents to play whack-a-mole," the letter reads. Fairplay's campaign comes shortly after Google's AI Futures Fund invested $1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg. The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case. 
"Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children's time online -- including babies. AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction," said Rachel Franz, the director of Fairplay's Young Children Thrive Offline program, in a statement. "What's more, YouTube's algorithm makes it impossible for kids to avoid AI slop." Earlier this year, YouTube head Mohan listed out "managing AI slop" as one of the company's priorities for 2026. In a January blog post, he wrote that the company was "actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content."
[3]
Advocates to Google CEO: Stop YouTube AI slop from harming kids
YouTube's AI slop problem could have lifelong effects if not controlled, child experts warn, and child safety advocates are getting worried. In a letter sent to Google CEO Sundar Pichai and YouTube CEO Neal Mohan, a coalition of national organizations and child development experts is demanding a change to YouTube policies to cut down on AI slop, including an outright ban on "Made for Kids" content generated by AI. "Given the absence of evidence that AI slop is safe for children and the potential for these videos to mesmerize and harm kids, Google must take swift action to protect children on its platforms," the letter reads. Two weeks ago, YouTube announced a partnership with generative AI studio Animaj, which specializes in AI children's content and boasts billions of views across several YouTube channels aimed at infants and babies. The letter, led by child safety nonprofit Fairplay, is signed by organizations like the American Federation of Teachers, the National Black Child Development Association, and Mothers Against Media Addiction (MAMA), as well as experts like Jonathan Haidt, author of the highly cited book The Anxious Generation. The group cites growing concern that exposure to AI content can distort children's perception of reality, cause cognitive overload, and displace real world activities necessary for development. "First YouTube introduced Shorts with Made For Kids content without wondering what impact it would have on young viewers, and then -- no surprise -- AI slop started competing for kids' attention on those very feeds. It's time for platforms to start respecting the attention and minds of young children, not just treat them as a resource to be extracted," said Jenny Radesky, a developmental behavioral pediatrician and media researcher who also signed onto the letter.
The group also announced a public petition demanding YouTube implement several new safety policies addressing the proliferation of AI slop directed toward children, including clear labeling of all AI-generated content, a ban on AI-generated content in YouTube Kids, and parental controls to block it. The letter comes one week after a precedent-defining verdict in a recent case against Meta and YouTube parent company Google, in which the jury sided with a 19-year-old user who claimed the companies knew their platforms could be "dangerously addictive" and ignored warnings about user mental health. The Los Angeles jury found that both Meta and YouTube were negligent in addressing internal safety warnings and went forward with platform features that exacerbated expert concerns. "In some cases, seemingly benign animations can turn out to be sexual or violent in nature," said Sebastian Mahal, co-chair of youth-led lobby coalition Design It For Us. "Young people don't want to be targeted with this type of experience by YouTube's algorithm. After a California jury found YouTube liable for failing to protect young people on its platform, one would think YouTube would finally take its responsibility to its young users seriously." In addition to claims that Instagram's algorithms exacerbated the youth mental health crisis, particularly among teen girls, child safety advocates have long warned that YouTube is a dangerous site for young children. Rachel Franz, director of Fairplay's Young Children Thrive Offline program, told Mashable in a March interview: "If 'managing AI slop' was really YouTube's top priority this year, they would have already taken down the millions of AI-generated 'Made for Kids' videos that are designed to entrance young children, leading to more screen time and displacing the activities they need to thrive offline." YouTube is the most popular video platform for young child viewers, especially among low-income households.
Despite efforts to address AI-generated content, YouTube has yet to fully rein in the problem, and AI-generated content aimed at children has become a lucrative business. A New York Times report found thousands of low-quality AI videos in YouTube's algorithm, including ones that violated child safety policies. Currently, animated videos generated by AI do not require AI labels, and AI labels do not appear consistently on YouTube Kids. YouTube only requires labeling for synthetic media made to mimic "realistic" settings or people. In response to the new letter, Franz added, "YouTube's algorithm makes it impossible for kids to avoid AI slop. YouTube must stop shoving AI slop onto children now, before it further damages an entire generation of kids."
[4]
Advocacy Groups Urge YouTube to Protect Kids From 'AI Slop' Videos
Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children. In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube's parent company Google, children's advocacy group Fairplay expresses "serious concern" about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organizations and individual experts such as child psychiatrists and educators. "This 'AI slop' harms children's development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development," the letter reads. "These harms are particularly acute for young children." The letter calls on YouTube to clearly label all AI-generated content and ban any AI-generated content on YouTube Kids. They also propose barring AI-generated videos from being recommended to users under 18 and implementing an option for parents to turn off AI-generated content even if their child searches for it. The letter is signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts like "The Anxious Generation" author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition. Much of this AI-generated content is fast-paced with bright colors, lively music and clickbait titles that work to grab the attention of young viewers, the letter outlines. There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of "brainrot."
Spokesperson Boot Bullwinkle said in a statement that YouTube has "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels." "We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content," Bullwinkle said. "We're always evolving our approach to stay current as the ecosystem evolves." YouTube's current policy regarding AI-generated content requires creators to disclose when content that's "realistic" is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects. YouTube said it is actively working on developing labels for YouTube Kids. In its letter, Fairplay argues that voluntary disclosure policy and what it sees as an "extremely limited" definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labeled as such. They also argue that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children "to fend for themselves or their parents to play whack-a-mole," the letter reads. Fairplay's campaign comes shortly after Google's AI Futures Fund invested $1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg. The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case. 
"Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children's time online -- including babies. AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction," said Rachel Franz, the director of Fairplay's Young Children Thrive Offline program, in a statement. "What's more, YouTube's algorithm makes it impossible for kids to avoid AI slop." Earlier this year, YouTube head Mohan listed out "managing AI slop" as one of the company's priorities for 2026. In a January blog post, he wrote that the company was "actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content."
[5]
Advocacy groups urge YouTube to protect kids from 'AI slop' videos
Advocacy groups and experts are condemning YouTube for serving low-quality AI-generated videos to children, warning of developmental harm. A letter to YouTube's CEOs calls for clear labeling of AI content and a ban on such videos on YouTube Kids, citing concerns about distorted reality and hijacked attention. Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children. In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube's parent company Google, children's advocacy group Fairplay expresses "serious concern" about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organisations and individual experts such as child psychiatrists and educators. "This 'AI slop' harms children's development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development," the letter reads. "These harms are particularly acute for young children." The letter calls on YouTube to clearly label all AI-generated content and ban any AI-generated content on YouTube Kids. They also propose barring AI-generated videos from being recommended to users under 18 and implementing an option for parents to turn off AI-generated content even if their child searches for it. The letter is signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts like "The Anxious Generation" author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition. Much of this AI-generated content is fast-paced with bright colours, lively music and clickbait titles that work to grab the attention of young viewers, the letter outlines.
There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of "brainrot". Spokesperson Boot Bullwinkle said in a statement that YouTube has "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels". "We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labelling content from our own AI tools, and requiring creators to disclose realistic AI content," Bullwinkle said. "We're always evolving our approach to stay current as the ecosystem evolves." YouTube's current policy regarding AI-generated content requires creators to disclose when content that's "realistic" is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects. YouTube said it is actively working on developing labels for YouTube Kids. In its letter, Fairplay argues that voluntary disclosure policy and what it sees as an "extremely limited" definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labelled as such. They also argue that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children "to fend for themselves or their parents to play whack-a-mole," the letter reads. Fairplay's campaign comes shortly after Google's AI Futures Fund invested USD 1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg.
The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case. "Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children's time online - including babies. AI slop hypnotises young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction," said Rachel Franz, the director of Fairplay's Young Children Thrive Offline programme, in a statement. "What's more, YouTube's algorithm makes it impossible for kids to avoid AI slop." Earlier this year, YouTube head Mohan listed out "managing AI slop" as one of the company's priorities for 2026. In a January blog post, he wrote that the company was "actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content."
More than 200 child development experts and advocacy groups are calling on YouTube to prohibit AI-generated videos from being shown to young viewers. The coalition argues that low-quality AI content, dubbed 'AI slop,' harms children's ability to distinguish reality, overwhelms their learning processes, and displaces essential offline activities crucial for healthy development.
More than 200 child development specialists, advocacy groups, and educational institutions have sent a letter to Google CEO Sundar Pichai and YouTube CEO Neal Mohan demanding immediate action to protect kids from AI videos flooding the platform [1]. The coalition, led by children's advocacy group Fairplay, expresses serious concern about the proliferation of low-quality AI content, commonly referred to as AI slop, on both YouTube and the YouTube Kids app [2].
The letter, sent Wednesday morning, was signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, along with approximately 100 individual experts such as social psychologist Jonathan Haidt, author of the bestselling book The Anxious Generation [4]. The campaign represents a growing movement to address youth harm caused by social media platforms and their content algorithms.
The advocates argue that AI slop harms child development by distorting children's sense of reality, overwhelming their learning processes, and hijacking attention spans. Rachel Franz, director of Fairplay's Young Children Thrive Offline program, stated that "AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction" [2].
Much of this AI-generated content features fast-paced sequences with bright colors, lively music, and clickbait titles designed to grab the attention of young viewers [4]. The letter warns that time spent watching these videos replaces real-world activities essential to children's emotional and social development. Developmental behavioral pediatrician Jenny Radesky, who signed the letter, noted that "platforms should start respecting the attention and minds of young children, not just treat them as a resource to be extracted" [3].
The campaign arrives just weeks after Google's AI Futures Fund invested $1 million into Animaj, an AI animation studio that specializes in Made for Kids content and boasts billions of views across several YouTube channels aimed at infants and babies [4]. One Google executive called the partnership "a real blueprint for the future," while child safety advocates criticized the companies for engaging "babies and toddlers who shouldn't have any screen time at all" [1]. The letter urges YouTube to halt all investment in the creation of AI-generated videos for children, arguing that creators have found outsourcing work to AI systems makes content production much easier and cheaper, leading to mass-produced material designed primarily for profit [1].
YouTube currently requires creators to disclose when "realistic" content is made with altered or synthetic media, including generative AI. However, creators are not required to disclose when generative AI is used to create clearly unrealistic content, including animated videos and special effects [2]. Fairplay argues this voluntary disclosure policy and what it views as an "extremely limited" definition of altered content mean kids still encounter a flood of unlabeled AI-generated videos. The advocates contend these labels are "unlikely to be understood by the preliterate children who are targets for much of this AI slop" [1]. Many children watching YouTube videos cannot yet read or comprehend an AI disclosure, leaving them "to fend for themselves or their parents to play whack-a-mole" [4].
YouTube spokesperson Boot Bullwinkle responded that the platform maintains "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels" [2]. The company also provides parents the option to block channels and says it prioritizes transparency when labeling AI content. However, Franz argues that "YouTube's algorithm makes it impossible for kids to avoid AI slop." The letter proposes implementing parental controls that allow parents to turn off AI-generated content even if their child searches for it, along with barring AI-generated videos from being recommended to users under 18 [2].
The campaign follows a landmark verdict in a social media addiction trial where a California jury found that YouTube designed its platform to hook young users without concern for their well-being [4]. Meta was also found liable on the same counts in the case involving a 19-year-old user who claimed the companies knew their platforms could be "dangerously addictive" and ignored warnings about user mental health [3]. Plaintiffs, consumer advocates, and lawmakers are now pushing both companies to change some of their most lucrative operational features, including their content algorithms [1]. Neal Mohan stated in January that "managing AI slop" and "ensuring YouTube remains a place where people feel good spending their time" is a top company priority in 2026 [1].
The coalition warns that "there is much we don't know about the consequences of AI content for children" and accuses YouTube of "participating in this uncontrolled experiment by pushing AI-generated content without research demonstrating its benefits" [1]. Concerns about cognitive overload and distorted reality suggest potential long-term implications for how an entire generation processes information and engages with the world. As YouTube continues developing labels for the YouTube Kids app, the platform faces mounting pressure to balance innovation with child safety. The outcome of this campaign could set precedents for how tech companies regulate AI-generated content targeting vulnerable audiences, particularly as AI animation tools become more accessible and profitable for creators seeking to capitalize on young viewership.
Summarized by
Navi
12 Mar 2026•Technology
