12 Sources
[1]
Google Faces Demands to Prohibit AI Videos for Kids on YouTube
The advocates are calling for YouTube to halt investment in AI-generated videos for children, citing concerns that time spent watching such content replaces real-world activities key to children's emotional and social development.

Alphabet Inc.'s Google is facing demands from child development experts to prohibit videos created with artificial intelligence from being shown or recommended to young viewers across YouTube and YouTube Kids. More than 200 children's specialists, advocacy groups and schools sent a letter to Google Chief Executive Officer Sundar Pichai and YouTube CEO Neal Mohan on Wednesday raising concerns about what they view as a lack of substance in many AI-generated YouTube videos that claim to be educational.

In the letter, the advocates also criticized the perceived low quality of kids' content being mass-produced by AI generators, and the rise in creators on Google's YouTube video service who use artificial intelligence to make clips aimed at profiting off the world's youngest and most impressionable viewers.

The child safety advocates worry that AI-generated material, some of it referred to as "AI slop," affects kids' attention spans and their ability to separate what's real from what's not. They also argue that time spent looking at a screen is replacing real-world activities that are key to children's emotional and social development.

"There is much we don't know about the consequences of AI content for children," the group wrote. "YouTube is participating in this uncontrolled experiment by pushing AI-generated content without research demonstrating its benefits and without acknowledging the child development principles that tell us it's likely mostly harmful."
The letter was signed by social psychologist Jonathan Haidt, whose bestselling book The Anxious Generation kick-started a global movement to fight youth harm caused by social media and smartphones, as well as by child advocacy groups like Fairplay and the National Alliance to Advance Adolescent Health. The American Federation of Teachers and several schools also signed. Google didn't immediately respond to a request for comment.

AI-generated videos have become increasingly popular on YouTube, particularly those targeting toddlers and other youngsters. Some creators have found that outsourcing that work to an AI system makes it much easier and cheaper, and have even started sharing tutorials on how to build a business around spinning up videos for toddlers and babies.

Mohan said in January that "managing AI slop" and "ensuring YouTube remains a place where people feel good spending their time" are top company priorities in 2026. But YouTube has also argued that not all content made with AI is "slop," and that when done right, creating with AI can even be positive. YouTube requires creators to label "altered and synthetic content," and has said that its systems and monetization policies are designed to penalize those who mass-produce low-quality or spammy content. The advocates argued in the letter that these labels are "unlikely to be understood by the preliterate children who are targets for much of this AI slop."

In March, Google announced an investment into Animaj, an AI animation studio focused on making YouTube content for kids, part of an effort to improve the quality of its offerings for young users. One Google executive involved called it "a real blueprint for the future," while child safety advocates criticized Google and Animaj for engaging "babies and toddlers who shouldn't have any screen time at all." They urged YouTube to halt "all investment in the creation of AI-generated videos for children."
Wednesday's letter arrived amid other outside efforts to change the way YouTube operates. In March, a landmark jury trial on social media addiction found Google and Meta Platforms Inc. liable for harming a young user with products designed to keep her hooked. Both companies said they would appeal the verdict. Plaintiffs, consumer advocates and lawmakers, however, are now pushing the companies to change some of their most lucrative operational features, including their content algorithms.
[2]
Advocacy groups urge YouTube to protect kids from 'AI slop' videos
Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children.

In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube's parent company Google, children's advocacy group Fairplay expresses "serious concern" about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organizations and individual experts such as child psychiatrists and educators.

"This 'AI slop' harms children's development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development," the letter reads. "These harms are particularly acute for young children."

The letter calls on YouTube to clearly label all AI-generated content and ban any AI-generated content on YouTube Kids. The signatories also propose barring AI-generated videos from being recommended to users under 18 and implementing an option for parents to turn off AI-generated content even if their child searches for it. The letter is signed by 135 organizations, including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts such as "The Anxious Generation" author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition.

Much of this AI-generated content is fast-paced, with bright colors, lively music and clickbait titles that work to grab the attention of young viewers, the letter outlines. There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of "brainrot."
Spokesperson Boot Bullwinkle said in a statement that YouTube has "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels." "We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content," Bullwinkle said. "We're always evolving our approach to stay current as the ecosystem evolves."

YouTube's current policy regarding AI-generated content requires creators to disclose when "realistic" content is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects. YouTube said it is actively working on developing labels for YouTube Kids.

In its letter, Fairplay argues that the voluntary disclosure policy and what it sees as an "extremely limited" definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labeled as such. Fairplay also argues that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children "to fend for themselves or their parents to play whack-a-mole," the letter reads.

Fairplay's campaign comes shortly after Google's AI Futures Fund invested $1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg. The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case.
"Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children's time online -- including babies. AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction," said Rachel Franz, the director of Fairplay's Young Children Thrive Offline program, in a statement. "What's more, YouTube's algorithm makes it impossible for kids to avoid AI slop." Earlier this year, YouTube head Mohan listed out "managing AI slop" as one of the company's priorities for 2026. In a January blog post, he wrote that the company was "actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content."
[3]
YouTube faces backlash over AI slop videos targeting kids
YouTube has spent years trying to position itself as a safer place for younger viewers, but a new wave of AI-generated content for kids is stirring controversy. A large group of child development experts is urging Google to step in and potentially ban some of it outright.

As reported by Bloomberg, more than 200 children's specialists, advocacy groups, and schools have sent a letter to Google CEO Sundar Pichai and YouTube CEO Neal Mohan calling for AI-generated videos to be removed from recommendations for kids on YouTube and YouTube Kids. The group takes aim at what it describes as low-quality, mass-produced content -- often dubbed AI slop -- that's designed to grab attention rather than actually teach anything.
[4]
Advocates to Google CEO: Stop YouTube AI slop from harming kids
YouTube's AI slop problem could have lifelong effects if not controlled, child experts warn. Child safety advocates are getting worried.

In a letter sent to Google CEO Sundar Pichai and YouTube CEO Neal Mohan, a coalition of national organizations and child development experts is demanding a change to YouTube policies to cut down on AI slop, including an outright ban on "Made for Kids" content generated by AI. "Given the absence of evidence that AI slop is safe for children and the potential for these videos to mesmerize and harm kids, Google must take swift action to protect children on its platforms," the letter reads.

Two weeks ago, YouTube announced a partnership with generative AI studio Animaj, which specializes in AI children's content and boasts billions of views across several YouTube channels aimed at infants and babies.

The letter, led by child safety nonprofit Fairplay, is signed by organizations like the American Federation of Teachers, the National Black Child Development Association, and Mothers Against Media Addiction (MAMA), as well as experts like Jonathan Haidt, author of the highly cited book The Anxious Generation. The group cites growing concern that exposure to AI content can distort children's perception of reality, cause cognitive overload, and displace real-world activities necessary for development.

"First YouTube introduced Shorts with Made For Kids content without wondering what impact it would have on young viewers, and then -- no surprise -- AI slop started competing for kids' attention on those very feeds. It's time for platforms to start respecting the attention and minds of young children, not just treat them as a resource to be extracted," said Jenny Radesky, a developmental behavioral pediatrician and media researcher who also signed onto the letter.
The group also announced a public petition demanding that YouTube implement several new safety policies addressing the proliferation of AI slop directed toward children.

The letter comes one week after a precedent-defining verdict in a recent case against Meta and YouTube parent company Google, in which a jury sided with a 19-year-old user who claimed the companies knew their platforms could be "dangerously addictive" and ignored warnings about user mental health. The Los Angeles jury found that both Meta and YouTube were negligent in addressing internal safety warnings and went forward with platform features that exacerbated expert concerns.

"In some cases, seemingly benign animations can turn out to be sexual or violent in nature," said Sebastian Mahal, co-chair of youth-led lobby coalition Design It For Us. "Young people don't want to be targeted with this type of experience by YouTube's algorithm. After a California jury found YouTube liable for failing to protect young people on its platform, one would think YouTube would finally take its responsibility to its young users seriously."

In addition to claims that Instagram's algorithms exacerbated the youth mental health crisis, particularly among teen girls, child safety advocates have long warned that YouTube is a dangerous site for young children. Rachel Franz, director of Fairplay's Young Children Thrive Offline program, told Mashable in a March interview: "If 'managing AI slop' was really YouTube's top priority this year, they would have already taken down the millions of AI-generated 'Made for Kids' videos that are designed to entrance young children, leading to more screen time and displacing the activities they need to thrive offline."

YouTube is the most popular video platform for young child viewers, especially among low-income households.
Despite efforts to address AI-generated content, YouTube has yet to fully rein in the problem, and AI-generated content aimed at children has become a lucrative business. A New York Times report found thousands of low-quality AI videos surfaced by YouTube's algorithm, including ones that violated child safety policies. Currently, animated videos generated by AI do not require AI labels, and AI labels do not appear consistently on YouTube Kids. YouTube only requires labeling for synthetic media made to mimic "realistic" settings or people.

In response to the new letter, Franz added, "YouTube's algorithm makes it impossible for kids to avoid AI slop. YouTube must stop shoving AI slop onto children now, before it further damages an entire generation of kids."
[5]
AI 'slop' is flooding YouTube Kids -- and more than 200 groups and experts are calling for a ban | Fortune
More than 200 child advocacy groups and experts are demanding that YouTube ban AI-generated "slop" from its children's platform entirely, arguing that the low-quality, algorithmically produced videos are rewiring young brains and raking in millions while parents and regulators look the other way.

The open letter, organized by children's advocacy group Fairplay and addressed to YouTube CEO Neal Mohan and Google CEO Sundar Pichai, was signed by more than 135 organizations, including the American Federation of Teachers and the American Counseling Association, as well as prominent researchers such as Jonathan Haidt, author of The Anxious Generation. In it, the authors say YouTube is not only failing to stop AI slop from reaching children but is also actively profiting from it.

"AI generated videos are really just an escalation of a myriad of problems that YouTube already has when it comes to interfacing with kids on their platforms," Rachel Franz, Program Director of Fairplay's Young Children Thrive Offline program, told Fortune. "It's important to address this AI slop phenomenon, but it's also equally important to take YouTube to task for the way that its platform is designed to hook users into spending more time in ways that aren't necessarily related to AI."

The term "AI slop" refers to a wave of mass-produced, AI-generated videos flooding platforms like YouTube. The content is cheap to make, often bizarre or nonsensical, and engineered to grab and hold young (or really, any) viewers' attention. And reader, they are bizarre: cartoon animals performing repetitive tasks in an uncanny-valley aesthetic; fake "educational" videos with garbled information; or hypnotic loops without any clear purpose. The New York Times documented the phenomenon in a February investigation, finding such videos embedded throughout YouTube Kids, a platform YouTube has marketed as a safe, curated space for children.
"So much of AI-generated content is really designed to hijack children's attention, especially young children who are just at the beginning of developing their impulse control, and they can really distort reality, create confusion, and impact how children are understanding the world around them," said Franz, who has a background in early child development. "This isn't a parenting issue in and of itself. The platform is consistently recommending AI content to young users in ways that make it kind of impossible for them to avoid." The financial incentives are staggering. Fairplay found that top AI slop channels targeting children have earned over $4.25 million in annual revenue, with some creators openly advertising profits from "plotless, mesmerizing AI content." The letter argued that no amount of policy will be enough until the platform removes the financial incentive for creators of these videos. "Only about 5% of videos on YouTube for kids under eight are actually high quality. And there are debates amongst that 5% of whether those are actually high quality," said Franz. YouTube, however, finds that number contrary to their standards policy. "We have high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels," YouTube spokesperson Boot Bullwinkle told Fortune in a statement. "We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content. We're always evolving our approach to stay current as the ecosystem evolves." The coalition draws on child development research to argue this isn't a niche concern. Even adults can have trouble correctly identifying AI-generated content, getting it right only about 50% of the time. 
More troubling, repeated exposure makes people more likely to perceive AI imagery as real, even after being told it's fake. For young children whose brains are still building foundational schemas of reality, the damage compounds over time.

Fairplay's asks are structural, not cosmetic. The coalition is calling on YouTube to clearly label all AI-generated content across the platform, ban AI-generated content entirely from YouTube Kids, and prohibit AI-generated "Made for Kids" content on the main YouTube platform. Fairplay also wants YouTube to bar its algorithm from recommending AI content to users under 18, introduce a parental toggle to disable AI content that is switched off by default, and halt all investment in AI-generated content targeting children. That last demand takes direct aim at YouTube's investment in Animaj, an AI-powered children's entertainment studio backed by Google AI Futures. "YouTube is essentially investing in harming babies through its purchase of Animaj," Franz said.

In Bullwinkle's statement to Fortune, the spokesperson confirmed that YouTube is developing dedicated AI labels for YouTube Kids, though did not provide a timeline. YouTube CEO Neal Mohan had already flagged "managing AI slop" as a top priority in his annual letter. "To reduce the spread of low-quality AI content, we're actively building on our established systems that have been very successful in combating spam and clickbait, and reducing the spread of low-quality, repetitive content," the letter read. Bullwinkle also noted that the 15 channels mentioned in the Times article are not on YouTube Kids and that the platform removed videos that violated its Child Safety policies.

But for Franz, that's not good enough. "It shouldn't be up to individual researchers to point out a few channels as examples that are doing things that could potentially harm kids, and have that be the basis for what YouTube decides to kick off the platform. What we saw with Elsagate was that at that time, YouTube removed 150,000 videos from its platform and several hundred different channels," Franz said.

She was referencing Elsagate, a 2017 scandal in which thousands of videos on YouTube and YouTube Kids used familiar children's characters, like Elsa from Frozen and Peppa Pig, to hide deeply disturbing content including graphic violence, sexual themes, and drug use, all dressed up with algorithm-friendly tags like "education" and "fun" to slip past filters and reach young children.

"So we know that YouTube has the capacity to monitor, track, and remove these videos at scale, but right now, they're doing a band-aid approach, where the channels that are getting press coverage, it seems like those are the ones they're going forward doing something about," Franz continued. "But it's not fixing the overall problem."
[8]
Advocacy Groups Urge YouTube to Protect Kids From 'AI Slop' Videos
Advocacy groups and experts condemned YouTube for serving up low-quality artificial intelligence-generated videos to its most vulnerable audience: children. In a letter to YouTube CEO Neal Mohan and Sundar Pichai, the CEO of YouTube's parent company Google, children's advocacy group Fairplay expresses "serious concern" about the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, which was sent on Wednesday morning, was signed by more than 200 organizations and individual experts such as child psychiatrists and educators. "This ' AI slop ' harms children's development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development," the letter reads. "These harms are particularly acute for young children." The letter calls on YouTube to clearly label all AI-generated content and ban any AI-generated content on YouTube Kids. They also propose barring AI-generated videos from being recommended to users under 18 and implementing an option for parents to turn off AI-generated content even if their child searches for it. The letter is signed by 135 organizations including the American Federation of Teachers and the American Counseling Association, and around 100 individual experts like "The Anxious Generation" author Jonathan Haidt. The letter is part of a larger campaign from Fairplay that also includes a petition. Much of this AI-generated content is fast-paced with bright colors, lively music and clickbait titles that work to grab the attention of young viewers, the letter outlines. There has been a growing movement online against AI-generated content, particularly when it looks or feels low quality or leans into the meaninglessness of " brainrot." 
YouTube spokesperson Boot Bullwinkle said in a statement that YouTube has "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels." "We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content," Bullwinkle said. "We're always evolving our approach to stay current as the ecosystem evolves." YouTube's current policy regarding AI-generated content requires creators to disclose when content that's "realistic" is made with altered or synthetic media, including generative AI. Creators are not required to disclose when generative AI is used to create content that is clearly unrealistic, including animated videos and those with special effects. YouTube said it is actively working on developing labels for YouTube Kids. In its letter, Fairplay argues that the voluntary disclosure policy and what it sees as an "extremely limited" definition of altered and synthetic content mean kids still see a flood of AI-generated videos that are not labeled as such. They also argue that many children who watch YouTube videos are not yet able to read or to comprehend something like an AI disclosure. That leaves children "to fend for themselves or their parents to play whack-a-mole," the letter reads. Fairplay's campaign comes shortly after Google's AI Futures Fund invested $1 million into Animaj, an AI animation studio that makes videos for kids and draws in staggeringly high viewership numbers, according to Bloomberg. The campaign follows a landmark verdict in a social media addiction trial in which a California jury found that YouTube designed its platform to hook young users without concern for their well-being. Meta was also found liable on the same counts as YouTube in the same case.
"Pushing AI slop onto young children is just another testament to how YouTube and YouTube Kids are designed to maximize children's time online -- including babies. AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction," said Rachel Franz, the director of Fairplay's Young Children Thrive Offline program, in a statement. "What's more, YouTube's algorithm makes it impossible for kids to avoid AI slop." Earlier this year, YouTube head Mohan listed out "managing AI slop" as one of the company's priorities for 2026. In a January blog post, he wrote that the company was "actively building on our established systems that have been very successful in combatting spam and clickbait, and reducing the spread of low quality, repetitive content."
[11]
More Than 200 Child Advocacy Groups Urge YouTube To Ban AI 'Slop' From Kids Platform - Alphabet (NASDAQ:G
More than 200 child advocacy groups and experts are demanding that YouTube ban AI-generated "slop" from its kids' platform, citing harm to young viewers and profits for the company.
AI Videos Flood YouTube Kids
The letter was signed by 135 organizations, including the American Federation of Teachers and the American Counseling Association, along with researchers such as Jonathan Haidt, author of The Anxious Generation. "AI-generated videos are really just an escalation of a myriad of problems that YouTube already has when it comes to interfacing with kids on their platforms," said Rachel Franz, director of Fairplay's Young Children Thrive Offline program, as reported by Fortune. She added, "It's important to address this AI slop phenomenon, but it's also equally important to take YouTube to task for the way that its platform is designed to hook users into spending more time in ways that aren't necessarily related to AI." The videos are often bizarre, repetitive, or nonsensical: cartoon animals performing odd tasks, fake educational content, or hypnotic loops designed to hold attention. Fairplay estimates top AI slop channels targeting children have earned over $4.25 million annually. YouTube said it limits AI-generated content in YouTube Kids to high-quality channels, offers parental controls, and is developing AI labels. "We're actively building on our established systems that have been very successful in combating spam and clickbait, and reducing the spread of low-quality, repetitive content," spokesperson Boot Bullwinkle said.
YouTube Removes AI Content, Warns On Short-Form Videos
Earlier, YouTube removed dozens of AI-generated spam channels, some with millions of views, as Mohan prioritized cutting low-quality automated content. Analysis by Kapwing found 21% of videos in feeds were AI-generated, highlighting the rise of low-quality content. Mohan noted some parents were guiding kids toward longer, less flashy content to protect cognitive development.
[12]
Kids Consuming AI Slop; Child Safety Advocates Seek YouTube Policy Changes
Google's algorithms push mesmerising content to kids to keep them engaged for longer durations, drawing them away from normal activities. Barely days after the precedent-setting verdicts around Meta's and YouTube's efforts to promote addictive behaviour among children, a group of digital safety advocates has petitioned Google CEO Sundar Pichai and YouTube CEO Neal Mohan to change YouTube's policies in order to cut down on AI slop. The demand came from a coalition of US-based organisations and child development experts who sought an outright ban on AI-generated "Made for Kids" content. As in the past, YouTube came up with a standard, sanitised response claiming it maintained "high standards" for the content on YouTube Kids. The signatories to the letter argued that, given the absence of evidence that AI slop is safe for children and the potential of these videos to "mesmerise and harm kids", Google should take swift action to protect children on its platforms. Readers would be aware that YouTube recently partnered with Gen AI studio Animaj, which specialises in AI-led kids' content. The groups also announced a public petition demanding YouTube implement several safety policies to immediately address the proliferation of AI slop directed towards children. These include clearly labelling AI-generated content on YouTube, barring such content from YouTube Kids, prohibiting child-focused videos that are AI-generated, prohibiting algorithmic recommendations of AI content for users below 18 years, adding a toggle in parental controls so kids cannot reach such content even through search, and cutting all investments in AI-generated kids' videos. Nonprofit Fairplay, an organisation focusing on child safety, is helming the latest public opposition to Big Tech's all-out AI play.
The memorandum is signed by several organisations such as the American Federation of Teachers, the National Black Child Development Association, and Mothers Against Media Addiction. Experts and authors such as Jonathan Haidt, who wrote the highly popular and often-cited book "The Anxious Generation", are among those who have pushed for this digital reform. Furthermore, these groups have referred to growing concerns around children's and teens' exposure to AI content, and how it can distort their perception of reality, result in cognitive overload and displace real-world activities necessary for development. Eminent behavioural paediatrician Jenny Radesky, another signatory, said YouTube was going overboard with its engagement metrics: first it introduced Shorts with Made for Kids content without understanding its impact on young viewers, and now AI slop is competing for children's attention on those very feeds. A YouTube spokesperson claimed that AI-generated content on YouTube Kids was limited to a small set of "high-quality channels". In addition, parents are allowed to block such channels, and the company says it prioritises transparency when it comes to AI content, labelling content from its own AI tools and requiring creators to disclose realistic AI content. "We're always evolving our approach to stay current as the ecosystem evolves," the YouTube spokesperson concluded, leaving us in no doubt that such prefabricated responses to actual problems are what make Big Tech appear heartless. Do Big Tech companies never learn their lessons? That YouTube can respond in such a cavalier manner, one that almost discounts the petitioners at this juncture, is both shocking and surprising. Barely a week ago, a Los Angeles court had found YouTube and Meta liable for causing addiction by intentionally hooking the plaintiff as a child and causing her to develop anxiety, body dysmorphia, and suicidal thoughts.
In fact, another court, at Santa Fe in New Mexico, even fined Meta $375 million as part of a lawsuit filed by the district attorney seeking civil penalties, holding the company liable for misleading consumers about the safety of its platforms when it had instead endangered the mental health of children. Interestingly, the juries in both courts had a similar point of view when it came to proof of negligence against the two Big Tech companies: both noted negligence in design and a failure to warn of those risks as key reasons. In the Santa Fe case, however, it was the testimony of a former Meta employee that tilted the jurors strongly against Meta.
AI slop for kids is the pits as far as Big Tech is concerned
In the latest petition to curtail AI slop on YouTube Kids, the petitioners have said seemingly benign animations could turn out to be sexual or violent in nature. "Young people do not want to be targeted by this type of experience by YouTube's algorithm. After the recent verdicts, one would think YouTube would finally take its responsibility to its young users seriously," says Sebastian Mahal, co-chair of the youth-led lobby coalition 'Design for Us'. Mental wellness experts have repeatedly raised the alarm over how Instagram and YouTube push content via their algorithms to enhance engagement, considered by the Big Tech companies to be the best metric to drive profits. Some experts noted that YouTube should have already pulled down millions of AI-generated 'Made for Kids' videos by now. Most of these videos are designed to entrance and entrap young children, leading to more screen time and pulling them away from the offline activities they need for good physical and mental wellness. However, the company appears to be taking a 180-degree turn, if recent events are anything to go by: the tech giant announced a $1 million investment in early March in Animaj, an AI-led children's entertainment company.
Under the deal, Animaj would get exclusive access to Google's AI tools such as Veo and Imagen. While most of the videos generated by the company attract families, and its affiliate channels claim more than 22 million subscribers, experts argue that even if most of these videos are built around nursery rhymes with kid-friendly characters, they are still about mesmerising children, which is yet another way of driving engagement. And this is where the American Academy of Paediatrics has sent out a distinct warning to parents. It notes that AI-generated content aimed at mesmerising children with stimulating visuals and music encourages them to choose longer-form videos over short-form content. Additionally, once used to such content, children shun evidence-based educational content with a slower pace and frequent interactions such as call-and-response cues. In short, the experts argue that content that mesmerises children only displaces the time they need to spend playing, socialising, and using all their senses during a period in which infants are still wiring their brains.
More than 200 child development experts and advocacy groups are calling on YouTube to ban AI-generated videos from its kids' platform. The coalition argues that low-quality AI content, dubbed 'AI slop,' distorts children's sense of reality and displaces activities critical for healthy development. The campaign follows a landmark verdict finding YouTube liable for designing addictive features that harm young users.
More than 200 children's specialists, advocacy groups, and schools have sent a letter to Google CEO Sundar Pichai and YouTube CEO Neal Mohan demanding immediate action against AI-generated videos for kids. The coalition, which includes Fairplay, the American Federation of Teachers, and prominent author Jonathan Haidt, argues that AI slop is flooding both YouTube and the YouTube Kids platform with content that threatens child development in ways experts are only beginning to understand [1][2].
Source: Mashable
The letter, delivered Wednesday, describes these low-quality AI videos as mass-produced content designed to grab attention rather than educate. "This 'AI slop' harms children's development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development," the letter states [2]. The advocacy groups urge YouTube to implement a complete ban on AI-generated content from the YouTube Kids platform and stop recommending such videos to users under 18. Child development experts warn that AI-generated videos for kids create cognitive overload and distort young viewers' perception of reality. Research cited by the coalition reveals that even adults correctly identify AI-generated content only about 50% of the time, and repeated exposure makes people more likely to perceive AI imagery as real even after being told it's fake [5]. For young children whose brains are still building a foundational understanding of the world, these harmful effects compound over time.
Source: Fortune
The letter was signed by 135 organizations and around 100 individual experts, including The Anxious Generation author Jonathan Haidt, whose bestselling book sparked a global movement against youth harm from social media addiction [1][2]. Rachel Franz, director of Fairplay's Young Children Thrive Offline program, told Fortune that "only about 5% of videos on YouTube for kids under eight are actually high quality" [5].
The proliferation of low-quality AI videos stems from powerful financial incentives. Fairplay found that top AI slop channels targeting children have earned over $4.25 million in annual revenue, with some creators openly sharing tutorials on building businesses around "plotless, mesmerizing AI content" for toddlers and babies [1][5]. The content is cheap to produce using AI generators, making it an attractive option for creators looking to profit from young viewers. YouTube's monetization policies and content algorithms have struggled to contain the spread. While Neal Mohan stated in January that "managing AI slop" is a top company priority for 2026, the platform continues to recommend these videos to children [1][4]. YouTube requires creators to label "altered and synthetic content," but advocates argue these AI labels are "unlikely to be understood by the preliterate children who are targets for much of this AI slop" [1].
Source: Bloomberg
The campaign to protect kids from AI content intensified after Google announced a $1 million investment in Animaj, an AI animation studio that makes videos for kids and generates billions of views across YouTube channels aimed at infants and babies [4][5]. One Google executive called the partnership "a real blueprint for the future," but child safety advocates criticized the move for "engaging babies and toddlers who shouldn't have any screen time at all" [1].
"YouTube is essentially investing in harming babies through its purchase of Animaj," Franz said [5]. The coalition demands YouTube halt all investment in AI-generated content targeting children and implement structural changes including clearly labeling all AI-generated content, banning such content entirely from YouTube Kids, and introducing parental controls to disable AI content by default [5].
The letter arrives as YouTube faces mounting scrutiny over its impact on young users. In March, a landmark verdict in a social media addiction trial found Google and Meta Platforms Inc. liable for harming a young user with products designed to keep her hooked [1][2]. The Los Angeles jury determined that both companies were negligent in addressing internal safety warnings and proceeded with platform features that exacerbated expert concerns about attention spans and screen time [4].
YouTube spokesperson Boot Bullwinkle responded that the platform has "high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels" [2][5]. The company confirmed it is developing dedicated AI labels for YouTube Kids but provided no timeline. However, current policies only require labeling for "realistic" synthetic media, meaning animated AI videos, the bulk of children's content, remain unlabeled [2].
Summarized by Navi