AI-Generated Content Targeting Children Raises Concerns on YouTube

An investigation reveals a surge of AI-generated cartoon videos on YouTube featuring disturbing content aimed at children, sparking fears of a new 'Elsagate' wave and raising questions about content moderation.

AI-Generated Content Targets Children on YouTube

A recent investigation by WIRED has uncovered a disturbing trend on YouTube: the proliferation of AI-generated cartoon videos featuring gore, fetish content, and body horror aimed at children. This phenomenon is reminiscent of the 2017 'Elsagate' scandal and raises serious concerns about content moderation and child safety on the platform [1].

The Rise of AI-Generated Disturbing Content

Dozens of YouTube channels are using generative AI to create videos depicting popular cartoon characters such as Minions, Thomas the Tank Engine, and animated cats in bizarre and often disturbing scenarios. These videos frequently involve themes of transformation, violence, and sexualization. For instance, one channel called "Go Cat" presents itself as "a fun and exciting YouTube channel for kids" while featuring content that verges on body horror [1].

Echoes of Elsagate

This new wave of content bears striking similarities to the Elsagate controversy of 2017, when videos featuring popular children's characters in inappropriate situations flooded YouTube and its Kids app. Despite YouTube's efforts to address the issue at the time, including removing ads from millions of videos and terminating hundreds of accounts, the problem appears to have resurfaced with the advent of easily accessible AI tools [1].

The Role of AI in Content Creation

The ease of using generative AI, combined with tutorials on monetizing children's content, has made the creation of these disturbing videos both simple and potentially profitable. This technological advancement has enabled creators to circumvent traditional content creation barriers and exploit YouTube's algorithms [2].

YouTube's Response and Policy Enforcement

In response to the investigation, YouTube terminated two flagged channels for violating its Terms of Service and suspended monetization for three others. The platform also removed several videos that violated its Child Safety policy. YouTube emphasizes that all content, regardless of how it is generated, is subject to its Community Guidelines and quality principles for kids [1].

Challenges in Content Moderation

The emergence of AI-generated content presents new challenges for content moderation on platforms like YouTube. While the company says it uses a combination of human reviewers and technology to enforce its policies, the sheer volume and evolving nature of AI-generated content make this task increasingly complex [2].

Implications for Child Safety Online

This trend highlights ongoing concerns about child safety on online platforms. These videos attract young viewers by putting familiar characters in their thumbnails while the content itself is disturbing, posing significant risks to children's well-being. It also raises questions about the effectiveness of current content filtering and recommendation systems [1][2].
