2 Sources
[1]
Dozens of YouTube Channels Are Showing AI-Generated Cartoon Gore and Fetish Content
A WIRED investigation found that dozens of YouTube channels are using generative AI to depict cartoon cats and minions being beaten, starved, and sexualized -- sparking fears of a new Elsagate wave.

Somewhere in an animated New York, a minion slips and tumbles down a sewer. As a wave of radioactive green slime envelops him, his body begins to transform -- limbs mutating, rows of bloody fangs emerging -- his globular, wormlike form slithering menacingly across the screen. "Beware the minion in the night, a shadow soul no end in sight," an AI-sounding narrator sings, as the monstrous creature, now lurking in a swimming pool, sneaks up behind a screaming child before crunching them, mercilessly, between its teeth.

Upon clicking through to the video's owner, though, it's a different story. "Welcome to Go Cat -- a fun and exciting YouTube channel for kids!" the channel's description announces to 24,500 subscribers and more than 7 million viewers. "Every episode is filled with imagination, colorful animation, and a surprising story of transformation waiting to unfold. Whether it's a funny accident or a spooky glitch, each video brings a fresh new story of transformation for kids to enjoy!"

Go Cat's purportedly child-friendly content is visceral, surreal -- almost verging on body horror. Its themes feel eerily reminiscent of what, in 2017, became known as Elsagate, when hundreds of thousands of videos emerged on YouTube depicting children's characters like Elsa from Frozen, Spider-Man, and Peppa Pig in perilous, sexual, and abusive situations. By manipulating the platform's algorithms, these videos were able to appear on YouTube's dedicated Kids app -- preying on children's curiosity to farm thousands of clicks for cash. In its attempts to eradicate the problem, YouTube removed ads on over 2 million videos, deleted more than 150,000, and terminated 270 accounts.
Though subsequent investigations by WIRED revealed that similar channels -- some containing sexual and scatological depictions of Minecraft avatars -- continued to appear on YouTube's Topic page, Elsagate's reach had been noticeably quelled. Then came AI.

The ability to enter (and circumvent) generative AI prompts, paired with an influx of tutorials on how to monetize children's content, means that creating these bizarre and macabre videos has become not just easy but lucrative. Go Cat is just one of many channels that appeared when WIRED searched for terms as innocuous as "minions," "Thomas the Tank Engine," and "cute cats." Many involve Elsagate staples like pregnant, lingerie-clad versions of Elsa and Anna, but minions are another big hitter, as are animated cats and kittens.

In response to WIRED's request for comment, YouTube says it "terminated two flagged channels for violating our Terms of Service" and is suspending the monetization of three other channels. "A number of videos have also been removed for violating our Child Safety policy," a YouTube spokesperson says. "As always, all content uploaded to YouTube is subject to our Community Guidelines and quality principles for kids -- regardless of how it's generated." When asked what policies are in place to prevent banned users from simply opening a new channel, YouTube stated that doing so would be against its Terms of Service and that these policies were rigorously enforced "using a combination of both people and technology."
[2]
Cats, Minions, and body horror: Meet the latest YouTube AI slop going after kids
For years now, we've seen cycles of YouTube implementing new policies and technical measures aimed at keeping kids safe, only for creators to find disturbing new ways to target child audiences. The latest trend to find the spotlight is AI-powered nightmare fuel featuring Minions, Thomas the Tank Engine, some Cronenberg-worthy body horror, and -- this being the internet, and all -- a whole lot of cats. In a new exposé, Wired looks into the trend, which seems to be targeting young viewers. Channels like "Go Cat" are plastered with attractive, smiling characters, easily created with today's readily available AI tools. And while the channel starts off advertising itself as "a fun and exciting YouTube channel for kids," we quickly get to the eyebrow-raising part, with promises of "beloved toys ... reimagined in strange, funny, and sometimes spooky new forms."
An investigation reveals a surge of AI-generated cartoon videos on YouTube featuring disturbing content aimed at children, sparking fears of a new 'Elsagate' wave and raising questions about content moderation.
A recent investigation by WIRED has uncovered a disturbing trend on YouTube: the proliferation of AI-generated cartoon videos featuring gore, fetish content, and body horror aimed at children. This phenomenon is reminiscent of the 2017 "Elsagate" scandal and raises serious concerns about content moderation and child safety on the platform [1].
Dozens of YouTube channels are using generative AI to create videos depicting popular cartoon characters like minions, Thomas the Tank Engine, and animated cats in bizarre and often disturbing scenarios. These videos often involve themes of transformation, violence, and sexualization. For instance, one channel called "Go Cat" presents itself as "a fun and exciting YouTube channel for kids" while featuring content that verges on body horror [1].
This new wave of content bears striking similarities to the Elsagate controversy of 2017, when videos featuring popular children's characters in inappropriate situations flooded YouTube and its Kids app. Despite YouTube's efforts to address the issue then, including removing ads from millions of videos and terminating hundreds of accounts, the problem has resurfaced with the advent of easily accessible AI tools [1].
The ease of using generative AI, combined with tutorials on monetizing children's content, has made the creation of these disturbing videos both simple and potentially profitable. This technological shift has enabled creators to circumvent traditional content creation barriers and exploit YouTube's algorithms [2].
In response to the investigation, YouTube terminated two flagged channels for violating its Terms of Service and suspended monetization for three others. The platform also removed several videos that violated its Child Safety policy. YouTube emphasizes that all content, regardless of how it's generated, is subject to its Community Guidelines and quality principles for kids [1].
The emergence of AI-generated content presents new challenges for content moderation on platforms like YouTube. While the company says it uses a combination of human reviewers and technology to enforce its policies, the sheer volume and evolving nature of AI-generated content make this task increasingly complex [2].
This trend highlights ongoing concerns about child safety on online platforms. The ability of these videos to attract young viewers by using familiar characters in thumbnail images, while containing disturbing content, poses significant risks to children's well-being. It also raises questions about the effectiveness of current content filtering and recommendation systems [1][2].