John Oliver Sounds Alarm on AI-Generated Content: 'Drowning in This S**t'

Reviewed by Nidhi Govil

4 Sources

John Oliver, host of 'Last Week Tonight', delves into the dangers of AI-generated content flooding social media, dubbing it 'AI slop' and warning of its potential to erode objective reality.

The Rise of 'AI Slop'

John Oliver, host of HBO's 'Last Week Tonight', has taken aim at the proliferation of AI-generated content, which he dubs "AI slop". In his latest episode, Oliver delves into the dangers posed by the flood of cheap, professional-looking, and often bizarre content created by artificial intelligence tools 1.

Source: HuffPost

Oliver describes AI slop as the "newest iteration of spam", warning that it's making some platforms "unusable" due to its sheer volume. He expresses concern that many users are unaware that this content isn't real, stating, "It's extremely likely that we are gonna be drowning in this shit for the foreseeable future" 4.

The Mechanics and Motivations Behind AI Slop

The comedian explains that the spread of AI generation tools has made it incredibly easy to flood social media with this type of content. Platforms like Meta have even adjusted their algorithms to promote such content, with Oliver noting that "more than a third of content in your feed is now from accounts you don't follow" 1.

While some creators are profiting from this trend, Oliver points out that the financial rewards can be minimal, often amounting to just a few cents per post. However, the potential for viral success has led to the emergence of "AI slop gurus" who offer guidance on content creation for a fee 1.

Dangers and Implications

Oliver highlights several concerning aspects of AI slop:

  1. Misinformation: AI-generated content has been used to create fake disaster scenarios, causing problems for first responders and potentially misleading the public 1.

  2. Political manipulation: During the Israel-Iran conflict and U.S. elections, AI-generated content was used to spread false information and manipulate public opinion 1.

  3. Erosion of trust: The prevalence of fake content empowers bad actors to dismiss real videos and images as fake, further blurring the lines of objective reality 1.

  4. Environmental impact: The resources required to produce AI-generated content have significant environmental consequences 1.

Exploitation of Artists

A crucial issue raised by Oliver is the exploitation of real artists' work. AI models are often trained on content scraped from the internet, including artworks, books, and music, without compensating the original creators. This leads to a situation where "someone's hard work was stolen in order to create it [AI slop]" 2.

A Creative Response

Source: Digital Trends

While acknowledging that there's no easy fix to the AI slop problem, Oliver proposes a "petty way to respond". He suggests "creating real art by ripping off AI slop" 3. To demonstrate this, Oliver invites wood sculptor Michael Jones, whose work had been appropriated by AI, to create a real sculpture based on an AI-generated image of a man transforming into a red cabbage 3.

In conclusion, Oliver's segment serves as a stark warning about the dangers of AI-generated content and its impact on society, while highlighting the need for better regulation and stronger protections for artists in the age of AI.
