Sources
[1]
TikTok's AI Overviews Probably Thinks This Story Is a Blueberry
TikTok's AI Overviews is singing the blues. The feature, which was in testing, was supposed to generate text summaries beneath video posts on the app. But due to numerous egregious inaccuracies in the AI-generated descriptions -- including one mistaking a celebrity content creator for blueberries -- TikTok has scaled back the test. A TikTok spokesperson told Business Insider that the updated feature will now only identify products shown in a video, not offer the sometimes-bizarre summaries. A TikTok representative didn't immediately respond to a request for comment.

TikTok's AI Overviews feature was similar to the AI summaries Google shows atop its search results. It was supposed to explain what's happening in video posts and provide additional context. It succeeded only some of the time. The Business Insider reporter documented firsthand experience with TikTok's AI Overviews, citing a collection of incorrect summaries. The feature described a video of TikTok personality Charli D'Amelio talking to the camera as "a collection of various blueberries with different toppings." And it called a video from the singer Shakira "a repetitive sequence of several distinct blue shapes appearing and moving across the screen."

Many TikTok users on Reddit were dissatisfied with the feature. One Reddit post shows a screenshot of a video featuring two ballroom dancers. The AI caption identified the visual as "a person repeatedly striking their head with a rubber chicken." AI-generated overviews have come a long way since they first began appearing online.
Just a few years ago, Google faced a similar set of accuracy problems when its algorithm suggested eating one rock per day and using glue to keep cheese on pizza. TikTok isn't backing away from AI. The company recently rolled out a tool that converts an image to a video, and another that lets you control how much AI content appears on your For You Page. It has also addressed safety issues that come with implementing AI on its platform by releasing moderation tools.
[2]
TikTok rows back on AI video overviews in US after absurd errors
TikTok has rowed back on an AI feature which incorrectly summarised some videos on the platform, including claiming a celebrity was fruit. The company's 'AI overviews' recently began appearing beneath content on the platform to describe what a video was showing, or provide more context. While only rolled out to some users in the US and the Philippines, the feature's incorrect and bizarre AI-generated summaries of TikTok content - seen beneath videos of celebrities like platform star Charli D'Amelio - have been shared widely.

According to TikTok, its experimental summaries have been tweaked to only suggest products similar to those shown in videos. The changes were first reported by news outlet Business Insider.

Much like the AI Overviews at the top of most Google search results, TikTok's AI-generated overviews would attempt to sum up the contents of videos for some users when they clicked to see more of a video's caption. Some examples screenshotted by users and seen by the BBC showed videos on the platform being accurately described, but Business Insider also identified a number of "wildly inaccurate" AI overviews. This included one which saw a video of dancer Charli D'Amelio described as a "collection of various blueberries with different toppings," the publication said. It saw similarly vague, inaccurate and strange AI-generated summaries on other TikTok videos of celebrities and artists, including Shakira and Olivia Rodrigo.

The feature will now only be used to surface information about items in videos, according to TikTok. It comes as tech firms look to deploy more AI products on their platforms to boost user engagement. However, some such efforts have been met with user backlash, or mockery, when these tools go awry. Posts reacting to TikTok's testing of AI overviews on its videos first began appearing in January.
But it appears the summaries were made more widely available, with several users and creators highlighting AI-generated descriptions containing absurd mistakes in late April. A recent example shared on Reddit saw a performance by ballroom dancers Reagan and Juli To described in an AI overview on TikTok as "a person repeatedly striking their head with a rubber chicken". Other examples shared by TikTok users contained similarly strange descriptions. For instance, AI overviews for two separate videos, neither of which featured violence or tools, said they featured "a person repeatedly striking their head with a hammer".

According to TikTok, users were able to report and provide feedback about AI overviews. But this did not stop some from speculating as to whether the platform was "trolling" its users. "The new AI Overview is so bad it feels like it has to be a joke," wrote TikTok user and creator Brett Vanderbrook alongside his video. He showed a range of examples where TikTok's AI feature conjured up bizarre descriptions for what was happening in videos - such as a comedy skit described as someone "demonstrating a new, clever technique for cutting through water".

TikTok says it has identified the cause of AI overview errors and inconsistencies, without detailing what this was. But generative AI tools often make things up when responding to users, summarising or generating information, and errors can range from being hilarious to potentially harmful in nature. Google was widely mocked in 2024 after its AI Overviews results told users to eat rocks and "glue pizza". Apple later faced criticism after an AI tool designed to summarise notifications created false headlines for the BBC News and the New York Times apps. The tech giant suspended the feature, saying it would improve and update it. Since then AI development has continued, with firms claiming the tech has vastly improved in ability and accuracy, but so-called "hallucinations" persist.
However, ChatGPT-maker OpenAI recently said it identified "goblin" and "gremlin" creeping into its systems' responses - a quirk it believes arose after a tool it trained to have a nerdy persona incentivised mentioning the creatures. False case law or citations appearing in court filings have meanwhile prompted warnings about AI use in legal settings, with AI errors also reportedly causing issues for some governments.
TikTok has pulled back its AI Overviews feature after it generated absurdly inaccurate video summaries, including describing celebrity Charli D'Amelio as a collection of blueberries. The experimental tool, which was meant to summarize video content, will now only identify products shown in videos after widespread user backlash and mockery.
TikTok has scaled back AI feature testing after its AI Overviews tool produced absurdly inaccurate video summaries that left users baffled and amused.

The experimental feature, rolled out to select users in the US and the Philippines, was designed to generate text summaries beneath video posts and provide additional context about content. Instead, it delivered descriptions so wildly off the mark that one video of TikTok personality Charli D'Amelio talking to the camera was identified as "a collection of various blueberries with different toppings" [1][2].
The AI errors extended far beyond the blueberry incident. A video featuring singer Shakira was described as "a repetitive sequence of several distinct blue shapes appearing and moving across the screen," while a performance by ballroom dancers Reagan and Juli To was characterized as "a person repeatedly striking their head with a rubber chicken" [2]. TikTok users on Reddit documented numerous examples of the feature malfunctioning, with some videos incorrectly described as showing "a person repeatedly striking their head with a hammer" despite containing no violence or tools [2]. The bizarre nature of these generative AI hallucinations led some users to speculate whether the platform was intentionally trolling them. Creator Brett Vanderbrook noted that "the new AI Overview is so bad it feels like it has to be a joke," sharing examples including a comedy skit described as someone "demonstrating a new, clever technique for cutting through water" [2].

Following the widespread mockery and user complaints, TikTok confirmed to Business Insider that it has revised the feature's functionality. The updated AI Overviews will now only identify products shown in videos rather than attempt comprehensive content summaries [1][2]. This pivot to product recommendations represents a significant narrowing of the tool's original scope. TikTok stated it has identified the cause of the AI accuracy issues and inconsistencies but did not provide specific details about what went wrong [2]. Users were able to report and provide feedback about AI Overviews during the testing phase, though this didn't prevent the feature from becoming a source of online ridicule.
TikTok's struggles with AI Overviews mirror similar challenges faced by other tech giants. Google encountered comparable problems in 2024 when its AI Overviews feature suggested users eat one rock per day and use glue to keep cheese attached to pizza [1][2]. Apple faced criticism after an AI tool designed to summarize notifications created false headlines for the BBC News and New York Times apps, prompting the company to suspend the feature [2]. These incidents highlight persistent issues with hallucinations in generative AI systems, where tools fabricate information when attempting to summarize or generate content. Such errors range from humorous to potentially harmful, with false case law citations appearing in court filings and AI mistakes reportedly causing problems for some governments [2].
Despite this setback, TikTok continues investing in AI capabilities. The platform recently launched a tool that converts images to videos and another that allows users to control how much AI content appears on their For You Page [1]. The company has also released moderation tools to address safety concerns around AI implementation. Tech firms continue deploying AI products to boost user engagement, though such efforts often meet resistance when tools malfunction. While AI developers claim the technology has vastly improved in ability and accuracy, the TikTok incident demonstrates that fundamental challenges remain in ensuring AI systems accurately interpret and describe visual content before widespread deployment.

Summarized by Navi