2 Sources
[1]
Google sent an AI-generated push alert that included a racial slur
Google sent out an AI-generated news alert that included the N-word. The push notification featured a link to a story by The Hollywood Reporter about an incident at the recent BAFTA Film Awards, and the slur appeared in the notification text under the link. The error was first spotted by Instagram influencer Danny Price, who posted a screengrab with a caption reading "what an interesting Black History Month this has turned out to be." Google has since apologized, saying it has "removed the offensive notification" and is "working to prevent this from happening again." The story follows the BAFTA incident itself, in which an audience member with Tourette syndrome shouted the N-word when Michael B. Jordan and Delroy Lindo took to the stage to present an award. Tourette syndrome activist John Davidson, who made the outburst, said he would be "deeply mortified if anyone considers my involuntary tics to be intention or to carry any meaning." The incident has sparked outrage and a renewed discussion of the realities of living with vocal tics. AI makes plenty of high-profile errors, and this isn't the first time it has ruined a news alert: Apple paused its AI-generated notification summaries for news apps when the tool made a string of false claims, including wrongly telling readers that the man accused of killing UnitedHealthcare CEO Brian Thompson, Luigi Mangione, had shot himself.
[2]
Google Apologizes for Sending the Worst Push Notification You Can Possibly Imagine
Disaster tends to strike when you let automated systems distribute news, especially on sensitive topics. The latest case in point: Google's automated news alerts accidentally threw more fuel onto an already painful racial slur controversy that unfolded at the BAFTA awards this weekend. As Deadline reported, Google pushed out a notification linking to an article with the headline, "How the Tourette's Fallout Unfolded at the BAFTA Film Award." The problem was the message appended to the notification, which prompted readers to "see more on n****rs." The blunder went viral after Instagram influencer Danny Price posted a screenshot of the alert to his followers, calling it "absolutely f**ked." "What an interesting Black History month this has turned out to be," Price wrote. Google apologized for the slur after the incident gained news coverage. "We're very sorry for this mistake. We've removed the offensive notification and are working to prevent this from happening again," a spokesperson told media outlets. Understandably, the error prompted fiery discussions online about the irresponsibility of allowing AI systems to report and repackage the news. But Google claims that no AI was involved in the blunder. In a clarification provided to Deadline after it originally reported the notification was AI-generated, Google told the paper that its systems "recognized a euphemism for an offensive term on several web pages, and accidentally applied the offensive term to the notification text." "This system error did not involve AI," the company emphasized. "Our safety filters did not properly trigger, which is what caused this." Nonetheless, the criticisms of AI butting its way into journalism aren't unwarranted.
When Apple launched an AI feature that summarized headlines in 2024, it falsely told users that Luigi Mangione had shot himself, among other fabrications, leading the BBC to file a formal complaint against the tech giant after the tool repeatedly butchered its stories. Last December, the Washington Post deployed an AI-generated feature for creating personalized podcasts that summarized its stories, which immediately invented and misattributed quotes, among other mistakes. Google, whose non-chatbot models are notorious for producing outrageous hallucinations, has itself been guilty of similar blunders, too. Last month, its Google Discover feed was caught showing users sensationalized AI-generated headlines that replaced publications' original headlines, The Verge found.
Google sent out a notification containing a racial slur while linking to coverage of a Tourette syndrome incident at the BAFTA Film Awards. The company apologized and removed the offensive push notification, claiming it was a system error rather than AI-generated content. The incident highlights ongoing concerns about automated news distribution and AI errors in news alerts.
Google found itself at the center of a major controversy after distributing a push notification that included the N-word, a racial slur that appeared in a news alert linking to coverage of the BAFTA Film Awards [1]. The offensive push notification was first spotted by Instagram influencer Danny Price, who shared a screenshot with his followers and noted, "what an interesting Black History Month this has turned out to be" [2]. The alert directed users to a Hollywood Reporter story about a Tourette syndrome incident at the recent awards ceremony, but the notification text itself contained the deeply offensive term.
Source: Futurism
Google apologized for the incident and moved quickly to address the situation, stating "We're very sorry for this mistake. We've removed the offensive notification and are working to prevent this from happening again" [2]. While initial reports suggested the alert was AI-generated, Google clarified that its systems "recognized a euphemism for an offensive term on several web pages, and accidentally applied the offensive term to the notification text" [2]. The company emphasized this was a system error in which safety filters failed to trigger properly, insisting no AI was involved in the blunder.

The notification referenced an incident at the BAFTA Film Awards where audience member John Davidson, who has Tourette syndrome, involuntarily shouted the N-word when Michael B. Jordan and Delroy Lindo took the stage to present an award [1]. Davidson, a Tourette syndrome activist, said he would be "deeply mortified if anyone considers my involuntary tics to be intention or to carry any meaning" [1]. The incident has sparked discussions about the realities of living with vocal tics, but Google's notification error added another layer of controversy to an already sensitive situation.
Regardless of whether AI was directly involved in this specific case, the incident underscores broader concerns about the pitfalls of using AI systems and automated tools in journalism. AI errors in news alerts have become increasingly common across major tech platforms. Apple faced criticism when its news summarization feature made multiple false claims, including wrongly telling readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself [1]. The BBC filed a formal complaint against Apple after the tool repeatedly misrepresented its stories [2].
Source: Engadget
Google itself has struggled with AI-generated content quality. Last month, its Google Discover feed was caught displaying sensationalized AI-generated headlines that replaced publications' original headlines [2]. The Washington Post deployed an AI-generated feature for creating personalized podcasts that immediately invented and misattributed quotes [2]. Google's non-chatbot models have become notorious for producing outrageous hallucinations, adding to concerns about the reliability of automated news systems [2]. These incidents raise critical questions about whether tech companies are moving too quickly to automate sensitive editorial processes without adequate safeguards, particularly when dealing with topics involving race, violence, or other sensitive subjects that require human judgment and cultural awareness.

Summarized by Navi