12 Sources
[1]
Google is experimentally replacing news headlines with AI clickbait nonsense
Did you know that BG3 players exploit children? Are you aware that Qi2 slows older Pixels? If we wrote those misleading headlines, readers would rip us a new one -- but Google is experimentally beginning to replace the original headlines on stories it serves with AI nonsense like that.

I read a lot of my bedtime news via Google Discover, aka "swipe right on your Samsung Galaxy or Google Pixel homescreen until you see a news feed appear," and that's where these new AI headlines are beginning to show up. They're not all bad. For example, "Origami model wins prize" and "Hyundai, Kia gain share" seem fine, even if not remotely as interesting as the original headlines. ("Hyundai and Kia are lapping the competition as US market share reaches a new record" and "14-year-old wins prize for origami that can hold 10,000 times its own weight" sound like they're actually worth a click!)

But in the seeming attempt to boil down every story to four words or less, Google's new headline experiment is attaching plenty of misleading and inane headlines to journalists' work, with little disclosure that Google's AI is rewriting them. The very first one I saw was "Steam Machine price revealed," which it most certainly was not! Valve won't reveal that till next year. Ars Technica's original headline was the far more reasonable "Valve's Steam Machine looks like a console, but don't expect it to be priced like one."

"Microsoft developers using AI"? No shit, Sherlock. (That one was tacked onto my colleague Tom Warren's story about "How Microsoft's developers are using AI" -- Google removed the six letters that turn a silly headline into a real one!) I also saw Google try to claim that "AMD GPU tops Nvidia," as if AMD had announced a new groundbreaking graphics card, when the actual Wccftech story is about how a single German retailer managed to sell more AMD units than Nvidia units within a single week's span. Wccftech's headline was relatively responsible, but Google turned it into clickbait.

Then there are the headlines that simply don't make sense out of context, something real human editors avoid like the plague. What does "Schedule 1 farming backup" mean? How about "AI tag debate heats"?

Make no mistake, the problem isn't just that these AI headlines are bad. It's that Google is taking away our agency to market our own work, as if we'd written a book and the bookstore decided to replace its cover. We try hard to craft headlines that invite readers in, ones that responsibly encapsulate the news, ones that help you understand why a story matters right away and get you excited when it's justified. (Does my headline for this story seem the right amount of excited?) And yet Google seems to think it can just replace these headlines, in a way that might confuse our readers into thinking we're the ones generating clickbait, since our publications' names appear right next to them.

Google does disclose that something about these news items is "Generated with AI, which can make mistakes," but not what, and readers only see that message if they tap the "See more" button. It's too easy for readers to think we intentionally send our stories to Google Discover with these headlines.

The good news is, this is a Google experiment. If there's enough backlash, the company probably won't proceed. "These screenshots show a small UI experiment for a subset of Discover users," Google spokesperson Mallory Deleon tells The Verge.
"We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web." But the overall trend at Google has been to prioritize its own products at the expense of sending clicks to news websites. While the company swears it isn't destroying the web with AI search, you'd be hard pressed to find a news outlet that agrees, and even Google has admitted in court that "the open web is already in rapid decline." It's the reason The Verge now has a subscription: we can't survive Google Zero without your help.
[2]
Google Discover Trials AI Rewrites of News Story Headlines
Discover is a major source of news for millions of people around the world, and Google is continuing its experiments with AI on the platform. A new test uses AI to rewrite the headlines provided by publications, and sometimes it's doing so inaccurately.

Spotted by The Verge, the trial is showing select users AI-generated headlines without the original post's title included until you click through. The AI-generated headlines shorten the description to four words, with at least nine different instances appearing in The Verge's reporting. Some AI-generated rewrites have misunderstood the article and show false information in the replacement headline.

An article from Ars Technica, titled "Valve's Steam Machine looks like a console, but don't expect it to be priced like one," was rewritten by the AI as "Steam Machine price revealed." Valve has yet to publicly comment on the price of its upcoming gadget. Another example saw a PC Gamer article with an original headline detailing how some Baldur's Gate 3 players were building an in-game army of non-player characters designed to look like children. It was retitled by Google's AI to "BG3 players exploit children," without a reference to those children being NPCs in the game.

Some other examples took away the unique angle of a story they were recommending. An article written by The Verge on how Microsoft's team is using AI was retitled to "Microsoft developers using AI," losing the story's original context. PCMag was unable to get the AI-generated headlines to appear in its own testing.

A Google spokesperson told The Verge that only a select "subset of Discover users" would see the "small UI experiment." The spokesperson said, "We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web."

Google previously tested its own summaries of stories appearing in Discover, taking the article and providing its own AI-generated synopsis. Google said at the time it wanted to test ways to make it easier for readers to decide which websites to visit.
[3]
Google Discover is testing AI-generated headlines and they aren't good
Artificial intelligence is showing up everywhere in Google's services these days, whether or not people want it, and sometimes in places where it really doesn't make a lick of sense. The latest trial from Google appears to be giving articles the AI treatment in Google Discover.

The Verge noticed that some articles were being displayed in Google Discover with AI-generated headlines different from the ones in the original posts. And to the surprise of absolutely no one, some of these headlines are misleading or flat-out wrong. For instance, one rewritten headline claimed "Steam Machine price revealed," but the Ars Technica article's actual headline was "Valve's Steam Machine looks like a console, but don't expect it to be priced like one." No costs have been shared yet for the hardware, either in that post or elsewhere from Valve.

In our own explorations, Engadget staff also found that Discover was providing original headlines accompanied by AI-generated summaries. In both cases, the content is tagged as "Generated with AI, which can make mistakes." But it sure would be nice if the company just didn't use AI at all in this situation and thus avoided the mistakes entirely.

The instances The Verge found were apparently "a small UI experiment for a subset of Discover users," Google rep Mallory Deleon told the publication. "We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web."

That sounds innocuous enough, but Google has a history of hostility towards online media despite its frequent role as middleman between publishers and readers. Web publishers have made multiple attempts over the years to get compensation from Google for displaying portions of their content, and in at least two instances, Google has responded by cutting those sources out of search results and later claiming that showing news doesn't do much for the bottom line of its ad business.

For those of you who do in fact want more AI in your Google Search experience, you're in luck. AI Mode, the chatbot that's already been called outright "theft" by the News Media Alliance, is getting an even more symbiotic integration into the mobile search platform. Google Search's Vice President of Product Robby Stein posted yesterday on X that the company is testing having AI Mode accessible on the same screen as an AI Overview rather than the two services existing in separate tabs.
[4]
Google is replacing Discover news headlines with laughably awful AI-generated titles
Google's AI also generated the title "Steam Machine price revealed" for an Ars Technica story that doesn't reveal the price at all. Either way, the search giant is churning out four-word headlines of mostly crappy quality.

A Google representative told The Verge that this was just a test for now rather than a full-scale feature release. That statement suggests these AI-generated titles won't see a broad release, at least not in their current state. Nevertheless, it's concerning that Google saw fit to push this experiment out to any users in the first place when the results are so obviously awful.

There's also no visible label or disclosure that this is an AI-generated title, and no disclosure that Google is behind these headlines rather than a publisher. So we can see more than a few readers getting angry at publications after being duped by a low-effort clickbait title.
[5]
Google AI is posting clickbait headlines on Discover
If you're reading this story because a tantalizing headline on Google Discover made you click, well, there's a chance that was the work of AI and not the human who wrote it (me). This is not to knock my own headline-writing skills. It's because, as The Verge spotted, Google has been testing AI-generated titles, and they are frequently far from the facts of the stories they lead to.

The Verge found the headline "Qi2 slows older Pixels" on a story from 9to5Google that was actually titled "Don't buy a Qi2 25W wireless charger hoping for faster speeds - just get the 'slower' one instead." And on an article from Ars Technica, "Valve's Steam Machine looks like a console, but don't expect it to be priced like one," Google's AI decided to go with the totally incorrect "Steam Machine price revealed."

Some headlines bypassed lies and ended up flat-out unintelligible, like the one on a PCGamer story that said, "Schedule 1 farming backup." It's far from the highly engaging title the story actually carries: "Schedule 1 creator had a backup plan if Steam rejected it -- pack up the product, don a farmer's hat, and 'pivot it to be a farming game' like Stardew Valley."

On the rare chance that a reader tries to delve into who's to blame for this mess and taps "See more," they'll learn that what Google Discover serves up is "generated with AI, which can make mistakes." The effort to rewrite headlines is not a mistake, though, as a Google spokesperson told The Verge. "These screenshots show a small UI experiment for a subset of Discover users," said Mallory De Leon, communications manager at Google. "We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web."

That experiment, though, is harmful to publications. While I could not duplicate it, as I apparently do not fall into that "subset of Discover users," I have spent over two decades in tech media and have witnessed what Google has given to digital media and what it has taken away, and the latter grows every day. Google's AI Overview has gutted readership and revenue, and this latest experiment threatens to lessen trust in the media at a time when it's already in serious jeopardy.

While there's no way for those who do have access to the feature to let Google know that they dislike it, there is an option to report a story as clickbait in Discover. Unfortunately, this punishes the publication by limiting its visibility, not the party actually responsible for the misleading headlines.
[6]
Google Discover starts rewriting headlines using AI -- with mixed results
Some of the AI-generated headlines are misleading or outright inaccurate. Google has started testing AI-generated headlines in the Google Discover feed, reports The Verge. As part of the test, some articles are showing AI-written headlines in place of the ones published by their respective outlets, and some of those AI-produced replacements are resulting in misleading or outright incorrect headlines. For example, one AI-generated headline claimed that the price of the Steam Machine had been revealed, even though the original article by Ars Technica contained no pricing information at all. Google says the new AI-powered headlines are only part of a limited interface test for some users, and their purpose is supposedly to make information more easily accessible.
[7]
Google Discover is now rewriting headlines with AI -- and the results are pretty sloppy
While some of the new headlines feel off, some are misleading altogether. That's on top of false AI-generated summaries, not to mention misguided AI Overviews. But in the case of the latest AI headlines, most are stripped of the tone, accuracy and context that real human editors like me spend time crafting.

Google calls it an experiment. But if you ask me, it's a problem for those who rely on Discover for quick news hits. If the headlines are wrong or misleading, it could shape the way you consume information. You may recall that Apple Intelligence had to scrap its AI summaries because of false and misleading headlines. Although the new headlines carry a tiny AI label, it only appears when you tap "See more"; at first glance, the headline appears to be written by a human.

Some early examples reported by The Verge include an AI-generated headline of "Steam Machine price revealed," but the original story did not reveal a price at all. Similarly, another headline reads "AMD GPU tops Nvidia," but the article was about a retailer's weekly sales data -- not a sweeping GPU-market verdict. The AI headline amplifies and distorts the coverage. Other stories got similarly vague and nonsensical micro-headlines, like "Schedule 1 farming backup," "AI tag debate heats" and "Microsoft developers using AI." Several of these headlines carry no clear meaning, are out of context, or oversell minor points, which betrays the journalistic integrity of the writers and their publications. It's clear the results of Google's AI headline testing could lead to a feed full of short, punchy micro-headlines that are optimized for clicks but completely miss the essence of the story.

This change is part of a broader push to layer generative AI into the Google Discover experience, such as AI summaries that condense multi-source reporting, expandable "Overview" boxes and "See more" buttons that reveal AI explanations. Yet now the AI is coming for the headline, which, if you ask me, is arguably one of the most important parts of the article. According to Google, the goal is to make content feel more scannable and useful at a glance. But here's the problem: headlines carry voice, tone and intent, and we all know AI doesn't always get that right.

For me, my fellow journalists and those who simply enjoy reading the news, this raises major red flags. And it's a slippery slope, because AI-generated headlines affect more than the publishers -- they change how you consume news. When algorithms rewrite headlines, you may click on stories with the wrong expectations, encounter oversimplified or distorted context and struggle to tell what's human-written versus AI-generated, all while Google Discover shifts from a newsfeed into a content curator. It also pushes Google further toward a closed ecosystem where you stay inside Google instead of visiting the publisher's site.

Although the AI-headline experiment currently appears limited to a subset of users, if engagement improves, a broader rollout seems likely. With all of the ways Gemini 3.0 is integrated into Google, it's become more than a search engine. But if AI headlines continue, perhaps AI has become too baked into the platform. This might be the strongest signal yet that your news feed is about to sound more like AI slop than anything human. If this trend continues, it won't just influence how headlines are written -- it'll influence what we trust when we click.
[8]
Google Caught Replacing News Headlines With AI-Generated Nonsense
Google's forays into exposing users to dubious generative AI features have an incredibly poor track record. From error-ridden AI Overviews to AI slop dominating Google's image search results, users have had to put up with a lot of needless and often easily avoidable nonsense.

Now, Google Discover, the firm's personalized content feed that's heavily featured on Android phones, is showing users misleading and seemingly AI-generated headlines that replace the actual ones on articles, as The Verge reports. It's yet another annoying feature that will not only lead to plenty of confusion but also directly undermine the agency of online publications, highlighting a deteriorating relationship between Google and the news media.

"BG3 players exploit children," reads one headline, referring to the popular role-playing video game "Baldur's Gate 3." The actual piece by PC Gamer is about how players have discovered how to clone virtual children inside the popular game to break it -- and isn't an instance of actual child labor, as the dumbed-down headline suggests. A separate four-word headline The Verge spotted claimed "Steam Machine price revealed" -- even though game company Valve has yet to announce the price of its upcoming console. The original headline by Ars Technica reads: "Valve's Steam Machine looks like a console, but don't expect it to be priced like one."

Google acknowledges in a small notice below a short description of the content on Discover that some elements of its links are "generated with AI, which can make mistakes." But that leaves the question of why the company chose to add bungled headlines to its Discover feature in the first place. What are its misleading, four-word headlines doing better than the often carefully crafted ones written by human editors? Is it simply a matter of saving screen real estate?

A Google spokesperson clarified to The Verge that "these screenshots show a small UI experiment for a subset of Discover users." "We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web," the spokesperson said.
[9]
Your Google Discover feed is getting an AI makeover, for better or worse
A new AI experiment is replacing real headlines with algorithm-written blurbs.

What's happened? Google has started experimenting with automatically rewritten, AI-generated headlines inside its Discover feed instead of showing the original headlines written by publishers. According to The Verge, these AI headlines often oversimplify, exaggerate, or completely alter the tone of the original reporting. Google says the feature is only being tested with a small group of users, but for those seeing it live, the experience is already unsettling.

- Google replaces the original headline with a short, AI-generated summary in Discover.
- The AI versions often turn nuanced reporting into vague, clickbait-style phrases.
- Users only see the original publisher headline after tapping "See more."
- Google says it is a "small experiment" designed to help users decide what to read.

Why this is important: It's one thing for Google to push AI with its AI Mode when we are searching for something. However, news headlines are not just labels; they are context. They frame how you understand a story before you even open it. When an AI system rewrites that framing, it introduces a layer of interpretation that may not match the journalist's intent, tone, or facts. In fact, some of the rewritten Discover headlines flatten important details and replace them with vague or sensational phrasing.

There is also a trust issue here. News outlets spend time crafting accurate, responsible headlines to avoid misleading readers. If AI rewrites become the first thing you see, it blurs accountability. When a summary is wrong, exaggerated, or confusing, it is no longer clear who is responsible: the publisher or Google's algorithm. If Discover becomes a feed of AI-written blurbs instead of real headlines, publishers lose control over how their work is presented, and readers lose a reliable signal of editorial credibility.

Why should I care? For many people, Google Discover is their front page of the internet. If you rely on it for updates on tech, politics, finance, or global news, these AI rewrites could subtly reshape what you believe a story is about before you ever click. A serious investigation can suddenly look like a casual trend piece. A nuanced policy story can turn into a vague curiosity hook. And once that framing sticks in your head, it is hard to fully undo.

There is also a practical risk. If you are scanning headlines quickly, as most people do, you may skip stories that actually matter because the AI summary sounds dull, confusing, or misleading. Or worse, you may click something expecting one thing and get something entirely different. Either way, your attention, time, and understanding of the news are now being filtered through a system that is not accountable to journalistic standards.

Okay, so what's next? For now, this is officially just a test, and Google says it is limited to a small group of users. But history shows that many "small experiments" quietly grow into default features. If you start noticing weirdly vague or click-heavy headlines in your Discover feed, that is your cue to be extra cautious and tap through to the original source before trusting what you see. Over the coming weeks, expect more scrutiny from publishers, regulators, and users alike, because this experiment sits right at the uncomfortable intersection of AI automation, platform power, and public trust in journalism.
[10]
Google Is Testing AI-Generated Headlines, and It's Not Going Well
AI headlines in Google Discover have a few key tells that set them apart from real headlines.

Take a look at the top of this article. See that headline? If it looks different than what you clicked on to get to this page, congratulations: Google might have chosen you to participate in its latest AI experiment: rewriting news headlines for some users in Google Discover. Evidence of the new effort was first spotted by The Verge, as it seems writer Sean Hollister was affected by the update.

Here's what's going on: When you swipe right on your Pixel or Galaxy home screen (or scroll down in the Google app on iPhone, or open up a new Chrome browser window with Google as your homepage), there's now a chance the article previews you'll see from Google Discover were actually generated by AI, rather than mirroring the headlines and/or descriptions handwritten by those articles' actual authors and editors.

Sometimes, these AI headlines are just clunky or vague -- one AI headline introduced another Verge story about specific AI initiatives within Microsoft as "Microsoft developers using AI," which doesn't tell you much, especially in the current tech landscape. But more dangerously, these headlines can also get the facts of the story wrong. In Hollister's case, his Google Discover fed him a headline saying "Steam Machine price revealed," whereas the original article from Ars Technica simply said "Valve's Steam Machine looks like a console, but don't expect it to be priced like one." Clicking through leads to an article with quotes from a Valve designer hinting that the upcoming PC/home console hybrid won't have a subsidized price like most home consoles, which is not at all the same thing as an official price reveal. Another headline Hollister saw said "Qi2 slows older Pixels," which implies using a Qi2 charger on your phone could hurt its performance. The original article simply said that older Pixels won't be able to use the full extent of a Qi2 charger's fast-charging.

Granted, mistakes with consumer tech headlines will probably only cause some momentary disappointment or confusion, or maybe a missed opportunity to buy the best charger for your phone. But imagine that misinformation applied to a story about something more serious, like the Luigi Mangione case. Considering previous attempts other companies have made to summarize the news with AI, it's hardly unlikely.

Perhaps worst of all, it also seems these AI headlines can throw shade where none was intended, introducing a risk of libel. Recently, PCGamer wrote a cheeky story about Baldur's Gate 3, covering gamers who discovered that they can use the Polymorph and Dominate Beast spells to recruit child NPCs to their cause who, thanks to real-world German laws, can't die. You can imagine how that would be useful in a game, and hey, it's all fiction, right? Unfortunately, Google's AI headline chose to change PCGamer's original "Child labor is unbeatable" into "BG3 players exploit children." Yikes.

Both Hollister and I reached out to Google for comment, and were given the same response: The new headlines are part of a "small UI experiment for a subset of Discover users," and follow up on similar AI previews introduced into Google Discover in October. Those previews featured short AI summaries of articles that users could expand to see more information (and even an AI headline), but didn't outright replace existing, author-written headlines.
The new experiment "changes the placement of existing headlines to make topic details easier to digest," which seems to be code for the AI headlines now being placed up-top, where you would expect the real headlines to be. I'm personally not part of the UI experiment, but Hollister reported he wasn't able to see the actual headlines until he clicked through to the real articles. Obviously, there's a number of problems with this test. The AI headlines could misreport the news, as they already have in Hollister's case, or make false accusations. And unfortunately, since they're right where actual headlines have been shown in the past, it's totally understandable for a reader to think they were approved by the articles' authors or editors. If a Discover headline looks fishy to you, there are three ways to identify whether it was written by AI. Unfortunately, there does not seem to be a way to opt out of these AI headlines, as Google did not provide me with one when I asked, instead simply reiterating that this is a "small UI experiment." That means not everyone is seeing these for now., at least As someone who made frequent use of Google Discover back before I moved to an iPhone, that's still a major bummer. In the past, it's been a convenient way to catch up on stories that were relevant to me without having to scroll social media or check multiple homepages, but I can imagine that having to scrutinize every headline to know whether or not it's real will make things a lot rougher. It's also not great for journalists, who both rely on Google Discover for traffic, and could take the brunt of user ire about inaccurate headlines from readers who don't realize a machine created them. As it is, I think the latter is the more likely outcome. But even if Google eventually works out the kinks with AI headlines, they could still hurt web traffic, potentially removing the incentive to click that is part of all good headline writing. Google will continue to use outside content to keep people on its platform, but the people behind that content will get fewer eyes on it. (Of course, as always, if you want to get the most accurate idea of what an article says, it's best to read it thoroughly before forming an opinion.)
[11]
Google's toying with nonsense AI-made headlines on articles like ours in the Discover feed, so please don't blame me for clickbait like 'BG3 players exploit children'
Pop quiz, hotshot: what was the headline on this article when you clicked on it? Was it classic PC Gamer style -- witty, insightful, with the rare power of capturing the essence of a story with neither artifice nor evasion, and unintentionally but unmistakably implying the incredible mental powers and physical beauty of the writer? Or did it say something like "The headlines are all screwed"? If the latter, bad luck sport: you may have fallen prey to Google's latest experiment with its Discover newsfeed, replacing human-created headlines with sometimes-meaningless AI slop. As spotted by The Verge, one of the corporation's latest AI adventures is cramming it into your news feed, taking headlines like ours and condensing them into something at best shorter and clickbait-ier, and at worst actively nonsensical. So, for instance, Lincoln's headline, "'Child labor is unbeatable': Baldur's Gate 3 players discover how to build an army of unkillable kids through the power of polymorph and German media laws" became, ah, "BG3 players exploit children." Harvey's "Schedule 1 creator had a backup plan if Steam rejected it -- pack up the product, don a farmer's hat, and 'pivot it to be a farming game' like Stardew Valley" became the completely incomprehensible "Schedule 1 farming backup". It's not just us it's happening to, of course (though you may, and perhaps should, get angry about the bastardisation of our precious words most of all). Poor Ars Technica, for instance, had an article with the original headline "Valve's Steam Machine looks like a console, but don't expect it to be priced like one." All very reasonable. Google's AI turned that into "Steam Machine price revealed," which is actively misleading. That last one really sticks in my craw. Those of you who studied something useful at university and therefore don't work in journalism might not be aware of this, but Google has certain rules it likes sites like ours to follow, and woe betide any who violate them, because they could find themselves demoted in the algorithm. The rules mostly make sense! For instance, ol' Goog is very insistent that sites don't write things like 'release date revealed' in the headlines for articles that, you know, aren't about a release date being revealed, or which are actually about a wide release window. Entirely fair. Then, of course, Google's own hallucination engine turns around and slaps precisely that kind of misleading headline on Ars Technica's story, which did not have a misleading headline originally. It feels like a bad joke for the corporation to throw out its own rules like this in pursuit of slapping a shareholder-powered "AI-driven" badge on yet another enshittified product. Even worse, Google tucks away its "AI-generated" notice behind a See More button, leaving readers likely to assume the terrible headlines belong to the sites in question themselves. "These screenshots show a small UI experiment for a subset of Discover users," a Google rep told The Verge. "We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web." Well, I'd say mission failed on that front. With any luck, this is an experiment that's soon to end.
[12]
Google Discover tests AI headlines that rewrite news with errors
Google has implemented a new artificial intelligence (AI) experiment that rewrites news headlines for some users in Google Discover, leading to concerns about accuracy and potential misinformation. The firm confirmed the new headlines are part of a "small UI experiment for a subset of Discover users."

The feature impacts content on Google Discover, which users access by swiping right on Pixel or Galaxy home screens, scrolling in the Google app on iPhone, or opening a new Chrome browser window with Google as the homepage. AI-generated article previews now sometimes replace headlines and descriptions written by authors and editors. Evidence of the initiative first appeared when The Verge staff writer Sean Hollister experienced the altered headlines.

While some AI headlines appeared vague, others presented factual inaccuracies, according to reports. For instance, an AI headline stated, "Steam Machine price revealed," though the original Ars Technica article indicated Valve's Steam Machine would not be priced like a typical console. Another AI headline implied "Qi2 slows older Pixels," while the original article merely stated that older Pixel phones cannot use the full extent of Qi2 fast-charging capabilities.

Concerns have also emerged regarding the AI's capacity to generate misleading or potentially libelous headlines. In one instance, a PCGamer article with the original headline "Child labor is unbeatable" was rewritten by Google's AI as "BG3 players exploit children," presenting a significantly different and potentially harmful implication.

Google previously introduced similar AI previews in October, which included short AI summaries that users could expand. The current experiment, however, changes the placement of headlines, positioning the AI-generated text where original headlines typically appear. Users can reportedly identify AI-generated headlines by several characteristics, and Google has confirmed that no opt-out mechanism for this AI experiment is available to users at this time.
Google is testing AI-generated headlines in its Discover feed that replace original news titles with shortened, often inaccurate versions. The experiment has drawn sharp criticism from publishers, who say the AI-generated clickbait misrepresents their work and erodes public trust in media, all while Google offers minimal disclosure about the automated rewrites.
Google has launched a small UI experiment that replaces original news headlines with AI headlines in its Discover feed, a move that has sparked immediate backlash from publishers and journalists. The test, which Google spokesperson Mallory Deleon confirmed to The Verge [1], affects only a subset of Discover users and attempts to condense headlines to approximately four words or less. According to Google, the company is "testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web" [2].

Source: Digital Trends

The AI headline-rewriting feature appears in Google Discover, the news feed accessible by swiping right on Samsung Galaxy or Google Pixel homescreens. While some AI-altered headlines like "Origami model wins prize" and "Hyundai, Kia gain share" maintain basic accuracy, many examples demonstrate how the system generates inaccurate and misleading headlines that fundamentally misrepresent the original stories [1].

The most egregious examples reveal how Google's AI creates misleading content that could damage news websites and publishers. An Ars Technica article titled "Valve's Steam Machine looks like a console, but don't expect it to be priced like one" was rewritten as "Steam Machine price revealed," despite no price being announced [1][2]. A PC Gamer story about Baldur's Gate 3 players building armies of child-like NPCs became "BG3 players exploit children," removing critical context that these were non-player characters in a video game [2].

Source: The Verge

Other headlines simply became unintelligible. A PC Gamer story was reduced to "Schedule 1 farming backup," while another appeared as "AI tag debate heats" [1]. The Verge also noted that a story about "How Microsoft's developers are using AI" was shortened to "Microsoft developers using AI," stripping away the unique angle that made the story newsworthy [1][2].

The experiment represents a significant challenge for publishers who carefully craft headlines to accurately represent their stories and attract readership. Journalists invest considerable effort in creating headlines that invite readers in, responsibly encapsulate the news, and help audiences understand why a story matters. Google's intervention effectively removes this agency, similar to a bookstore replacing a book's cover without the author's permission [1].

Source: PC Gamer

The disclosure problem compounds the issue. Google includes a note stating content is "Generated with AI, which can make mistakes," but this message only appears after users tap a "See more" button [1][3]. There's no visible label indicating these are AI-generated titles rather than publisher-created headlines [4]. Publication names appear directly next to the AI headlines, making it easy for readers to assume the clickbait originated from the news outlet itself [1].

The timing of this experiment is particularly concerning for the media industry. Public trust in journalism already faces serious challenges, and AI-generated misleading headlines threaten to worsen the situation by making legitimate news websites appear to engage in clickbait practices [5]. When readers encounter inaccurate headlines, they may blame publishers rather than Google, potentially damaging hard-earned reputations.

The financial implications extend beyond trust issues. Google's AI Overview has already impacted readership and revenue for news websites, and this latest experiment adds another layer of concern [5]. The overall trend at Google has been to prioritize its own products at the expense of sending clicks to news websites. Even Google has admitted in court that "the open web is already in rapid decline" [1]. Users who encounter AI headlines in Discover can report stories as clickbait, but this punishes publishers by limiting their visibility rather than holding Google accountable for the misleading content [5].

For now, this remains a limited test rather than a full-scale feature release, and sufficient backlash could prevent Google from proceeding with broader implementation [4]. The experiment raises fundamental questions about the balance between platform innovation and the sustainability of quality journalism in an AI-driven ecosystem.

Summarized by Navi