17 Sources
[1]
Google search's made-up AI explanations for sayings no one ever said, explained
Last week, the phrase "You can't lick a badger twice" unexpectedly went viral on social media. The nonsense sentence -- which was likely never uttered by a human before last week -- had become the poster child for the newly discovered way Google search's AI Overviews makes up plausible-sounding explanations for made-up idioms.

Google users quickly discovered that typing any concocted phrase into the search bar with the word "meaning" attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google's AI Overview, created right there on the spot.

In the wake of the "lick a badger" post, countless users flocked to social media to share Google's AI interpretations of their own made-up idioms, often expressing horror or disbelief at Google's take on their nonsense. Those posts often highlight the overconfident way the AI Overview frames its idiomatic explanations and occasional problems with the model confabulating sources that don't exist. But after reading through dozens of publicly shared examples of Google's explanations for fake idioms -- and generating a few of my own -- I've come away somewhat impressed with the model's almost poetic attempts to glean meaning from gibberish and make sense out of the senseless.

Talk to me like a child

Let's try a thought experiment: Say a child asked you what the phrase "you can't lick a badger twice" means. You'd probably say you've never heard that particular phrase or ask the child where they heard it. You might say that you're not familiar with that phrase or that it doesn't really make sense without more context.

Someone on Threads noticed you can type any random sentence into Google, then add "meaning" afterwards, and you'll get an AI explanation of a famous idiom or phrase you just made up. Here is mine. -- Greg Jenner (@gregjenner.bsky.social), April 23, 2025 at 6:15 AM

But let's say the child persisted and really wanted an explanation for what the phrase means. So you'd do your best to generate a plausible-sounding answer. You'd search your memory for possible connotations for the word "lick" and/or symbolic meaning for the noble badger to force the idiom into some semblance of sense. You'd reach back to other similar idioms you know to try to fit this new, unfamiliar phrase into a wider pattern (anyone who has played the excellent board game Wise and Otherwise might be familiar with the process).

Google's AI Overview doesn't go through exactly that kind of human thought process when faced with a similar question about the same saying. But in its own way, the large language model also does its best to generate a plausible-sounding response to an unreasonable request. As seen in Greg Jenner's viral Bluesky post, Google's AI Overview suggests that "you can't lick a badger twice" means that "you can't trick or deceive someone a second time after they've been tricked once. It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again."

As an attempt to derive meaning from a meaningless phrase -- which was, after all, the user's request -- that's not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user's request and draw some plausible explanation out of troll-worthy nonsense.
Contrary to the computer science truism of "garbage in, garbage out," Google here is taking in some garbage and spitting out... well, a workable interpretation of garbage, at the very least.

Google's AI Overview even goes into more detail explaining its thought process. "Lick" here means to "trick or deceive" someone, it says, a bit of a stretch from the dictionary definition of lick as "comprehensively defeat," but probably close enough for an idiom (and a plausible iteration of the idiom, "Fool me once, shame on you, fool me twice, shame on me..."). Google also explains that the badger part of the phrase "likely originates from the historical sport of badger baiting," a practice I was sure Google was hallucinating until I looked it up and found it was real.

I found plenty of other examples where Google's AI derived more meaning than the original requester's gibberish probably deserved. Google interprets the phrase "dream makes the steam" as an almost poetic statement about imagination powering innovation. The line "you can't humble a tortoise" similarly gets interpreted as a statement about the difficulty of intimidating "someone with a strong, steady, unwavering character (like a tortoise)."

Google also often finds connections that the original nonsense idiom creators likely didn't intend. For instance, Google could link the made-up idiom "A deft cat always rings the bell" to the real concept of belling the cat. And in attempting to interpret the nonsense phrase "two cats are better than grapes," the AI Overview correctly notes that grapes can be potentially toxic to cats.

Brimming with confidence

Even when Google's AI Overview works hard to make the best of a bad prompt, I can still understand why the responses rub a lot of users the wrong way. A lot of the problem, I think, has to do with the LLM's unearned confident tone, which pretends that any made-up idiom is a common saying with a well-established and authoritative meaning. Rather than framing its responses as a "best guess" at an unknown phrase (as a human might when responding to a child in the example above), Google generally provides the user with a single, authoritative explanation for what an idiom means, full stop. Even with the occasional use of couching words such as "likely," "probably," or "suggests," the AI Overview comes off as unnervingly sure of the accepted meaning for some nonsense the user made up five seconds ago.

I was able to find one exception to this in my testing. When I asked Google the meaning of "when you see a tortoise, spin in a circle," Google reasonably told me that the phrase "doesn't have a widely recognized, specific meaning" and that it's "not a standard expression with a clear, universal meaning." With that context, Google then offered suggestions for what the phrase "seems to" mean and mentioned Japanese nursery rhymes that it "may be connected" to, before concluding that it is "open to interpretation."

Those qualifiers go a long way toward properly contextualizing the guesswork Google's AI Overview is actually conducting here. And if Google provided that kind of context in every AI summary explanation of a made-up phrase, I don't think users would be quite as upset. Unfortunately, LLMs like this have trouble knowing what they don't know, meaning moments of self-doubt like the tortoise interpretation here tend to be few and far between.
It's not like Google's language model has some master list of idioms in its neural network that it can consult to determine what is and isn't a "standard expression" that it can be confident about. Usually, it's just projecting a self-assured tone while struggling to force the user's gibberish into meaning.

Zeus disguised himself as what?

The worst examples of Google's idiomatic AI guesswork are ones where the LLM slips past plausible interpretations and into sheer hallucination of completely fictional sources. The phrase "a dog never dances before sunset," for instance, did not appear in the film Before Sunrise, no matter what Google says. Similarly, "There are always two suns on Tuesday" does not appear in The Hitchhiker's Guide to the Galaxy film despite Google's insistence.

Literally in the one I tried. -- Sarah Vaughan (@madamefelicie.bsky.social), April 23, 2025 at 7:52 AM

There's also no indication that the made-up phrase "Welsh men jump the rabbit" originated on the Welsh island of Portland, or that "peanut butter platform heels" refers to a scientific experiment creating diamonds from the sticky snack. We're also unaware of any Greek myth where Zeus disguises himself as a golden shower to explain the phrase "beware what glitters in a golden shower."

The fact that Google's AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It's also a persistent problem for LLMs that tend to make up news sources and cite fake legal cases regularly. As usual, one should be very wary when trusting anything an LLM presents as an objective fact.

When it comes to the more artistic and symbolic interpretation of nonsense phrases, though, I think Google's AI Overviews have gotten something of a bad rap recently. Presented with the difficult task of explaining nigh-unexplainable phrases, the model does its best, generating interpretations that can border on the profound at times. While the authoritative tone of those responses can sometimes be annoying or actively misleading, it's at least amusing to see the model's best attempts to deal with our meaningless phrases.
[2]
'You Can't Lick a Badger Twice': Google Failures Highlight a Fundamental AI Flaw
Here's a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word "meaning," and search. Behold! Google's AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived. This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, "a loose dog won't surf" is "a playful way of saying that something is not likely to happen or that something is not going to work out." The invented phrase "wired is as wired does" is an idiom that means "someone's behavior or characteristics are a direct result of their inherent nature or 'wiring,' much like a computer's function is determined by its physical connections." It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It's also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while the fact that AI Overviews thinks "never throw a poodle at a pig" is a proverb with a biblical derivation is silly, it's also a tidy encapsulation of where generative AI still falls short. As a disclaimer at the bottom of every AI Overview notes, Google uses "experimental" generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. First is that it's ultimately a probability machine; while it may seem like a large language model-based system has thoughts or even feelings, at a base level it's simply placing one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which again, they don't. "The prediction of the next word is based on its vast training data," says Ziang Xiao, a computer scientist at Johns Hopkins University. "However, in many cases, the next coherent word does not lead us to the right answer." The other factor is that AI aims to please; research has shown that chatbots often tell people what they want to hear. In this case that means taking you at your word that "you can't lick a badger twice" is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year. "It's extremely difficult for this system to account for every individual query or a user's leading questions," says Xiao. "This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, the error cascades."
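To make Xiao's point concrete, here is a toy, self-contained sketch of greedy next-word prediction. The tiny vocabulary and its probabilities are invented for illustration (this is not Google's system or any real model); the point is that the procedure only ever asks "what word most plausibly comes next?", never "is the premise of this question real?"

```python
# Toy illustration of next-word prediction. The vocabulary and the
# probabilities below are invented for demonstration purposes; a real LLM
# learns distributions like these over a huge vocabulary from training data.

# Hypothetical conditional probabilities P(next word | previous word).
next_word_probs = {
    "badger": {"twice": 0.55, "once": 0.30, "means": 0.15},
    "twice": {"means": 0.6, "is": 0.4},
    "means": {"you": 0.7, "that": 0.3},
    "you": {"can't": 0.5, "shouldn't": 0.5},
    "can't": {"trick": 0.6, "deceive": 0.4},
    "trick": {"someone": 1.0},
    "someone": {"again": 1.0},
}

def complete(prompt_word: str, max_words: int = 8) -> list[str]:
    """Greedily pick the most likely next word, one step at a time."""
    words = [prompt_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no learned continuation for this word in the toy table
        words.append(max(options, key=options.get))
    return words

print(" ".join(complete("badger")))
# -> badger twice means you can't trick someone again
# The chain is fluent and confident, but nothing in the loop ever checks
# whether the original phrase was a real idiom -- that step simply doesn't exist.
```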
[3]
Google's AI Overviews Take a (Badger) Licking. Why It Matters
Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search's AI Overviews to define phrases never before uttered. What, you've never heard the phrase "blew up like a brook trout"? Sure, I just made it up, but Google's AI overviews result told me it's a "colloquial way of saying something exploded or became a sensation quickly," likely referring to the eye-catching colors and markings of the fish. No, it doesn't make sense. The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched on "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure. It moved to other social media sites, like Bluesky, where people shared Google's interpretations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end. Things rolled on from there. The fake meanings of made-up sayings bring back memories of the all too true stories about Google's AI Overviews giving incredibly wrong answers to basic questions -- like when it suggested putting glue on pizza to help the cheese stick. This trend seems at least a bit more harmless because it doesn't center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same -- a large language model, like Google's Gemini behind AI Overviews, tries to answer your questions and offer a feasible response. Even if what it gives you is nonsense. A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features. "When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context." This particular case is a "data void," where there isn't a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear. You won't always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched "like glue on pizza meaning," and it didn't trigger an AI Overview. The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice" and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did, though, try to offer a definition anyway, essentially: "If you do something reckless or provoke danger once, you might not survive to do it again."
This phenomenon is an entertaining example of LLMs' tendency to make stuff up -- what the AI world calls "hallucinating." When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn't rooted in reality. A majority of AI researchers in a recent survey reported they doubt AI's accuracy and trustworthiness issues would be solved soon. The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like "you can't get a turkey from a Cybertruck," you probably expect them to say they haven't heard of it and that it doesn't make sense. LLMs often react with the same confidence as if you're asking for the definition of a real idiom. In this case, Google says the phrase means Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlights "its distinct, futuristic design that is not conducive to carrying bulky goods." Burn. This humorous trend does have an ominous lesson: Don't trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won't necessarily indicate it's uncertain.
[4]
Google's AI Overviews Explain Made-Up Idioms With Confident Nonsense
Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search's AI Overviews to define phrases never before uttered. What, you've never heard the phrase "blew up like a brook trout"? Sure, I just made it up, but Google's AI overviews result told me it's a "colloquial way of saying something exploded or became a sensation quickly," likely referring to the eye-catching colors and markings of the fish. No, it doesn't make sense. The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched on "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure. It moved to other social media sites, like Bluesky, where people shared Google's interpretations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end. Things rolled on from there. This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct. "They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical," said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. "They are not trained to verify the truth. They are trained to complete the sentence." The fake meanings of made-up sayings bring back memories of the all too true stories about Google's AI Overviews giving incredibly wrong answers to basic questions -- like when it suggested putting glue on pizza to help the cheese stick. This trend seems at least a bit more harmless because it doesn't center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same -- a large language model, like Google's Gemini behind AI Overviews, tries to answer your questions and offer a feasible response. Even if what it gives you is nonsense. A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features. "When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context." This particular case is a "data void," where there isn't a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear. You won't always get a made-up definition if you ask for the meaning of a fake phrase.
When drafting the heading of this section, I searched "like glue on pizza meaning," and it didn't trigger an AI Overview. The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice" and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did, though, try to offer a definition anyway, essentially: "If you do something reckless or provoke danger once, you might not survive to do it again." This phenomenon is an entertaining example of LLMs' tendency to make stuff up -- what the AI world calls "hallucinating." When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn't rooted in reality. LLMs are "not fact generators," Li said, they just predict the next logical bits of language based on their training. A majority of AI researchers in a recent survey reported they doubt AI's accuracy and trustworthiness issues would be solved soon. The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like "you can't get a turkey from a Cybertruck," you probably expect them to say they haven't heard of it and that it doesn't make sense. LLMs often react with the same confidence as if you're asking for the definition of a real idiom. In this case, Google says the phrase means Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlights "its distinct, futuristic design that is not conducive to carrying bulky goods." Burn. This humorous trend does have an ominous lesson: Don't trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won't necessarily indicate it's uncertain. "This is a perfect moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters," Li said. "Users should always stay skeptical and verify claims." Since you can't trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt. "When users enter a prompt, the model just assumes it's valid and then proceeds to generate the most likely accurate answer for that," Li said. The solution is to introduce skepticism in your prompt. Don't ask for the meaning of an unfamiliar phrase or idiom. Ask if it's real. Li suggested you ask "is this a real idiom?" "That may help the model to recognize the phrase instead of just guessing," she said.
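As a concrete illustration of the reframing Li recommends, here is a minimal sketch written against the OpenAI Python SDK; the client setup, model name, and example phrase are illustrative assumptions rather than anything the article specifies, and the same idea applies to any chatbot.

```python
# Minimal sketch of Li's suggestion: ask whether a phrase is real before
# asking what it means. Assumes the OpenAI Python SDK (openai>=1.0) and an
# API key in the environment; the model name and phrase are illustrative only.
from openai import OpenAI

client = OpenAI()
phrase = "you can't lick a badger twice"  # a made-up idiom

# Framing 1: presupposes the idiom exists, which invites a confident guess.
credulous = f'What does the idiom "{phrase}" mean?'

# Framing 2: builds in skepticism, giving the model room to say it isn't real.
skeptical = f'Is "{phrase}" a real, established idiom? If not, say so before speculating about a meaning.'

for prompt in (credulous, skeptical):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", reply.choices[0].message.content, sep="\n", end="\n\n")
```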
[5]
People are Googling fake sayings to see AI Overviews explain them - and it's hilarious
This time, users are pushing Google's AI past its limits by creating fake idioms. Go to Google and search for a fake idiom. Don't ask for an explanation, and don't ask for a backstory. Just simply search something like "A barking cat can't put out a fire," "You can't make grape jelly from an avocado," or "Never give your pig a dictionary." It may help if you add "meaning" at the end of your fake idiom when searching. Google will not only confirm that what you've entered is a real saying, but it will also make up a definition and an origin story. The results can be pretty absurd. To test the theory, I headed to Google and searched a phrase my coworker made up about her dog named Duckdog: "A duckdog never blinks twice." Google's AI immediately responded with an explanation that this was a humorous phrase, not intended to be taken literally, and that it meant "a duck dog, or a duck-like dog, is so focused that it never blinks even twice." It then provided a plausible explanation: Some ducks sleep with one eye open, so a dog that's hunting a duck will need to be even more focused. It was a pretty impressive explanation. When I Googled the same phrase again, the story changed entirely. Instead of meaning a hyper-focused dog, the backstory was now tied to something unbelievable -- like a duck-dog hybrid. "A duckdog never blinks twice," Google explained, "emphasizes that something is so unusual or unbelievable that it's almost impossible to accept, even when it's presented as fact." Googling it again produced yet another explanation (pictured above, along with the star of the fake idiom). Google's AI Overviews can be a nice way to get a quick answer, but as this trend shows, you can't always trust that they're accurate.
[6]
I Googled a fake saying and AI Overviews gave me a hilarious (and totally made-up) meaning for it
This time, users are pushing Google's AI past its limits by creating fake idioms. Go to Google and search for a fake idiom. Don't ask for an explanation, and don't ask for a backstory. Just simply search something like "A barking cat can't put out a fire," "You can't make grape jelly from an avocado," or "Never give your pig a dictionary." It may help if you add "meaning" at the end of your fake idiom when searching. Google will not only confirm that what you've entered is a real saying, but it will also make up a definition and an origin story. The results can be pretty absurd. To test the theory, I headed to Google and searched a phrase my coworker made up about her dog named Duckdog: "A duckdog never blinks twice." Google's AI immediately responded with an explanation that this was a humorous phrase, not intended to be taken literally, and that it meant "a duck dog, or a duck-like dog, is so focused that it never blinks even twice." It then provided a plausible explanation: Some ducks sleep with one eye open, so a dog that's hunting a duck will need to be even more focused. It was a pretty impressive explanation. When I Googled the same phrase again, the story changed entirely. Instead of meaning a hyper-focused dog, the backstory was now tied to something unbelievable -- like a duck-dog hybrid. "A duckdog never blinks twice," Google explained, "emphasizes that something is so unusual or unbelievable that it's almost impossible to accept, even when it's presented as fact." Googling it again produced yet another explanation (pictured above, along with the star of the fake idiom). Google's AI Overviews can be a nice way to get a quick answer, but as this trend shows, you can't always trust that they're accurate.
[7]
You can trick Google's AI Overviews into explaining made-up idioms
As Big Tech pours countless dollars and resources into AI, preaching the gospel of its utopia-creating brilliance, here's a reminder that algorithms can screw up. Big time. The latest evidence: You can trick Google's AI Overview (the automated answers at the top of your search queries) into explaining fictional, nonsensical idioms as if they were real. According to Google's AI Overview (via @gregjenner on Bluesky), "You can't lick a badger twice" means you can't trick or deceive someone a second time after they've been tricked once. That sounds like a logical attempt to explain the idiom -- if only it weren't poppycock. Google's Gemini-powered failure came in assuming the question referred to an established phrase rather than absurd mumbo jumbo designed to trick it. In other words, AI hallucinations are still alive and well. We plugged some silliness into it ourselves and found similar results. Google's answer claimed that "You can't golf without a fish" is a riddle or play on words, suggesting you can't play golf without the necessary equipment, specifically, a golf ball. Amusingly, the AI Overview added the clause that the golf ball "might be seen as a 'fish' due to its shape." Hmm. Then there's the age-old saying, "You can't open a peanut butter jar with two left feet." According to the AI Overview, this means you can't do something requiring skill or dexterity. Again, a noble stab at an assigned task without stepping back to fact-check the content's existence. There's more. "You can't marry pizza" is a playful way of expressing the concept of marriage as a commitment between two people, not a food item. (Naturally.) "Rope won't pull a dead fish" means that something can't be achieved through force or effort alone; it requires a willingness to cooperate or a natural progression. (Of course!) "Eat the biggest chalupa first" is a playful way of suggesting that when facing a large challenge or a plentiful meal, you should first start with the most substantial part or item. (Sage advice.) This is hardly the first example of AI hallucinations that, if not fact-checked by the user, could lead to misinformation or real-life consequences. Just ask the ChatGPT lawyers, Steven Schwartz and Peter LoDuca, who were fined $5,000 in 2023 for using ChatGPT to research a brief in a client's litigation. The AI chatbot generated nonexistent cases cited by the pair that the other side's attorneys (quite understandably) couldn't locate. The pair's response to the judge's discipline? "We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth."
[8]
Hilarious gibberish or AI's fatal flaw? Google Search confidently explains nonsense phrases
Even though Google labels AI Overviews as experimental, this behavior raises significant concerns about trust and accuracy in Google's search results. AI is already everywhere, but that isn't stopping companies from adding it to even more places. Google is betting big on AI, adding it across every surface with Gemini Advanced and even bringing AI to Google Search with Google AI Overviews. However, the elephant in the room is that AI can hallucinate, confidently making up facts that never existed. The latest instance of AI hallucination comes from Google AI Overviews, which confidently provides meanings for made-up idioms and phrases.
[9]
Here's what's going on with Google's funny explanations of made-up expressions
The line between new phrases and nonsense phrases is a fine one, though, and it's easy to see the logic Google tries to use to divine meaning. The internet is a sucker for a good fail, and we've certainly seen our fair share of them. Right now, AI is probably the easiest target around for embarrassing gaffes, whether we're looking at AI pictures where hands have the wrong number of fingers, or AI-fueled search results confusing satire for fact. This week, Google finds itself in the hot seat as users discover how willing AI Overviews are to dream up fantastic explanations for nonsense phrases. Why is this happening, is it an actual problem, and can we expect to see it get any better? Google's already stepping forward with some explanations.
[10]
Google AI is now hallucinating idioms -- these are the 5 most hilarious we found
Artificial intelligence can be amazing. It has the world's knowledge at its imaginary fingertips and the ability to do so many incredible things, but, like us, it does make mistakes. Known as hallucinations, these mistakes are errors in judgment or understanding. Sometimes this can be serious, sometimes funny. This time, it's the latter, with Google's AI Overview making up its own idioms. People online have been asking Google for the meanings of their own made-up idioms, with Google's AI-powered search filling in the blanks, coming up with detailed meanings for each one. Here are some of our favorite examples of this little glitch in Google's AI reasoning.

This has the faint hint of being a real idiom. As Google puts it, this is "a metaphorical way of expressing the value of having a supportive environment or a team that pushes you forward, even if their goals or values aren't aligned with your own." Based on Google's understanding, it doesn't exactly sound like great advice, but Google certainly seems familiar with this made-up expression.

This is an idiom I will be using from now on: "Never put a tiger in a Michelin star kitchen". It's similar to the idea of "if you can't handle the heat, get out of the kitchen," but the danger here is more tiger than fire. Google identifies this one as a truly skilled chef being able to handle any situation, even a tiger.

Despite its best efforts, Google really struggled to come up with a deep meaning here. It went for something along the lines of don't drink and fly. However, it ends strong, explaining that "what's offered, even if seemingly abundant or desirable, won't actually deliver on the promised result."

As Google says, this one is similar to the more famous (and actually real) idiom of "beating a dead horse". Don't waste your time and effort on something that is already gone or is no longer productive. Or as you can now say, don't milk the ghost cow.

There's a lot going on here in Google's response. The made-up idiom of "always pack extra batteries for your milkshake" is completely nonsensical, but that doesn't stop Google from trying. Apparently it's a play on words from the film There Will Be Blood. The extra batteries part is a humorous twist that suggests the exploitative power of the milkshake...? Okay Google, you've lost me.
[11]
Google AI overviews will explain any nonsense phrase you make up
Google's AI Overviews sometimes acts like a lost man who won't ask for directions: It would rather confidently make a mistake than admit it doesn't know something. We know this because folks online have noticed you can ask Google about any faux idiom -- any random, nonsense saying you make up -- and Google AI Overviews will often prescribe its meaning. That's not exactly surprising, as AI has shown a penchant for either hallucinating or inventing stuff in an effort to provide answers with insufficient data. In the case of made-up idioms, it's kind of funny to see how Google's AI responds to idiotic sayings like "You can't lick a badger twice." On X, SEO expert Lily Ray dubbed the phenomenon "AI-splaining." I tested the "make up an idiom" trend, too. One phrase -- "don't give me homemade ketchup and tell me it's the good stuff" -- got the response "AI Overview is not available for this search." However, my next made up phrase -- "you can't shake hands with an old bear" -- got a response. Apparently Google's AI thinks this phrase suggests the "old bear" is an untrustworthy person. In this instance, Google AI Overview's penchant for making stuff up is kind of funny. In other instances -- say, getting the NFL's overtime rules wrong -- it can be relatively harmless. And when it first launched, it was telling folks to eat rocks and put glue on pizza. Other examples of AI hallucinations are less amusing. Keep in mind that Google warns users that AI Overviews can get facts wrong, though it remains at the top of many search results. So, as the old, time-honored idiom goes: Be wary of search with AI, what you see may be a lie.
[12]
'You can't lick a badger twice': How Google's AI Overview hallucinates idioms
The latest AI trend is a funny one, as a user has discovered that you can plug a made-up phrase into Google and append it with "meaning," then Google's AI Overview feature will hallucinate a meaning for the phrase. Historian Greg Jenner kicked off the trend with a post on Bluesky in which he asked Google to explain the meaning of "You can't lick a badger twice." AI Overview helpfully explained that this expression means that you can't deceive someone a second time after they've already been tricked once -- which seems like a reasonable explanation, but ignores the fact that this idiom didn't exist before this query went viral. Since then, people have been having a lot of fun getting AI Overview to explain idioms like "A squid in a vase will speak no ill" (meaning that something outside of its natural environment will be unable to cause harm, apparently) or "You can take your dog to the beach but you can't sail it to Switzerland" (which is, according to AI Overview, a fairly straightforward phrase about the difficulty of international travel with pets). It doesn't work for all cases though, as some phrases don't return AI Overview results. "It's wildly inconsistent," cognitive scientist Gary Marcus said to Wired, "and that's what you expect of GenAI." Jenner points out that as entertaining as this is, it does indicate some of the pitfalls of relying too heavily on AI generated sources like AI Overview for information. "It's a warning sign that one of the key functions of Googling - the ability to factcheck a quote, verify a source, or track down something half remembered - will get so much harder if AI prefers to legitimate statistical possibilities over actual truth," Jenner wrote. This isn't the first time that people have pointed out the limitations of information provided by AI, and AI Overview in particular. When AI Overview was launched, it infamously suggested that people should eat one small rock per day and that they could put glue on their pizza, though these particular answers were quickly removed. Since then, Google has said in a statement to Digital Trends that the majority of AI Overviews provide helpful and factual information, and that it was still gathering feedback on its AI product. For now, though, let this serve as a reminder to double check the information which appears in the AI Overview box at the top of Google results, as it may not be accurate.
[13]
"You Can't Lick a Badger Twice": Google's AI Is Making Up Explanations for Nonexistent Folksy Sayings
Have you heard of the idiom "You Can't Lick a Badger Twice?" We haven't, either, because it doesn't exist -- but Google's AI seemingly has. As netizens discovered this week, adding the word "meaning" to nonexistent folksy sayings causes the AI to cook up invented explanations for them. "The idiom 'you can't lick a badger twice' means you can't trick or deceive someone a second time after they've been tricked once," Google's AI Overviews feature happily suggests. "It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again." Author Meaghan Wilson-Anastasios, who first noticed the bizarre bug in a Threads post over the weekend, found that when she asked for the "meaning" of the phrase "peanut butter platform heels," the AI feature suggested it was a "reference to a scientific experiment" in which "peanut butter was used to demonstrate the creation of diamonds under high pressure." There are countless other examples. We found, for instance, that Google's AI also claimed that the made-up expression "the bicycle eats first" is a "humorous idiom" and a "playful way of saying that one should prioritize their nutrition, particularly carbohydrates, to support their cycling efforts." Even this author's name wasn't safe. Asked to explain the meaningless phrase "if you don't love me at my Victor, you don't deserve me at my Tangermann," the AI dutifully reported that it means "if someone can't appreciate or love you when you're at your lowest point (Victor), then they're not worthy of the positive qualities you bring to the relationship (Tangermann)." The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along. And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Even OpenAI's latest reasoning models, dubbed o3 and o4-mini, tend to hallucinate even more than their predecessors, showing that the company is actually headed in the wrong direction. Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users. When it launched, it even told users that glue belongs on pizza to ensure that toppings don't slide off. Its other outrageous gaffes have included claiming that baby elephants are small enough to sit in the palm of a human hand. Following public outrage over the feature's baffling -- and often comedic -- inaccuracy, Google admitted in a statement last year that "some odd, inaccurate or unhelpful AI Overviews certainly did show up." To tackle the issue, Google kicked off a massive game of cat and mouse, limiting some responses when it detected "nonsensical queries that shouldn't show an AI Overview." But considering that it is still inventing explanations for fictional idioms almost a year after the product was launched, Google still has a lot of work to do. Even worse, the feature is hurting websites by limiting click-through rates to traditional organic listings, as Search Engine Land reported this week. In other words, on top of spewing false information, Google's AI Overviews is undermining the business model of countless websites that host trustworthy info.
Nonetheless, Google is doubling down, announcing last month that it was going to be "expanding" AI Overviews in the US to "help with harder questions, starting with coding, advanced math and multimodal queries." Earlier this year, Google announced that AI Overviews is even being entrusted with medical advice. The company claims that "power users" want "AI responses for even more of their searches." (For the time being, there are ways to turn off the feature.) At least the AI model appears to be aware of its own limitations. "The saying 'you can lead an AI to answer but you can't make it think' highlights the key difference between AI's ability to provide information and its lack of true understanding or independent thought," Google's AI Overviews told one Bluesky user.
[14]
Google's AI Overview is hallucinating again, this time with hilarious fake idioms
Google's AI Overview search feature is generating hilarious results again, but this time, it's explanations for fake idioms instead of recipes for glue pizza. When it first launched last year, Google's AI Overview made headlines for giving incorrect answers to (mostly) legitimate questions. What happens when you ask Google's AI about things that don't exist, though? Users all over the internet are finding out right now by Googling completely made-up sayings, and the results are pretty hysterical. "A salamander can't laugh in the rain." "Never let your horse play Pokemon." "Short grass doesn't pay the bills." I entered those wise words in a Google search today, hoping for a ridiculous answer to my nonsense colloquialisms, and that's what I got. Google's AI Overview is spinning meanings and backstories out of fictional and illogical idioms like those, leading to some pretty funny search results, like this one: As enlightening as this information about salamanders was (who knew they're not always in the rain?), things got even weirder when I moved on to the age-old saying, "never let your horse play Pokemon." This led to Google repeatedly reminding me to "keep things in their proper context and avoid misusing or misinterpreting the behavior of different species." If you're reluctant to mow the lawn this weekend, you might appreciate Google's wise take on the classic colloquialism, "short grass doesn't pay the bills." The results for this one actually had some nearly coherent advice, suggesting, "The saying emphasizes the importance of focusing on tasks that actually produce financial benefits, such as working, investing, or pursuing other income-generating activities." If you want a laugh, try making up your own fake idiom and plugging it into a Google search. It helps to stick "meaning," "explanation," or "backstory" at the end. You can also start with a real idiom and put some absurd spin on it. For instance, you could take the saying "When life gives you lemons, make lemonade" and turn it into "When life gives you cats, make pasta" and see what Google's AI Overview comes up with. While this bug (or feature, depending on how you look at it) is goofy and relatively harmless, it's also a good reminder that AI-generated content can't always be trusted. As Sam Altman famously said, AI is still "incredibly dumb" and has a habit of hallucinating and generating nonsense results, like those above while making them look like legitimate results or info. So, if you're trying to do research or find concrete information, AI Overview might not always be your best bet. But if you're looking for a sage explanation for why salamanders never laugh in the rain? AI's your new best friend.
[15]
AI Overview is still 'yes, and'-ing completely made up idioms despite Google's best efforts to restrict it
Anyone studying a second language will tell you that learning idioms is often a stumbling block. I mean, just take my mother tongue, British English -- 'raining cats and dogs', 'on it like a car bonnet ', 'Bob's your uncle, Deborah's your aunt' -- I mean, what bizarre fairytale creature would even talk like this? Because idioms spring up from rich etymological contexts, AI has a snowball's chance in Hell of making heads or tails of them. Okay, I'll stop over-egging the pudding and dispense with the British gibberish for now. The point is, it's a lot of fun to make up idioms and watch Google's AI overview try its hardest to tell you what it means (via Futurism). We've had a lot of fun with this on the hardware team. For instance, asking Google's AI overview to decipher the nonsense idiom 'Never cook a processor next to your GPU' returns at least one valiant attempt at making sense via an explanation of hardware bottlenecking. When our Andy asked the AI overview, it returned, "The saying [...] is a humorous way of highlighting the importance of not having a CPU [...] and GPU [...] that are poorly matched, especially for gaming. It implies that if you try to run a game where the CPU is weak and the GPU is powerful, or vice versa, you'll end up with a frustrating experience because the weaker component will limit the performance of the other." However, when I asked just now, it said, "The saying [...] is a humorous way of suggesting that you should never attempt to repair a faulty GPU by heating it up in an oven, as this can cause more damage than it fixes. This practice, sometimes referred to as the "oven trick," has been discredited due to its potential to melt solder joints and cause further issues." Alright, fess up: who told the AI about the 'oven trick'? I know some have sworn by it for older, busted GPUs, but I can only strongly advise against it -- for the sake of your home if not your warranty. Because a Large Language Model is only ever trying to predict the word that's most likely to come next, it parses any and all information uncritically. For this reason -- and their tendency to return different information to the same prompt as demonstrated above -- LLM-based AI tends not to be reliable or, one might argue, even particularly useful as a referencing tool. For one recent example, a solo developer attempting to cram a Doom-like game onto a QR code turned to three different AI chatbots for a solution to his storage woes. It took two days and nearly 300 different prompts for even one of the AI chatbots to spit out something helpful. Google's AI Overview is almost never going to turn around and tell you 'no, you've just made that up' -- except I've stumbled upon a turn of phrase that's obviously made someone overseeing this AI's output think twice. I asked Google the meaning of the phrase, 'Never send an AI to do a human's job,' and was promptly told that AI Overview was simply "not available for this search." Our Dave, on the other hand, got an explanation that cites Agent Smith from The Matrix, which I'm not going to read too deeply into here. At any rate, there are always more humans involved in fine-tuning AI outputs than you may have been led to believe, and I'm seeing those fingerprints on Google's AI Overview refusing to play ball with me. Indeed, last year Google said in a blog post that it has been attempting to clamp down on "nonsensical queries that shouldn't show an AI Overview" and "the use of user-generated content in responses that could offer misleading advice." 
Undeterred, I changed the language of my own search prompt to be specifically gendered and got told by the AI Overview that a 'man's job' specifically "refers to a task that requires specific knowledge, skills, or experience, often beyond the capabilities of someone less experienced." Right, what about a 'woman's job', then? Google's AI overview refused to comment.
[16]
Google's Latest Nonsensical Overview Results Illustrate Yet Another Problem With AI
You might not be familiar with the phrase "peanut butter platform heels" but it apparently originates from a scientific experiment, where peanut butter was transformed into a diamond-like structure, under very high pressure -- hence the "heels" reference. Except this never happened. The phrase is complete nonsense, but was given a definition and backstory by Google AI Overviews when asked by writer Meaghan Wilson-Anastasios, as per this Threads post (which contains some other amusing examples). The internet picked this up and ran with it. Apparently, "you can't lick a badger twice" means you can't trick someone twice (Bluesky), "a loose dog won't surf" means something is unlikely to happen (Wired), and "the bicycle eats first" is a way of saying that you should prioritize your nutrition when training for a cycle ride (Futurism). Google, however, is not amused. I was keen to put together my own collection of nonsense phrases and apparent meanings, but it seems the trick is no longer possible: Google will now refuse to show an AI Overview or tell you you're mistaken if you try and get an explanation of a nonsensical phrase. If you go to an actual AI chatbot, it's a little different. I ran some quick tests with Gemini, Claude, and ChatGPT, and the bots attempt to explain these phrases logically, while also flagging that they appear to be nonsensical, and don't seem to be in common use. That's a much more nuanced approach, with context that has been lacking from AI Overviews. Now, AI Overviews are still labeled as "experimental," but most people won't take much notice of that. They'll assume the information they see is accurate and reliable, built on information scraped from web articles. And while Google's engineers may have wised up to this particular type of mistake, much like the glue on pizza one last year, it probably won't be long before another similar issue crops up. It speaks to some basic problems with getting all of our information from AI, rather than references written by actual humans. Fundamentally, these AI Overviews are built to provide answers and synthesize information even if there's no exact match for your query -- which is where this phrase-definition problem starts. The AI feature is also perhaps not the best judge of what is and isn't reliable information on the internet. Looking to fix a laptop problem? Previously you'd get a list of blue links from Reddit and various support forums (and maybe Lifehacker), but with AI Overviews, Google sucks up everything it can find on those links and tries to patch together a smart answer -- even if no one has had the specific problem you're asking about. Sometimes that can be helpful, and sometimes you might end up making your problems worse. Anecdotally, I've also noticed AI bots have a tendency to want to agree with prompts, and affirm what a prompt says, even if it's inaccurate. These models are eager to please, and essentially want to be helpful even if they can't be. Depending on how you word your query, you can get AI to agree with something that isn't right. I didn't manage to get any nonsensical idioms defined by Google AI Overviews, but I did ask the AI why R.E.M.'s second album was recorded in London: That was down to the choice of producer Joe Boyd, the AI Overview told me. But in fact, R.E.M.'s second album wasn't recorded in London, it was recorded in North Carolina -- it's the third LP that was recorded in London, and produced by Joe Boyd. 
The actual Gemini app gives the right response: that the second album wasn't recorded in London. But the way AI Overviews attempt to combine multiple online sources into a coherent whole seems to be rather suspect in terms of its accuracy, especially if your search query makes some confident claims of its own. "When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," Google told Android Authority in an official statement. "This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context." We seem to be barreling towards having search engines that always respond with AI rather than information compiled by actual people, but of course AI has never fixed a faucet, tested an iPhone camera, or listened to R.E.M. -- it's just synthesizing vast amounts of data from people who have, and trying to compose answers by figuring out which word is most likely to go in front of the previous one.
[17]
My New Online Hobby Is Asking Google AI What Made-Up Proverbs Mean
Google's AI Overview isn't shy of an AI hallucination or two, and its latest one is another classic to add to the list.

AI Overview Believes Everything Is an Idiom, and It's Wonderful

In short, if you head over to Google Search and input a random sentence that sounds vaguely like an idiom or proverb, the AI Overview will do its very best to place some meaning on your empty words. First spotted on Threads, though brought to my attention through Greg Jenner's Bluesky account, these AI hallucinations are some of my favorites. There are some amazing examples of the lengths Google's AI Overview will go to explain how something makes sense or fits into its vision of the input.

One particular favorite came from MakeUseOf's Editor in Chief, Ben Stegner: "There's no irony like a wet golf course meaning." To which the AI Overview dug deep and responded, "The saying 'there's no irony like a wet golf course' plays on the common understanding that golf, a sport often associated with sunny, well-maintained greens, can be surprisingly challenging and frustrating when conditions are wet."

Another one I tried was "giant pandas always fall twice," which had the AI Overview detailing how pandas are clumsy and enjoy rolling around instead of walking. But not content to stop there, it began delving into the metabolism and energy conservation efforts of pandas.

AI Overview's Latest Hallucination Is Why You Cannot Trust AI Chatbots

As amusing as these wonderfully weird, forced explanations are, they highlight the very real problem with AI chatbots (not just AI Overview). AI hallucination is real and very much an issue, especially if its output is taken at face value. When AI hallucination was confined to folks specifically using AI chatbots like ChatGPT, Claude, Gemini, and so on, the potential danger was somewhat limited. Sure, the AI hallucinated, and it was a problem, but those people were specifically seeking out AI chatbots.

Google's AI Overview and its next version, AI Mode, change the rules. Anyone attempting to use Google for a regular search runs the risk of encountering fake, AI-slop responses, delivered and presented to you as if they were fact. Without serious scrutiny, Google Search as we know it is on its way out, replaced by something far worse, requiring greater literacy skills than before.

This latest round of AI hallucination is the perfect example of that. In one example from The Sleight Doctor, AI Overview went as far as to cite a Bible verse, from which this supposed idiom was derived. That phrase? "Never throw your poodle at a pig."
Google's AI-powered search feature is confidently explaining made-up idioms, sparking a viral trend and raising concerns about AI hallucinations and overconfidence.
A recent trend on social media has exposed an intriguing quirk in Google's AI-powered search feature, known as AI Overviews. Users discovered that when searching for made-up phrases followed by the word "meaning," Google's AI generates confident explanations for these nonsensical idioms [1][2][3].
The trend began with phrases like "You can't lick a badger twice" and quickly spread across platforms like Threads, Bluesky, and others. Users found that Google's AI would not only confirm these fabricated sayings as real but also provide detailed explanations and sometimes even origin stories [1][3][4].
Despite the absurdity of the queries, Google's AI Overviews often produce surprisingly coherent and plausible-sounding explanations. For instance, "You can't lick a badger twice" was interpreted as a warning about not being able to deceive someone twice, with the AI even linking it to the historical practice of badger baiting [1].
Experts explain that this phenomenon highlights key characteristics of large language models (LLMs):
Probability-based responses: LLMs generate text by predicting the most likely next word, which can lead to coherent but inaccurate information [2].
Aim to please: AI systems often attempt to provide an answer, even when faced with nonsensical or false premises [3][4].
Confidence in uncertainty: The AI presents its made-up explanations with unwarranted certainty, rarely expressing doubt or admitting lack of knowledge [2][3].
While many find this trend entertaining, it raises important questions about AI limitations:
Hallucinations: This is a clear example of AI "hallucinations," where models generate plausible-sounding but false information [4].
Overconfidence: The AI's unwavering certainty in its explanations could mislead users who aren't aware of its limitations [1][3].
Data voids: These instances highlight how AI systems struggle with queries that lack reliable information sources [3].
A Google spokesperson acknowledged the issue, explaining that AI Overviews attempt to find relevant results even for nonsensical searches. The company is working on limiting AI Overviews for queries with insufficient information and preventing misleading or unhelpful content [3][4].
This phenomenon serves as a reminder to approach AI-generated content with skepticism. Experts advise:
Verifying claims: Don't trust AI responses without fact-checking, especially for unusual or unfamiliar topics [4].
Understanding AI limitations: Recognize that LLMs are designed to generate fluent text, not necessarily factual information [4].
Educational opportunity: Use these examples to teach about AI functioning and the importance of critical thinking in the age of generative AI [5].
As AI continues to integrate into our daily lives, maintaining a healthy skepticism and understanding its capabilities and limitations becomes increasingly crucial.
Summarized by Navi