Curated by THEOUTPOST
On Thu, 24 Apr, 12:05 AM UTC
16 Sources
[1]
'You Can't Lick a Badger Twice': Google Failures Highlight a Fundamental AI Flaw
Here's a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word "meaning," and search. Behold! Google's AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived. This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, "a loose dog won't surf" is "a playful way of saying that something is not likely to happen or that something is not going to work out." The invented phrase "wired is as wired does" is an idiom that means "someone's behavior or characteristics are a direct result of their inherent nature or 'wiring,' much like a computer's function is determined by its physical connections." It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It's also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while the fact that AI Overviews thinks "never throw a poodle at a pig" is a proverb with a biblical derivation is silly, it's also a tidy encapsulation of where generative AI still falls short. As a disclaimer at the bottom of every AI Overview notes, Google uses "experimental" generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. First is that it's ultimately a probability machine; while it may seem like a large language model-based system has thoughts or even feelings, at a base level it's simply placing one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which again, they don't. "The prediction of the next word is based on its vast training data," says Ziang Xiao, a computer scientist at Johns Hopkins University. "However, in many cases, the next coherent word does not lead us to the right answer." The other factor is that AI aims to please; research has shown that chatbots often tell people what they want to hear. In this case that means taking you at your word that "you can't lick a badger twice" is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year. "It's extremely difficult for this system to account for every individual query or a user's leading questions," says Xiao. "This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, the error cascades."
[2]
Google's AI Overviews Explain Made-Up Idioms With Confident Nonsense
Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search's AI Overviews to define phrases never before uttered. What, you've never heard the phrase "blew up like a brook trout"? Sure, I just made it up, but Google's AI overviews result told me it's a "colloquial way of saying something exploded or became a sensation quickly," likely referring to the eye-catching colors and markings of the fish. No, it doesn't make sense. The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched on "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure. It moved to other social media sites, like Bluesky, where people shared Google's interpretations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end. Things rolled on from there. This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct. "They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical," said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. "They are not trained to verify the truth. They are trained to complete the sentence." The fake meanings of made-up sayings bring back memories of the all too true stories about Google's AI Overviews giving incredibly wrong answers to basic questions -- like when it suggested putting glue on pizza to help the cheese stick. This trend seems at least a bit more harmless because it doesn't center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same -- a large language model, like Google's Gemini behind AI Overviews, tries to answer your questions and offer a feasible response. Even if what it gives you is nonsense. A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features. "When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context." This particular case is a "data void," where there isn't a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear. You won't always get a made-up definition if you ask for the meaning of a fake phrase.
When drafting the heading of this section, I searched "like glue on pizza meaning," and it didn't trigger an AI Overview. The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice" and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did, though, try to offer a definition anyway, essentially: "If you do something reckless or provoke danger once, you might not survive to do it again." This phenomenon is an entertaining example of LLMs' tendency to make stuff up -- what the AI world calls "hallucinating." When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn't rooted in reality. LLMs are "not fact generators," Li said; they just predict the next logical bits of language based on their training. A majority of AI researchers in a recent survey reported they doubt AI's accuracy and trustworthiness issues would be solved soon. The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like "you can't get a turkey from a Cybertruck," you probably expect them to say they haven't heard of it and that it doesn't make sense. LLMs often react with the same confidence as if you're asking for the definition of a real idiom. In this case, Google says the phrase means Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlights "its distinct, futuristic design that is not conducive to carrying bulky goods." Burn. This humorous trend does have an ominous lesson: Don't trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won't necessarily indicate it's uncertain. "This is a perfect moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters," Li said. "Users should always stay skeptical and verify claims." Since you can't trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt. "When users enter a prompt, the model just assumes it's valid and then proceeds to generate the most likely accurate answer for that," Li said. The solution is to introduce skepticism in your prompt. Don't ask for the meaning of an unfamiliar phrase or idiom. Ask if it's real. Li suggested you ask "is this a real idiom?" "That may help the model to recognize the phrase instead of just guessing," she said.
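To make Li's suggestion concrete, here is a minimal sketch of the difference between the two prompts. It assumes the OpenAI Python client and an illustrative model name; neither Li nor Google prescribes this exact setup, and the phrasing is only meant to show where the skepticism goes.

from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

client = OpenAI()

PHRASE = "you can't lick a badger twice"

# Naive prompt: presupposes the phrase is real, inviting a confident guess.
naive_prompt = f'What does the idiom "{PHRASE}" mean?'

# Skeptical prompt: asks the model to verify the phrase before explaining it.
skeptical_prompt = (
    f'Is "{PHRASE}" a real, established idiom? '
    "Answer yes or no first, and only explain it if it is real."
)

for label, prompt in [("naive", naive_prompt), ("skeptical", skeptical_prompt)]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)

The first prompt takes the idiom's existence for granted, so the model is nudged toward inventing a definition; the second asks it to check before explaining, which is the grain of salt Li describes.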
[3]
Google's AI Overviews Take a (Badger) Licking. Why It Matters
Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search's AI Overviews to define phrases never before uttered. What, you've never heard the phrase "blew up like a brook trout"? Sure, I just made it up, but Google's AI overviews result told me it's a "colloquial way of saying something exploded or became a sensation quickly," likely referring to the eye-catching colors and markings of the fish. No, it doesn't make sense. The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched on "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure. It moved to other social media sites, like Bluesky, where people shared Google's interpretations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end. Things rolled on from there. The fake meanings of made-up sayings bring back memories of the all too true stories about Google's AI Overviews giving incredibly wrong answers to basic questions -- like when it suggested putting glue on pizza to help the cheese stick. This trend seems at least a bit more harmless because it doesn't center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same -- a large language model, like Google's Gemini behind AI Overviews, tries to answer your questions and offer a feasible response. Even if what it gives you is nonsense. A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features. "When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context." This particular case is a "data void," where there isn't a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear. You won't always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched "like glue on pizza meaning," and it didn't trigger an AI Overview. The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice" and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did, though, try to offer a definition anyway, essentially: "If you do something reckless or provoke danger once, you might not survive to do it again."
This phenomenon is an entertaining example of LLMs' tendency to make stuff up -- what the AI world calls "hallucinating." When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn't rooted in reality. A majority of AI researchers in a recent survey reported they doubt AI's accuracy and trustworthiness issues would be solved soon. The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like "you can't get a turkey from a Cybertruck," you probably expect them to say they haven't heard of it and that it doesn't make sense. LLMs often react with the same confidence as if you're asking for the definition of a real idiom. In this case, Google says the phrase means Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlights "its distinct, futuristic design that is not conducive to carrying bulky goods." Burn. This humorous trend does have an ominous lesson: Don't trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won't necessarily indicate it's uncertain.
[4]
People are Googling fake sayings to see AI Overviews explain them - and it's hilarious
This time, users are pushing Google's AI past its limits by creating fake idioms. Go to Google and search for a fake idiom. Don't ask for an explanation, and don't ask for a backstory. Just simply search something like "A barking cat can't put out a fire," "You can't make grape jelly from an avocado," or "Never give your pig a dictionary." It may help if you add "meaning" at the end of your fake idiom when searching. Google will not only confirm that what you've entered is a real saying, but it will also make up a definition and an origin story. The results can be pretty absurd. To test the theory, I headed to Google and searched a phrase my coworker made up about her dog named Duckdog: "A duckdog never blinks twice." Google's AI immediately responded with an explanation that this was a humorous phrase, not intended to be taken literally, and that it meant "a duck dog, or a duck-like dog, is so focused that it never blinks even twice." It then provided a plausible explanation: Some ducks sleep with one eye open, so a dog that's hunting a duck will need to be even more focused. It was a pretty impressive explanation. When I Googled the same phrase again, the story changed entirely. Instead of meaning a hyper-focused dog, the backstory was now tied to something unbelievable -- like a duck-dog hybrid. "A duckdog never blinks twice," Google explained, "emphasizes that something is so unusual or unbelievable that it's almost impossible to accept, even when it's presented as fact." Googling it again produced yet another explanation (pictured above, along with the star of the fake idiom). Google's AI Overviews can be a nice way to get a quick answer, but as this trend shows, you can't always trust that they're accurate.
[5]
I Googled a fake saying and AI Overviews gave me a hilarious (and totally made-up) meaning for it
This time, users are pushing Google's AI past its limits by creating fake idioms. Go to Google and search for a fake idiom. Don't ask for an explanation, and don't ask for a backstory. Just simply search something like "A barking cat can't put out a fire," "You can't make grape jelly from an avocado," or "Never give your pig a dictionary." It may help if you add "meaning" at the end of your fake idiom when searching. Google will not only confirm that what you've entered is a real saying, but it will also make up a definition and an origin story. The results can be pretty absurd. To test the theory, I headed to Google and searched a phrase my coworker made up about her dog named Duckdog: "A duckdog never blinks twice." Google's AI immediately responded with an explanation that this was a humorous phrase, not intended to be taken literally, and that it meant "a duck dog, or a duck-like dog, is so focused that it never blinks even twice." It then provided a plausible explanation: Some ducks sleep with one eye open, so a dog that's hunting a duck will need to be even more focused. It was a pretty impressive explanation. When I Googled the same phrase again, the story changed entirely. Instead of meaning a hyper-focused dog, the backstory was now tied to something unbelievable -- like a duck-dog hybrid. "A duckdog never blinks twice," Google explained, "emphasizes that something is so unusual or unbelievable that it's almost impossible to accept, even when it's presented as fact." Googling it again produced yet another explanation (pictured above, along with the star of the fake idiom). Google's AI Overviews can be a nice way to get a quick answer, but as this trend shows, you can't always trust that they're accurate.
[6]
You can trick Google's AI Overviews into explaining made-up idioms
As Big Tech pours countless dollars and resources into AI, preaching the gospel of its utopia-creating brilliance, here's a reminder that algorithms can screw up. Big time. The latest evidence: You can trick Google's AI Overview (the automated answers at the top of your search queries) into explaining fictional, nonsensical idioms as if they were real. According to Google's AI Overview (via @gregjenner on Bluesky), "You can't lick a badger twice" means you can't trick or deceive someone a second time after they've been tricked once. That sounds like a logical attempt to explain the idiom -- if only it weren't poppycock. Google's Gemini-powered failure came in assuming the question referred to an established phrase rather than absurd mumbo jumbo designed to trick it. In other words, AI hallucinations are still alive and well. We plugged some silliness into it ourselves and found similar results. Google's answer claimed that "You can't golf without a fish" is a riddle or play on words, suggesting you can't play golf without the necessary equipment, specifically, a golf ball. Amusingly, the AI Overview added the clause that the golf ball "might be seen as a 'fish' due to its shape." Hmm. Then there's the age-old saying, "You can't open a peanut butter jar with two left feet." According to the AI Overview, this means you can't do something requiring skill or dexterity. Again, a noble stab at an assigned task without stepping back to fact-check the content's existence. There's more. "You can't marry pizza" is a playful way of expressing the concept of marriage as a commitment between two people, not a food item. (Naturally.) "Rope won't pull a dead fish" means that something can't be achieved through force or effort alone; it requires a willingness to cooperate or a natural progression. (Of course!) "Eat the biggest chalupa first" is a playful way of suggesting that when facing a large challenge or a plentiful meal, you should first start with the most substantial part or item. (Sage advice.) This is hardly the first example of AI hallucinations that, if not fact-checked by the user, could lead to misinformation or real-life consequences. Just ask the ChatGPT lawyers, Steven Schwartz and Peter LoDuca, who were fined $5,000 in 2023 for using ChatGPT to research a brief in a client's litigation. The AI chatbot generated nonexistent cases cited by the pair that the other side's attorneys (quite understandably) couldn't locate. The pair's response to the judge's discipline? "We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth."
[7]
Here's what's going on with Google's funny explanations of made-up expressions
The line between new phrases and nonsense phrases is a fine one, though, and it's easy to see the logic Google tries to use to divine meaning. The internet is a sucker for a good fail, and we've certainly seen our fair share of them. Right now, AI is probably the easiest target around for embarrassing gaffes, whether we're looking at AI pictures where hands have the wrong number of fingers, or AI-fueled search results confusing satire for fact. This week, Google finds itself in the hot seat as users discover how willing AI Overviews are to dream up fantastic explanations for nonsense phrases. Why is this happening, is it an actual problem, and can we expect to see it get any better? Google's already stepping forward with some explanations.
[8]
Hilarious gibberish or AI's fatal flaw? Google Search confidently explains nonsense phrases
Even though Google labels AI Overviews as experimental, this behavior raises significant concerns about trust and accuracy in Google's search results. AI is already everywhere, but companies keep adding it to even more places. Google is betting big on AI, adding it across every surface with Gemini Advanced and even bringing AI to Google Search with Google AI Overviews. However, the elephant in the room is that AI can hallucinate, confidently making up facts that never existed. The latest instance of AI hallucinations comes from Google AI Overviews, which confidently provides meaning to made-up idioms and phrases.
[9]
Google AI is now hallucinating idioms -- these are the 5 most hilarious we found
Artificial Intelligence can be amazing. It has the world's knowledge at its imaginary fingertips and the ability to do so many incredible things, but, like us, it does make mistakes. Known as hallucinations, these mistakes are errors in judgment or understanding. Sometimes this can be serious, sometimes funny. This time, it's the latter with Google's AI Overview making up its own idioms. People online have been asking Google for the meanings of their own made-up idioms, with Google's AI-powered search filling in the blanks, coming up with detailed meanings to each one. Here are some of our favorite examples of this little glitch in Google's AI reasoning. This has the faint hint of being a real idiom. As Google puts it, this is "a metaphorical way of expressing the value of having a supportive environment or a team that pushes you forward, even if their goals or values aren't aligned with your own." Based on Google's understanding, it doesn't exactly sound like great advice, but Google certainly seems familiar with this made-up expression. This is an idiom I will be using from now on. "Never put a tiger in a Michelin star kitchen". It's similar to the idea of if you can't handle the heat get out of the kitchen, but the danger here is more tiger than fire. Google identifies this one as a truly skilled chef being able to handle any situation, even a tiger. Despite its best efforts, Google really struggled to come up with a deep meaning here. It went for something along the lines of don't drink and fly. However, it ends strong, explaining that "what's offered, even if seemingly abundant or desirable, won't actually deliver on the promised result." As Google says, this one is similar to the more famous (and actually real) idiom of "beating a dead horse". Don't waste your time and effort on something that is already gone or is no longer productive. Or as you can now say, don't milk the ghost cow. There's a lot going on here in Google's response. The made-up idiom of "always pack extra batteries for your milkshake" is completely nonsensical but that doesn't stop Google from trying. Apparently it's a play on words from the film There Will Be Blood. The extra batteries part is a humorous twist that suggests the exploitative power of the milkshake...? Okay Google, you've lost me.
[10]
Google AI overviews will explain any nonsense phrase you make up
Google's AI Overviews sometimes acts like a lost man who won't ask for directions: It would rather confidently make a mistake than admit it doesn't know something. We know this because folks online have noticed you can ask Google about any faux idiom -- any random, nonsense saying you make up -- and Google AI Overviews will often prescribe its meaning. That's not exactly surprising, as AI has shown a penchant for either hallucinating or inventing stuff in an effort to provide answers with insufficient data. In the case of made-up idioms, it's kind of funny to see how Google's AI responds to idiotic sayings like "You can't lick a badger twice." On X, SEO expert Lily Ray dubbed the phenomenon "AI-splaining." I tested the "make up an idiom" trend, too. One phrase -- "don't give me homemade ketchup and tell me it's the good stuff" -- got the response "AI Overview is not available for this search." However, my next made up phrase -- "you can't shake hands with an old bear" -- got a response. Apparently Google's AI thinks this phrase suggests the "old bear" is an untrustworthy person. In this instance, Google AI Overview's penchant for making stuff up is kind of funny. In other instances -- say, getting the NFL's overtime rules wrong -- it can be relatively harmless. And when it first launched, it was telling folks to eat rocks and put glue on pizza. Other examples of AI hallucinations are less amusing. Keep in mind that Google warns users that AI Overviews can get facts wrong, though it remains at the top of many search results. So, as the old, time-honored idiom goes: Be wary of search with AI, what you see may be a lie.
[11]
"You Can't Lick a Badger Twice": Google's AI Is Making Up Explanations for Nonexistent Folksy Sayings
Have you heard of the idiom "You Can't Lick a Badger Twice?" We haven't, either, because it doesn't exist -- but Google's AI seemingly has. As netizens discovered this week, adding the word "meaning" to nonexistent folksy sayings causes the AI to cook up invented explanations for them. "The idiom 'you can't lick a badger twice' means you can't trick or deceive someone a second time after they've been tricked once," Google's AI Overviews feature happily suggests. "It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again." Author Meaghan Wilson-Anastasios, who first noticed the bizarre bug in a Threads post over the weekend, found that when she asked for the "meaning" of the phrase "peanut butter platform heels," the AI feature suggested it was a "reference to a scientific experiment" in which "peanut butter was used to demonstrate the creation of diamonds under high pressure." There are countless other examples. We found, for instance, that Google's AI also claimed that the made-up expression "the bicycle eats first" is a "humorous idiom" and a "playful way of saying that one should prioritize their nutrition, particularly carbohydrates, to support their cycling efforts." Even this author's name wasn't safe. Asked to explain the meaningless phrase "if you don't love me at my Victor, you don't deserve me at my Tangermann," the AI dutifully reported that it means "if someone can't appreciate or love you when you're at your lowest point (Victor), then they're not worthy of the positive qualities you bring to the relationship (Tangermann)." The bizarre replies are the perfect distillation of one of AI's biggest flaws: rampant hallucinations. Large language model-based AIs have a long and troubled history of rattling off made-up facts and even gaslighting users into thinking they were wrong all along. And despite AI companies' extensive attempts to squash the bug, their models continue to hallucinate. Even OpenAI's latest reasoning models, dubbed o3 and o4-mini, tend to hallucinate more than their predecessors, showing that the company is actually headed in the wrong direction. Google's AI Overviews feature, which the company rolled out in May of last year, still has a strong tendency to hallucinate facts as well, making it far more of an irritating nuisance than a helpful research assistant for users. When it launched, it even told users that glue belongs on pizza to ensure that toppings don't slide off. Its other outrageous gaffes have included claiming that baby elephants are small enough to sit in the palm of a human hand. Following public outrage over the feature's baffling -- and often comedic -- inaccuracy, Google admitted in a statement last year that "some odd, inaccurate or unhelpful AI Overviews certainly did show up." To tackle the issue, Google kicked off a massive game of cat and mouse, limiting some responses when it detected "nonsensical queries that shouldn't show an AI Overview." But judging by the fictional idioms still showing up almost a year after the product launched, Google still has a lot of work to do. Even worse, the feature is hurting websites by limiting click-through rates to traditional organic listings, as Search Engine Land reported this week. In other words, on top of spewing false information, Google's AI Overviews feature is undermining the business model of countless websites that host trustworthy info.
Nonetheless, Google is doubling down, announcing last month that it was going to be "expanding" AI Overviews in the US to "help with harder questions, starting with coding, advanced math and multimodal queries." Earlier this year, Google announced that AI Overviews is even being entrusted with medical advice. The company claims that "power users" want "AI responses for even more of their searches." (For the time being, there are ways to turn off the feature.) At least the AI model appears to be aware of its own limitations. "The saying 'you can lead an AI to answer but you can't make it think' highlights the key difference between AI's ability to provide information and its lack of true understanding or independent thought," Google's AI Overviews told one Bluesky user.
[12]
'You can't lick a badger twice': How Google's AI Overview hallucinates idioms
The latest AI trend is a funny one: users have discovered that if you plug a made-up phrase into Google and append it with "meaning," Google's AI Overview feature will hallucinate a meaning for the phrase. Historian Greg Jenner kicked off the trend with a post on Bluesky in which he asked Google to explain the meaning of "You can't lick a badger twice." AI Overview helpfully explained that this expression means that you can't deceive someone a second time after they've already been tricked once -- which seems like a reasonable explanation, but ignores the fact that this idiom didn't exist before this query went viral. Since then, people have been having a lot of fun getting AI Overview to explain idioms like "A squid in a vase will speak no ill" (meaning that something outside of its natural environment will be unable to cause harm, apparently) or "You can take your dog to the beach but you can't sail it to Switzerland" (which is, according to AI Overview, a fairly straightforward phrase about the difficulty of international travel with pets). It doesn't work for all cases though, as some phrases don't return AI Overview results. "It's wildly inconsistent," cognitive scientist Gary Marcus said to Wired, "and that's what you expect of GenAI." Jenner points out that as entertaining as this is, it does indicate some of the pitfalls of relying too heavily on AI-generated sources like AI Overview for information. "It's a warning sign that one of the key functions of Googling - the ability to factcheck a quote, verify a source, or track down something half remembered - will get so much harder if AI prefers to legitimate statistical possibilities over actual truth," Jenner wrote. This isn't the first time that people have pointed out the limitations of information provided by AI, and AI Overview in particular. When AI Overview was launched, it infamously suggested that people should eat one small rock per day and that they could put glue on their pizza, though these particular answers were quickly removed. Since then, Google has said in a statement to Digital Trends that the majority of AI Overviews provide helpful and factual information, and that it was still gathering feedback on its AI product. For now, though, let this serve as a reminder to double-check the information that appears in the AI Overview box at the top of Google results, as it may not be accurate.
[13]
Google's AI Overview is hallucinating again, this time with hilarious fake idioms
Google's AI Overview search feature is generating hilarious results again, but this time, it's explanations for fake idioms instead of recipes for glue pizza. When it first launched last year, Google's AI Overview made headlines for giving incorrect answers to (mostly) legitimate questions. What happens when you ask Google's AI about things that don't exist, though? Users all over the internet are finding out right now by Googling completely made-up sayings, and the results are pretty hysterical. "A salamander can't laugh in the rain." "Never let your horse play Pokemon." "Short grass doesn't pay the bills." I entered those wise words in a Google search today, hoping for a ridiculous answer to my nonsense colloquialisms, and that's what I got. Google's AI Overview is spinning meanings and backstories out of fictional and illogical idioms like those, leading to some pretty funny search results. As enlightening as this information about salamanders was (who knew they're not always in the rain?), things got even weirder when I moved on to the age-old saying, "never let your horse play Pokemon." This led to Google repeatedly reminding me to "keep things in their proper context and avoid misusing or misinterpreting the behavior of different species." If you're reluctant to mow the lawn this weekend, you might appreciate Google's wise take on the classic colloquialism, "short grass doesn't pay the bills." The results for this one actually had some nearly coherent advice, suggesting, "The saying emphasizes the importance of focusing on tasks that actually produce financial benefits, such as working, investing, or pursuing other income-generating activities." If you want a laugh, try making up your own fake idiom and plugging it into a Google search. It helps to stick "meaning," "explanation," or "backstory" at the end. You can also start with a real idiom and put some absurd spin on it. For instance, you could take the saying "When life gives you lemons, make lemonade" and turn it into "When life gives you cats, make pasta" and see what Google's AI Overview comes up with. While this bug (or feature, depending on how you look at it) is goofy and relatively harmless, it's also a good reminder that AI-generated content can't always be trusted. As Sam Altman famously said, AI is still "incredibly dumb" and has a habit of hallucinating and generating nonsense results, like those above, while making them look like legitimate results or info. So, if you're trying to do research or find concrete information, AI Overview might not always be your best bet. But if you're looking for a sage explanation for why salamanders never laugh in the rain? AI's your new best friend.
[14]
Google's Latest Nonsensical Overview Results Illustrate Yet Another Problem With AI
You might not be familiar with the phrase "peanut butter platform heels" but it apparently originates from a scientific experiment, where peanut butter was transformed into a diamond-like structure, under very high pressure -- hence the "heels" reference. Except this never happened. The phrase is complete nonsense, but was given a definition and backstory by Google AI Overviews when asked by writer Meaghan Wilson-Anastasios, as per this Threads post (which contains some other amusing examples). The internet picked this up and ran with it. Apparently, "you can't lick a badger twice" means you can't trick someone twice (Bluesky), "a loose dog won't surf" means something is unlikely to happen (Wired), and "the bicycle eats first" is a way of saying that you should prioritize your nutrition when training for a cycle ride (Futurism). Google, however, is not amused. I was keen to put together my own collection of nonsense phrases and apparent meanings, but it seems the trick is no longer possible: Google will now refuse to show an AI Overview or tell you you're mistaken if you try and get an explanation of a nonsensical phrase. If you go to an actual AI chatbot, it's a little different. I ran some quick tests with Gemini, Claude, and ChatGPT, and the bots attempt to explain these phrases logically, while also flagging that they appear to be nonsensical, and don't seem to be in common use. That's a much more nuanced approach, with context that has been lacking from AI Overviews. Now, AI Overviews are still labeled as "experimental," but most people won't take much notice of that. They'll assume the information they see is accurate and reliable, built on information scraped from web articles. And while Google's engineers may have wised up to this particular type of mistake, much like the glue on pizza one last year, it probably won't be long before another similar issue crops up. It speaks to some basic problems with getting all of our information from AI, rather than references written by actual humans. Fundamentally, these AI Overviews are built to provide answers and synthesize information even if there's no exact match for your query -- which is where this phrase-definition problem starts. The AI feature is also perhaps not the best judge of what is and isn't reliable information on the internet. Looking to fix a laptop problem? Previously you'd get a list of blue links from Reddit and various support forums (and maybe Lifehacker), but with AI Overviews, Google sucks up everything it can find on those links and tries to patch together a smart answer -- even if no one has had the specific problem you're asking about. Sometimes that can be helpful, and sometimes you might end up making your problems worse. Anecdotally, I've also noticed AI bots have a tendency to want to agree with prompts, and affirm what a prompt says, even if it's inaccurate. These models are eager to please, and essentially want to be helpful even if they can't be. Depending on how you word your query, you can get AI to agree with something that isn't right. I didn't manage to get any nonsensical idioms defined by Google AI Overviews, but I did ask the AI why R.E.M.'s second album was recorded in London: That was down to the choice of producer Joe Boyd, the AI Overview told me. But in fact, R.E.M.'s second album wasn't recorded in London, it was recorded in North Carolina -- it's the third LP that was recorded in London, and produced by Joe Boyd. 
The actual Gemini app gives the right response: that the second album wasn't recorded in London. But the way AI Overviews attempt to combine multiple online sources into a coherent whole seems to be rather suspect in terms of its accuracy, especially if your search query makes some confident claims of its own. "When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," Google told Android Authority in an official statement. "This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context." We seem to be barreling towards having search engines that always respond with AI rather than information compiled by actual people, but of course AI has never fixed a faucet, tested an iPhone camera, or listened to R.E.M. -- it's just synthesizing vast amounts of data from people who have, and trying to compose answers by figuring out which word is most likely to go in front of the previous one.
[15]
AI Overview is still 'yes, and'-ing completely made up idioms despite Google's best efforts to restrict it
Anyone studying a second language will tell you that learning idioms is often a stumbling block. I mean, just take my mother tongue, British English -- 'raining cats and dogs', 'on it like a car bonnet ', 'Bob's your uncle, Deborah's your aunt' -- I mean, what bizarre fairytale creature would even talk like this? Because idioms spring up from rich etymological contexts, AI has a snowball's chance in Hell of making heads or tails of them. Okay, I'll stop over-egging the pudding and dispense with the British gibberish for now. The point is, it's a lot of fun to make up idioms and watch Google's AI overview try its hardest to tell you what it means (via Futurism). We've had a lot of fun with this on the hardware team. For instance, asking Google's AI overview to decipher the nonsense idiom 'Never cook a processor next to your GPU' returns at least one valiant attempt at making sense via an explanation of hardware bottlenecking. When our Andy asked the AI overview, it returned, "The saying [...] is a humorous way of highlighting the importance of not having a CPU [...] and GPU [...] that are poorly matched, especially for gaming. It implies that if you try to run a game where the CPU is weak and the GPU is powerful, or vice versa, you'll end up with a frustrating experience because the weaker component will limit the performance of the other." However, when I asked just now, it said, "The saying [...] is a humorous way of suggesting that you should never attempt to repair a faulty GPU by heating it up in an oven, as this can cause more damage than it fixes. This practice, sometimes referred to as the "oven trick," has been discredited due to its potential to melt solder joints and cause further issues." Alright, fess up: who told the AI about the 'oven trick'? I know some have sworn by it for older, busted GPUs, but I can only strongly advise against it -- for the sake of your home if not your warranty. Because a Large Language Model is only ever trying to predict the word that's most likely to come next, it parses any and all information uncritically. For this reason -- and their tendency to return different information to the same prompt as demonstrated above -- LLM-based AI tends not to be reliable or, one might argue, even particularly useful as a referencing tool. For one recent example, a solo developer attempting to cram a Doom-like game onto a QR code turned to three different AI chatbots for a solution to his storage woes. It took two days and nearly 300 different prompts for even one of the AI chatbots to spit out something helpful. Google's AI Overview is almost never going to turn around and tell you 'no, you've just made that up' -- except I've stumbled upon a turn of phrase that's obviously made someone overseeing this AI's output think twice. I asked Google the meaning of the phrase, 'Never send an AI to do a human's job,' and was promptly told that AI Overview was simply "not available for this search." Our Dave, on the other hand, got an explanation that cites Agent Smith from The Matrix, which I'm not going to read too deeply into here. At any rate, there are always more humans involved in fine-tuning AI outputs than you may have been led to believe, and I'm seeing those fingerprints on Google's AI Overview refusing to play ball with me. Indeed, last year Google said in a blog post that it has been attempting to clamp down on "nonsensical queries that shouldn't show an AI Overview" and "the use of user-generated content in responses that could offer misleading advice." 
Undeterred, I changed the language of my own search prompt to be specifically gendered and got told by the AI Overview that a 'man's job' specifically "refers to a task that requires specific knowledge, skills, or experience, often beyond the capabilities of someone less experienced." Right, what about a 'woman's job', then? Google's AI overview refused to comment.
[16]
My New Online Hobby Is Asking Google AI What Made-Up Proverbs Mean
Google's AI Overview isn't shy of an AI hallucination or two, and its latest one is another classic to add to the list.
AI Overview Believes Everything Is an Idiom, and It's Wonderful
In short, if you head over to Google Search and input a random sentence that sounds vaguely like an idiom or proverb, the AI Overview will do its very best to place some meaning to your empty words. First spotted on Threads, though brought to my attention through Greg Jenner's Bluesky account, these AI hallucinations are some of my favorites. There are some amazing examples of the lengths Google's AI Overview will go to explain how something makes sense or fits into its vision of the input. One particular favorite came from MakeUseOf's Editor in Chief, Ben Stegner: "There's no irony like a wet golf course meaning." To which the AI Overview dug deep and responded, "The saying 'there's no irony like a wet golf course' plays on the common understanding that golf, a sport often associated with sunny, well-maintained greens, can be surprisingly challenging and frustrating when conditions are wet." Another one I tried was "giant pandas always fall twice," which had the AI Overview detailing how pandas are clumsy and enjoy rolling around instead of walking. But not content to stop there, it began delving into the metabolism and energy conservation efforts of pandas.
AI Overview's Latest Hallucination Is Why You Cannot Trust AI Chatbots
As amusing as these wonderfully weird, forced explanations are, they highlight the very real problem with AI chatbots (not just AI Overview). AI hallucination is real and very much an issue, especially if its output is taken at face value. When AI hallucination was confined to folks specifically using AI chatbots like ChatGPT, Claude, Gemini, and so on, the potential danger was somewhat limited. Sure, the AI hallucinated, and it was a problem, but those people were specifically seeking out AI chatbots. Google's AI Overview and its next version, AI Mode, change the rules. Anyone attempting to use Google for a regular search runs the risk of encountering fake, AI-slop responses, delivered and presented to you as if they were fact. Without serious scrutiny, Google Search as we know it is on its way out, replaced by something far worse, requiring greater literacy skills than before. This latest round of AI hallucination is the perfect example of that. In one example from The Sleight Doctor, AI Overview went as far as to cite a Bible verse, from which this supposed idiom was derived. That phrase? "Never throw your poodle at a pig."
Google's AI Overviews feature is generating plausible but entirely fictional explanations for made-up phrases, sparking a viral trend and raising concerns about AI's tendency to confidently present false information.
A new internet trend has emerged, highlighting both the capabilities and limitations of Google's AI-powered search feature. Users have discovered that when searching for made-up phrases followed by the word "meaning," Google's AI Overviews confidently provide explanations for these nonsensical expressions [1][2].
The phenomenon gained traction on social media platforms like Threads and Bluesky. Users began searching for absurd phrases such as "you can't lick a badger twice" or "peanut butter platform heels," only to find that Google's AI would generate plausible-sounding definitions and origins for these non-existent idioms [2][3].
This trend has brought attention to a fundamental flaw in large language models (LLMs) like the one powering Google's AI Overviews. These systems are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical [2]. As Yafang Li, assistant professor at the University of Memphis, explains:
"They are not trained to verify the truth. They are trained to complete the sentence." [2]
The AI's behavior can be attributed to two key characteristics of generative AI:
Probability-based generation: LLMs predict the most likely next word based on their training data, which doesn't always lead to factually correct information [1] (illustrated in the sketch just after this list).
People-pleasing tendency: AI systems often aim to provide answers that users expect or want to hear, even if those answers are not grounded in reality [1][4].
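To make the first point concrete, here is a minimal sketch of probability-based generation. It uses the small, open GPT-2 model through the Hugging Face transformers library, both chosen purely for illustration (this is not Google's system), and greedily extends a prompt one most-likely token at a time.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small public model; the choice of GPT-2 here is an illustrative assumption.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = 'The idiom "you can\'t lick a badger twice" means'
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # extend the prompt by 20 tokens
        logits = model(ids).logits[0, -1]       # scores for every possible next token
        probs = torch.softmax(logits, dim=-1)   # turn scores into probabilities
        next_id = torch.argmax(probs)           # greedily pick the most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))

Nothing in that loop checks whether the continuation is true; the model only ever asks which word is most likely to come next, which is why a fluent but fictional "definition" can fall out of it.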
While this trend may seem harmless or even entertaining, it raises important questions about the reliability of AI-generated information. Ziang Xiao, a computer scientist at Johns Hopkins University, points out:
"It's extremely difficult for this system to account for every individual query or a user's leading questions. This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives." [1]
A Google spokesperson acknowledged the issue, explaining that AI Overviews are designed to display information supported by top web results. They stated:
"When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available." [2]
Google is reportedly working on limiting AI Overviews for searches without sufficient information and preventing them from providing misleading, satirical, or unhelpful content [2][3].
This phenomenon serves as a reminder of the ongoing challenges in AI development, particularly in ensuring accuracy and trustworthiness. A recent survey of AI researchers indicated doubt that these issues would be resolved soon [4].
The confident inaccuracy displayed by Google's AI Overviews highlights the importance of maintaining a critical perspective when interacting with AI-generated content. As users, we must remain skeptical and verify claims, especially when dealing with unfamiliar or seemingly implausible information [2][4].