In a weird but fun twist, Google's AI Overviews are confidently assigning definitions to made-up phrases that don't exist. Type any nonsense phrase into Search, add the word "meaning," and chances are you'll get a polished-looking explanation that appears entirely valid.

Need proof? Try "eat an anaconda" or "toss and turn with a worm." Google's AI will tell you these are clever sayings, even supplying origins and explanations, none of them genuine.

How AI Overviews Generate Phony Wisdom

This happens because of how generative AI works. Google's experimental model isn't checking facts; it's predicting. As Johns Hopkins University computer scientist Ziang Xiao explained to Wired, the AI builds answers by choosing the most likely next word in a sentence based on its vast training data.

So when you feed it nonsense, it fills in the blanks with plausible-sounding gibberish. More alarming, the answers often include reference links, creating an impression of authenticity.
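To see the mechanism in action, here is a minimal sketch of next-word prediction using the small, open GPT-2 model through Hugging Face's transformers library. This illustrates the general technique Xiao describes, not Google's actual system, and the prompt is just an example:

```python
# Minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 is a small open model standing in for Google's (much larger) one;
# the core idea -- repeatedly pick the most probable next word -- is the same.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A made-up idiom: the model has never seen it, but it will still complete it.
prompt = 'The saying "you can\'t lick a badger twice" means'
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: at every step, append the single most likely next token.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Notice that nothing in this loop can say "I don't know": the model always emits the most probable continuation, which is why a nonsense prompt still yields a confident-sounding definition.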

AI's Desire to Please Can Bias Search Results

Another problem is what researchers describe as AI's alignment with user expectations. In essence, the AI wants to make you happy: it has been trained to generate responses you'll find believable, or at least want to hear. If you tell it "you can't lick a badger twice," it doesn't challenge the premise. It treats the phrase as valid and tries to make sense of it.

Someone on Threads noticed you can type any random sentence into Google, then add "meaning" afterwards, and you'll get an AI explanation of a famous idiom or phrase you just made up. Here is mine:


— Greg Jenner (@gregjenner.bsky.social) April 23, 2025 at 6:15 PM

This feedback loop can spread misinformation, particularly in areas with limited data or underrepresented perspectives. Xiao's research finds that minority viewpoints and uncommon knowledge are especially susceptible to distortion.

Google's Response to the AI Overview Glitch

Google has acknowledged that its generative AI remains experimental. The system tries to provide helpful context where it can, but odd or nonsensical prompts can still trigger an AI Overview, spokesperson Meghann Farnsworth said.

And it's not even consistent. Cognitive scientist Gary Marcus noted that five minutes of testing produced wildly disparate results. That's business as usual for generative AI, which excels at reproducing patterns but struggles with abstract reasoning.

This isn't the first stumble, either. Last year, Google's AI Overviews drew widespread criticism for serving incorrect and sometimes absurd answers, infamously including glue as a suggested pizza topping, leaving users confused about what to trust.

A Harmless Quirk or a Bigger Red Flag?

For now, this AI hiccup seems mostly harmless, even fun if you're looking for a quirky distraction from work. But it's also a clear reminder of AI's limitations, especially when it's powering search results that millions rely on daily.

So the next time Google instantly defines a phrase like "tech times sleeping," remember that the AI may be making it all up. The definition might sound meaningful, but it isn't reliable.

Treat those AI-generated answers with discretion, and perhaps a bit of humor.

Originally published on Tech Times