Google’s AI Overviews explain made-up idioms with confident nonsense
Language can seem almost infinitely complex. Inside jokes and idioms can have meaning for just a small group of people and seem meaningless to everyone else. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google Search’s AI Overviews to define phrases that have never been said before.
Never heard the phrase “exploded like a brook trout”? Of course not; I just made it up. But Google’s AI Overview told me it’s “a colloquial way of saying something exploded or quickly became a sensation.” No, that doesn’t make sense.
The trend may have started on Threads, where author and screenwriter Megan Wilson Anastasios shared what happened when she searched for “peanut butter platform heels.” Google returned results referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.
The meme then moved to another social media site, Bluesky, where people shared Google’s interpretations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.
Things tumbled from there.
This meme is interesting for more reasons than comic relief. It shows how eager large language models are to provide an answer that sounds correct, not one that is correct.
“It is designed to produce fluent, plausible-sounding responses, even when the inputs are completely meaningless,” said Yafang Li, an assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
Like pizza glue
The made-up meanings of invented sayings bring back memories of the all-too-true stories about Google’s AI Overviews, like when it suggested putting glue on pizza to help the cheese stick.
This trend seems at least a little more harmless because it doesn’t center on practical advice; I mean, I hope nobody actually tries to lick a badger, even once. But the underlying problem is the same: Large language models, like Google’s Gemini behind AI Overviews, try to answer your questions and provide a workable answer, even if what they give you is nonsense.
A Google spokesperson said AI Overviews are designed to display information supported by top web results and that their accuracy is comparable to other search features.
“When people search for nonsensical or ‘false premise’ phrases, our systems try to find the most relevant results based on the limited web content available,” the spokesperson said. “This is true of search overall, and in some cases, an AI Overview will also trigger to provide helpful context.”
This particular case involves a “data void,” where there isn’t much relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and on preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.
You won’t always get a made-up definition when you search for the meaning of a fake phrase. While drafting the headline for this section, I searched for “like glue on pizza meaning,” and it didn’t trigger an AI Overview.
The problem doesn’t seem universal across LLMs. When I asked ChatGPT for the meaning of “you can’t lick a badger twice,” it told me the phrase “isn’t a standard idiom” but that it definitely sounds like something someone might use, though it did try to provide a definition anyway.
Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts
Draw meaning from anywhere
This phenomenon is an interesting example of LLMs’ tendency to make things up, what the AI world calls “hallucinating.” When gen AI models hallucinate, they produce information that sounds plausible or accurate but isn’t grounded in reality.
LLMs “are not fact generators,” Li said; they simply predict the next logical bits of language based on their training.
Most AI researchers in a recent survey reported that they doubt AI’s accuracy and reliability issues will be resolved soon.
The fake definitions show not just the inaccuracy of LLMs but their confident inaccuracy. If you asked a person the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” they would probably say they’ve never heard it and that it doesn’t make sense. LLMs often respond with the same confidence as if you’d asked for the definition of a real idiom.
In this case, Google said the phrase means that Tesla’s Cybertruck isn’t designed for or capable of delivering Thanksgiving turkeys or other similar items, highlighting its distinctly futuristic design that’s no help for carrying bulky goods.
There’s an ominous lesson in this humorous trend: Don’t trust everything you see from a chatbot. It may be making things up out of thin air, and it won’t necessarily indicate that it’s uncertain.
“This is the perfect moment for educators and researchers to use these scenarios to teach people how meaning is generated, how AI works, and why it matters,” Li said. “The user should always remain skeptical and verify the claim.”
Beware of what you search for
You can’t trust an LLM to be skeptical on your behalf, so you may need to encourage it to take what you say with a grain of salt.
“When the user enters a prompt, the model assumes it is valid and then proceeds to generate the most likely accurate answer,” Li said.
The solution is to introduce skepticism into your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom; ask whether it’s real. Li suggested asking, “Is this a real idiom?”
“It might help the model recognize the phrase, rather than just guessing,” she said.
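If you query a model through code rather than a chat window, the same advice applies: Phrase the prompt to question the premise instead of assuming it. Here is a minimal sketch using the OpenAI Python client; the phrase, prompt wording and model name are only illustrative, not a recommendation from Google or Li.

    # A small sketch of the "ask if it's real" advice applied to an API call.
    # Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
    # the model name below is illustrative.
    from openai import OpenAI

    client = OpenAI()

    phrase = "you can't lick a badger twice"

    # Instead of asking "What does this phrase mean?", question the premise first.
    prompt = f"Is '{phrase}' a real idiom? Only explain it if it actually exists."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)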