“You can’t lick a badger twice”: Google failures highlight a fundamental AI flaw


Here’s a little distraction from your workday: Go to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overview will not only confirm that your nonsense is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find plenty of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is unlikely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom meaning that “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google sometimes even provides reference links, giving the response an added sheen of authority. It is also wrong, at least in the sense that the overview creates the impression that these are common phrases rather than random bundles of words. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with biblical origins, it’s also a tidy encapsulation of where generative AI still falls short.

As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. The first is that it is, at bottom, a probability machine. While a system built on a large language model may seem to have thoughts and even feelings, at a basic level it simply places one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which, again, they don’t.

“The prediction of the next word is based on its vast training data,” said Ziang Xiao, a computer scientist at Johns Hopkins University. “However, in many cases, the next coherent word does not lead us to the right answer.”
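The “one most-likely word after another” idea can be made concrete with a toy sketch. This is not how Google’s system works internally; it is a minimal, assumed illustration using a bigram count model and greedy decoding, showing how purely statistical next-word choice produces fluent-looking text with no regard for whether the result is true:

```python
from collections import Counter, defaultdict

# Toy "probability machine": count which word follows which in a tiny
# made-up corpus, then repeatedly emit the single most likely next word.
corpus = (
    "a proverb is a short saying . "
    "a proverb is a common phrase . "
    "a proverb means a traditional saying ."
).split()

# Tally bigram counts: bigrams[prev][next] = frequency.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_greedily(word, max_words=6):
    """Append the most frequent next word at each step (greedy decoding)."""
    out = [word]
    for _ in range(max_words):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(continue_greedily("proverb"))
# → proverb is a proverb is a proverb
```

The output is locally plausible at every step yet says nothing meaningful overall, which is the failure mode Xiao describes: the most coherent next word is not the same thing as the right answer.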

Another factor is that AI aims to please. Research has shown that chatbots frequently tell people what they want to hear. In this case, that means taking you at your word that “you can’t lick a badger twice” is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year.

“It’s very difficult for this system to account for every individual query or a user’s leading questions,” says Xiao. “This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, errors can cascade.”
