Google launches AI tools to practice languages through personalized lessons
On Tuesday, Google released three new AI experiments aimed at helping people learn to speak new languages in a more personalized way. The experiments are still in their early stages, but it is possible that the company is looking to take on Duolingo with the help of Gemini, Google’s multimodal large language model.
The first experiment helps you quickly learn the specific phrases you need in the moment, while the second helps you sound less formal and more like a local.
The third experiment lets you use your camera to learn new words based on your surroundings.

Google points out that one of the most frustrating parts of learning a new language is when you realize you are in a situation where you need a specific phrase you haven’t learned yet.
In the new “Tiny Lesson” experiment, you can describe a situation, such as “finding a lost passport,” to receive vocabulary and grammar tips tailored to that context. You can also get suggested responses, such as “I don’t know where I lost it” or “I want to report it to the police.”
The next experiment, “Slang Hang,” wants to help people sound less like a textbook when speaking a new language. Google notes that when you learn a new language, you often learn to speak formally, which is why it is experimenting with ways to teach people to speak more colloquially.

The feature generates realistic conversations between native speakers and shows you how a dialogue unfolds one message at a time. For example, you might learn through a conversation where a street vendor chats with a customer, or a scene where two long-lost friends reunite on the subway. You can hover over terms you are unfamiliar with to learn what they mean and how they are used.
Google says the experiment sometimes misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.

The third experiment, “Word Cam,” lets you take photos of your surroundings. Gemini then detects the objects and labels them in the language you are learning. The feature also provides additional words you can use to describe those objects.
Google notes that sometimes you just need words for what is right in front of you, because the feature can show you how much you don’t know yet. For example, you may know the word “window,” but not the word “blinds.”
The company notes that the idea behind these experiments is to see how AI can be used to make independent learning more dynamic and personalized.
The new experiments support Arabic, Chinese (China, Hong Kong, Taiwan), English (Australia, UK, US), French (Canada, France), German, Greek, Hebrew, Hindi, Italian, Japanese, Korean, Portuguese (Brazil, Portugal), Russian, Spanish (Latin America, Spain), and Turkish. The tools are accessible through Google Labs.