The latest iteration of Google Lens announced at I/O 2019 features a convenient Translate filter. Google is now bringing that updated functionality to the full Google Translate app with a revamped UI, auto-detect, and more languages.
The camera translation interface, which we first enabled in May, now features a bottom bar and three distinct modes.
“Instant” is the first and just requires users to “Aim at Text.” Like Google Lens, it can automatically identify what language is in view when the source is set to “Detect language.” Support expands by more than 60 languages, including Arabic, Hindi, Malay, Thai, and Vietnamese, for a total of 88. The translation is then overlaid in the live preview to preserve the scene’s context.
Meanwhile, camera translations are no longer limited to pairings that include English. Converting between any of the more than 100 languages supported in Google Translate is now possible.
For example, say you are traveling through South America, where both Portuguese and Spanish are spoken, and you encounter a sign in a language you can’t identify. Instead of guessing, the Translate app can now determine the language for you and then seamlessly translate the sign into your language of choice.
Accuracy is also getting a boost thanks to Neural Machine Translation. First introduced in 2016, NMT produces more natural-sounding translations and reduces errors by 55%–85% on certain language pairs. These improvements work offline, though connectivity will further improve quality.
The “Scan” mode captures a still image on which users can manually highlight text, while “Import” allows for translating photos from your library. Google has also reduced the camera flickering that previously made translations hard to read.