With Google Lens, you can search by pointing your camera at the real world and access useful "filters" to copy or translate text. As a cool demo revealed this week, the visual lookup feature also happens to work in virtual reality.

Phasedragon on Twitter yesterday shared a video of Google Lens working in a VR environment. The scene in question is full of Korean text that Lens, running in a phone-sized floating window, was able to translate in place using the filter announced at I/O 2019.

Just like in the real world, the translation is overlaid on the virtual scene to preserve context. While the main point of this Google Lens VR demo is showing the tool recognizing even the unreal, it also does a good job of demonstrating Google's AR overlay capabilities. In the video, it works well on signs, posters, and other longer blocks of text.

Behind the scenes, Phasedragon says they "just hooked together a few apps" and "didn't really do anything myself other than try a bunch to see which ones worked."

At a high level, third-party software captures the VR view and exports it as a virtual camera feed for Lens, which is running in an Android emulator from Android Studio. The tinkerer notes that they "tried microsoft translate" for this project, but "it's simply not as good as google lens."
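The tweet doesn't name the exact apps involved, but the pipeline described above can be reproduced with common tools. The sketch below is one plausible reconstruction, not Phasedragon's confirmed setup: the AVD name is a placeholder, and the capture/virtual-camera step assumes OBS Studio's virtual camera feature.

```shell
# Hypothetical reconstruction -- the source doesn't specify the tools used.
# 1. Mirror the VR headset's view to the desktop (e.g., via the headset's
#    built-in casting), then expose that window as a virtual webcam using
#    OBS Studio's "Start Virtual Camera" feature.
# 2. Launch an Android Virtual Device with its back camera bound to the
#    virtual webcam, so apps inside the emulator "see" the VR scene
#    ("Pixel_AVD" is a placeholder for your own AVD name):
emulator -avd Pixel_AVD -camera-back webcam0
# 3. Inside the emulator, open the Google app, tap the Lens icon, and
#    pick the Translate filter; Lens now analyzes the mirrored VR feed.
```

The key trick is step 2: the Android emulator can map its virtual camera to any webcam device the host OS exposes, so anything a virtual camera can render becomes "real" to Lens.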

The demo speaks to Google Lens' versatility: it makes no distinction between the real and the virtual. As long as Google is presented with an image, Lens can leverage its machine learning prowess to analyze what's in front of it. Google Lens is available on Android and iOS via the Google Search app.
