
Google Lens adds visual AI intelligence and actions to Android, Assistant, more

At I/O 2017, Sundar Pichai announced Google Lens, a set of vision-based computing capabilities that can understand what you’re looking at and surface actions for interacting with the world around you. It will first launch in Google Assistant and Google Photos.

When live, Google Assistant will feature a new camera input button that returns contextual actions for snapped images.

For example, taking a picture of a flower will identify its type, while snapping a picture of a Wi-Fi password label will automatically connect your device to that network. Underneath each photo, there will be a row of suggestions or a dialog box with the appropriate action.

Other features include translating foreign-language signs, as well as getting tickets or creating a calendar event from an event poster. Lastly, snapping a restaurant sign will overlay reviews, ratings, and other information.

Lens in Google Photos can identify the buildings or locations featured in a picture and surface the corresponding listing or Search result. The same applies to paintings and screenshots, with a Lens button on every image page to scan the image and return actions.

For Google, understanding images and videos is the natural successor to understanding text and webpages during the company’s early days. Lens will be available in the coming months.




Author

Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet. Tips/talk: abner@9to5g.com