At I/O 2017, Sundar Pichai announced Google Lens, a set of vision-based computing capabilities that can understand what you’re looking at and provide actions to interact with the world around you. It will first launch in Google Assistant and Google Photos.
When live, Google Assistant will feature a new camera input button that returns contextual actions for snapped images.
For example, snapping a picture of a flower will have Lens identify its type, while snapping a Wi-Fi password label will automatically connect your device to that network. Underneath each photo, there will be a row of suggestions or a dialog box with the appropriate action.
Other features include translating foreign-language signs, as well as getting tickets or creating a calendar event from an event poster. Lastly, snapping a restaurant sign will overlay reviews, ratings, and other information.
Lens in Google Photos can identify which buildings or locations are featured in a picture and show the correct listing or Search result. This also applies to paintings and screenshots, with a Lens button on every image page to scan the image and surface actions.
For Google, understanding images and videos is the natural successor to understanding text and webpages during the company’s early days. Lens will be available in the coming months.