Google details VPS, ML tech behind Maps AR navigation and making cameras another sensor

A report yesterday revealed that the upcoming Google Maps AR navigation mode is first being tested with Local Guides. Google is now detailing the “global localization” technique behind the feature and how it leverages its Visual Positioning Service (VPS), Street View, and machine learning.

Today, the blue dot that marks your current location in Google Maps is infamously inaccurate. GPS and the compass have “physical limitations,” especially in urban environments, that often cause your position on the map to jump around.

The process of identifying the position and orientation of a device relative to some reference point is referred to as localization. Various techniques approach localization in different ways.
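
In code terms, a localization result is just a pose: a position plus an orientation, both expressed relative to a shared reference frame. A minimal sketch of that idea (the fields and units here are illustrative, not any particular API):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A localization result: where a device is and which way it faces."""
    # Position in meters relative to a known reference point (east, north, up).
    x: float
    y: float
    z: float
    # Orientation as a compass heading in degrees (0 = north, clockwise).
    heading_deg: float
```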

Google’s “global localization” approach adds another sensor, the camera, to get a better sense of orientation. Existing sensors that measure magnetic and gravity fields are easily skewed by magnetic objects, like cars and buildings, “resulting in errors that can be inaccurate by up to 180 degrees.”
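
To put that figure in perspective: heading error is the signed angular difference between the reported and true compass headings, wrapped into the ±180° range, which is why 180 degrees (facing exactly the wrong way) is the worst case. A quick sketch in plain Python:

```python
def heading_error_deg(reported: float, true: float) -> float:
    """Signed difference between two compass headings, wrapped to [-180, 180)."""
    return (reported - true + 180.0) % 360.0 - 180.0

# A magnetometer skewed by a nearby car can report a heading nearly
# opposite the true one:
print(heading_error_deg(reported=185.0, true=10.0))  # -> 175.0
```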

VPS determines the location of a device based on imagery rather than GPS signals. VPS first creates a map by taking a series of images with known locations and analyzing them for key visual features, such as the outline of buildings or bridges, to create a large-scale, quickly searchable index of those visual features. To localize the device, VPS compares the features in imagery from the phone to those in the VPS index. However, the accuracy of VPS localization is greatly affected by the quality of both the imagery and the location associated with it.
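
As a toy illustration of that build-an-index-then-match pattern (a sketch of the general technique, not Google’s actual pipeline), the snippet below uses OpenCV’s ORB features as a stand-in for whatever descriptors VPS extracts. The filenames and coordinates are made up, and a brute-force scan over a Python list stands in for the real large-scale index:

```python
import cv2

# --- Offline: build the index from images with known locations. ---
orb = cv2.ORB_create(nfeatures=1000)

def extract_descriptors(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = orb.detectAndCompute(img, None)
    return descriptors

# Each entry pairs a known location with the visual features seen there.
index = [
    ((37.7749, -122.4194), extract_descriptors("street_view_001.jpg")),
    ((37.7750, -122.4189), extract_descriptors("street_view_002.jpg")),
]

# --- Online: match the phone's camera frame against the index. ---
def localize(query_path, index, min_matches=25):
    query_desc = extract_descriptors(query_path)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best = None
    for latlng, ref_desc in index:
        # Keep only strong matches (small Hamming distance).
        good = [m for m in matcher.match(query_desc, ref_desc) if m.distance < 50]
        if len(good) >= min_matches and (best is None or len(good) > best[1]):
            best = (latlng, len(good))
    return best  # (estimated location, match count), or None if no match
```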

For Google, that VPS index is Street View data from 93 countries around the world, with “trillions of strong reference points to apply triangulation.” As a user’s phone scans the world, it will first “filter out temporary parts of the scene and focus on permanent structure that doesn’t change over time” before matching. Machine learning is used to ignore things like trees, which can look different depending on the season, as well as dynamic light movement and construction.
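
One common way to implement that kind of filtering (a sketch under assumptions, not Google’s actual method) is to run a semantic segmentation model over the frame and discard any feature that lands on a pixel labeled as temporary content. The class names below are placeholders for whatever a real model outputs:

```python
import numpy as np

# Scene content treated as temporary; these labels are illustrative.
TEMPORARY_CLASSES = {"tree", "vehicle", "person", "construction"}

def keep_permanent_features(keypoints, descriptors, class_map, class_names):
    """Drop features on temporary scene content before matching.

    class_map is an HxW array of per-pixel class IDs from a semantic
    segmentation model (the model itself is assumed, not shown here).
    """
    keep = []
    for i, kp in enumerate(keypoints):
        x, y = int(kp.pt[0]), int(kp.pt[1])  # OpenCV keypoint coordinates
        if class_names[class_map[y, x]] not in TEMPORARY_CLASSES:
            keep.append(i)
    return [keypoints[i] for i in keep], descriptors[keep]
```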

Another aspect of the walking navigation mode is using ARCore to overlay directions and other points of interest, like cafes and shops. This is another way Google is using the camera to add context to what you’re looking at in the world.
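
Under the hood, drawing an arrow or a cafe label in the camera view comes down to projecting a world-space point into the image once the device pose is known. ARCore handles this through its view and projection matrices; the underlying pinhole-camera math looks roughly like this:

```python
import numpy as np

def project_point(point_world, cam_position, world_to_cam, fx, fy, cx, cy):
    """Project a 3D world point to pixel coordinates with a pinhole model.

    world_to_cam: 3x3 rotation matrix from world to camera axes.
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point).
    """
    p_cam = world_to_cam @ (np.asarray(point_world) - np.asarray(cam_position))
    if p_cam[2] <= 0:
        return None  # Behind the camera, so nothing to draw.
    u = fx * p_cam[0] / p_cam[2] + cx  # Perspective divide plus intrinsics.
    v = fy * p_cam[1] / p_cam[2] + cy
    return float(u), float(v)

# A cafe 10 m ahead and 2 m to the right of a camera looking down +Z:
print(project_point([2.0, 0.0, 10.0], [0, 0, 0], np.eye(3), 500, 500, 320, 240))
# -> (420.0, 240.0)
```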

Using the smartphone camera as a sensor, this technology enables a more powerful and intuitive way to help people quickly determine which way to go.

The Google AI blog post ends by reiterating that testing is still needed in non-optimal visual conditions, like late at night, in a snowstorm, or in a torrential downpour.
