
Google created a neural network that can accurately guess where a photo was taken


Google has trained a neural network, named PlaNet, to figure out where a photo was taken, down to the city and in some cases even the street. The network needs nothing more than the photo’s pixels to accomplish the task.

Specifically, PlaNet is “able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy.” It does even better at the country (28.4 percent) and continent (48 percent) levels. These rates were measured by testing the network on 2.3 million geotagged photos from Flickr.
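Those percentages are easiest to read as distance-threshold accuracies: a prediction counts as street-, city-, country-, or continent-level correct if it lands within some maximum distance of the photo’s true location. Below is a minimal sketch of that scoring in Python; the threshold values and function names are illustrative assumptions, not figures from the article.

```python
import math

# Illustrative distance thresholds (km) for each granularity level.
# These exact values are assumptions for the sketch, not figures from the article.
THRESHOLDS_KM = {"street": 1, "city": 25, "country": 750, "continent": 2500}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def accuracy_by_level(predictions, ground_truth):
    """Fraction of photos localized within each level's distance threshold.

    predictions and ground_truth are parallel lists of (lat, lon) tuples.
    """
    hits = {level: 0 for level in THRESHOLDS_KM}
    for (plat, plon), (tlat, tlon) in zip(predictions, ground_truth):
        error_km = haversine_km(plat, plon, tlat, tlon)
        for level, threshold in THRESHOLDS_KM.items():
            if error_km <= threshold:
                hits[level] += 1
    n = len(ground_truth)
    return {level: hits[level] / n for level in THRESHOLDS_KM}
```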

The work was done by a team of Googlers led by Tobias Weyand. They first divided the world into a grid of roughly 26,000 cells of varying size, based on how many photos were taken in each area. The network was then trained on 91 million geotagged images, learning to place each one in the correct grid cell, and a further 34 million photos were used to validate it. Surprisingly, despite being trained on 126 million images, the resulting model totals only 377MB.
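The grid is the key idea: instead of regressing latitude and longitude directly, the network classifies each photo into one of the grid cells, with densely photographed regions split into smaller cells. The recursive quadtree-style subdivision below is a simplified sketch of that idea under assumed parameters; Google’s actual cell scheme is not described in this article, and `max_photos_per_cell`, `min_size_deg`, and the function name are hypothetical.

```python
def build_adaptive_grid(photo_coords, bounds=(-90.0, 90.0, -180.0, 180.0),
                        max_photos_per_cell=10000, min_size_deg=0.1):
    """Recursively split a lat/lon box until each cell holds few enough photos.

    photo_coords: list of (lat, lon) tuples from geotagged images.
    Returns a list of (lat_min, lat_max, lon_min, lon_max) cells, which the
    network would treat as its output classes.
    """
    lat_min, lat_max, lon_min, lon_max = bounds
    inside = [(lat, lon) for lat, lon in photo_coords
              if lat_min <= lat < lat_max and lon_min <= lon < lon_max]
    if not inside:
        return []  # drop empty cells (oceans, poles) to keep the class count down
    small_enough = (lat_max - lat_min) <= min_size_deg
    if len(inside) <= max_photos_per_cell or small_enough:
        return [bounds]
    # Densely photographed cell: split into four quadrants and recurse.
    lat_mid = (lat_min + lat_max) / 2
    lon_mid = (lon_min + lon_max) / 2
    cells = []
    for sub in [(lat_min, lat_mid, lon_min, lon_mid),
                (lat_min, lat_mid, lon_mid, lon_max),
                (lat_mid, lat_max, lon_min, lon_mid),
                (lat_mid, lat_max, lon_mid, lon_max)]:
        cells.extend(build_adaptive_grid(inside, sub,
                                         max_photos_per_cell, min_size_deg))
    return cells
```

Each training photo is then labeled with the index of the cell that contains it, which turns geolocation into an ordinary image-classification problem.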

The team also created a game, available to the public, that shows a random place from Google Street View and has users drop a pin on a world map; it then tells them how close their guess is to the image’s actual location. When pitted against 10 humans, PlaNet won 28 out of 50 rounds.

