Back in September, Google Street View vehicles received their first major camera upgrade in eight years, one that should result in higher-resolution images. Today, Google Research detailed a new algorithm that addresses a common flaw in current Street View panoramas.
Looking at Street View, it’s not difficult to spot misalignments in the 360-degree panoramas created by stitching multiple photos together. Most misalignments result in odd visual quirks like bent or jagged surfaces, but more seriously, they can make text on signs unreadable.
These errors stem primarily from miscalibration of the camera rig’s geometry, timing differences between adjacent cameras, and parallax. While algorithms and lens recalibration are used to tackle these problems, other issues, like “visible seams in image overlap regions,” can still occur.
However, Google now has a two-stage solution to this problem:
The idea is to subtly warp each input image such that the image content lines up within regions of overlap. This needs to be done carefully to avoid introducing new types of visual artifacts. The approach must also be robust to varying scene geometry, lighting conditions, calibration quality, and many other conditions.
The first stage, optical flow, finds corresponding pixel locations in a pair of overlapping images and then tries to align them. This technique is also used by the PhotoScan app to digitize old printed photos with a smartphone camera.
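To give a rough sense of what “finding corresponding pixel locations” means, here is a minimal toy sketch of the underlying idea: for a small patch in one image, search a window in the overlapping image for the shift that minimizes the sum of squared pixel differences. The function names and toy images below are invented for illustration; Google’s actual optical flow pipeline is far more sophisticated.

```python
# Toy correspondence search: find the shift that best aligns a patch
# across two overlapping images. Not Google's implementation, just the
# core matching idea behind optical flow.

def patch_ssd(img_a, img_b, y, x, dy, dx, size):
    """Sum of squared differences between the patch in img_a at (y, x)
    and the patch in img_b offset by (dy, dx)."""
    total = 0
    for i in range(size):
        for j in range(size):
            diff = img_a[y + i][x + j] - img_b[y + dy + i][x + dx + j]
            total += diff * diff
    return total

def find_shift(img_a, img_b, y, x, size=3, radius=2):
    """Return the (dy, dx) shift minimizing patch difference."""
    best, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = patch_ssd(img_a, img_b, y, x, dy, dx, size)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Toy example: img_b is img_a shifted right by one pixel.
img_a = [[(r * 7 + c * 13) % 50 for c in range(10)] for r in range(10)]
img_b = [[img_a[r][c - 1] if c > 0 else 0 for c in range(10)] for r in range(10)]
print(find_shift(img_a, img_b, 4, 4))  # → (0, 1)
```

Real systems compute such correspondences densely or at many feature points, which is what makes the subsequent alignment of overlap regions possible.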
Meanwhile, the second stage, global optimization, warps all of the images to “simultaneously align all of the corresponding points from overlap regions.”
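The key word is “simultaneously”: rather than fixing one image pair at a time, all images are adjusted at once so the overall error is spread evenly. A toy version of that idea, assuming each image gets just a single 1-D shift and using simple Gauss-Seidel iteration (the measurements and names are invented for illustration):

```python
# Toy "global optimization": solve for one shift per image so that all
# measured pairwise offsets are satisfied as well as possible at once.

def solve_shifts(n, measurements, anchor=0, iters=200):
    """measurements: list of (i, j, offset) meaning shift[j] - shift[i]
    should be close to offset. Image `anchor` is pinned at 0 so the
    least-squares problem has a unique solution."""
    shifts = [0.0] * n
    for _ in range(iters):
        for k in range(n):
            if k == anchor:
                continue
            total, count = 0.0, 0
            for i, j, off in measurements:
                if j == k:          # shift[k] implied by image i
                    total += shifts[i] + off
                    count += 1
                elif i == k:        # shift[k] implied by image j
                    total += shifts[j] - off
                    count += 1
            if count:
                shifts[k] = total / count
    return shifts

# Four images in a ring with slightly inconsistent pairwise offsets
# (the cycle disagrees by 0.2), as happens with real measurements:
meas = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (3, 0, -3.2)]
print([round(s, 2) for s in solve_shifts(4, meas)])  # → [0.0, 1.05, 2.2, 3.15]
```

Note how the 0.2 inconsistency around the ring ends up spread as a 0.05 residual on each edge instead of one visible seam, which is the intuition behind optimizing all overlaps jointly.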
As seen in the examples below, the new algorithm works well, and Google is already applying it retroactively to restitch existing panoramas and improve their quality.