Back in November, Google showed off a machine learning technique that enhances low-res and blurry images. That RAISR technique is now being used in Google+ to display high-resolution photos while consuming an impressive 75% less bandwidth.
Previous methods for making images larger add more pixels by interpolating them from existing ones, which typically blurs edges and leaves the picture looking out of focus.
Rapid and Accurate Image Super-Resolution (RAISR) uses machine learning to do the same job more intelligently: it retains original shapes and details more accurately, resulting in a noticeably better looking image.
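To see why naive enlargement loses sharpness, here is a toy sketch (not Google's implementation) of plain 2× bilinear upscaling, the kind of interpolation RAISR improves on. The example image and function are ours for illustration; RAISR's actual approach replaces this single fixed interpolation with filters learned from training data and chosen per patch.

```python
# Toy illustration (not Google's code): naive 2x upscaling by bilinear
# interpolation. Interpolated pixels are weighted averages of neighbors,
# so a hard edge turns into a ramp of intermediate grey values -- blur.

def bilinear_upscale_2x(img):
    """Double the width and height of a grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # Map each output coordinate back into the source grid.
            sy, sx = y / 2.0, x / 2.0
            y0, x0 = min(int(sy), h - 1), min(int(sx), w - 1)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

# A hard black/white edge: after upscaling, intermediate grey values
# appear along the boundary -- exactly the softening RAISR avoids.
edge = [[0, 0, 255, 255],
        [0, 0, 255, 255]]
big = bilinear_upscale_2x(edge)
```

Running this, `big[0][3]` comes out as 127.5, a mid-grey pixel that did not exist in the source: the edge has been smeared. RAISR instead trains a bank of filters and selects one per patch, which is how it keeps such edges crisp.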
A few short weeks later, this ML technique is already being used in a significant way on a live service. Eventually, Google+ will no longer serve up original images. Instead, a smaller version just a fourth the size will be sent, with RAISR applied on-device to restore detail. In one example, a 100 KB, 1000 × 1500 image is replaced by a 25 KB file that ends up at the original resolution after RAISR.
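The numbers in that example line up neatly. A quick check (the figures come from the article; the assumption that "a fourth the size" corresponds to halving each dimension before upload is ours):

```python
# Bandwidth arithmetic from the article's example. We assume the
# quarter-size file corresponds to halving each image dimension.
orig_w, orig_h = 1000, 1500
orig_kb, small_kb = 100, 25

small_w, small_h = orig_w // 2, orig_h // 2          # 500 x 750
pixel_ratio = (small_w * small_h) / (orig_w * orig_h)  # fraction of pixels sent
savings = 1 - small_kb / orig_kb                       # fraction of bytes saved

print(f"{small_w} x {small_h} = {pixel_ratio:.0%} of the pixels")  # 25%
print(f"bandwidth saved: {savings:.0%}")                            # 75%
```

Halving both dimensions leaves a quarter of the pixels, and 25 KB against 100 KB is exactly the 75% saving quoted above.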
At the moment, this feature is only rolling out for high-resolution images that appear in the Google+ stream on a subset of Android devices. Even so, RAISR is already being applied to 1 billion images per week, reducing those users' total bandwidth by a third.
In the coming weeks, the technology will be rolled out more broadly across the social network, and hopefully other services like Google Photos will benefit as well. With such AI techniques being applied in more and more areas, moves by Qualcomm to optimize its chipsets for machine learning make increasing sense.