Google Street View is great for exploring an area from the comfort of your sofa, but if you want inspirational images that show a place at its best, you probably seek out better-quality photos. You may not have to do that for long, thanks to a machine-learning project Google is running.
Google fed an experimental deep learning system a supply of professional photos to learn from, then had it try its hand at producing similar results from a set of around 40,000 plain old Street View snaps …
The AI system’s workflow was designed to emulate that of a professional photographer:
- first, compose the photo (achieved by the AI system cropping the image)
- second, adjust the camera settings (the AI used HDR and saturation effects)
- third, edit the photo (which the AI did by applying masks)
The above photo is just one example of the end result.
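The three stages above can be sketched as a toy pipeline. This is purely a hypothetical illustration (an "image" here is just a 2D list of RGB tuples, and the crop region, saturation factor, and mask are hand-picked); Google's actual system uses learned models operating on full Street View panoramas.

```python
# Toy sketch of the three-stage workflow: crop -> saturation -> mask.
# All names and parameters here are illustrative assumptions, not
# Google's implementation.

def crop(image, top, left, height, width):
    """Stage 1: composition -- keep only a sub-rectangle of the image."""
    return [row[left:left + width] for row in image[top:top + height]]

def adjust_saturation(image, factor):
    """Stage 2: camera-settings emulation -- scale each pixel's distance
    from its own gray value (a simple saturation boost)."""
    out = []
    for row in image:
        new_row = []
        for r, g, b in row:
            gray = (r + g + b) / 3
            new_row.append(tuple(
                max(0, min(255, round(gray + (c - gray) * factor)))
                for c in (r, g, b)))
        out.append(new_row)
    return out

def apply_mask(image, mask, gain):
    """Stage 3: editing -- brighten each pixel in proportion to a
    per-pixel mask weight in [0, 1], akin to dodging/burning."""
    return [
        [tuple(max(0, min(255, round(c * (1 + gain * w)))) for c in px)
         for px, w in zip(row, mrow)]
        for row, mrow in zip(image, mask)]

# Tiny 2x2 demo image run through all three stages
img = [[(100, 120, 140), (200, 180, 160)],
       [(50, 60, 70), (10, 20, 30)]]
cropped = crop(img, 0, 0, 1, 2)             # keep the top row only
saturated = adjust_saturation(cropped, 1.5)  # boost saturation by 50%
masked = apply_mask(saturated, [[0.0, 1.0]], 0.2)  # brighten right pixel
```

Each stage takes and returns a plain image, so the stages compose in sequence, mirroring how a photographer's workflow moves from framing to exposure to retouching.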
In a blog post, Google said it judged the success of the system by asking professional photographers to rate photos without knowing how they were created.
To judge how successful our algorithm was, we designed a “Turing-test”-like experiment: we mixed our creations with other photos of varying quality and showed them to several professional photographers. They were instructed to assign a quality score to each of them, with meaning defined as follows:
1: Point-and-shoot without consideration for composition, lighting etc.
2: Good photos from general population without a background in photography. Nothing artistic stands out.
3: Semi-pro. Great photos showing clear artistic aspects. The photographer is on the right track to becoming a professional.
In about 40% of cases, the photographers rated the system's output as the work of a semi-pro or professional photographer.
Check out the gallery below for examples (click to see full size), with more available here.