Google earned high marks for the 12.3-megapixel camera on the Pixel, its first wholly self-made device. Surprisingly, the software behind the lens dates back to when Alphabet’s X division was working on Google Glass.
Gcam was first revealed last year when X revamped its site to list graduates of the “moonshot factory.” At the time, there were very few details beyond a one-line description:
Gcam improved mobile photography using techniques from computational photography.
The project started in 2011 as the team was trying to fit an image sensor into Glass that would “be on par with cellphone cameras.” The design constraints of the wearable meant the physical sensor had to be relatively small, reducing its low-light and dynamic range performance, on top of the device’s already limited compute and battery power.
To compensate, X began working on the Gcam project to augment the hardware with “smart software choices.” The resulting solution involved a method called image fusion that “takes a rapid sequence of shots and then fuses them to create a single, higher quality image.”
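To give a sense of the underlying idea, here is a toy sketch only: the function name fuse_burst is hypothetical, and Gcam’s real pipeline also aligns the frames and rejects motion outliers before merging. Averaging N shots with independent sensor noise reduces that noise by roughly √N, which is how software can let a small sensor punch above its weight:

```python
import numpy as np

def fuse_burst(frames):
    """Fuse a burst of pre-aligned frames by simple averaging."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Demo: a flat gray "scene," with each shot corrupted by
# independent sensor noise, as on a small image sensor.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128.0, dtype=np.float32)
burst = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255)
         for _ in range(8)]

fused = fuse_burst(burst)
print("single-frame noise std:", np.std(burst[0] - scene))  # ~20
print("fused noise std:", np.std(fused - scene))            # ~20/sqrt(8), about 7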
It debuted on Glass in 2013 to “render dimly-lit scenes in greater detail, and mixed lighting scenes with greater clarity.” Gcam’s next iteration shipped on the Nexus 5 and 6 as HDR+, with Lens Blur also originating from the group’s work. On the Pixel, Gcam’s HDR+ technology launched as the default camera mode.
Gcam has since graduated into Google Research, with the team now contributing to Android, YouTube, Google Photos, and the Jump 360-degree VR rig. Moving forward, the technology might improve by using machine learning “to come up with a better white balance,” or by making better decisions about how to blur and light an image’s background.