Earlier this year, Google lost the “Distinguished Engineer” responsible for the research that went into the Pixel’s imaging capabilities. Marc Levoy is now at Adobe, and sat down with The Verge for a very interesting interview about his career and what’s next in photography.

The hour-long podcast is definitely worth a full listen, but Levoy made some particularly interesting points and offered some insight into the Pixel.

The Pixel 5 is rumored to use the same Sony IMX363 image sensor for the primary camera as the Pixel 3, 4, and 4a. When asked by The Verge’s Nilay Patel, Levoy gave a general defense of that practice: 

The mobile sensor industry is fairly mature. It does improve, but the improvements are coming with some diminishing returns over the years. One variable that's of particular interest is the read noise. As the read noise decreases, you can take pictures in lower and lower light. And so, if Sony or someone else comes up with a sensor that has lower read noise, a lot of people will grab onto it…

There are improvements being made in the sensors, but I’m not sure that they’re pivotal. They’re incremental.
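Levoy's point about read noise can be illustrated with a toy signal-to-noise model. This is a simplified sketch under textbook assumptions (Poisson shot noise plus a fixed read-noise floor added in quadrature), not Google's actual imaging pipeline, and the numbers are hypothetical:

```python
import math

def snr(signal_electrons: float, read_noise_electrons: float) -> float:
    """Simplified per-pixel SNR: signal / sqrt(shot_noise^2 + read_noise^2).

    Shot noise follows Poisson statistics, so its variance equals the
    signal itself; read noise is a fixed penalty paid on every readout.
    """
    shot_noise_variance = signal_electrons
    return signal_electrons / math.sqrt(shot_noise_variance + read_noise_electrons ** 2)

# In bright light, shot noise dominates and read noise barely matters:
print(snr(10_000, 3.0))  # ~99.96
print(snr(10_000, 1.0))  # ~99.99 — almost no gain

# In very low light, read noise dominates, so lowering it helps a lot:
print(snr(25, 3.0))  # ~4.29
print(snr(25, 1.0))  # ~4.90 — a meaningful improvement
```

The asymmetry is the point: a sensor with lower read noise buys you almost nothing in daylight but noticeably extends how dark a scene you can usefully capture, which is why Levoy singles it out as the variable worth watching.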

As for why Levoy left Google, he cited being “intellectually restless,” which fits the trajectory of his long career in various fields.

I think it was time to declare victory and move on. There were diminishing returns among these table stakes of high dynamic range imaging and low light imaging, and it was time to look for a new frontier.

As for what’s next, he cites academic papers where people are looking at removing window reflections, harsh shadows, and other distracting objects, as well as relighting. Google Photos notably attempted to do the latter with fence removal, but that work has yet to come to fruition.

He thinks these capabilities will be genuinely useful, with a Larry Page anecdote included:

I think there’s still a lot to do. And I think of those as being beyond table stakes, but features that pass what Larry Page at Google liked to call the toothbrush test. You use it twice a day and it improves your life. In other words, it’s a feature that people really want or need. 

When asked about the usefulness of smartphone makers adding more lenses and sensors, Levoy responded:

Potentially yes, potentially no. The Pixel 4 last year did add a telephoto lens and that did help. Google shipped a telephoto lens plus the Super Res Zoom technology that came out of my team, and the two of them worked together very well. So there’s definitely something to be said for more hardware. 

A depth sensor could help with a variety of tasks. I think hardware is important but I think what I’ve shown over the last 10 years is that software is very important. So, I think the two work hand in hand.

In terms of video, he provided a high-level answer on the difficulties involved: 

So, video is an entirely different ballgame. The computational photography that we did at Google on the Pixel was largely in the still photography area. There were some other teams at Google working on video, but there was less that they could do because they had to do it in real-time.

The Verge also asked if Levoy worked on or had any input with camera hardware at Google: 

I gave them advice. Whether they listened to it or not would be another question. 


About the Author

Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet. Tips/talk: abner@9to5g.com