
Pixel 3 camera tidbits: Pixel Visual Core tasks, Google’s opinion on dual-cameras, Night Sight details, more

As usual, the focus of Google’s Pixel 3 and Pixel 3 XL smartphones is the camera. Rick Osterloh even said when presenting the devices that the company had “built the best camera, and put it in the most helpful phone.” Now, a few days after the event, we’re still digging up bits of information on the camera. Here are a few more Pixel 3 camera details you might not have known.

Pixel Visual Core gets a new workload

With the Pixel 2 and Pixel 2 XL, Google introduced its first custom imaging chip, the Pixel Visual Core. The chip was meant to improve the photography experience on the phone, but in practice it was primarily used to give third-party apps access to Google’s HDR+ processing.

On the Pixel 3, though, the Pixel Visual Core is much more important. We’ve been told by Google that this isn’t a new chipset, but according to WIRED, it has a much heavier workload this time around, helping power some of the new camera features, including Top Shot and Photobooth. Here’s how WIRED puts it:

This year, the Visual Core has been updated, and it has more camera-related tasks. Top Shot is one of those features. It captures a Motion Photo, and then automatically selects the best still image from the bunch. It’s looking for open eyes and big smiles, and rejecting shots with windswept hair or faces blurred from too much movement.
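Google hasn’t published how Top Shot actually scores frames, but the overall shape of the feature (score every frame in the burst, keep the best one) is easy to sketch. Here’s a toy Python illustration that substitutes a simple sharpness score for the face-aware scoring the Visual Core runs; the function names and the scoring itself are assumptions for illustration, not Google’s implementation.

```python
# Toy sketch of Top Shot-style frame selection, NOT Google's model.
# Assumption: a simple sharpness score stands in for the face-aware
# (open eyes, smiles, motion blur) scoring that runs on the Visual Core.
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of a crude Laplacian response; blurry frames score low."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def pick_top_shot(burst: list) -> int:
    """Return the index of the best-scoring frame in a Motion Photo burst."""
    return int(np.argmax([sharpness(f) for f in burst]))

# Example: ten grayscale frames, keep the sharpest one.
# burst = [np.random.rand(480, 640) for _ in range(10)]
# best_index = pick_top_shot(burst)
```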

Here’s how Night Sight and Super Res Zoom work

That same WIRED piece also reveals some details about Google’s other notable camera modes, Night Sight and Super Res Zoom.

Focusing on the former first, Night Sight is designed to dramatically improve low-light shots. As Google mentioned, it’s not launching with the phone, but we now have some details on how it works. Google uses a longer exposure and fuses together a bunch of separate frames into a nighttime photo that shows off the detail without needing the help of a flash. Here’s how WIRED describes it:

If you’re trying to take a picture in the dark—so dark that your smartphone photos would normally look like garbage, as one Google product manager described it to me—the Pixel 3’s camera will suggest something called Night Sight. This isn’t launching with the phone, but is expected to come later this year. Night Sight requires a steady hand because it uses a longer exposure, but it fuses together a bunch of photos to create a nighttime photo that doesn’t look, well, like garbage. All of this without using the phone’s flash, too.
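For a rough sense of why fusing frames helps, here’s a minimal Python sketch of the align-and-average idea, assuming a single whole-pixel shift per frame. The real pipeline aligns small tiles and merges far more robustly, so treat this purely as an illustration; averaging N aligned frames cuts random noise by roughly the square root of N.

```python
# Minimal sketch of the "fuse many frames" idea behind Night Sight.
# Assumption: one global, whole-pixel shift per frame is enough for
# illustration; the real pipeline aligns small tiles and merges robustly.
import numpy as np

def estimate_shift(ref: np.ndarray, frame: np.ndarray) -> tuple:
    """Estimate a global (dy, dx) shift via FFT phase correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = ref.shape
    dy = int(dy) - h if dy > h // 2 else int(dy)   # wrap shifts to signed values
    dx = int(dx) - w if dx > w // 2 else int(dx)
    return dy, dx

def merge_burst(frames: list) -> np.ndarray:
    """Align each frame to the first and average; noise drops roughly by sqrt(N)."""
    ref = frames[0].astype(np.float64)
    acc = np.zeros_like(ref)
    for f in frames:
        dy, dx = estimate_shift(ref, f)
        acc += np.roll(f, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)
```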

On top of that, Super Res Zoom requires more than just software to work. It needs a lens that’s a little sharper than the camera’s sensor, and it enhances zoomed-in shots by using machine learning to compensate for the movement of your hand. Per WIRED:

Super Res Zoom, another feature new to Pixel 3, isn’t just a software tweak. It requires a lens that’s a little bit sharper than the camera’s sensor, so that the resolution isn’t limited by the sensor. But it enhances the resolution on a photo that you’ve zoomed way in on by using machine learning to adjust for the movement of your hand. (If you have the smartphone on a tripod or stable surface, you can actually see the frame moving slightly, as the camera mimics your hand movement.)

DPReview also spoke to Google about Super Res Zoom, revealing a few more details and some sample shots of the feature.

But Google – and Peyman Milanfar’s research team working on this particular feature – didn’t stop there. “We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic,” explains Google’s Marc Levoy. If you have enough samples, you can expect any scene element to have fallen on a red, green, and blue pixel. After alignment, then, you have R, G, and B information for any given scene element, which removes the need to demosaic. That itself leads to an increase in resolution (since you don’t have to interpolate spatial data from neighboring pixels), and a decrease in noise, since the math required for demosaicing is itself a source of noise. The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras.

There’s a small catch to all this – at least for now. Super Res only activates at 1.2x zoom or more, not in the default ‘zoomed out’ 28mm-equivalent mode. And the lower your level of zoom, the more impressed you’ll be with the resulting Super Res images, since at higher zoom levels the resolving power of the lens itself becomes the limitation. But the claim, according to Isaac Reynolds, is that you get “digital zoom roughly competitive with a 2x optical zoom,” and it all happens right on the phone.
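To make the demosaic-free merge a little more concrete, here’s a toy Python sketch of the accumulation step, assuming whole-pixel shifts and an RGGB Bayer pattern; the real pipeline works with sub-pixel shifts and robust per-sample weighting, entirely on-device, so this is only an illustration of the idea.

```python
# Toy illustration of the demosaic-free merge described above, NOT Google's code.
# Assumption: whole-pixel shifts and an RGGB Bayer pattern; the real pipeline
# uses sub-pixel shifts from natural hand shake (or deliberate lens motion on a
# tripod) plus robust per-sample weighting, all on-device.
import numpy as np

BAYER_RGGB = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}  # 2x2 position -> R/G/B channel

def merge_bayer_frames(frames, shifts, height, width):
    """Drop each frame's raw color samples onto a shifted RGB grid and average."""
    acc = np.zeros((height, width, 3))
    cnt = np.zeros((height, width, 3))
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(height):
            for x in range(width):
                yy, xx = (y + dy) % height, (x + dx) % width  # where this sample lands
                c = BAYER_RGGB[(y % 2, x % 2)]                # which color this photosite recorded
                acc[yy, xx, c] += frame[y, x]
                cnt[yy, xx, c] += 1
    return acc / np.maximum(cnt, 1)  # positions with no sample for a channel stay 0
```

With enough shifted frames, every output position ends up collecting red, green, and blue samples directly, which is the pixel-shift-style benefit described in the quote above.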

The Pixel 3 camera has a “flicker sensor”

Have you ever taken a photo indoors and ended up with weird lines running across the frame? That banding comes from certain types of lighting flickering faster than the eye can see, but apparently it won’t be a problem for the Pixel 3 camera. WIRED explains that an updated sensor in the Pixel 3 is designed to avoid that flicker effect in photos and videos:

The 12.2-megapixel rear camera has been improved, and the camera sensor is a “newer generation sensor,” though Reynolds conceded that it “has a lot of the same features.” The Pixel 3 also has a flicker sensor, which is supposed to mitigate the flicker effect you get when you’re shooting a photo or video under certain indoor lighting.
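Google hasn’t detailed how the flicker sensor’s reading is actually applied, but a common anti-banding technique is to round the exposure time to a whole number of flicker periods once the mains frequency is known. Here’s a hedged Python sketch of that generic approach (an assumption for illustration, not Google’s implementation):

```python
# Rough sketch of one common anti-banding technique, not necessarily Google's:
# once a flicker sensor reports the mains frequency, round the exposure time to
# a whole number of flicker periods so every scanline integrates the same light.
def antibanding_exposure(requested_s: float, mains_hz: float = 50.0) -> float:
    flicker_period = 1.0 / (2.0 * mains_hz)              # lights pulse at twice the mains frequency
    periods = max(1, round(requested_s / flicker_period))
    return periods * flicker_period

# Example: a requested 1/90 s exposure under 50 Hz lighting becomes 1/100 s.
```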

Google doesn’t find a second rear camera “necessary”

One thing that many are still surprised by is the lack of a second camera on the Pixel 3. Where most flagships now have at least two cameras on the back, the Pixel sticks with one… on the back anyway.

Google representatives say they find a second sensor “unnecessary” thanks to the software enhancements included on the device. The end results speak for themselves, but it would be nice to get the same wide-angle love on the back that the front gets, no? Maybe next year. Here’s how WIRED reports it:

There’s also the fact that the Google Pixel 3 still has a single-lens rear camera, while all of its high-end smartphone competitors have gone with double or even triple the number of lenses. Google argues it doesn’t really need another lens—“we found it was unnecessary,” Google’s Mario Queiroz says—because of the company’s expertise in machine learning technology. Pixel phones extract enough depth information already from the camera’s dual-pixel sensor, and then run machine learning algorithms, trained on over a million photos, to produce the desired photo effect.
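For context on what depth from a dual-pixel sensor means: each photosite is split into left and right halves that see the scene from slightly different viewpoints, so small horizontal disparities between the two half-images correlate with depth. Below is a naive block-matching sketch of that idea in Python; Google’s actual approach replaces this with learned models trained on over a million photos, so the names and parameters here are illustrative assumptions.

```python
# Naive sketch of extracting depth cues from a dual-pixel sensor, purely
# illustrative. Each photosite's left and right halves see slightly different
# views, so per-tile horizontal disparity correlates with depth.
import numpy as np

def dual_pixel_disparity(left: np.ndarray, right: np.ndarray,
                         tile: int = 16, max_shift: int = 3) -> np.ndarray:
    """Return a coarse disparity map (one value per tile) via block matching."""
    h, w = left.shape
    disp = np.zeros((h // tile, w // tile))
    for ty in range(h // tile):
        for tx in range(w // tile):
            ys, xs = slice(ty * tile, (ty + 1) * tile), slice(tx * tile, (tx + 1) * tile)
            ref = left[ys, xs]
            best_shift, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                cand = np.roll(right, s, axis=1)[ys, xs]
                err = float(np.mean((ref - cand) ** 2))
                if err < best_err:
                    best_shift, best_err = s, err
            disp[ty, tx] = best_shift
    return disp  # larger |disparity| ~ farther from the plane of focus
```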

If you haven’t seen it yet, watch this demo reel

The video capabilities of Google’s Pixel cameras haven’t exactly been their strong suit, but it appears that things might be a bit better this year. Google partnered with filmmaker Terrence Malick for a brief demo reel and, well, it’s just awesome.

Google’s “Motion Focus” feature plays a huge role in making this short film possible.

https://youtu.be/lNg3Cb76G6M
