
Google details more ways the Pixel 6 and 6 Pro cameras use Tensor

With the Pixel 6 and 6 Pro officially launching tomorrow, Google today detailed some additional places where its custom Tensor SoC is being used to improve and power the camera.

Google already detailed some of these advancements at the Pixel Launch Event, but today's post starts by reiterating how Live HDR+ Video is made possible by an accelerator built into Tensor’s Image Signal Processor (ISP) that runs the HDRNet algorithm. It works in 4K60 and all other video modes, as well as in “popular social and chat apps.” Tensor also enables real-time tone mapping and upgraded stabilization.
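
For a rough sense of what per-frame tone mapping involves, here is a minimal Python sketch. It is not Google’s implementation: HDRNet predicts local, learned adjustments on dedicated hardware, whereas this stand-in applies a single global gamma-style curve, and the function name and gamma value are illustrative assumptions.

```python
import numpy as np
import cv2


def tone_map_frame(frame_bgr: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply a simple global tone curve to one 8-bit video frame.

    HDRNet predicts local, content-dependent adjustments on Tensor's ISP;
    this global curve is only a stand-in to show the kind of per-frame
    operation that has to keep up with 4K60 capture.
    """
    # Build a 256-entry lookup table for the tone curve.
    lut = np.array(
        [np.clip(255.0 * (i / 255.0) ** (1.0 / gamma), 0, 255) for i in range(256)],
        dtype=np.uint8,
    )
    return cv2.LUT(frame_bgr, lut)


# Example: brighten a synthetic dark frame.
frame = np.full((1080, 1920, 3), 40, dtype=np.uint8)
brightened = tone_map_frame(frame)
```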

Night Sight, of course, makes use of a rear camera sensor that captures 2.5x more light, but it also benefits from the new laser detect autofocus (LDAF) system and Tensor’s ISP. Specifically, there are “new motion detection algorithms to capture shorter exposures that aren’t as blurry.”
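
Google hasn’t published how those motion-detection algorithms work, but the basic idea — shorten the exposure when the scene is moving — can be sketched roughly like this. The optical-flow estimate, frame rate, and blur budget below are all assumptions for illustration, not details from Google.

```python
import numpy as np
import cv2


def pick_exposure_ms(prev_gray: np.ndarray, curr_gray: np.ndarray,
                     max_exposure_ms: float = 250.0,
                     blur_budget_px: float = 2.0) -> float:
    """Pick an exposure time so estimated motion blur stays small.

    Illustrative only: Night Sight's actual motion-detection algorithms
    are not public. Here we estimate average pixel speed with Farneback
    optical flow and shorten the exposure when the scene is moving.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    speed_px_per_frame = np.linalg.norm(flow, axis=2).mean()
    if speed_px_per_frame < 1e-3:
        return max_exposure_ms          # static scene: take the long exposure
    frame_interval_ms = 33.3            # assumed 30 fps preview feed
    speed_px_per_ms = speed_px_per_frame / frame_interval_ms
    # Choose exposure so (speed * exposure) stays within the blur budget.
    return float(min(max_exposure_ms, blur_budget_px / speed_px_per_ms))
```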

Speaking of lens/sensor enhancements, the 6 Pro’s 4x telephoto makes use of an “updated version” of Super Res Zoom with HDR+ Bracketing, which enables up to 20x digital zoom.


Real Tone, which Google is airing ads for, is the result of Tensor making possible an “advanced ML-based face detection model that more accurately auto-exposes photos of people of color.” There’s also an updated auto-white balance algorithm to “detect and correct inaccurate skin tones.” Similarly, Frequent Faces will “learn how to better auto-white balance the people you photograph most frequently.”

Tensor drives Face Unblur by detecting when someone is moving quickly and firing up both the main camera (for a bright but blurry image) and the ultrawide (dark but sharper). ML then combines the two into a well-exposed photo with a sharp face.
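
Google merges the two captures with an ML pipeline, but the core idea can be sketched in a few lines of NumPy: take the sharp-but-dark pixels inside the face region, gain them up, and blend them over the bright-but-blurry frame. The face mask, the gain value, and the assumption that the frames are already aligned are all simplifications introduced here.

```python
import numpy as np


def fuse_face(bright_blurry: np.ndarray,
              dark_sharp: np.ndarray,
              face_mask: np.ndarray,
              gain: float = 2.0) -> np.ndarray:
    """Blend a sharp-but-dark face region into a bright-but-blurry photo.

    Assumes both frames are already aligned to the same geometry and that
    `face_mask` (float values in [0, 1], same height/width) marks the
    detected face. This is a toy stand-in for the ML merge Google describes.
    """
    bright = bright_blurry.astype(np.float32)
    # Brighten the dark ultrawide frame so the exposures roughly match.
    sharp = np.clip(dark_sharp.astype(np.float32) * gain, 0, 255)
    mask = face_mask[..., None].astype(np.float32)   # HxWx1 for broadcasting
    fused = mask * sharp + (1.0 - mask) * bright
    return fused.astype(np.uint8)
```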

That ultrawide selfie camera on the Pixel 6 Pro can be used with a Speech enhancement mode in which Tensor’s TPU is used to “simultaneously process audio and visual cues to isolate speech” and reduce background noise by up to 80%. Google is aiming this feature at the vlogging crowd.
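
Google’s model fuses the audio with visual cues from the selfie camera, which a short sketch can’t reproduce. As a much simpler, audio-only illustration of what “reducing background noise” looks like computationally, here is a basic spectral-gating pass using SciPy; the thresholds, window size, and noise-estimation window are all assumptions.

```python
import numpy as np
from scipy.signal import stft, istft


def spectral_gate(audio: np.ndarray, sr: int,
                  noise_seconds: float = 0.5,
                  reduction: float = 0.8) -> np.ndarray:
    """Attenuate stationary background noise by up to `reduction` (80% here).

    Audio-only simplification: Google's speech enhancement also uses the
    selfie camera's visual cues, which this sketch does not model.
    """
    f, t, spec = stft(audio, fs=sr, nperseg=1024)
    mag, phase = np.abs(spec), np.angle(spec)
    # Estimate the noise floor from the first `noise_seconds` of audio.
    noise_frames = max(1, int(noise_seconds * sr / 512))
    noise_floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Keep bins well above the noise floor; attenuate the rest.
    gain = np.where(mag > 2.0 * noise_floor, 1.0, 1.0 - reduction)
    _, cleaned = istft(gain * mag * np.exp(1j * phase), fs=sr, nperseg=1024)
    return cleaned.astype(audio.dtype)
```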


The Action Pan and Long Exposure Motion Modes see Tensor “handle motion vector calculations, frame interpolation, subject segmentation, hand-shake rejection, and blur rendering.”
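
Given those listed steps, the final “blur rendering” stage can be pictured as compositing an averaged burst with a segmented sharp region. The sketch below assumes the frames are already aligned and a segmentation mask is supplied upstream; it is an illustration of the general technique, not Google’s pipeline.

```python
import numpy as np


def render_motion_blur(frames: list[np.ndarray],
                       sharp_frame: np.ndarray,
                       keep_sharp_mask: np.ndarray) -> np.ndarray:
    """Composite a Motion Mode-style shot from a burst of frames.

    `frames` are assumed already aligned (hand-shake rejection and frame
    interpolation happen upstream). `keep_sharp_mask` marks the region to
    keep sharp: the subject for Action Pan, the static background for
    Long Exposure. Averaging the burst synthesizes the blur trail.
    """
    blurred = np.mean([f.astype(np.float32) for f in frames], axis=0)
    mask = keep_sharp_mask[..., None].astype(np.float32)   # HxWx1
    out = mask * sharp_frame.astype(np.float32) + (1.0 - mask) * blurred
    return np.clip(out, 0, 255).astype(np.uint8)
```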

Outside of capture, Magic Eraser uses “novel algorithms for confidence, segmentation, and inpainting.” The on-device ML models allow this feature to work without an internet connection, or, practically speaking, on photos that haven’t been backed up yet.
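
Google’s segmentation and inpainting models are proprietary, but a classical inpainting call shows the same input/output contract: a photo plus a mask of the distraction goes in, a filled-in photo comes out. The function name and parameters below are illustrative choices, not Google’s method.

```python
import numpy as np
import cv2


def erase_object(photo_bgr: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Remove the masked region and fill it from the surrounding pixels.

    Magic Eraser relies on on-device ML segmentation and inpainting models;
    cv2.inpaint (Telea's method) is only a classical stand-in with the same
    shape of interface: image in, binary mask of the distraction in,
    filled image out.
    """
    mask_u8 = (object_mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(photo_bgr, mask_u8, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)
```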


