Among other features and fixes, Android 8.1 in December opened up the Pixel Visual Core on the Pixel 2 and Pixel 2 XL. Today, Google is enabling its image processing chip for third-party photo apps, starting with Instagram, Snapchat, and WhatsApp.
Google’s first custom SoC, introduced with the Pixel 2, allows for faster, lower-power image processing. It lets third-party photo apps take advantage of the same computational photography and machine learning that power the Pixel 2’s HDR+ functionality. The chip can also be programmed for other uses, though Google has yet to specify what those are.
As Google explains: “Pixel Visual Core is built to do heavy-lifting image processing while using less power, which saves battery. That means we’re able to use that additional computing power to improve the quality of your pictures by running the HDR+ algorithm.

“Like the main Pixel camera, Pixel Visual Core also runs RAISR, which means zoomed-in shots look sharper and more detailed than ever before. Plus, it has Zero Shutter Lag to capture the frame right when you press the shutter, so you can time shots perfectly.”
Developers first gained access with a developer preview in November, but Google opted for a more gradual public release. In an interview, Google said it aimed for a polished rollout, working with partners on image quality, performance, and power. Third-party developers can learn more about taking advantage of the Pixel Visual Core in their apps here.
A sample gallery shows HDR+ in action. Images on the left don’t use the Pixel Visual Core, while images on the right do.
Instagram, Snapchat, and WhatsApp will be the first third-party apps to take advantage of the Pixel 2’s camera capabilities. The functionality is rolling out with the February security patch over the next few days.