If there’s one thing Google does better than anyone else, it’s using software to make camera features better. Now, a beta feature is rolling out to a limited group of YouTube Stories users that lets creators swap out their background images with nothing more than a phone.
This new “video segmentation tool” doesn’t rely on any depth-sensing hardware; instead, it works from the ordinary camera image to determine where the foreground ends and the background begins. Once it does that, it applies whatever background the user picks in real time (at least 30 frames per second), as Google details in a blog post.
As you might expect, Google managed to do this thanks in part to a neural network. That network was trained on thousands of labeled images to learn how to identify things like hair, faces, glasses, and shoulders.
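At a high level, once a network like this produces a per-pixel foreground mask, swapping the background is just an alpha composite: keep the frame where the mask says "person," and blend in the new image everywhere else. Here's a minimal sketch in Python with NumPy; this is not Google's actual implementation, and the function name and array layout are assumptions for illustration:

```python
import numpy as np

def replace_background(frame, mask, background):
    """Composite a camera frame over a new background using a soft
    segmentation mask.

    frame, background: H x W x 3 float arrays with values in [0, 1]
    mask: H x W float array in [0, 1], where 1.0 means foreground (person)
    """
    # Add a trailing axis so the mask broadcasts across the color channels.
    alpha = mask[..., np.newaxis]
    # Weighted blend: foreground pixels come from the frame,
    # background pixels from the replacement image, with soft edges in between.
    return alpha * frame + (1.0 - alpha) * background
```

A soft (fractional) mask matters here: hard 0/1 edges around hair and shoulders would look jagged, while blended edge values hide the seam, which is presumably part of what makes the swap look nearly seamless in motion.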
The final result, at least from what we’ve seen so far, is a nearly seamless background swap that even runs fast enough to be used in a video. Interestingly, Google was able to get it running at 40 frames per second on its own Pixel 2, but at over 100 frames per second on Apple’s iPhone 7.
Right now, this feature is only available to a select few users, so the only way to know if you have it is to open YouTube Stories and check for the option. Of course, that also means you’ll need access to the Stories feature in the first place. Currently, there’s no ETA on when, or even if, we’ll see a wider rollout.