Earlier this month, Google AR & VR announced that ARCore phones could detect depth using just a single lens. Object blending is now beginning to roll out in Google Search AR.
Using a single RGB camera, the new ARCore Depth API leverages depth-from-motion algorithms. A depth map is created by capturing “multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.”
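The core idea behind depth-from-motion can be sketched with the classic parallax relation: as the phone translates, nearby points shift more between frames than distant ones. This is only an illustrative simplification of what ARCore does (the focal length and baseline values below are made-up assumptions, and `depth_from_disparity` is a hypothetical helper, not an ARCore API):

```python
# Depth from parallax: with a known camera translation (baseline) b and
# focal length f in pixels, distance z ≈ b * f / disparity, where
# disparity is how far a point shifts between two frames.
focal_px = 500.0    # assumed focal length, in pixels
baseline_m = 0.02   # assumed phone movement between frames (2 cm)

def depth_from_disparity(disparity_px: float) -> float:
    """Estimate distance (meters) to a point from its pixel shift between frames."""
    return baseline_m * focal_px / disparity_px

print(depth_from_disparity(10.0))  # a point shifting 10 px is ~1.0 m away
print(depth_from_disparity(5.0))   # a smaller shift means a more distant point
```

Repeating this estimate for every pixel as the phone moves is what yields the per-pixel depth map Google describes.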
This in turn allows for occlusion — the “ability for digital objects to accurately appear in front of or behind real-world objects.” It ensures objects are not left floating in space or placed in a physically impossible position.
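Once a depth map exists, occlusion reduces to a per-pixel comparison: draw the virtual object only where it is closer to the camera than the real surface. A minimal sketch, using made-up depth values (ARCore's actual Depth API returns a similar per-pixel distance image):

```python
import numpy as np

# Hypothetical 4x4 depth map of the real scene, in meters per pixel.
real_depth = np.array([
    [2.0, 2.0, 2.0, 2.0],
    [2.0, 1.0, 1.0, 2.0],  # a real object ~1 m away in the center
    [2.0, 1.0, 1.0, 2.0],
    [2.0, 2.0, 2.0, 2.0],
])

# A virtual object placed 1.5 m from the camera, covering every pixel.
virtual_depth = np.full((4, 4), 1.5)

# Occlusion test: the virtual object is visible only where it is
# closer to the camera than the real-world surface at that pixel.
visible = virtual_depth < real_depth
print(visible.astype(int))
# The center pixels come out 0: the real object at 1 m hides the
# virtual one at 1.5 m, while the background at 2 m does not.
```

This is why a virtual cat placed behind a couch correctly disappears behind it rather than floating on top.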
For end users, Google is referring to this AR principle as “object blending.” One of the first places you can try it is Google Search, when viewing 3D objects like animals or the “Santa Search” experience.
When rolled out, a new half-shaded circular icon appears in the top-right corner. An “Object blending On”/“Object blending Off” chip also appears underneath to confirm the state, and Google shows a first-launch dialog:
The object adapts to your environment by blending in with the real world.
At launch, Google shared an example with an AR cat, while the live demo is above. The examples we captured clearly show ARCore being aware of surfaces, though it leverages transparency rather than fully hiding an object. With AR “having a 3D understanding of the world,” experiences will become much more realistic, immersive, and less reality-breaking.
Google Search’s AR object blending began rolling out earlier this month to some of the 200 million ARCore-enabled Android devices. It’s not yet widely available for all users.