Life is about seeing the wonders of the world for yourself, but Google thinks the experience of reliving those moments later through the tiny windows that today’s cameras produce is pretty limited. That’s why the company today at its I/O conference announced an open-source VR camera rig specification called “Jump,” which will make it much easier for creators to capture, process, and share 360-degree virtual reality video for the whole world to enjoy.
Google’s hope is that Jump, combined with Cardboard, will drastically cut costs both for creators, who have traditionally had to purchase expensive custom camera rigs, and for consumers, whose only VR options have been bulky, expensive headsets with very limited collections of supported content.
Jump isn’t just a model for building a VR camera rig, though. It has three parts, all of which Google has built out, covering every base from recording to editing to sharing with the world: a camera rig specification with very precise geometry, an assembler that stitches raw footage from the cameras into VR video, and finally a player through which people can actually experience the 360-degree, immersive video.
The camera rig Google devised comprises 16 cameras mounted in a circular array. The company says you can build the array that holds the cameras yourself from a range of materials (they say plastic, metal, and cardboard all worked fine for them), but what really matters is the rig’s geometry: the size of the rig, the size and placement of each camera, the field of view, the relative overlap between cameras, and a handful of other measurements. Google says that, just as with Cardboard, it will open-source all the math behind building the perfect rig this summer, so anyone with the motivation will be able to build one themselves. For those who don’t feel like doing all that work, Google is working with GoPro to release a VR camera built to the Jump specification, although it didn’t say on stage exactly when that camera will be available for purchase. Below is a picture of what that rig will look like.
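To give a feel for the kind of geometry the spec pins down, here is a minimal sketch of a 16-camera ring. The camera count comes from the announcement; the ring radius and per-camera field of view are invented placeholders, since the actual Jump numbers weren’t public at the time:

```python
import math

# 16 cameras in a circle, per the announcement. Radius and field of view
# below are illustrative assumptions, NOT values from the Jump spec.
NUM_CAMERAS = 16
RIG_RADIUS_CM = 14.0
HORIZONTAL_FOV_DEG = 90.0


def camera_poses(n=NUM_CAMERAS, radius=RIG_RADIUS_CM):
    """Position (x, y in cm) and outward-facing yaw (degrees) of each camera."""
    poses = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        poses.append((x, y, math.degrees(theta)))
    return poses


# Angular spacing between adjacent cameras, and how much their views overlap.
SPACING_DEG = 360.0 / NUM_CAMERAS              # 22.5 degrees apart
OVERLAP_DEG = HORIZONTAL_FOV_DEG - SPACING_DEG  # shared view between neighbors
```

With these assumed numbers, each camera sees well past its neighbors, which is exactly the redundancy a stitching pipeline needs to reconstruct in-between viewpoints.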
The assembler is where Google takes the video feeds from each of the 16 cameras and “uses a combination of computational photography, computer vision, and a whole lot of computers” to recreate the scene from every viewpoint along the circumference of the camera array. From all of these viewpoints, which Google says number in the thousands, it synthesizes a stereoscopic VR video.
Synthesizing those thousands of viewpoints into even a single frame of video, with no seams and no lighting mismatches between the different camera angles, requires a powerful algorithm that corrects for depth, 3D alignment, color, lighting, and more. Google says that, unlike with other solutions, you won’t see borders where cameras come together, and you get beautiful, accurate, depth-corrected stereoscopic video in all directions. The assembler will be available to select creators worldwide starting this summer.
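For intuition, two of the simplest ingredients in any stitcher are cross-fading the overlap between neighboring cameras and evening out their exposure. The toy functions below (invented for illustration, not Google’s assembler, which additionally handles depth and 3D alignment) show both ideas on NumPy image strips:

```python
import numpy as np


def blend_overlap(left_strip, right_strip):
    """Linearly cross-fade two overlapping image strips (H x W x 3 floats).

    The weight ramps from 1 (all left camera) to 0 (all right camera)
    across the width, hiding the hard border where the views meet.
    """
    h, w, _ = left_strip.shape
    alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)
    return alpha * left_strip + (1 - alpha) * right_strip


def match_exposure(strip, reference):
    """Scale a strip's brightness so its mean matches a reference strip,
    a crude stand-in for the lighting correction a real assembler does."""
    gain = reference.mean() / max(strip.mean(), 1e-6)
    return np.clip(strip * gain, 0.0, 1.0)
```

A production pipeline replaces the linear ramp with depth-aware warping and per-pixel flow so that parallax, not just brightness, agrees at the seam, which is the part Google is claiming as its differentiator.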
But the final piece of the puzzle is, once you’ve created a beautiful 360-degree immersive video, how and where will the average person actually be able to watch it?
Starting this summer, YouTube will support Jump video. To enjoy the videos as a fully immersive experience, all you’ll need is the YouTube app, a smartphone, and a Cardboard headset, which you can easily order from Google or make yourself. Those without a Cardboard headset and smartphone will alternatively be able to use simple up-down-left-right navigation buttons inside the player to look around the 3D videos.
If you’re at Google I/O, there’s a Sandbox session where you can experience some content filmed through Jump yourself.