Google has shown one of the most impressive efforts yet to turn traditional photography and video into something more immersive: 3D video that lets the viewer change their perspective and even look around objects in the frame. If you don't have 46 synchronized cameras to spare, though, you probably won't be creating these "light field videos" yourself anytime soon.
The new technique, to be presented at SIGGRAPH, uses footage from dozens of cameras recording simultaneously, forming a kind of giant compound eye. These many perspectives are fused into a single view in which the viewer can shift their point of view, and the scene responds accordingly in real time.
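The article doesn't detail Google's actual rendering pipeline, but the basic intuition behind light field playback can be sketched: given the viewer's head position, blend the frames from the cameras nearest that position, weighted by proximity. A minimal Python sketch under that assumption (the camera layout, placeholder frames, and the `synthesize_view` helper are all hypothetical, not Google's code):

```python
import numpy as np

def synthesize_view(eye_xy, cam_positions, cam_frames, k=4):
    """Blend the k cameras closest to the viewer's eye position,
    weighted by inverse distance (a crude light-field interpolation)."""
    dists = np.linalg.norm(cam_positions - eye_xy, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)  # closer cameras dominate
    weights /= weights.sum()
    view = np.zeros(cam_frames[0].shape, dtype=np.float64)
    for idx, w in zip(nearest, weights):
        view += w * cam_frames[idx]
    return view.astype(cam_frames[0].dtype)

# Example: a hypothetical 46-camera rig on a 1 m x 1 m plane, with tiny
# random frames standing in for the real footage.
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(46, 2))
frames = rng.integers(0, 256, size=(46, 90, 160, 3), dtype=np.uint8)
head_position = np.array([0.52, 0.48])  # viewer leans slightly off-center
frame = synthesize_view(head_position, positions, frames)
```

A real light field renderer blends per ray rather than per whole frame, which is what makes occlusion boundaries and reflections shift correctly as the viewer moves; the sketch above only conveys the camera-weighting idea.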
This image from the researchers' paper shows how the cameras capture and segment the view.
The combination of high-resolution video and freedom of movement gives these light field videos a striking sense of reality. Existing VR video generally relies on fairly ordinary stereoscopic 3D, which doesn't really allow for a change in perspective. Facebook's method of estimating the depth of photos and adding perspective to them is clever, but far more limited, allowing only a small shift in viewpoint.
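Facebook hasn't published its exact method here, but the general technique, depth-image-based rendering, illustrates why a single photo plus a depth map yields only a small change of perspective: each pixel is shifted in proportion to its estimated disparity, and anything the original camera never saw becomes a hole. A rough sketch (the `reproject_with_depth` function and its inputs are illustrative, not Facebook's implementation):

```python
import numpy as np

def reproject_with_depth(image, depth, shift):
    """Shift each pixel horizontally by shift * disparity (1/depth).

    Generic depth-image-based rendering: with only one source image,
    nearer pixels move more than farther ones, but occluded regions
    simply aren't in the data, which caps how far the viewpoint can go.
    """
    h, w, _ = image.shape
    disparity = 1.0 / np.maximum(depth, 1e-6)  # nearer pixels move more
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = int(round(x + shift * disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    # Holes remain wherever no source pixel landed (disocclusions);
    # real systems inpaint these, which limits believable parallax.
    return out, ~filled

# Tiny example: a flat gray image whose center patch is nearer.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
depth = np.ones((4, 4))
depth[1:3, 1:3] = 0.5  # the near patch shifts twice as far
shifted, holes = reproject_with_depth(img, depth, shift=1.0)
```

A 46-camera rig sidesteps the hole problem entirely: whatever one camera couldn't see, another one did.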
In Google’s videos, you can move your head to the side to peek around a corner or see the other side of a particular object. The image is photorealistic, but because it is rendered in 3D, even slight changes in viewpoint are reflected exactly.
And because the rig is so wide, parts of the scene hidden from one perspective are visible from others. Panning and zooming from right to left can reveal entirely new details, reminiscent of Blade Runner's notorious "enhance" scene.
It's probably best experienced in VR, but you can test a static version of the system on the project website, or watch a series of demo light field videos, if you have Chrome with experimental web platform features enabled (there are instructions on the site).
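For reference, that toggle lives behind a Chrome flag; assuming a current desktop build, the relevant entry should be reachable at the address below, though the project site's own instructions are the authoritative source:

```
chrome://flags/#enable-experimental-web-platform-features
```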
The experiment is closely related to the LED egg used for volumetric capture of human performances late last year. Google’s AI division is clearly keen on making media richer like this. How they will get it from a car-sized camera array into a Pixel smartphone, however, is unclear.