Light Field from Smartphone-based Dual Video

Bernd Krolla, Maximilian Diebold, Didier Stricker
LF4CV - Workshop on Light Fields for Computer Vision, held in conjunction with the European Conference on Computer Vision (ECCV 2014), September 7-11, Zürich, Switzerland

Abstract:
In this work, we introduce a light field acquisition approach for standard smartphones. The smartphone is manually translated along a horizontal rail while recording synchronized video with its front and rear cameras. The front camera captures a control pattern, mounted parallel to the direction of translation, which is used to determine the smartphone's current position. In a postprocessing step, this position information serves to identify an equally spaced subset of the frames recorded by the rear camera, which captures the actual scene. From this data, we assemble a light field representation of the scene. For subsequent disparity estimation, we apply a structure tensor approach to the epipolar plane images. We evaluate our method by comparing the light fields obtained by manual translation of the smartphone against those recorded with a constantly moving translation stage.
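The abstract does not detail how the equally spaced subset of rear-camera frames is chosen. A minimal sketch of one plausible selection rule, assuming per-frame positions along the rail have already been recovered from the front-camera control pattern (function name, parameters, and nearest-neighbour strategy are our own assumptions, not the paper's method):

```python
import numpy as np

def select_equally_spaced(positions, n_views):
    """Pick the indices of the recorded frames whose rail positions best
    match n_views equally spaced target positions between the first and
    last recovered position. Illustrative sketch only; the paper's
    actual selection procedure may differ."""
    positions = np.asarray(positions, dtype=np.float64)
    # Equally spaced target positions along the traversed rail segment
    targets = np.linspace(positions.min(), positions.max(), n_views)
    # For each target, take the recorded frame closest to it
    return [int(np.argmin(np.abs(positions - t))) for t in targets]
```

Because the smartphone is moved by hand, the recorded positions are irregularly spaced; resampling them against a uniform target grid is what makes the resulting stack usable as a regularly sampled light field.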
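The disparity estimation step follows the well-known structure-tensor analysis of epipolar plane images (EPIs): scene points trace lines through the EPI, and the line slope du/ds equals the disparity. A minimal sketch under the assumption of a horizontal-only camera translation (function name, smoothing parameters, and axis convention are our own choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_disparity(epi, sigma_inner=0.8, sigma_outer=2.0):
    """Estimate per-pixel disparity from an epipolar plane image via the
    2-D structure tensor. Axis 0 is the view index s, axis 1 the image
    column u; a scene point traces the line u = u0 + d*s, so its
    orientation in the (s, u) plane is phi = -arctan(d)."""
    epi = gaussian_filter(epi.astype(np.float64), sigma_inner)
    gs = sobel(epi, axis=0)  # derivative along the view axis s
    gu = sobel(epi, axis=1)  # derivative along the spatial axis u
    # Smoothed structure tensor components
    Jss = gaussian_filter(gs * gs, sigma_outer)
    Juu = gaussian_filter(gu * gu, sigma_outer)
    Jsu = gaussian_filter(gs * gu, sigma_outer)
    # Dominant local orientation of the EPI line pattern
    phi = 0.5 * np.arctan2(2.0 * Jsu, Juu - Jss)
    disparity = -np.tan(phi)  # slope du/ds under the axis convention above
    # Coherence in [0, 1]: reliability of the orientation estimate
    coherence = np.sqrt((Juu - Jss) ** 2 + 4.0 * Jsu ** 2) / (Juu + Jss + 1e-12)
    return disparity, coherence
```

The coherence measure is useful in practice to mask out homogeneous regions and occlusion boundaries, where the orientation (and thus the disparity) is poorly defined.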
Keywords:
computer vision, light fields, video processing
