
Abstract

In this work, we introduce a light field acquisition approach for standard smartphones. The smartphone is manually translated along a horizontal rail while recording synchronized video with the front and rear cameras. The front camera captures a control pattern, mounted parallel to the direction of translation, to determine the smartphone's current position. This information is used in a post-processing step to identify an equally spaced subset of the frames recorded by the rear camera, which captures the actual scene. From this data we assemble a light field representation of the scene. For subsequent disparity estimation, we apply a structure tensor approach to the epipolar plane images.
We evaluate our method by comparing the light fields resulting from manual translation of the smartphone against those recorded with a constantly moving translation stage.
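To make the frame selection step concrete, the following is a minimal sketch (not the authors' implementation): given per-frame positions estimated from the front-camera view of the control pattern, it picks the rear-camera frames whose positions best approximate an equally spaced sampling of the translation range. All names, the number of views, and the translation range are illustrative assumptions.

    import numpy as np

    def select_equally_spaced_frames(positions, num_views):
        """Return indices of the frames whose estimated positions lie
        closest to num_views equally spaced targets along the rail."""
        positions = np.asarray(positions, dtype=float)
        targets = np.linspace(positions.min(), positions.max(), num_views)
        # Nearest recorded frame for every target position; duplicates may
        # occur if the manual translation was too fast in some segment.
        return np.abs(positions[None, :] - targets[:, None]).argmin(axis=1)

    # Hypothetical usage: ~900 frames recorded along a 100 mm translation.
    frame_positions = np.sort(np.random.uniform(0.0, 100.0, size=900))
    view_indices = select_equally_spaced_frames(frame_positions, num_views=64)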
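For the disparity step, the structure tensor approach exploits the fact that the slope of the lines in an epipolar plane image (EPI) equals the disparity. Below is a minimal sketch under the assumption of a grayscale EPI stored as a 2D array epi[s, x] (view index s on the vertical axis, image column x on the horizontal axis); the filter choices and smoothing scales are illustrative and not the paper's exact settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def epi_disparity(epi, sigma_grad=0.8, sigma_tensor=1.6):
        """Per-pixel disparity (pixels of shift per view) and a coherence
        measure from the structure tensor of a single EPI."""
        epi = epi.astype(float)
        smoothed = gaussian_filter(epi, sigma_grad)
        ix = sobel(smoothed, axis=1)   # derivative along x (image column)
        i_s = sobel(smoothed, axis=0)  # derivative along s (view index)
        # Smoothed structure tensor components.
        jxx = gaussian_filter(ix * ix, sigma_tensor)
        jss = gaussian_filter(i_s * i_s, sigma_tensor)
        jxs = gaussian_filter(ix * i_s, sigma_tensor)
        # The eigenvector of the smallest eigenvalue points along the EPI
        # lines; its slope dx/ds is the disparity estimate.
        lam_min = 0.5 * (jxx + jss) - np.sqrt(0.25 * (jxx - jss) ** 2 + jxs ** 2)
        disparity = jxs / (lam_min - jxx - 1e-12)
        # Coherence in [0, 1]: low values mark unreliable (textureless) pixels.
        coherence = np.sqrt((jxx - jss) ** 2 + 4 * jxs ** 2) / (jxx + jss + 1e-12)
        return disparity, coherence

    # Example: stack the selected rear-camera views into light_field[s, y, x]
    # and slice one image row y0 to obtain an EPI:
    #   epi = light_field[:, y0, :]
    #   disparity, coherence = epi_disparity(epi)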

Keywords

computer vision, light fields, video processing

Reference

@inproceedings{krolla2014lightfield,
    title={Light Field from Smartphone-based Dual Video},
    author={Krolla, Bernd and Diebold, Maximilian and Stricker, Didier},
    booktitle={ECCV Workshop on Light Fields for Computer Vision (LF4CV)},
    month={September},
    year={2014},
    keywords={computer vision, light fields, video processing},
    url={http://av.dfki.de/~krolla/pub_krolla2014lightfield.html}
}

Acknowledgements

This work was funded by Sony Deutschland, Stuttgart Technology Center, EuTEC, and is a result of the research cooperation between STC EuTEC, the Heidelberg Collaboratory for Image Processing (HCI), and the German Research Center for Artificial Intelligence (DFKI). We would like to thank Thimo Emmerich, Yalcin Incesu, and Oliver Erdler from STC EuTEC for their feedback on this work and for many fruitful discussions.