NRSfM-Flow: Recovering Non-Rigid Scene Flow from Monocular Image Sequences

Vladislav Golyanik, Aman Shankar Mathur, Didier Stricker
Proceedings of the British Machine Vision Conference (BMVC-16), September 19-22, 2016, York, United Kingdom

Abstract:
Scene flow recovery from monocular image sequences is an emerging field in computer vision. While existing Monocular Scene Flow (MSF) methods extend the classical optical flow formulation to estimate depths/disparities and 3D motion, we propose a framework based on the Non-Rigid Structure from Motion (NRSfM) technique, called NRSfM-Flow. To this end, both problems are formulated in the continuous domain and the relation between them is established. To cope with real data, we propose two preprocessing steps for image sequences, redundancy removal and translation resolution, which increase the quality of the reconstructions and speed up computations. In contrast to existing MSF methods that can cope with non-rigid deformations, our solution makes no strong assumptions about the scene, such as known camera motion or constant camera velocity, and it can handle occlusions. NRSfM-Flow is qualitatively evaluated on challenging real-world data. The experiments provide evidence that the proposed approach achieves high accuracy and outperforms the state of the art in its ability to reconstruct MSF with less prior knowledge about the scene.
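The abstract does not detail the redundancy-removal step; the sketch below shows one plausible reading of it, in which frames that contribute negligible motion relative to the last kept frame are discarded before reconstruction. The function name, the use of Farneback optical flow, and the threshold value are illustrative assumptions, not the paper's actual procedure.

```python
import cv2
import numpy as np

def remove_redundant_frames(frames, flow_threshold=0.5):
    """Drop frames whose dense optical flow w.r.t. the last kept frame is
    negligible, i.e. frames adding (almost) no new motion information.

    frames: list of single-channel uint8 images of identical size.
    flow_threshold: mean flow magnitude (pixels) below which a frame is
    treated as redundant; an assumed tuning parameter.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        # Dense flow from the last kept frame to the candidate frame.
        flow = cv2.calcOpticalFlowFarneback(
            kept[-1], frame, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mean_mag = np.linalg.norm(flow, axis=2).mean()
        if mean_mag >= flow_threshold:  # enough motion: keep the frame
            kept.append(frame)
    return kept
```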
Keywords:
monocular scene flow, non-rigid structure from motion, optical flow, 3D reconstruction, redundancy removal, translation resolution