A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions
René Schuster, Christian Unger, Didier Stricker
IEEE Winter Conference on Applications of Computer Vision (WACV-2021), January 5-9, 2021, Waikoloa, HI, United States. IEEE, 2021.
- Abstract:
- Motion estimation is one of the core challenges in computer vision. With traditional dual-frame approaches, occlusions and out-of-view motions are a limiting factor, especially in the context of environmental perception for vehicles, where both objects and the ego-vehicle undergo large motions. Our work proposes a novel data-driven approach for temporal fusion of scene flow estimates in a multi-frame setup to overcome the issue of occlusions. Contrary to most previous methods, we do not rely on a constant motion model, but instead learn a generic temporal relation of motion from data. In a second step, a neural network combines bi-directional scene flow estimates from a common reference frame, yielding a refined estimate and, as a natural byproduct, occlusion masks. This way, our approach provides a fast multi-frame extension for a variety of scene flow estimators, which outperforms the underlying dual-frame approaches.
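
To make the two-step design concrete, the sketch below shows how it could look in PyTorch. This is an illustrative approximation, not the paper's implementation: the module names (`LearnableMotionModel`, `FusionNet`), layer sizes, and the residual fusion rule are hypothetical, and the actual architecture, inputs (e.g., how the bi-directional estimates are warped to the common reference frame), and training losses are those described in the paper.

```python
import torch
import torch.nn as nn


class LearnableMotionModel(nn.Module):
    """Regresses the expected scene flow at time t from the two previous
    estimates, replacing a hand-crafted constant-motion assumption.
    (Hypothetical architecture for illustration only.)"""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, flow_tm1: torch.Tensor, flow_tm2: torch.Tensor) -> torch.Tensor:
        # Stack the two past dense flow maps and predict the current motion.
        return self.net(torch.cat([flow_tm1, flow_tm2], dim=1))


class FusionNet(nn.Module):
    """Combines the temporally predicted flow with the current dual-frame
    estimate, yielding a refined flow and a soft occlusion mask."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels + 1, 3, padding=1),  # residual flow + occlusion logit
        )

    def forward(self, flow_pred: torch.Tensor, flow_meas: torch.Tensor):
        out = self.net(torch.cat([flow_pred, flow_meas], dim=1))
        occlusion = torch.sigmoid(out[:, -1:])  # ~1 where the measurement is occluded
        # In occluded regions fall back on the temporal prediction, elsewhere
        # trust the dual-frame measurement; then add a learned residual correction.
        refined = occlusion * flow_pred + (1.0 - occlusion) * flow_meas + out[:, :-1]
        return refined, occlusion


if __name__ == "__main__":
    b, h, w = 1, 128, 256
    motion_model, fusion = LearnableMotionModel(), FusionNet()
    flow_tm2 = torch.randn(b, 3, h, w)  # scene flow estimate at t-2
    flow_tm1 = torch.randn(b, 3, h, w)  # scene flow estimate at t-1
    flow_t = torch.randn(b, 3, h, w)    # dual-frame estimate at t
    refined, occ = fusion(motion_model(flow_tm1, flow_tm2), flow_t)
    print(refined.shape, occ.shape)  # (1, 3, 128, 256), (1, 1, 128, 256)
```

Because the fusion operates purely on dense flow maps, a module like this could in principle be attached to any dual-frame scene flow estimator, which matches the abstract's claim that the method is a fast multi-frame extension for a variety of estimators.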