Learned Fusion: 3D Object Detection using Calibration-Free Transformer Feature Fusion

David Michael Fürst, Rahul Jakkamsetty, René Schuster, Didier Stricker
In: Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods. International Conference on Pattern Recognition Applications and Methods (ICPRAM-2024), February 24-26, Rome, Italy, SCITEPRESS, 2024.

Abstract:
The state of the art in 3D object detection using sensor fusion relies heavily on calibration quality, which is difficult to maintain in large-scale deployments outside a lab environment. We present the first calibration-free approach for 3D object detection, eliminating the need for complex and costly calibration procedures. Our approach uses transformers to map features between multiple views of different sensors at multiple abstraction levels. In an extensive evaluation for object detection, we not only show that our approach outperforms single-modal setups by 14.1% in BEV mAP, but also that the transformer indeed learns the mapping. By showing that calibration is not necessary for sensor fusion, we hope to motivate other researchers to pursue calibration-free fusion. Additionally, the resulting approaches exhibit substantial resilience against changes in rotation and translation.
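The core mechanism described in the abstract, mapping features between sensor views with a transformer rather than a calibration matrix, can be illustrated with cross-attention: query features from one view attend to key/value features from another, so the correspondence between views is learned from data instead of given by extrinsics. The sketch below is a minimal, hypothetical illustration in NumPy (the shapes, names, and the single-head form are assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    queries: (Nq, d) features from one sensor view (e.g. a LiDAR BEV grid)
    keys, values: (Nk, d) features from another view (e.g. a camera image)
    Returns: (Nq, d) fused features; the attention weights play the role
    a projection via calibration would otherwise play.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Nq, Nk) cross-view affinities
    weights = softmax(scores, axis=-1)       # each query's soft correspondence
    return weights @ values                  # (Nq, d) aggregated features

# Toy usage: 4 BEV cells attend to 6 image tokens of dimension 8.
fused = cross_attention(np.random.randn(4, 8),
                        np.random.randn(6, 8),
                        np.random.randn(6, 8))
```

In a learned-fusion setting, queries, keys, and values would come from linear projections of the two feature maps at each abstraction level, and the projections are trained end-to-end with the detection loss, which is how the mapping between views can be learned without calibration.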