3rd place in Scan-to-BIM challenge (CV4AEC Workshop, CVPR 2023) for HumanTech project team

The team of the EU Horizon Project HumanTech, consisting of Mahdi Chamseddine and Dr. Jason Rambach from DFKI Augmented Vision as well as Fabian Kaufmann from RPTU Kaiserslautern, Department of Civil Engineering, received the 3rd place prize in the Scan-to-BIM challenge of the CV4AEC (Computer Vision in the Built Environment) Workshop at the CVPR 2023 conference.

On June 18th, the team presented their solution and results as part of the workshop program. Scan-to-BIM solutions are of great importance for the construction community as they automate the generation of as-built models of buildings from 3D scans and can be used for quality monitoring, robotic task planning, and XR visualization, among other applications.
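A core building block of most Scan-to-BIM pipelines is extracting planar structure (walls, floors, ceilings) from raw 3D scans. As a toy illustration only, not the team's actual challenge method, a minimal RANSAC plane fit over a synthetic point cloud might look like:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane (e.g. a wall) to a 3D point cloud.

    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the plane they span.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic "scan": a noisy vertical wall (x = 0) plus scattered clutter.
rng = np.random.default_rng(0)
wall = np.column_stack([np.zeros(500), rng.uniform(0, 5, 500), rng.uniform(0, 3, 500)])
wall[:, 0] += rng.normal(0, 0.005, 500)      # sensor noise
clutter = rng.uniform(-1, 5, size=(100, 3))
(normal, d), inliers = fit_plane_ransac(np.vstack([wall, clutter]))
```

Real Scan-to-BIM systems combine many such geometric and learned components; this sketch only shows why automated as-built modeling from scans is tractable at all.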

HumanTech project: https://humantech-horizon.eu/

CV4AEC Workshop page: https://cv4aec.github.io/

Contact: Dr. Jason Rambach, Mahdi Chamseddine

Special Session at the IEEE ARSO 2023 Conference: Human Factors in Construction Robotics

Dr. Jason Rambach, coordinator of the EU Horizon Project HumanTech, co-organized a special session on “Human Factors in Construction Robotics” at the IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO 2023) in Berlin, Germany (June 5th–7th). The special session was organized by Jason Rambach, Gabor Sziebig, Research Manager at SINTEF, and Mihoko Niitsuma, Professor at Chuo University.

The program of the special session included the following talks:

  • Serena Ivaldi (INRIA) – Teleoperating a robot for removing asbestos tiles on roofs: Insights from a pilot study
  • Jason Rambach (DFKI) – Machine perception for human-robot handover scenarios in construction
  • Patricia Helen Rosen (BAUA) – Design recommendations for construction robots – a human-centred perspective
  • Dimitrios Giakoumis (CERTH ITI) – Designing human-robot interaction interfaces for shotcrete construction robots; the RobetArme project case

HumanTech project: https://humantech-horizon.eu/

Contact: Dr. Jason Rambach

Workshop on AI and Robotics in Construction at ERF 2023

Dr. Jason Rambach, coordinator of the EU Horizon Project HumanTech, co-organized a workshop on “AI and Robotics in Construction” at the European Robotics Forum 2023 in Odense, Denmark (March 14th to 16th, 2023) in cooperation with the construction robotics projects Beeyonders and RobetArme.

From the HumanTech project, Jason Rambach presented an overview of the project objectives as well as insights into the results achieved by Month 9 of the project. Patrick Roth from the partner Implenia presented the construction industry's perspective on, and the challenges of, using robotics and AI on construction sites, while the project partners Dr. Bharath Sankaran (Naska.AI) and Dr. Gabor Sziebig (SINTEF) participated in a panel session discussing the future of robotics in construction.

Workshop schedule: https://erf2023.sdu.dk/timetable/event/ai-and-robotics-in-construction/

HumanTech project: https://humantech-horizon.eu/                                                              

Contact: Dr. Jason Rambach

Dr. Jason Rambach giving his presentation.

Article in IEEE Robotics and Automation Letters (RA-L) journal

We are happy to announce that our article “OPA-3D: Occlusion-Aware Pixel-Wise Aggregation for Monocular 3D Object Detection” was published in the prestigious IEEE Robotics and Automation Letters (RA-L) journal. The work is a collaboration of DFKI with TU Munich and Google. The article is openly accessible at: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10021668

Abstract: Monocular 3D object detection has recently made a significant leap forward thanks to the use of pre-trained depth estimators for pseudo-LiDAR recovery. Yet, such two-stage methods typically suffer from overfitting and are incapable of explicitly encapsulating the geometric relation between depth and object bounding box. To overcome this limitation, we instead propose to jointly estimate dense scene depth with depth-bounding box residuals and object bounding boxes, allowing a two-stream detection of 3D objects that harnesses both geometry and context information. Thereby, the geometry stream combines visible depth and depth-bounding box residuals to recover the object bounding box via explicit occlusion-aware optimization. In addition, a bounding box based geometry projection scheme is employed in an effort to enhance distance perception. The second stream, named as the Context Stream, directly regresses 3D object location and size. This novel two-stream representation enables us to enforce cross-stream consistency terms, which aligns the outputs of both streams, and further improves the overall performance. Extensive experiments on the public benchmark demonstrate that OPA-3D outperforms state-of-the-art methods on the main Car category, whilst keeping a real-time inference speed.
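The cross-stream consistency idea from the abstract — aligning the geometry stream's and context stream's box predictions — can be caricatured in a few lines. The function name, box parameterization, and L1 form below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def consistency_loss(geometry_boxes, context_boxes):
    """L1 consistency between the two streams' 3D box predictions.

    Each array has shape (N, 7): (x, y, z, w, h, l, yaw). Penalizing
    disagreement pushes both streams toward the same 3D boxes, which is
    the alignment role the consistency terms play in the abstract.
    """
    return np.abs(geometry_boxes - context_boxes).mean()

# Toy example: two streams that almost agree on a single predicted car.
geo = np.array([[1.0, 0.5, 10.0, 1.8, 1.6, 4.2, 0.10]])
ctx = np.array([[1.1, 0.5, 10.2, 1.8, 1.5, 4.3, 0.12]])
loss = consistency_loss(geo, ctx)
```

In training, such a term would be added to the detection losses of both streams so gradients flow through each; the weighting and exact formulation are design choices of the paper not reproduced here.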

Yongzhi Su, Yan Di, Guangyao Zhai, Fabian Manhardt, Jason Rambach, Benjamin Busam, Didier Stricker and Federico Tombari, “OPA-3D: Occlusion-Aware Pixel-Wise Aggregation for Monocular 3D Object Detection.” IEEE Robotics and Automation Letters (2023).

Contacts: Yongzhi Su, Dr. Jason Rambach

Radar Driving Activity Dataset (RaDA) Released

DFKI Augmented Vision recently released the first publicly available UWB Radar Driving Activity Dataset (RaDA), consisting of over 10k data samples from 10 different participants annotated with 6 driving activities. The dataset was recorded in the DFKI driving simulator environment. For more information and to download the dataset, please check the project website: https://projects.dfki.uni-kl.de/rada/
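With per-participant annotations, activity-recognition datasets like RaDA are commonly evaluated subject-independently, so a classifier cannot score well by memorizing person-specific motion patterns. Whether the accompanying Sensors paper uses exactly this protocol is not stated here; a minimal leave-one-subject-out split (participant IDs hypothetical) might look like:

```python
from collections import defaultdict

def leave_one_subject_out(samples):
    """Yield (held_out_subject, train, test) splits from (subject, sample) pairs.

    Each split holds out all samples of one participant for testing and
    trains on everyone else's samples.
    """
    by_subject = defaultdict(list)
    for subject, sample in samples:
        by_subject[subject].append(sample)
    for held_out in sorted(by_subject):
        test = by_subject[held_out]
        train = [s for subj, group in sorted(by_subject.items())
                 if subj != held_out for s in group]
        yield held_out, train, test

# Toy data: 3 participants with a few radar samples each.
data = [("p1", 0), ("p1", 1), ("p2", 2), ("p3", 3), ("p3", 4)]
splits = list(leave_one_subject_out(data))
```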

The dataset release is accompanied by an article publication at the Sensors journal:

Brishtel, Iuliia, Stephan Krauss, Mahdi Chamseddine, Jason Raphael Rambach, and Didier Stricker. “Driving Activity Recognition Using UWB Radar and Deep Neural Networks.” Sensors 23, no. 2 (2023): 818.

Contacts: Dr. Jason Rambach, Iuliia Brishtel

VIZTA Project successfully concluded after 42 months

The Augmented Vision department of DFKI participated in the VIZTA project, coordinated by ST Microelectronics, which aimed at developing innovative technologies in the field of optical sensors and laser sources for short- to long-range 3D imaging, and at demonstrating their value in several key applications including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0.

The final project review was successfully completed in Grenoble, France on November 17th-18th, 2022. The schedule included presentations on the achievements of all partners as well as live demonstrators of the developed technologies. DFKI presented its smart-building person detection demonstrator based on a top-down view from a Time-of-Flight (ToF) camera, developed in cooperation with the project partner IEE. A second demonstrator, showing an in-cabin monitoring system based on a wide-field-of-view camera installed in DFKI's lab, was presented in a video.

During VIZTA, several key results were obtained at DFKI on the topics of in-car and smart building monitoring (see Figure 1).

Figure 1: In-car person and object detection (left), and top-down person detection and tracking for smart building applications (right).


Contact: Dr. Jason Rambach, Dr. Bruno Mirbach

DFKI Augmented Vision Researchers win two awards in Object Pose Estimation challenge (BOP Challenge, ECCV 2022)

DFKI Augmented Vision researchers Yongzhi Su, Praveen Nathan and Jason Rambach received two 1st place awards in the prestigious BOP Challenge 2022, in the categories “Overall Best Segmentation Method” and “The Best BlenderProc-Trained Segmentation Method”.

The BOP benchmark and challenge address the problem of 6-degree-of-freedom object pose estimation, which is of great importance for many applications such as robot grasping or augmented reality. This year, the BOP challenge was held within the “Recovering 6D Object Pose” Workshop at the European Conference on Computer Vision (ECCV) in Tel Aviv, Israel (https://eccv2022.ecva.net/). A total award of $4000, donated by Meta Reality Labs and Niantic, was distributed among the winning teams of the BOP challenge.

The awards were received by Dr. Jason Rambach on behalf of the DFKI team, followed by a short presentation of the method. The winning method was based on the CVPR 2022 paper “ZebraPose”:

ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation
Yongzhi Su, Mahdi Saleh, Torben Fetzer, Jason Raphael Rambach, Nassir Navab, Benjamin Busam, Didier Stricker, Federico Tombari

The winning approach was developed by a team led by DFKI AV, with contributing researchers from TU Munich and Zhejiang University.
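The “coarse-to-fine surface encoding” in the ZebraPose title refers to giving each point on the object surface a hierarchical binary code, so that each predicted bit narrows the 2D-3D correspondence from coarse surface regions down to fine ones. The sketch below mimics that spirit with simple recursive balanced splits; it is a toy stand-in, not the paper's actual encoding procedure:

```python
import numpy as np

def hierarchical_binary_codes(vertices, n_bits=4):
    """Assign each vertex a coarse-to-fine binary code.

    Each level splits every current vertex group into two balanced halves
    along the axis of largest spread and appends one bit, so earlier bits
    localize coarsely and later bits refine within each region.
    """
    codes = np.zeros((len(vertices), n_bits), dtype=np.uint8)
    groups = [np.arange(len(vertices))]
    for bit in range(n_bits):
        next_groups = []
        for idx in groups:
            pts = vertices[idx]
            axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))
            order = idx[np.argsort(pts[:, axis])]
            lo, hi = order[: len(order) // 2], order[len(order) // 2:]
            codes[hi, bit] = 1          # "upper" half gets a 1 at this level
            next_groups += [lo, hi]
        groups = next_groups
    return codes

# 16 vertices along a line -> 4 bits give every vertex a unique code.
verts = np.column_stack([np.arange(16.0), np.zeros(16), np.zeros(16)])
codes = hierarchical_binary_codes(verts, n_bits=4)
```

In the actual method, a network predicts such per-pixel codes on the object's visible surface, and the decoded correspondences feed a PnP-style pose solver.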

Contact: Yongzhi Su, Dr. Jason Rambach

Dr. Jason Rambach with the award

Kick-Off for EU Project “HumanTech”

Our Augmented Vision department is the coordinator of the new large European project “HumanTech”, led by Dr. Jason Rambach, head of the Spatial Sensing and Machine Perception team (Augmented Reality/Augmented Vision department, Prof. Didier Stricker) at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern. The Kick-Off meeting was held on July 20th, 2022, at DFKI in Kaiserslautern. Please read the whole article here: Artificial intelligence for a safe and sustainable construction industry (dfki.de)

Augmented Vision @CVPR 2022

DFKI Augmented Vision had a strong presence in the recent CVPR 2022 Conference held on June 19th-23rd, 2022, in New Orleans, USA. The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) is the premier annual computer vision event internationally. Homepage: https://cvpr2022.thecvf.com/

Overall, three publications were presented:

1. ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation
Yongzhi Su, Mahdi Saleh, Torben Fetzer, Jason Raphael Rambach, Nassir Navab, Benjamin Busam, Didier Stricker, Federico Tombari


2. SOMSI: Spherical Novel View Synthesis with Soft Occlusion Multi-Sphere Images
Tewodros A Habtegebrial, Christiano Gava, Marcel Rogge, Didier Stricker, Varun Jampani


3. Unsupervised Anomaly Detection from Time-of-Flight Depth Images
Pascal Schneider, Jason Rambach, Bruno Mirbach, Didier Stricker