On June 18th, the team presented their solution and results as part of the workshop program. Scan-to-BIM solutions are of great importance for the construction community, as they automate the generation of as-built models of buildings from 3D scans and can be used for quality monitoring, robotic task planning and XR visualization, among other applications.
We are excited to share that the Augmented Vision group got two papers accepted at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024), the premier international forum for research in image and video-based face, gesture, and body movement recognition. FG 2024 took place in Istanbul, Turkey, from the 27th to the 31st of May.
The 4th IEEE Workshop on AI Hardware: Test, Reliability and Security (AI-TREATS), held on May 23rd and 24th, 2024, was a great success!
The event, co-located with the 29th IEEE European Test Symposium, explored recent advances in edge AI from the perspectives of security, trust and testing. The workshop gathered many researchers, with 15 presented papers and a keynote by Prof. Yanjing Li (University of Chicago, USA).
On the opening day, Alain Pagani participated in a panel discussion about Networks of Excellence on Edge AI in Europe. Together with Matteo Sonza Reorda (Politecnico di Torino, Italy, FAIR project) and Ovidiu Vermesan (SINTEF, Norway, EdgeAI project), he presented dAIEDGE and the main aspects of the Network of Excellence on Edge AI.
The workshop, chaired by Annachiara Ruospo (Politecnico di Torino, Italy) and Haralampos Stratigopoulos (Sorbonne Université, France), was co-sponsored and co-organised by FAIR, dAIEDGE and EdgeAI.
Dr. Jason Rambach, coordinator of the EU Horizon project HumanTech, organized the 2nd workshop on “AI and Robotics in Construction” at the European Robotics Forum 2024 in Rimini, Italy (March 13-15), in cooperation with the construction robotics projects Beeyonders and RobetArme and the Tech4Construction cluster.
The workshop included presentations on the current state of the 3 organizing projects (HumanTech, Beeyonders and RobetArme) by their coordinators, followed by technical presentations from project partners (KU Leuven, SINTEF, ITAINNOVA) on topics such as robotic vision and navigation and human-robot interaction for construction. It closed with a presentation of user evaluation insights by the German Federal Institute for Occupational Safety and Health (BAUA). The presentations were followed by a very interesting round-table discussion on the challenges of robotics projects in construction.
We are proud to announce that researchers of the Augmented Vision department will present 6 papers at the upcoming CVPR conference, taking place from Monday, June 17th through Friday, June 21st, 2024, at the Seattle Convention Center in Seattle, USA.
The CVPR conference is the premier international conference in computer vision and pattern recognition.
HiPose: Hierarchical Binary Surface Encoding and Correspondence Pruning for RGB-D 6DoF Object Pose Estimation
Authors: Yongliang Lin, Yongzhi Su, Praveen Nathan, Sandeep Inuganti, Yan Di, Martin Sundermeyer, Fabian Manhardt, Didier Stricker, Jason Rambach, Yu Zhang

SG-PGM: Partial Graph Matching Network with Semantic Geometric Fusion for 3D Scene Graph Alignment and Its Downstream Tasks
Authors: Yaxu Xie, Alain Pagani, Didier Stricker

CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention
Authors: Mohammad Sadil Khan, Elona Dupont, Sk Aziz Ali, Kseniya Cherenkova, Anis Kacem, Djamila Aouada

EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams
Authors: Christen Millerdurai, Hiroyasu Akada, Jian Wang, Diogo Luvizon, Christian Theobalt, Vladislav Golyanik
Congratulations to the authors for this great achievement!
The paper introduces a simplified and improved extrinsic calibration approach for camera-radar systems that requires no external sensing and adds optimization constraints for increased robustness.
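For readers unfamiliar with the extrinsic calibration problem, the sketch below illustrates its core task: estimating the rigid transform between corresponding 3D detections from two sensors using the closed-form Kabsch solution. This is a generic textbook baseline, not the method proposed in the paper, and the point correspondences are synthetic.

```python
# Minimal sketch (not the paper's method): extrinsic calibration as a rigid-
# transform fit between corresponding 3D detections from radar and camera,
# solved in closed form with the Kabsch algorithm. Data is synthetic.
import numpy as np

def fit_rigid_transform(radar_pts: np.ndarray, cam_pts: np.ndarray):
    """Return rotation R and translation t mapping radar_pts onto cam_pts (N x 3 each)."""
    c_r = radar_pts.mean(axis=0)
    c_c = cam_pts.mean(axis=0)
    H = (radar_pts - c_r).T @ (cam_pts - c_c)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_c - R @ c_r
    return R, t

# Synthetic check: recover a known extrinsic from noiseless correspondences.
rng = np.random.default_rng(0)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.2, 0.0, 1.5])
radar = rng.uniform(-5, 5, size=(20, 3))
camera = radar @ R_true.T + t_true
R_est, t_est = fit_rigid_transform(radar, camera)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

In practice, such a closed-form fit typically serves as an initialization, with additional constraints handled by a subsequent non-linear optimization.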
DFKI Augmented Vision presented 3 other papers at ICPRAM 2024.
The HumanTech project, coordinated by Dr. Jason Rambach of DFKI Augmented Vision, has reached an important milestone: a highly successful Mid-Term Review Meeting!
From 22 to 24 January 2024, representatives from the 21 partner organisations that make up the consortium gathered in Zurich, Switzerland, hosted by the partner Implenia, to comprehensively review the project's progress since its start in June 2022. The process helped the consortium align priorities to further its mission: to achieve breakthroughs in cutting-edge technologies, contributing to a safer, more efficient and digitized European construction industry. The review meeting consisted of a construction site visit, presentations of the project progress for all work packages, and an exciting demo event with live HumanTech technologies.
We are happy to announce that the Augmented Vision group presented 2 papers at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), which took place from January 4th to 8th, 2024, in Waikoloa, Hawaii.
The BERTHA project receives EU funding to develop a Driver Behavioral Model that will make autonomous vehicles safer and more human-like
The project, funded by the European Union with Grant Agreement nº 101076360, will receive 7.9 M€ under the umbrella of the Horizon Europe programme.
The BERTHA project will develop a scalable and probabilistic Driver Behavioral Model which will be key to achieving safer and more human-like connected autonomous vehicles, thus increasing their social acceptance. The solution will be available for academia and industry through an open-source data HUB and in the CARLA autonomous driving simulator.
The project’s consortium gathered on 22-24 November for the kick-off meeting, hosted by the coordinator Instituto de Biomecánica de Valencia at its facilities in Spain.
The Horizon Europe project BERTHA kicked off from November 22nd to 24th in Valencia, Spain. The project has been granted €7,981,799.50 from the European Commission to develop a Driver Behavioral Model (DBM) that can be used in connected autonomous vehicles to make them safer and more human-like. The resulting DBM will be available on an open-source HUB to validate its feasibility, and it will also be implemented in CARLA, an open-source autonomous driving simulator.
The project celebrated its kick-off meeting from November 22nd to 24th, hosted by the coordinator Instituto de Biomecánica de Valencia (IBV) at its offices in Valencia, Spain. During the event, all partners met each other, shared their technical backgrounds and presented their expected contributions to the project.
The need for a Driver Behavioral Model in the CCAM industry
The Connected, Cooperative, and Automated Mobility (CCAM) industry presents important opportunities for the European Union. However, its deployment requires new tools that enable the design and analysis of autonomous vehicle components, together with their digital validation, and a common language between tier suppliers and OEMs.
One of the shortcomings is the lack of a validated, scientifically grounded Driver Behavioral Model (DBM) covering the aspects of human driving performance, which would make it possible to understand and test the interaction of connected autonomous vehicles (CAVs) with other cars in a safer and more predictable way from a human perspective.
Such a Driver Behavioral Model could enable digital validation of autonomous vehicle components and, if incorporated into ECU software, could generate a more human-like response from such vehicles, thus increasing their acceptance.
The contributions of BERTHA to the autonomous vehicles industry and research
To address this need in the CCAM industry, the BERTHA project will develop a scalable and probabilistic Driver Behavioral Model (DBM), largely based on Bayesian Belief Networks, which will be key to achieving safer and more human-like autonomous vehicles.
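To make the idea of a probabilistic driver model more concrete, here is a minimal sketch (not BERTHA's actual model) of a toy discrete Bayesian network for a single driver decision, where the probability of a lane change is inferred by summing out an intermediate gap-acceptance variable. All variables, states and probabilities are illustrative assumptions.

```python
# Toy discrete Bayesian network for one driver decision (lane change),
# with inference by enumeration. Purely illustrative; not BERTHA's DBM.

# P(traffic): prior over traffic density
p_traffic = {"low": 0.6, "high": 0.4}

# P(gap | traffic): probability that an acceptable gap is available
p_gap = {"low":  {"yes": 0.8, "no": 0.2},
         "high": {"yes": 0.3, "no": 0.7}}

# P(lane_change | gap): driver's propensity to change lanes given a gap
p_lc = {"yes": {"change": 0.7,  "stay": 0.3},
        "no":  {"change": 0.05, "stay": 0.95}}

def p_lane_change_given_traffic(traffic: str) -> float:
    """P(lane_change = change | traffic), summing out the gap variable."""
    return sum(p_gap[traffic][g] * p_lc[g]["change"] for g in ("yes", "no"))

for t in ("low", "high"):
    print(f"P(change | traffic={t}) = {p_lane_change_given_traffic(t):.3f}")
# low:  0.8*0.7 + 0.2*0.05 = 0.570
# high: 0.3*0.7 + 0.7*0.05 = 0.245
```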
The new DBM will be implemented in an open-source HUB, a repository that will allow industrial validation of its technological and practical feasibility and serve as a unique approach to the model’s worldwide scalability.
The resulting DBM will be translated into CARLA, an open-source simulator for autonomous driving research developed by the Spanish partner Computer Vision Center. The implementation of BERTHA’s DBM will rely on diverse demos that allow new driving models to be built in the simulator. These can be embedded in different immersive driving simulators, such as HAV from IBV.
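As an illustration of how a behavioral model can be plugged into CARLA, the sketch below uses CARLA's Python API to spawn a vehicle and drive it with a placeholder policy. It assumes a CARLA server running on localhost:2000; the simple_dbm_policy function is hypothetical and merely stands in for wherever a driver behavioral model would produce control commands.

```python
# Minimal sketch of hooking a behavior policy into the CARLA Python API.
# Assumes a CARLA server on localhost:2000; simple_dbm_policy is a placeholder.
import carla

def simple_dbm_policy(vehicle) -> carla.VehicleControl:
    """Placeholder policy: drive straight with moderate throttle."""
    return carla.VehicleControl(throttle=0.4, steer=0.0)

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)

try:
    for _ in range(200):                       # a few seconds of simulation
        vehicle.apply_control(simple_dbm_policy(vehicle))
        world.wait_for_tick()                  # wait for the next simulation tick
finally:
    vehicle.destroy()
```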
BERTHA will also develop a methodology which, thanks to the HUB, will share the model with the scientific community to ease its growth. Moreover, its results will include a set of interrelated demonstrators that showcase the DBM approach as a reference for designing human-like, easily predictable, and acceptable behaviour of automated driving functions in mixed-traffic scenarios.