Participants of the kick-off meeting of the KIMBA research project standing in front of a mobile impact crusher from project partner KLEEMANN.
At the Digital GreenTech Conference 2023 in Karlsruhe, 14 new research projects in the fields of water management, sustainable land management, resource efficiency, and circular economy were recently presented, including KIMBA. In this project, we are working with our partners on AI-based process control and automated quality management for recycling construction and demolition waste in real time. This saves costs, time, and resources and protects the environment, supporting the construction industry on its way into the future.
Alt-Text: Participants of the kick-off meeting of the ReVise-UP research project stand in front of the mining building of RWTH Aachen University.
Research project “ReVise-UP” launched to improve the process efficiency of mechanical plastics recycling using sensor technology
In September 2023, the BMBF-funded research project ReVise-UP (“Improving the process efficiency of mechanical recycling of post-consumer plastic packaging waste through intelligent material flow management – implementation phase”) was launched. In the four-year implementation phase, the transparency and efficiency of mechanical plastics recycling are to be increased by developing and demonstrating sensor-based material flow characterization methods on an industrial scale.
Based on the data transparency generated by sensor data, current plastics recycling is to be improved through three effects: First, data transparency is intended to create positive incentives for improved collection and product qualities, and thus increased recyclate quantities and qualities. Second, sensor-based material flow characteristics are to be used to adapt sorting, treatment, and plastics processing steps to fluctuating material flow properties. Third, the improved data basis should enable a holistic ecological and economic evaluation of the value chain.
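As a concrete illustration of the second effect, the following Python sketch shows how sensor-derived stream characteristics might drive the adaptation of a downstream sorting stage. All class names, parameters, and thresholds are hypothetical illustrations, not ReVise-UP project code.

```python
# Minimal sketch of a sensor-driven adaptation loop for a sorting stage.
# All names and thresholds are hypothetical illustrations, not project code.

from dataclasses import dataclass


@dataclass
class StreamCharacteristics:
    """Sensor-derived composition of the incoming material flow (mass fractions)."""
    pet_fraction: float
    pe_fraction: float
    pp_fraction: float
    residual_fraction: float


def adapt_sorter_settings(stream: StreamCharacteristics,
                          base_belt_speed: float = 2.0) -> dict:
    """Derive sorter parameters from the measured stream composition.

    If the residual (non-target) fraction rises, slow the belt so the
    near-infrared sorter has more time per object; otherwise run faster.
    """
    if stream.residual_fraction > 0.25:
        belt_speed = base_belt_speed * 0.8   # more careful sorting
    else:
        belt_speed = base_belt_speed * 1.1   # higher throughput
    target = max([("PET", stream.pet_fraction),
                  ("PE", stream.pe_fraction),
                  ("PP", stream.pp_fraction)],
                 key=lambda t: t[1])[0]
    return {"belt_speed_m_per_s": round(belt_speed, 2),
            "target_polymer": target}


if __name__ == "__main__":
    measured = StreamCharacteristics(0.45, 0.20, 0.15, 0.20)
    print(adapt_sorter_settings(measured))
```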
DFKI Augmented Vision researchers Praveen Nathan, Sandeep Inuganti, Yongzhi Su, and Jason Rambach received their 1st place awards in the prestigious BOP Object Pose Estimation Challenge 2023 in the categories Overall Best RGB Method, Overall Best Segmentation Method, and Best BlenderProc-Trained Segmentation Method.
The BOP benchmark and challenge address the problem of 6-degree-of-freedom (6-DoF) object pose estimation, which is of great importance for many applications such as robot grasping or augmented reality. This year, the BOP challenge was held within the “8th International Workshop on Recovering 6D Object Pose (R6D)” (http://cmp.felk.cvut.cz/sixd/workshop_2023/) at the International Conference on Computer Vision (ICCV) in Paris, France (https://iccv2023.thecvf.com/).
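As background for readers new to the task: a 6-DoF pose is a rigid transform, i.e. a 3D rotation plus a 3D translation, that maps an object's model coordinates into the camera frame. The short Python sketch below only illustrates this representation; it is not the winning method.

```python
# Illustration of what a 6-DoF object pose is: a rigid transform (R, t) that
# maps points from the object's model frame into the camera frame.
# This demonstrates only the pose representation, not the winning ZebraPose method.

import numpy as np
from scipy.spatial.transform import Rotation

# Example pose: object rotated 30 degrees about the camera's y-axis,
# placed half a metre in front of the camera.
R = Rotation.from_euler("y", 30, degrees=True).as_matrix()  # 3x3 rotation
t = np.array([0.0, 0.0, 0.5])                               # translation in metres


def transform_to_camera(points_model: np.ndarray) -> np.ndarray:
    """Apply the estimated pose to Nx3 model points: p_cam = R @ p_model + t."""
    return points_model @ R.T + t


corner = np.array([[0.1, 0.0, 0.0]])   # a point on the object model
print(transform_to_camera(corner))
```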
The awards were received by Yongzhi Su and Dr. Jason Rambach on behalf of the DFKI team, followed by a short presentation of the method. The winning method was based on the CVPR 2022 paper “ZebraPose”.
The winning approach was developed by a team led by DFKI AV, with contributing researchers from Zhejiang University.
List of contributing researchers:
DFKI Augmented Vision: Praveen Nathan, Sandeep Inuganti, Yongzhi Su, Didier Stricker, Jason Rambach
On the 5th and 6th of September 2023, the new EU project dAIEDGE (“A network of excellence for distributed, trustworthy, efficient and scalable AI at the Edge”) officially kicked off.
The kick-off meeting held at DFKI in Kaiserslautern was an excellent occasion to meet with the 36 partners from 15 European countries and launch the activities of the network!
The main goal of dAIEDGE is to support and ensure the rapid development and market adoption of distributed edge AI technologies, such as hardware, software, frameworks, and tools.
dAIEDGE technologies will be applied in a wide range of domains, such as the Internet of Things (IoT), intelligent transportation systems, satellite imagery, and robotics.
The network has a project volume of €14.4 million, of which €10.7 million is funded by the European Union. Looking forward to a fruitful collaboration and a successful project!
DFKI Augmented Vision is collaborating with Stellantis on the topic of Radar-Camera Fusion for Automotive Object Detection using Deep Learning. Recently, two new publications were accepted to the GCPR 2023 and EUSIPCO 2023 conferences.
The two new publications are:
1. Cross-Dataset Experimental Study of Radar-Camera Fusion in Bird’s-Eye View, Proceedings of the 31st European Signal Processing Conference (EUSIPCO 2023), September 4-8, Helsinki, Finland, IEEE, 2023.
This paper investigates the influence of the training dataset and transfer learning on camera-radar fusion approaches, showing that while the camera branch needs large and diverse training data, the radar branch benefits more from a high-performance radar.
2. RC-BEVFusion: A Plug-In Module for Radar-Camera Bird’s Eye View Feature Fusion, Proceedings of the Annual Symposium of the German Association for Pattern Recognition (DAGM GCPR 2023), September 19-22, Heidelberg, Germany, DAGM, September 2023.
This paper introduces a new bird’s-eye-view fusion network architecture for camera-radar fusion for 3D object detection that performs favorably on the nuScenes benchmark.
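For readers unfamiliar with the concept, the toy PyTorch sketch below shows the general idea of bird’s-eye-view feature fusion: camera and radar features projected onto a shared BEV grid are concatenated channel-wise and merged by a small convolutional block. The module, channel counts, and grid size are illustrative assumptions, not the RC-BEVFusion architecture.

```python
# Toy sketch of the general idea behind BEV feature fusion: camera and radar
# features share a bird's-eye-view grid and are merged with a small conv block.
# Illustrative only; this is not the RC-BEVFusion architecture.

import torch
import torch.nn as nn


class SimpleBEVFusion(nn.Module):
    def __init__(self, cam_channels: int = 64, radar_channels: int = 32,
                 out_channels: int = 64):
        super().__init__()
        # Fuse by channel-wise concatenation followed by a 3x3 convolution.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        # Both inputs lie on the same BEV grid, e.g. (B, C, 128, 128).
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))


fusion = SimpleBEVFusion()
cam = torch.randn(1, 64, 128, 128)    # camera features in BEV
radar = torch.randn(1, 32, 128, 128)  # radar features in BEV
print(fusion(cam, radar).shape)       # torch.Size([1, 64, 128, 128])
```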
We are happy to announce that the Augmented Vision group will present 4 papers at the upcoming ICCV 2023 conference, October 2-6, Paris, France. The IEEE/CVF International Conference on Computer Vision (ICCV) is the premier international computer vision event. Homepage: https://iccv2023.thecvf.com/
The 4 accepted papers are:
U-RED: Unsupervised 3D Shape Retrieval and Deformation for Partial Point Clouds. Yan Di, Chenyangguang Zhang, Ruida Zhang, Fabian Manhardt, Yongzhi Su, Jason Raphael Rambach, Didier Stricker, Xiangyang Ji, Federico Tombari
Introducing Language Guidance in Prompt-based Continual Learning. Muhammad Gulzain Ali Khan, Muhammad Ferjad Naeem, Luc Van Gool, Federico Tombari, Didier Stricker, Muhammad Zeshan Afzal
DELO: Deep Evidential LiDAR Odometry using Partial Optimal Transport. Sk Aziz Ali, Djamila Aouada, Gerd Reis, Didier Stricker
The first AI-Observer Summer School was held at the Eratosthenes Center of Excellence in Limassol, Cyprus, from July 10-14. Training sessions were given by Prof. Fabio Del Frate, Giorgia Guerrisi and Lorenzo Giuliano Papale (Tor Vergata University of Rome), and Dr. Gerd Reis (German Research Center for Artificial Intelligence). During the five-day hybrid event, more than 50 participants learned about the application of artificial intelligence in Earth observation, with a special focus on disaster risk management. Topics included deforestation, flood detection, and natural hazard management using Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 Multi-Spectral Imaging (MSI) data.
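A classic baseline often shown in such trainings, and a useful mental model for the SAR sessions, is threshold-based flood mapping: smooth open water reflects the radar signal away from the sensor, so flooded pixels appear dark in Sentinel-1 backscatter. The Python sketch below uses synthetic data and an assumed -18 dB threshold for illustration only.

```python
# Classic baseline for SAR flood mapping: open water is smooth and backscatters
# little, so low Sentinel-1 backscatter (in dB) indicates likely water.
# The synthetic tile and the -18 dB threshold are illustrative assumptions.

import numpy as np


def flood_mask(backscatter_db: np.ndarray, threshold_db: float = -18.0) -> np.ndarray:
    """Return a boolean mask where pixels are likely open water/flooding."""
    return backscatter_db < threshold_db


# Fake 4x4 backscatter tile in dB: low values simulate flooded pixels.
tile = np.array([[-10, -12, -22, -25],
                 [-11, -20, -23, -24],
                 [ -9, -10, -21, -26],
                 [ -8, -11, -12, -10]], dtype=float)
print(flood_mask(tile).astype(int))
```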
We are glad to announce that our colleague Michael Lorenz won a Best Paper Award for his work “On Motion Artifacts Arising When Integrating Inertial Sensors into Loose Clothing Such as a Working Jacket”.
Abstract
Inertial human motion capture (IHMC) has become a robust tool to estimate human kinematics in the wild, for instance in industrial facilities.
In contrast to optical motion capture, which suffers from occlusions, the kinematics of a worker can be provided continuously.
This is, for instance, a prerequisite for ergonomic assessments of workers.
State-of-the-art IHMC solutions require inertial sensors to be tightly attached to body segments.
This requires additional setup time and lowers practicability and ease of use in industrial applications.
In contrast, sensors integrated into loose clothing, such as a working jacket, may yield corrupted kinematics estimates due to the additional motion of the fabric.
In this work, we present a study of orientation deviations between kinematics estimates obtained from tightly attached inertial sensors and from sensors integrated into a working jacket.
We performed a quantitative analysis using data from the two hardware setups worn by 19 subjects performing different industry-related tasks, together with measurements of their body shapes.
Using these data, we approximated probability distributions of the deviation angles for each person and body segment.
Applying different statistical measures, we gained insights into questions such as how severe the orientation deviations are, whether body shape influences the distributions, and how the probability distributions of the deviation angles can indicate physical motion limitations of a sensor attached to a segment.
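The central quantity in such a study is the deviation angle between the two orientation estimates for the same body segment. The following Python sketch, using synthetic orientations in place of the real capture data, shows one plausible way to compute it; it is an illustration, not the paper's code.

```python
# Sketch of the core quantity in such a study: the deviation angle between the
# orientation from a tightly attached sensor and from a jacket-integrated one.
# Synthetic quaternions stand in for real capture data.

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Simulate 1000 paired orientation samples: the jacket-integrated sensor
# deviates from the tightly attached one by a few degrees of random rotation.
tight = Rotation.random(1000, random_state=0)
noise = Rotation.from_rotvec(np.radians(rng.normal(0.0, 5.0, (1000, 3))))
loose = tight * noise

# Deviation angle = geodesic angle of the relative rotation between the two
# orientation estimates, computed per sample, in degrees.
angles = np.degrees((tight.inv() * loose).magnitude())

# Empirical summary, as a stand-in for the fitted per-segment distributions.
print(f"mean {angles.mean():.1f} deg, "
      f"95th percentile {np.percentile(angles, 95):.1f} deg")
```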
On June 18, the team presented their solution and results as part of the workshop program. Scan-to-BIM solutions are of great importance for the construction community, as they automate the generation of as-built models of buildings from 3D scans and can be used for quality monitoring, robotic task planning, and XR visualization, among other applications.
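As a flavor of the geometry processing behind Scan-to-BIM, the sketch below fits a plane (e.g., a floor slab) to synthetic scan points by least squares. Real pipelines combine robust estimation, semantic segmentation, and model reconstruction, so this is only an illustrative fragment.

```python
# Illustrative fragment of the geometry extraction behind Scan-to-BIM:
# fitting a plane (e.g., a wall or floor) to 3D scan points via least squares.
# Synthetic data; real pipelines use robust estimators and semantic labels.

import numpy as np


def fit_plane(points: np.ndarray):
    """Least-squares plane through Nx3 points: returns unit normal and centroid."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid


rng = np.random.default_rng(1)
# Synthetic "floor" scan: z ~ 0 with 5 mm noise.
pts = np.column_stack([rng.uniform(0, 5, 500),
                       rng.uniform(0, 5, 500),
                       rng.normal(0, 0.005, 500)])
normal, centroid = fit_plane(pts)
print("plane normal:", np.round(normal, 3))  # approximately [0, 0, +/-1]
```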