Team SSMP (Spatial Sensing and Machine Perception) focuses on the use of diverse 3D/2D sensing modalities (RGB/stereo/ToF cameras, radar, LiDAR) to address challenging scene perception problems using machine learning/deep learning and traditional geometric computer vision. Such problems include Semantic Scene Reconstruction, 6DoF Object Pose Estimation, Unsupervised Anomaly Detection, SLAM, and Deep Sensor Fusion. Indicative application areas are smart building sensing, AI in construction, autonomous driving, industrial robotics, and augmented reality.
Projects of SSMP Team
BERTHA
BEhavioural Replication of Human drivers for CCAM
The Horizon Europe project BERTHA kicked off on November 22nd-24th in Valencia, Spain. The project has been granted €7,981,799.50 from the European Commission to develop a Driver Behavioral Model (DBM) that can be used in connected autonomous vehicles to make them safer and more human-like. The resulting DBM will be available on an open-source HUB to validate its feasibility, and it will also be implemented in CARLA, an open-source autonomous driving simulator.
The industry of Connected, Cooperative, and Automated Mobility (CCAM) presents important opportunities for the European Union. However, its deployment requires new tools that enable the design and analysis of autonomous vehicle components, together with their digital validation, and a common language between Tier suppliers and OEMs.
One of the shortcomings is the lack of a validated, scientifically based Driver Behavioral Model (DBM) covering the aspects of human driving performance, which would make it possible to understand and test the interaction of connected autonomous vehicles (CAVs) with other cars in a way that is safer and more predictable from a human perspective.
Therefore, a Driver Behavioral Model could guarantee digital validation of the components of autonomous vehicles and, if incorporated into the ECU software, could generate a more human-like response from such vehicles, thus increasing their acceptance.
To cover this need in the CCAM industry, the BERTHA project will develop a scalable, probabilistic Driver Behavioral Model (DBM), mostly based on Bayesian Belief Networks, which will be key to achieving safer and more human-like autonomous vehicles.
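As a toy illustration of the Bayesian belief network idea (not BERTHA's actual model), the sketch below uses the pgmpy library to relate two invented driver variables to a gap-acceptance decision; all variables, states, and probabilities are made up for illustration:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy driver-behaviour network: two hypothetical causes of a
# gap-acceptance decision. Structure and numbers are illustrative only.
model = BayesianNetwork([("Visibility", "AcceptGap"), ("TimePressure", "AcceptGap")])
model.add_cpds(
    TabularCPD("Visibility", 2, [[0.8], [0.2]]),    # 0 = good, 1 = poor
    TabularCPD("TimePressure", 2, [[0.7], [0.3]]),  # 0 = low, 1 = high
    TabularCPD(
        "AcceptGap", 2,
        # P(AcceptGap | Visibility, TimePressure), one column per parent combo
        [[0.35, 0.15, 0.60, 0.40],   # reject the gap
         [0.65, 0.85, 0.40, 0.60]],  # accept the gap
        evidence=["Visibility", "TimePressure"], evidence_card=[2, 2],
    ),
)
assert model.check_model()

# Probabilistic query: how does poor visibility change the decision?
print(VariableElimination(model).query(["AcceptGap"], evidence={"Visibility": 1}))
```

A probabilistic formulation like this is what makes the behaviour both scalable and inspectable: each conditional probability table can be fitted to or validated against human driving data.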
The new DBM will be implemented in an open-source HUB, a repository that will allow industrial validation of its technological and practical feasibility and serve as a unique approach to the model's worldwide scalability.
The resulting DBM will be translated into CARLA, an open-source simulator for autonomous driving research developed by the Spanish partner Computer Vision Center. The implementation of BERTHA's DBM will use diverse demos that allow new driving models to be built in the simulator. These can be embedded in different immersive driving simulators, such as the HAV from IBV.
BERTHA will also develop a methodology which, thanks to the HUB, will share the model with the scientific community to ease its growth. Moreover, its results will include a set of interrelated demonstrators showing the DBM approach as a reference for designing human-like, easily predictable, and acceptable behaviour of automated driving functions in mixed-traffic scenarios.
Partners
Instituto de Biomecanica de Valencia (ES), Institut Vedecom (FR), Universite Gustave Eiffel (FR), German Research Center for Artificial Intelligence (DE), Computer Vision Center (ES), Altran Deutschland (DE), Continental Automotive France (FR), CIDAUT Foundation (ES), Austrian Institute of Technology (AT), Universitat de València (ES), Europcar International (FR), FI Group (PT), Panasonic Automotive Systems Europe (DE), Korea Transport Institute (KOTI)
KIMBA – AI-based process control and automated quality management in the recycling of construction and demolition waste through sensor-based inline monitoring of particle size distributions
With 587.4 million t/a of aggregates used, the construction industry is one of the most resource-intensive sectors in Germany. Substituting primary aggregates with recycled (RC) aggregates conserves natural resources and reduces negative environmental impacts such as greenhouse gas emissions by up to 85%. So far, RC building materials cover only 12.5 wt% of the aggregate demand, at 73.3 million t/a, and with 53.9 million t/a (73.5 wt%) their use has largely been limited to underground construction applications. To secure and expand the ecological advantages of RC building materials, it is therefore crucial that more demanding building construction applications can also be covered by RC building materials in the future. This requires, on the one hand, a guaranteed quality of RC building materials and, on the other hand, customer acceptance through assured compliance with the standards applicable to building construction.

An essential quality criterion for RC building materials is the particle size distribution (PSD) according to DIN 66165-1, which is currently determined by manual sampling and sieve analysis, a time-consuming and costly procedure whose results are only available with considerable delay. Consequently, it is neither possible to react to quality changes at an early stage, nor can treatment processes be adapted directly to changing material flow properties.

This is where the KIMBA project steps in: instead of time-consuming and costly sampling and sieve analyses, PSD analysis in construction waste processing plants is to be automated by sensor-based inline monitoring. The produced RC material is measured inline during processing using imaging sensors. Deep-learning algorithms then segment the measured heap into individual particles, whose grain sizes are predicted and aggregated into a digital PSD. The sensor-based PSDs are then to be used intelligently to increase the quality, and thus the acceptance, of RC building materials and thereby accelerate the transition to a sustainable circular economy.

Building on the proof of concept, two applications will be developed and demonstrated at scale: an automated quality management system that continuously records the PSD of the produced RC product, documents it for customers, and enables early intervention in case of deviations; and an AI-based assistance system for adaptive control of the treatment process based on sensor-monitored PSDs and machine parameters, so that consistently high product quality can be achieved even with fluctuating input quality.
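To make the final aggregation step concrete, here is a minimal, hedged sketch of how per-particle segmentation output could be turned into a cumulative sieve-passing curve. It assumes per-particle pixel areas from an upstream instance-segmentation model and a known image resolution; all names and the sieve series are illustrative, not KIMBA's actual pipeline.

```python
import numpy as np

# Hypothetical inputs: per-particle pixel areas from an instance
# segmentation of the conveyor-belt image, plus the camera's ground
# sampling distance (mm per pixel). Names are illustrative only.
def particle_size_distribution(areas_px, mm_per_px, sieve_sizes_mm):
    """Aggregate segmented particles into a cumulative passing curve.

    areas_px       : 1D array of particle areas in pixels
    mm_per_px      : spatial resolution of the sensor setup
    sieve_sizes_mm : ascending sieve apertures, e.g. a DIN 66165 series
    """
    # Equivalent circle diameter as a simple 2D size proxy
    d_mm = 2.0 * np.sqrt(areas_px / np.pi) * mm_per_px
    # Mass proxy: volume of an equivalent sphere (d^3), since sieve
    # curves are defined over mass fractions, not particle counts
    mass = d_mm ** 3
    passing = [mass[d_mm <= s].sum() / mass.sum() * 100.0 for s in sieve_sizes_mm]
    return np.array(passing)  # cumulative percent passing per sieve

sieves = np.array([2, 4, 8, 16, 31.5, 63])       # mm, typical aperture series
areas = np.random.lognormal(6.0, 1.0, size=500)  # stand-in for real masks
print(particle_size_distribution(areas, mm_per_px=0.5, sieve_sizes_mm=sieves))
```

Weighting by the cube of the equivalent diameter is one simple way to approximate mass fractions from 2D observations; a production system would calibrate this against reference sieve analyses.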
Partners
MAV Krefeld GmbH
Institut für Anthropogene Stoffkreisläufe (ANTS)
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)
KLEEMANN GmbH
Chair of International Production Engineering and Management (IPEM), University of Siegen
Point 8 GmbH
vero – Verband der Bau- und Rohstoffindustrie e.V.
Verband Deutscher Maschinen- und Anlagenbau e.V. (VDMA)
ReVise-UP – Improving the process efficiency of mechanical recycling of post-consumer plastic packaging waste through intelligent material flow management
At 3.2 million tonnes per year, post-consumer packaging waste represents the most significant plastic waste stream in Germany. Despite progress to date, mechanical plastics recycling still has significant potential for improvement: in 2021, only about 27 wt% (1.02 million t/a) of post-consumer plastics were converted into recyclates, and only about 12 wt% (0.43 million t/a) served as substitutes for virgin plastics (Conversio Market & Strategy GmbH, 2022).
So far, mechanical plastics recycling has been limited by the high effort of manual material flow characterisation, which leads to a lack of transparency along the value chain. During the ReVise concept phase, it was shown that post-consumer material flows can be characterised automatically using inline sensor technology. The subsequent four-year ReVise implementation phase (ReVise-UP) will explore the extent to which sensor-based material flow characterisation can be implemented on an industrial scale to increase transparency and efficiency in plastics recycling.
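As a toy illustration of what sensor-based material flow characterisation can mean in software terms, the sketch below aggregates hypothetical per-object polymer classifications from an inline sensor into time-windowed composition shares; all names and the data format are assumptions, not ReVise-UP interfaces.

```python
from collections import Counter

# Illustrative sketch: turn per-object polymer predictions from an
# inline sensor (e.g. NIR-based classification) into a time-windowed
# material flow characterisation. All names are assumptions.
def composition_per_window(detections, window_s=60.0):
    """detections: iterable of (timestamp_s, polymer_label) tuples."""
    windows = {}
    for t, label in detections:
        windows.setdefault(int(t // window_s), Counter())[label] += 1
    # Normalise counts to composition shares per time window
    return {
        w: {label: n / sum(c.values()) for label, n in c.items()}
        for w, c in sorted(windows.items())
    }

stream = [(1.2, "PET"), (3.5, "PP"), (61.0, "PET"), (75.4, "PE"), (80.1, "PET")]
print(composition_per_window(stream))  # share of each polymer per minute
```

Continuous statistics of this kind are what would let sorting and treatment processes react to fluctuating input compositions instead of relying on sporadic manual samples.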
Three main effects are expected from this increased data transparency. Firstly, positive incentives for improving collection and product qualities should be created in order to increase the quality and use of plastic recyclates. Secondly, sensor-based material flow characteristics are to be used to adapt sorting, treatment and plastics processing processes to fluctuating material flow properties. This promises a considerable increase in the efficiency of the existing technical infrastructure. Thirdly, the improved data situation should enable a holistic ecological and economic evaluation of the entire value chain. As a result, technical investments can be used in a more targeted manner to systematically optimise both ecological and economic benefits.
Our goal is to fundamentally improve the efficiency, cost-effectiveness and sustainability of post-consumer plastics recycling.
Partners
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
Deutsches Institut für Normung e. V.
Human Technology Center, RWTH Aachen University
Hündgen Entsorgungs GmbH & Co. KG
Krones AG
Kunststoff Recycling Grünstadt GmbH
SKZ – KFE gGmbH
STADLER Anlagenbau GmbH
Wuppertal Institut für Klima, Umwelt, Energie gGmbH
PreZero Recycling Deutschland GmbH & Co. KG
bvse – Bundesverband Sekundärrohstoffe und Entsorgung e. V.
cirplus GmbH
HC Plastics GmbH
Henkel AG
Initiative „Mülltrennung wirkt“
Procter & Gamble Service GmbH
TOMRA Sorting GmbH
TWIN4TRUCKS – Digital Twin and AI in the Networked Factory for Integrated Commercial Vehicle Production, Logistics, and Quality Assurance
The Twin4Trucks (T4T) research project started on September 1st, 2022. It combines scientific research and industrial implementation in a unique way. The project consortium consists of six partners from research and industry: Daimler Truck AG (DTAG), the world's largest commercial vehicle manufacturer, is the consortium leader; Twin4Trucks is intended to optimize its production through the implementation of new technologies such as digital twins and a Digital Foundation Layer (DFL). The technology initiative SmartFactory Kaiserslautern (SF-KL) and the German Research Center for Artificial Intelligence (DFKI), as visionary research institutions, set the direction of development with Production Level 4. The IT service provider Atos is responsible for data exchange via Gaia-X, for AI-based quality assurance, and for the DFL implementation concept. Infosys is responsible for the network architecture, 5G networks, and integration services. PFALZKOM is building a regional edge cloud as well as a data center, complemented by Gaia-X implementation and operating concepts for the networks.
Human Centered Technologies for a Safer and Greener European Construction Industry
The European construction industry faces three major challenges: improving its productivity, increasing the safety and wellbeing of its workforce, and making the shift towards a green, resource-efficient industry. To address these challenges adequately, HumanTech proposes a human-centered approach, involving breakthrough technologies such as wearables for worker safety and support, and intelligent robotic technology that can harmoniously co-exist with human workers while also contributing to the green transition of the industry.
Our aim is to achieve major advances beyond the current state of the art in all of these technologies, advances that can have a disruptive effect on the way construction is conducted.
These advances will include:
Introduction of robotic devices equipped with vision and intelligence to enable them to navigate autonomously and safely in a highly unstructured environment, collaborate with humans and dynamically update a semantic digital twin of the construction site.
Intelligent, unobtrusive worker protection and support equipment, ranging from exoskeletons triggered by wearable body-pose and strain sensors to wearable cameras and XR glasses that provide real-time worker localisation and guidance for the efficient and accurate fulfilment of their tasks.
An entirely new breed of Dynamic Semantic Digital Twins (DSDTs) of construction sites, simulating in detail the current state of a construction site at the geometric and semantic level, based on an extended BIM formulation (BIMxD); a minimal data-structure sketch follows this list.
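As a rough, hypothetical illustration of what a DSDT element might look like as a data structure (the project's BIMxD formulation is far richer), consider this sketch; all field and method names are invented for illustration:

```python
from dataclasses import dataclass, field
import time

# Minimal sketch of a Dynamic Semantic Digital Twin element, assuming a
# BIMxD-style representation where each element carries a BIM identifier,
# a semantic class, and a construction-progress state. Hypothetical names.
@dataclass
class DSDTElement:
    guid: str                    # IFC/BIM global identifier
    semantic_class: str          # e.g. "wall", "column", "rebar"
    progress: float = 0.0        # 0.0 = planned, 1.0 = built
    last_observed: float = 0.0   # Unix timestamp of the latest sensor update
    history: list = field(default_factory=list)

    def update_from_observation(self, progress_estimate: float):
        """Fuse a new robot/scanner observation into the twin state."""
        self.history.append((self.last_observed, self.progress))
        self.progress = progress_estimate
        self.last_observed = time.time()

wall = DSDTElement(guid="2N1aX_example", semantic_class="wall")
wall.update_from_observation(0.4)  # e.g. from a point-cloud comparison
print(wall.progress, len(wall.history))
```

The "dynamic" aspect is exactly this update loop: robots and wearable sensors continuously feed observations that keep the semantic twin in sync with the real site.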
Partners
Hypercliq IKE
Technische Universität Kaiserslautern
Scaled Robotics SL
Bundesanstalt für Arbeitsschutz und Arbeitsmedizin
Sci-Track GmbH
SINTEF Manufacturing AS
Acciona Construccion SA
STAM SRL
Holo-Industrie 4.0 Software GmbH
Fundacion Tecnalia Research & Innovation
Catenda AS
Technological University of the Shannon: Midlands Midwest
Ricoh International BV
Australo Interinnov Marketing Lab SL
Prinstones GmbH
Universita degli Studi di Padova
European Builders Confederation
Palfinger Structural Inspection GmbH
Züricher Hochschule für Angewandte Wissenschaften
Implenia Schweiz AG
Kajima Corporation
Radar sensors are very important in the automotive industry because they can directly measure the speed of other road users. DFKI is working with its partners to develop intelligent software solutions that improve the performance of high-resolution radar sensors. We use machine learning and deep neural networks to detect ghost targets in radar data, thereby improving its reliability and opening up a wide range of possibilities for highly automated driving.
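As an illustration of the general approach (not the project's actual models, which are not public), the sketch below shows a minimal PyTorch classifier that scores individual radar detections as ghost or real from a few assumed features:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a small per-detection classifier that flags
# multipath "ghost" targets from basic radar features. Feature choice
# and architecture are assumptions, not the project's models.
class GhostClassifier(nn.Module):
    def __init__(self, in_features=4):  # range, azimuth, Doppler, RCS
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: ghost vs. real detection
        )

    def forward(self, x):
        return self.net(x)

model = GhostClassifier()
detections = torch.randn(8, 4)        # a batch of radar detections
ghost_prob = torch.sigmoid(model(detections))
print(ghost_prob.squeeze(-1))         # per-detection ghost probability
```

In practice such a classifier would be trained on labelled radar recordings (e.g. with LiDAR or camera ground truth identifying multipath reflections) and could also consume neighbourhood context rather than single detections.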
Partners
ASTYX GmbH (Dr. Georg Kuschk), Lise-Meitner-Straße 2a, 85521 Ottobrunn, DE
BIT Technology Solutions GmbH (Management), Gewerbering 3, 83539 Pfaffing OT Forsting, DE
Vision, Identification with Z-sensing Technologies and key Applications
The VIZTA project, coordinated by STMicroelectronics, aims at developing innovative technologies in the field of optical sensors and laser sources for short- to long-range 3D imaging and at demonstrating their value in several key applications, including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0. The key differentiating 12-inch silicon sensing technologies developed during VIZTA are:
1. Innovative SPAD and lock-in pixels for time-of-flight sensor architectures (the phase-measurement principle behind lock-in pixels is sketched below).
2. Unprecedented and cost-effective on-chip NIR and RGB-Z filter solutions.
3. Complex RGB+Z pixel architectures for multimodal 2D/3D imaging.
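As background for the lock-in time-of-flight pixels mentioned above, the following sketch shows the textbook four-phase depth reconstruction; the modulation frequency and sampling convention are assumptions for illustration, not VIZTA specifications:

```python
import numpy as np

# Sketch of the standard 4-phase measurement behind lock-in ToF pixels
# (the VIZTA sensors themselves are hardware; this only illustrates the
# principle). Assumed: correlation samples a0..a3 at 0/90/180/270 degrees.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Depth from four phase-shifted correlation samples."""
    phase = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)  # unambiguous up to c / (2 * f_mod)

# Toy input: ideal samples for a target at ~2.5 m
true_phase = 4 * np.pi * 20e6 * 2.5 / C
a = [np.cos(true_phase - k * np.pi / 2) for k in range(4)]
print(tof_depth(*a))  # ≈ 2.5 m
```

At 20 MHz modulation the unambiguous range is c/(2·f_mod) ≈ 7.5 m, which is why indirect ToF sensors often combine several modulation frequencies for longer ranges.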
For short-range sensors, VIZTA also develops advanced VCSEL sources, including wafer-level GaAs optics and the associated high-speed drivers. These differentiating technologies allow the development and validation of innovative 3D imaging sensor products through the following highly integrated prototype demonstrators:
1. A high-resolution (>77,000 points) time-of-flight ranging sensor module with integrated VCSEL, drivers, filters, and optics.
2. A very high-resolution (VGA minimum) depth camera sensor with integrated filters and optics.
For medium- and long-range sensing, VIZTA also addresses new LiDAR systems with dedicated sources, optics, and sensors. Technology developments of sensors and emitters are carried out by leading semiconductor product suppliers (STMicroelectronics, Philips, III-V Lab) with the support of equipment suppliers (Applied Materials, Semilab) and the RTO CEA-Leti.
The VIZTA project also includes the development of six demonstrators for key applications, including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0, with a good mix of industrial and academic partners (Ibeo, Veoneer, Ficosa, Beamagine, IEE, DFKI, UPC, Idemia, CEA-List, ISD, BCB, IDE, Eurecat). The VIZTA consortium brings together 23 partners from 9 European countries: France, Germany, Spain, Greece, Luxembourg, Latvia, Sweden, Hungary, and the United Kingdom.
Partners
Universidad Politecnica Catalunya
Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA Paris)
Fundacio Eurecat
STMICROELECTRONICS SA
BCB Informática y Control
Alter Technology TÜV Nord SA
FICOMIRRORS SA
Philips Photonics GmbH
Applied Materials France SARL
SEMILAB FELVEZETO FIZIKAI LABORATORIUM RESZVENYTARSASAG
ELEKTRONIKAS UN DATORZINATNU INSTITUTS
LUMIBIRD
IEE S.A.
IBEO Automotive Systems GmbH
STMICROELECTRONICS RESEARCH & DEVELOPMENT LTD
STMICROELECTRONICS SA
IDEMIA IDENTITY & SECURITY FRANCE
Beamagine S.L.
Integrated Systems Development S.A.
VEONEER SWEDEN AB
III-V Lab
STMICROELECTRONICS (ALPS) SAS
STMICROELECTRONICS GRENOBLE 2 SAS
Comprehensible, interactive experiments: practice and theory in STEM (MINT) studies
The project is funded by the Federal Ministry of Education and Research (BMBF). It combines tangible, manipulable objects (“tangibles”) with advanced technologies (“augmented reality”) to develop new, intuitive user interfaces. Interactive experiments will actively support the learning process during STEM studies and provide learners with more theoretical information about the underlying physics.
The project uses the interfaces of smartphones, smartwatches, and smart glasses: for example, a head-worn device that lets users select content through a combination of subtle head movements, eyebrow gestures, and voice commands and view it on a display mounted above the eye. This casual style of interaction keeps students from being distracted while carrying out an experiment and leaves their hands free to access and manipulate the objects.
A preliminary research study demonstrates these developments. Scientists at DFKI and at the Technical University of Kaiserslautern have developed an app that supports students in determining the relationship between the fill level of a glass and the pitch of the sound it produces. The gPhysics application captures the amount of water, measures the sound frequency, and transfers the results into a diagram. The app can be operated solely through head gestures, without manual interaction. In gPhysics, the water quantity is recorded with a camera, and the determined value can be corrected via head gestures or voice commands if required. The microphone of the Google Glass measures the sound frequency. Both values are displayed in a graph that is continuously updated on the Google Glass display. In this way, learners can follow the frequency curve in relation to the water level directly while filling the glass. Since the curve is generated comparatively quickly, learners can test different hypotheses directly during the interaction by varying parameters of the experiment.
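To illustrate the frequency-measurement step, here is a hedged sketch of how a dominant pitch can be estimated from a short microphone buffer with an FFT; the sample rate, buffer size, and test tone are illustrative assumptions, not gPhysics code:

```python
import numpy as np

# Sketch of the measurement idea behind gPhysics: estimate the dominant
# pitch of a struck water glass from a short microphone buffer via FFT.
def dominant_frequency(samples, sample_rate=44_100):
    """Return the strongest spectral peak of a mono audio buffer in Hz."""
    windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin

# Toy input: a 520 Hz tone standing in for the recorded glass sound
t = np.arange(4096) / 44_100
tone = np.sin(2 * np.pi * 520.0 * t)
print(round(dominant_frequency(tone)))  # close to 520 (limited by bin width)
```

Plotting this value against the camera-derived water level over time yields exactly the frequency-versus-fill-level curve the learners explore.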
Within the project, further experiments on the physical fundamentals of mechanics and thermodynamics are being constructed. In addition, the consortium is developing technologies that enable learners to discuss video and sensor recordings, analyze their experiments in the cloud, exchange ideas with fellow students, and compare results.
Partners
DFKI is the coordinator of five further partners from research and practice: the Technical University of Kaiserslautern, studio klv GmbH & Co. KG (Berlin), the University of Stuttgart, Con Partners GmbH (Bremen), and Embedded Systems Academy GmbH (Barsinghausen).
Dr. Jason Rambach gave a talk on “Building Virtual Worlds with 3D Sensing and AI” at the NEM Summit 2024 in Brussels on October 23rd, 2024. The presentation was part of …
Dr. Jason Rambach, coordinator of the EU Horizon Project HumanTech, participated in the “Digital Twins for Sustainable Construction” session at the …
We are excited to share that the Augmented Vision group got two papers accepted at the IEEE Conference on Automatic Face and Gesture Recognition (FG 2024), the premier international forum for research in image- and video-based face, gesture, and body movement recognition.
We are proud to announce that the researchers of the department Augmented Vision will present 6 papers at the upcoming CVPR conference taking place Mon Jun 17th through Fri Jun 21st, 2024 at the Seattle Convention Center, Seattle, USA.
We are happy to announce that the Augmented Vision group presented 2 papers at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), which took place from January 4th to 8th, 2024, in Waikoloa, Hawaii.
DFKI Augmented Vision is collaborating with Stellantis on the topic of Radar-Camera Fusion for Automotive Object Detection using Deep Learning. Recently, two new publications were accepted at GCPR 2023 and …
We are happy to announce that the Augmented Vision group will present 4 papers at the upcoming ICCV 2023 Conference, 2-6 October, Paris, France. The IEEE/CVF International Conference on Computer Vision (ICCV) is the premier international computer vision event.
Dr. Jason Rambach, coordinator of the EU Horizon Project HumanTech, co-organized a special session on “Human Factors in Construction Robotics” at the …
We are happy to announce that our article “OPA-3D: Occlusion-Aware Pixel-Wise Aggregation for Monocular 3D Object Detection” was published in the prestigious IEEE Robotics and Automation Letters (RA-L) Journal. The work is a collaboration of DFKI with the …
DFKI Augmented Vision recently released the first publicly available UWB Radar Driving Activity Dataset (RaDA), consisting of over 10k data samples from 10 different participants annotated with 6 driving activities. The dataset was recorded in the DFKI driving simulator …
The Augmented Vision department of DFKI participated in the VIZTA project, coordinated by STMicroelectronics, aiming at developing innovative technologies in the field of optical sensors and laser sources for short to long-range 3D-imaging and to demonstrate their value …
DFKI Augmented Vision researchers Yongzhi Su, Praveen Nathan and Jason Rambach received their 1st place award in the prestigious BOP Challenge 2022 in the categories Overall Best Segmentation Method and The Best BlenderProc-Trained Segmentation Method.
Our Augmented Vision department is the coordinator of the new large European project "HumanTech". The Kick-Off meeting was held on July 20th, 2022, at DFKI in Kaiserslautern. Please read the whole article here: …
DFKI Augmented Vision had a strong presence in the recent CVPR 2022 Conference held on June 19th-23rd, 2022, in New Orleans, USA. The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) is the premier annual computer vision event internationally.
On June 14th, 2022, Dr. Jason Rambach gave a keynote talk in the Computer Vision session of the Franco-German Research and Innovation Network event held at the Inria headquarters in Versailles, Paris, France. In the talk, an overview of the current activities of the …
We are happy to announce that the Augmented Vision group will present two papers in the upcoming CVPR 2022 Conference from June 19th-23rd in New Orleans, USA. The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) is the premier annual computer vision event.
DFKI Augmented Vision is working with Stellantis on the topic of Radar-Camera Fusion for Automotive Object Detection using Deep Learning since 2020. The collaboration has already led to two publications, in ICCV 2021 (International Conference on Computer Vision) …
As part of the research activities of DFKI Augmented Vision in the VIZTA project (https://www.vizta-ecsel.eu/), two publicly available datasets have been released and are available for download. TIMo dataset is a …
On July 29th, 2021, Dr. Jason Rambach presented the survey paper “A Survey on Applications of Augmented, Mixed and Virtual Reality for Nature and Environment” at the 23rd Human Computer Interaction Conference …
DFKI participates in the VIZTA project, coordinated by STMicroelectronics, aiming at developing innovative technologies in the field of optical sensors and laser sources for short to long-range 3D-imaging and to demonstrate their value in several key applications …
As part of the research activities of DFKI Augmented Vision in the VIZTA project (https://www.vizta-ecsel.eu/), we have published the open-source dataset for automotive in-cabin monitoring with a wide-angle time-of-flight depth sensor.
We are happy to announce that our paper “SynPo-Net – Accurate and Fast CNN-Based 6DoF Object Pose Estimation Using Synthetic Training” has been accepted for publication in the MDPI Sensors journal, Special Issue Object Tracking and Motion Analysis.
The Winter Conference on Applications of Computer Vision (WACV 2021) is IEEE’s and the PAMI-TC’s premier meeting on applications of computer vision. With its high quality and low cost, it provides an exceptional value for students …