FUMOS

Fusion of multimodal optical sensors for 3D motion capture in dense, dynamic scenes for mobile, autonomous systems

Autonomous vehicles will be an indispensable component of future mobility systems: they can significantly increase driving safety while simultaneously allowing higher traffic density. To operate autonomously, a vehicle must continuously and accurately perceive its environment and the movements of other road users, which requires new types of real-time capable sensor systems. Cameras and laser scanners operate according to different principles and offer complementary advantages in capturing the environment. The aim of this project is to investigate whether and how the two sensor systems can be combined to reliably detect motion in traffic in real time. The challenge is to fuse the heterogeneous data of both systems and to find suitable representations for the geometric and visual features of a traffic scene. These representations must be optimized to the point that they provide reliable information for vehicle control in real time. If such a hybrid sensor system can be designed and successfully built, it could represent a breakthrough in sensor equipment for autonomous vehicles and a decisive step toward putting this technology into practice.
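A common first step in fusing camera and laser-scanner data is to project the 3D laser points into the camera image so that each point can be associated with visual features. The Python sketch below illustrates this under assumed calibration values; the intrinsics, extrinsics, and point cloud are hypothetical placeholders, not data or methods from the project.

import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project Nx3 LiDAR points (in the LiDAR frame) onto the image plane.

    points_xyz  : (N, 3) array of 3D points from the laser scanner
    T_cam_lidar : (4, 4) rigid transform from LiDAR to camera frame (extrinsics)
    K           : (3, 3) camera intrinsic matrix
    Returns (M, 2) pixel coordinates and (M,) depths for points in front
    of the camera.
    """
    n = points_xyz.shape[0]
    # Homogeneous coordinates, then move the points into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection: apply intrinsics, then divide by depth.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]

# Hypothetical calibration for illustration only.
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity extrinsics: LiDAR and camera co-located
points = np.random.uniform([-10, -2, 1], [10, 2, 40], size=(1000, 3))
pixels, depths = project_lidar_to_image(points, T, K)

Once the points carry both a depth and a pixel location, geometric and visual features can be combined per point, which is one possible representation for the scene description discussed above.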

Contact

Ramy Battrawy, M.Sc.

Dr.-Ing. René Schuster

DECODE

Continual learning for visual and multi-modal encoding of human surrounding and behavior

Machine learning, and in particular deep learning as a branch of Artificial Intelligence (AI), has revolutionized computer vision in almost all areas. These include topics such as motion estimation, object recognition, semantic segmentation (the division and classification of parts of an image), pose estimation of people and hands, and many more. A major problem with these methods is the distribution of the data: training data often differ greatly from the data encountered in real applications and do not cover them adequately. Even if suitable data are available, extensive retraining is time-consuming and costly. Adaptive methods that learn continuously (lifelong learning) are therefore a central challenge for the development of robust, realistic AI applications. In addition to the rich history of general continual learning, the topic of continual learning for machine vision under real-world conditions has recently gained interest. The goal of the DECODE project is to explore continuously adaptive models for reconstructing and understanding human motion and the environment in application-oriented settings. For this purpose, mobile, visual, and inertial sensors (accelerometers and angular rate sensors) will be used. For these different types of sensors and data, different approaches from the field of continual learning will be researched and developed to ensure a smooth transfer from laboratory conditions to everyday, realistic scenarios. The work will concentrate on image and video segmentation, the estimation of the kinematics and pose of the human body, and the representation of movements and their context. The field of potential applications for the methods developed in DECODE is wide-ranging and includes detailed ergonomic analysis of human-machine interactions, for example at workplaces, in factories, or in vehicles.
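One widely used ingredient of continual learning is experience replay: a small memory of past samples is mixed into each new training batch to counteract catastrophic forgetting as the data distribution shifts. The Python sketch below shows a replay buffer with reservoir sampling; it is a generic illustration with assumed names and sizes, not the specific method pursued in DECODE.

import random

class ReservoirReplayBuffer:
    """Keeps a bounded, uniformly sampled memory of the stream seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []  # stored (input, label) pairs
        self.seen = 0     # total number of stream samples observed

    def add(self, sample):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(sample)
        else:
            # Reservoir sampling: a new sample replaces a stored one with
            # probability capacity / seen, keeping the memory an unbiased
            # uniform sample of the whole stream.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = sample

    def sample(self, k):
        return random.sample(self.memory, min(k, len(self.memory)))

# Usage: interleave replayed samples with the incoming stream.
buffer = ReservoirReplayBuffer(capacity=200)
stream = (("sensor_frame_%d" % i, i % 5) for i in range(1000))
for new_sample in stream:
    buffer.add(new_sample)
    batch = [new_sample] + buffer.sample(7)  # a model would train on this mixed batch

Because the memory stays an unbiased sample of everything seen so far, training on such mixed batches lets a model adapt to new conditions without discarding what it learned under earlier ones.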

Contact

Dr.-Ing. Nadia Robertini

Dr.-Ing. René Schuster