
November 2021

Start of the project “DECODE”

AI for recognizing human movements and their environment

Adaptive methods that learn continuously (lifelong learning) are a central challenge in the development of robust, real-world AI applications. In addition to the rich history of general continual learning, the topic of continual learning for computer vision under real-world conditions has also recently gained interest.

The goal of the DECODE project is to research continuously adaptable models for the reconstruction and understanding of human movement and its environment in application-oriented settings. To this end, mobile visual and inertial sensors (accelerometers and gyroscopes) will be used. For these different types of sensors and data, different approaches from the field of continual learning will be researched and developed to ensure a seamless transfer from laboratory conditions to everyday, realistic scenarios. The work focuses on improvements in the semantic segmentation of images and videos, the estimation of the kinematics and pose of the human body, and the representation of movements and their context. The range of potential application areas for the methods developed in DECODE is broad and includes detailed ergonomic analysis of human-machine interactions, for example at the workplace, in factories, or in vehicles.
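To make the continual-learning idea concrete, the following minimal PyTorch sketch shows rehearsal-based adaptation with a small replay buffer, one common family of continual-learning methods. The buffer, model, and training step are illustrative assumptions, not DECODE's actual approach.

```python
# Minimal sketch of rehearsal-based continual learning (illustrative only;
# not DECODE's actual method). ReplayBuffer and continual_step are
# hypothetical helpers for a generic PyTorch model.
import random
import torch

class ReplayBuffer:
    """Bounded memory of past samples, filled by reservoir sampling."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps each seen sample with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def continual_step(model, optimizer, loss_fn, buffer, x_new, y_new):
    """One adaptation step: train on the new batch plus replayed old samples."""
    optimizer.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    if buffer.data:
        # The rehearsal term counteracts catastrophic forgetting.
        x_old, y_old = buffer.sample(len(x_new))
        loss = loss + loss_fn(model(x_old), y_old)
    loss.backward()
    optimizer.step()
    for x, y in zip(x_new, y_new):
        buffer.add(x, y)
    return loss.item()
```

Replaying even a small memory of earlier samples alongside each new batch is one simple way to keep a model from forgetting lab-trained behavior while it adapts to real-world data.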

Further information: https://www.dfki.de/web/forschung/projekte-publikationen/projekte-uebersicht/projekt/decode

Contact: René Schuster

Tewodros Amberbir Habtegebrial honored with Google PhD Fellowship

Mr. Habtegebrial is a PhD student at the Augmented Vision research department at the German Research Center for Artificial Intelligence (DFKI) and at the lab of the same name at the Technical University of Kaiserslautern (TUK). He was awarded the Google PhD Fellowship for his exceptional and innovative research in the field of “Machine Perception”. The PhD fellowship is endowed with 80,000 US dollars. Google also provides each of the PhD students with a research mentor.

Professor Didier Stricker, Tewodros’ PhD supervisor and head of the corresponding research areas at TUK and DFKI, commented on the award for his PhD student: “I am very pleased that Tewodros received a PhD Fellowship from Google. He earned the honor through his outstanding achievements in his research work in Machine Perception and Image Synthesis.”

As part of his PhD studies, Mr. Habtegebrial has been working on Image-Based Rendering (IBR). Recently, he has worked on a technique that enables neural networks to render realistic novel views, given a single 2D semantic map of the scene. The approach was published together with Google and NVIDIA at the premier conference NeurIPS 2020. In collaboration with researchers at DFKI and Google Research, he is working on spherical light-field interpolation and realistic modelling of reflective surfaces in IBR. This enables new applications in the field of realistic virtual reality (VR) and telepresence. In addition to his PhD topic, he has co-authored several articles on Optical Character Recognition (OCR) for Amharic, the official language of Ethiopia.

Further information:
https://research.google/outreach/phd-fellowship/recipients/
https://www.dfki.de/en/web/news/google-phd-fellowship
https://www.uni-kl.de/pr-marketing/news/news/tewodros-amberbir-habtegebrial-mit-google-phd-fellowship-ausgezeichnet

Hitachi presents research with the German Research Center for Artificial Intelligence (DFKI)

Hitachi and DFKI have been collaborating on various research projects for many years. In a video, Hitachi now presents current joint research in the field of occupational safety with DFKI, the wearHEALTH working group at the Technical University of Kaiserslautern (TUK), Xenoma Inc., and sci-track GmbH, a joint spin-off of DFKI and TUK.


The partners have jointly developed wearable AI technology that supports monitoring workers’ physical workload and capturing workflows, and that can be used to optimize both in terms of efficiency, occupational safety, and health. Sensors are loosely integrated into normal working clothes to measure the pose and movements of the body segments. A new approach to handling cloth-induced artifacts allows full wearing comfort together with high capturing accuracy and reliability.
 
Hitachi and DFKI will use the new solution to support workers and prevent dangerous poses, creating a more efficient and safe working environment while preserving full wearing comfort in ordinary clothing.
 
Hitachi is a Principal Partner of the 2021 UN Climate Change Conference, known internationally as COP26, where it will present a video of its collaboration with DFKI, among other projects.
 
Further information:
Solution to visualize workers’ loads – Hitachi – YouTube
https://www.dfki.de/en/web/news/hitachi

Contact:  Prof. Dr. Didier Stricker

Hitachi, Ltd. (TSE: 6501), headquartered in Tokyo, Japan, contributes to a sustainable society with a higher quality of life by driving innovation through data and technology as the Social Innovation Business. Hitachi is focused on strengthening its contribution to the environment, the resilience of business and social infrastructure, and comprehensive programs to enhance security and safety. Hitachi resolves the issues faced by customers and society across six domains – IT, energy, mobility, industry, smart life, and automotive systems – through its proprietary Lumada solutions. The company’s consolidated revenues for fiscal year 2020 (ended March 31, 2021) totaled 8,729.1 billion yen ($78.6 billion), with 871 consolidated subsidiaries and approximately 350,000 employees worldwide. Hitachi is a Principal Partner of COP26, playing a leading role in the efforts to achieve a net-zero society and become a climate change innovator. Hitachi strives to achieve carbon neutrality at all its business sites by fiscal year 2030 and across the company’s entire value chain by fiscal year 2050. For more information on Hitachi, please visit the company’s website at https://www.hitachi.com.

Medica 2021: Better posture at the workplace thanks to new sensor technology

Whether pain in the back, shoulders or knees: incorrect posture at the workplace can have consequences. A sensor system developed by researchers at the German Research Center for Artificial Intelligence (DFKI) and TU Kaiserslautern might help. Sensors on the arms, legs and back, for example, detect movement sequences, and software evaluates the captured data. The system gives the user direct feedback via a smartwatch so that they can correct their movement or posture. The sensors could be integrated into work clothes and shoes. The researchers presented this technology at the Medica medical technology trade fair, held from November 15 to 18, 2021, at the Rhineland-Palatinate research stand (hall 3, stand E80).

Assembling components in a bent posture, regularly lifting heavy crates onto shelves, or quickly writing an e-mail to a colleague at the computer: during work, most people do not pay attention to an ergonomically sensible posture or gentle movement sequences. The result can be back pain that occurs several times a month or week and develops into chronic pain over time. Incorrect posture can also lead to permanent pain in the hips, neck or knees.

A technology currently being developed by a research team at DFKI and Technische Universität Kaiserslautern (TUK) could provide a remedy in the future. It uses sensors that are simply attached to different parts of the body, such as the arms, spine and legs. “Among other things, they measure accelerations and so-called angular velocities. The data obtained is then processed by our software,” says Markus Miezal from the wearHEALTH working group at TUK. On this basis, the software calculates motion parameters such as the joint angles at the arm and knee or the degree of flexion or twisting of the spine. “The technology immediately recognizes if a movement is performed incorrectly or if an incorrect posture is adopted,” continues his colleague Mathias Musahl from the Augmented Vision/Extended Reality research department at DFKI.
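To illustrate the kind of processing involved, the following minimal Python sketch fuses the angular velocity from a gyroscope with the gravity direction from an accelerometer using a complementary filter, a standard technique for estimating a tilt angle without drift. The sensor values, sampling rate, and filter constant are placeholders, not the actual wearHEALTH software.

```python
# Minimal sketch: estimating a single joint (tilt) angle from one IMU
# with a complementary filter. Illustrative only; not the BIONIC software.
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyroscope rate (rad/s) and accelerometer (m/s^2) into a tilt angle.

    samples: iterable of (gyro_rate, accel_x, accel_z) tuples.
    dt: sampling interval in seconds (100 Hz assumed here).
    alpha: weight of the gyro integration vs. the accelerometer reference.
    """
    angle = 0.0
    for gyro_rate, accel_x, accel_z in samples:
        # The gravity direction gives an absolute but noisy angle reference.
        accel_angle = math.atan2(accel_x, accel_z)
        # Gyro integration is smooth but drifts; blend the two estimates.
        angle = alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
        yield angle

# Example: a motionless sensor tilted roughly 5.7 degrees converges there.
readings = [(0.0, 0.98, 9.76)] * 500
for angle in complementary_filter(readings):
    pass
print(f"estimated tilt: {math.degrees(angle):.1f} deg")
```

Blending a drifting but smooth gyro integral with a noisy but absolute gravity reference is the basic trick behind many wearable motion-tracking pipelines; production systems typically use full-body kinematic models on top of it.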

The smartwatch is designed to inform users directly so that they can correct their movement or posture. Among other things, the researchers plan to integrate the sensors into work clothing and shoes. This technology is interesting for industrial companies, for example, but it can also help people pay more attention to their own bodies in everyday office work at a desk.

All of this is part of the BIONIC project, which is funded by the European Union. BIONIC stands for “Personalized Body Sensor Networks with Built-In Intelligence for Real-Time Risk Assessment and Coaching of Ageing workers, in all types of working and living environments”. It is coordinated by Professor Didier Stricker, head of the Augmented Vision/Extended Reality research area at DFKI. The aim is to develop a sensor system with which incorrect posture and other stresses at the workplace can be reduced.

In addition to DFKI and TUK, the following partners are involved in the project: the Federal Institute for Occupational Safety and Health (BAuA) in Dortmund, the Instituto de Biomecánica de Valencia in Spain, the Fundación Laboral de la Construcción, also in Spain, the Roessingh Research and Development Centre at the University of Twente in the Netherlands, the Systems Security Lab at the University of Piraeus in Greece, Interactive Wear GmbH in Munich, Hypercliq IKE in Greece, ACCIONA Construcción S.A. in Spain, and Rolls-Royce Power Systems AG in Friedrichshafen.

Further information:
Website BIONIC
Video

Contact: Markus Miezal, Dipl.-Ing. Mathias Musahl

Related news: 02/26/2019 Launch of new EU project – “BIONIC,” an intelligent sensor network designed to reduce the physical demands at the workplace

DFKI AV – Stellantis Collaboration on Radar-Camera Fusion – 2 publications

DFKI Augmented Vision has been working with Stellantis on radar-camera fusion for automotive object detection using deep learning since 2020. The collaboration has already led to two publications: one at ICCV 2021 (International Conference on Computer Vision – ERCVAD Workshop on “Embedded and Real-World Computer Vision in Autonomous Driving”) and one at WACV 2022 (Winter Conference on Applications of Computer Vision).

The two publications are:

1. Deployment of Deep Neural Networks for Object Detection on Edge AI Devices with Runtime Optimization
Proceedings of the IEEE International Conference on Computer Vision Workshops – ERCVAD Workshop on Embedded and Real-World Computer Vision in Autonomous Driving, 2021

Lukas Stefan Stäcker, Juncong Fei, Philipp Heidenreich, Frank Bonarens, Jason Rambach, Didier Stricker, Christoph Stiller

This paper discusses the optimization of neural-network-based algorithms for object detection from camera, radar, or lidar data so that they can be deployed on an embedded system in a vehicle.
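As a hedged illustration of one common step in such a deployment pipeline, the sketch below exports a PyTorch backbone to ONNX so that a device-specific runtime can optimize it further. The model and input resolution are placeholders, not the networks evaluated in the paper.

```python
# Minimal sketch: exporting a PyTorch model to ONNX for edge deployment.
# Illustrative only; the paper's actual models and toolchain may differ.
import torch
import torchvision

# Placeholder backbone; a real pipeline would export the trained detector.
model = torchvision.models.resnet18(weights=None).eval()

# Fixed input resolution chosen here for illustration (batch 1, 3x224x224).
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "detector_backbone.onnx",
    input_names=["image"],
    output_names=["features"],
    opset_version=13,
)
# The resulting .onnx file can then be optimized and quantized by an
# embedded runtime (e.g., TensorRT or ONNX Runtime) on the target device.
```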

2. Fusion Point Pruning for Optimized 2D Object Detection with Radar-Camera Fusion
Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2022

Lukas Stefan Stäcker, Juncong Fei, Philipp Heidenreich, Frank Bonarens, Jason Rambach, Didier Stricker, Christoph Stiller

This paper introduces fusion point pruning, a new method to optimize the selection of fusion points within the deep learning network architecture for radar-camera fusion.
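To illustrate what a fusion point is, the toy PyTorch sketch below fuses camera and radar feature maps at one candidate layer through a learnable gate; pruning would score and remove such candidates, although the gate used here is only an illustrative stand-in, not the paper's selection criterion.

```python
# Toy sketch of a candidate radar-camera fusion point (illustrative only;
# not the pruning criterion or architecture from the paper).
import torch
import torch.nn as nn

class GatedFusionPoint(nn.Module):
    """Fuses camera and radar feature maps; the learnable gate indicates
    how much the fused branch is used, a possible signal for pruning."""
    def __init__(self, cam_ch, radar_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(cam_ch + radar_ch, out_ch, kernel_size=1)
        self.gate = nn.Parameter(torch.zeros(1))  # sigmoid(0)=0.5 at init

    def forward(self, cam_feat, radar_feat):
        fused = self.proj(torch.cat([cam_feat, radar_feat], dim=1))
        g = torch.sigmoid(self.gate)
        # Blend fused features with the camera-only path.
        return g * fused + (1.0 - g) * cam_feat

# Example: fuse 64-channel camera and 16-channel radar maps at one layer.
fusion = GatedFusionPoint(cam_ch=64, radar_ch=16, out_ch=64)
cam = torch.randn(1, 64, 32, 32)
radar = torch.randn(1, 16, 32, 32)
out = fusion(cam, radar)  # shape: (1, 64, 32, 32)
```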

Please view the abstract here: Fusion Point Pruning for Optimized 2D Object Detection with Radar-Camera Fusion (dfki.de)

Contact: Dr. Jason Rambach

GreifbAR project – Tangible reality – Interacting with real tools in mixed-reality worlds

On October 1, 2021, the research project GreifbAR started under the leadership of DFKI (Augmented Reality research department). The goal of the GreifbAR project is to make mixed-reality (MR) worlds, including virtual reality (VR) and augmented reality (AR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are of paramount importance for performing precise tasks in many fields, but the capture of hand-object interaction in current MR systems is entirely inadequate. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR removes this limitation by introducing a capture system that detects both the full hand pose, including the hand surface, and the object pose when users interact with real objects or tools. This capture system will be integrated into a mixed-reality training simulator that will be demonstrated in two relevant use cases: industrial assembly and surgical skills training. Usability and applicability, as well as the added value for training situations, will be thoroughly analyzed through user studies.
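Since the capture system must recover the pose of real objects and tools, here is a minimal, hypothetical Python sketch of one classical building block for that task: estimating a rigid object's 6-DoF pose from 2D-3D point correspondences with OpenCV's PnP solver. The model points, detections, and camera intrinsics are placeholders, not GreifbAR's actual pipeline.

```python
# Minimal sketch: recovering an object's 6-DoF pose from 2D-3D point
# correspondences with PnP. Illustrative only; not the GreifbAR system.
import numpy as np
import cv2

# Placeholder 3D model points of an object (e.g., a tool), in meters.
object_points = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
    [0.0, 0.0, 0.1], [0.1, 0.1, 0.0], [0.1, 0.0, 0.1],
], dtype=np.float64)

# Their detected 2D projections in pixels (in a real pipeline these would
# come from a keypoint detector; here they match an object ~0.8 m away).
image_points = np.array([
    [320.0, 240.0], [420.0, 240.0], [320.0, 340.0],
    [320.0, 240.0], [420.0, 340.0], [409.0, 240.0],
], dtype=np.float64)

# Placeholder pinhole intrinsics (fx = fy = 800, principal point 320/240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the object pose
    print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```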

Funding agency

Bundesministerium für Bildung und Forschung (BMBF)

Grant number

16SV8732

Project duration

October 1, 2021 – September 30, 2023

Consortium coordination

Deutsches Forschungszentrum für Künstliche Intelligenz GmbH

Project partners

  • DFKI – Augmented Vision research department
  • NMY – Mixed Reality Communication GmbH
  • Charité – Universitätsmedizin Berlin
  • University of Passau – Chair of Psychology with a Focus on Human-Machine Interaction

Funding volume

€1,179,494 (total), €523,688 (DFKI)

Contact: Dr. Jason Rambach