Tewodros Amberbir Habtegebrial honored with Google PhD Fellowship

Mr. Habtegebrial is a PhD student at the Augmented Vision research department of the German Research Center for Artificial Intelligence (DFKI) and at the lab of the same name at the Technical University of Kaiserslautern (TUK). He was awarded the Google PhD Fellowship for his exceptional and innovative research in the field of “Machine Perception”. The fellowship is endowed with 80,000 US dollars, and Google also provides each fellow with a research mentor.

Professor Didier Stricker, Tewodros’ PhD supervisor and head of the respective research areas at TUK and DFKI, commented on the award for his PhD student: “I am very pleased that Tewodros received a PhD Fellowship from Google. He earned the honor through his outstanding achievements in his research work in Machine Perception and Image Synthesis.”

As part of his PhD studies, Mr. Habtegebrial has been working on Image-Based Rendering (IBR). Recently, he developed a technique that enables neural networks to render realistic novel views of a scene given a single 2D semantic map. The approach was published together with Google and NVIDIA at the premier conference NeurIPS 2020. In collaboration with researchers at DFKI and Google Research, he is working on spherical light-field interpolation and realistic modelling of reflective surfaces in IBR. This enables new applications in the field of realistic virtual reality (VR) and telepresence. In addition to his PhD topic, he has co-authored several articles on Optical Character Recognition (OCR) for Amharic, the official language of Ethiopia.

Further information:
https://research.google/outreach/phd-fellowship/recipients/
https://www.dfki.de/en/web/news/google-phd-fellowship
https://www.uni-kl.de/pr-marketing/news/news/tewodros-amberbir-habtegebrial-mit-google-phd-fellowship-ausgezeichnet

Hitachi presents research with the German Research Center for Artificial Intelligence (DFKI)

Hitachi and DFKI have been collaborating on various research projects for many years. In a new video, Hitachi now presents current joint research in the field of occupational safety together with DFKI, the wearHEALTH group at the Technical University of Kaiserslautern (TUK), Xenoma Inc. and sci-track GmbH, a joint spin-off of DFKI and TUK.

© Hitachi

The partners have jointly developed wearable AI technology that supports monitoring workers’ physical workload and capturing workflows, which can then be optimized in terms of efficiency, occupational safety and health. Sensors are loosely integrated into normal working clothes to measure the pose and movements of the body segments. A new approach to handling cloth-induced artefacts allows full wearing comfort together with high capturing accuracy and reliability.
 
Hitachi and DFKI will use the new solution to support workers and prevent dangerous poses, creating a more efficient and safe working environment while preserving the full wearing comfort of everyday clothing.
 
Hitachi is a Principal Partner of the 2021 UN Climate Change Conference, known internationally as COP26, where it will present a video of its joint collaboration with DFKI, among other projects.
 
Further information:
Solution to visualize workers’ loads – Hitachi – YouTube
https://www.dfki.de/en/web/news/hitachi

Contact:  Prof. Dr. Didier Stricker

Hitachi, Ltd. (TSE: 6501), headquartered in Tokyo, Japan, contributes to a sustainable society with a higher quality of life by driving innovation through data and technology as the Social Innovation Business. Hitachi is focused on strengthening its contribution to the Environment, the Resilience of business and social infrastructure as well as comprehensive programs to enhance Security & Safety. Hitachi resolves the issues faced by customers and society across six domains: IT, Energy, Mobility, Industry, Smart Life and Automotive Systems through its proprietary Lumada solutions. The company’s consolidated revenues for fiscal year 2020 (ended March 31, 2021) totaled 8,729.1 billion yen ($78.6 billion), with 871 consolidated subsidiaries and approximately 350,000 employees worldwide. Hitachi is a Principal Partner of COP26, playing a leading role in the efforts to achieve a Net Zero society and become a climate change innovator. Hitachi strives to achieve carbon neutrality at all its business sites by fiscal year 2030 and across the company’s entire value chain by fiscal year 2050. For more information on Hitachi, please visit the company’s website at https://www.hitachi.com.

Medica 2021: Better posture at the workplace thanks to new sensor technology

Whether pain in the back, shoulders or knees: incorrect posture in the workplace can have consequences. A sensor system developed by researchers at the German Research Center for Artificial Intelligence (DFKI) and TU Kaiserslautern might help. Sensors on the arms, legs and back, for example, detect movement sequences, and software evaluates the data obtained. The system provides users with direct feedback via a smartwatch so that they can correct their movement or posture. The sensors could be installed in working clothes and shoes. The researchers presented this technology at the medical technology trade fair Medica, held from November 15th to 18th, 2021, at the Rhineland-Palatinate research stand (hall 3, stand E80).

Assembling components in a bent posture, regularly putting away heavy crates on shelves or quickly writing an e-mail to a colleague on the computer – during work most people do not pay attention to an ergonomically sensible posture or a gentle sequence of movements. This can result in back pain that may well occur several times a month or week and develop into chronic pain over time. However, incorrect posture can also lead to permanent pain in the hips, neck or knees.

A technology currently being developed by a research team at DFKI and the Technical University of Kaiserslautern (TUK) can provide a remedy in the future. It uses sensors that are simply attached to different parts of the body, such as the arms, spine and legs. “Among other things, they measure accelerations and so-called angular velocities. The data obtained is then processed by our software,” says Markus Miezal from the wearHEALTH working group at TUK. On this basis, the software calculates motion parameters such as joint angles at the arm and knee or the degree of flexion or twisting of the spine. “The technology immediately recognizes if a movement is performed incorrectly or if an incorrect posture is adopted,” continues his colleague Mathias Musahl from the Augmented Vision/Extended Reality research department at DFKI.
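As a rough illustration of the kind of computation involved (a sketch only, not the project’s actual software; the segment names and values are hypothetical), the angle at a joint can be derived from the orientations of the two adjacent body segments, here represented as unit quaternions:

```python
import math

def quat_conjugate(q):
    """Conjugate of a unit quaternion (w, x, y, z) = its inverse rotation."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def joint_angle_deg(q_upper, q_lower):
    """Rotation angle between two segment orientations, in degrees."""
    # Relative rotation from the upper to the lower segment
    w = quat_multiply(quat_conjugate(q_upper), q_lower)[0]
    w = min(1.0, abs(w))  # clamp for numerical safety
    return math.degrees(2.0 * math.acos(w))

# Hypothetical example: thigh aligned with the world frame,
# shank rotated 90 degrees about the x-axis (a bent knee).
thigh = (1.0, 0.0, 0.0, 0.0)
shank = (math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0)
print(round(joint_angle_deg(thigh, shank)))  # 90
```

In practice the segment orientations would come from a sensor-fusion filter over the accelerometer and gyroscope data mentioned above; the sketch only shows the final angle computation.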

The smartwatch is designed to inform users directly so that they can correct their movement or posture. Among other things, the researchers plan to integrate the sensors into work clothing and shoes. This technology is interesting for companies in industry, for example, but it can also help people pay more attention to their own bodies in everyday office life at a desk.

All of this is part of the BIONIC project, which is funded by the European Union. BIONIC stands for “Personalized Body Sensor Networks with Built-In Intelligence for Real-Time Risk Assessment and Coaching of Ageing workers, in all types of working and living environments”. It is coordinated by Professor Didier Stricker, head of the Augmented Vision/Extended Reality research area at DFKI. The aim is to develop a sensor system with which incorrect posture and other stresses at the workplace can be reduced.

In addition to the DFKI and the TUK, the following are involved in the project: the Federal Institute for Occupational Safety and Health (BAuA) in Dortmund, the Spanish Instituto de Biomecánica de Valencia, the Fundación Laboral de la Construcción, also in Spain, the Roessingh Research and Development Centre at the University of Twente in the Netherlands, the Systems Security Lab at the Greek University of Piraeus, Interactive Wear GmbH in Munich, Hypercliq IKE in Greece, ACCIONA Construcción S.A. in Spain and Rolls-Royce Power Systems AG in Friedrichshafen.

Further information:
Website BIONIC
Video

Contact: Markus Miezal, Dipl.-Ing. Mathias Musahl

Related news: 02/26/2019 Launch of new EU project –“BIONIC,” an intelligent sensor network designed to reduce the physical demands at the workplace

DFKI AV – Stellantis Collaboration on Radar-Camera Fusion – 2 publications

DFKI Augmented Vision has been working with Stellantis on radar-camera fusion for automotive object detection using deep learning since 2020. The collaboration has already led to two publications, at ICCV 2021 (International Conference on Computer Vision – ERCVAD Workshop on “Embedded and Real-World Computer Vision in Autonomous Driving”) and WACV 2022 (Winter Conference on Applications of Computer Vision).

The two publications are:

1. Deployment of Deep Neural Networks for Object Detection on Edge AI Devices with Runtime Optimization
Proceedings of the IEEE International Conference on Computer Vision Workshops – ERCVAD Workshop on Embedded and Real-World Computer Vision in Autonomous Driving

Lukas Stefan Stäcker, Juncong Fei, Philipp Heidenreich, Frank Bonarens, Jason Rambach, Didier Stricker, Christoph Stiller

This paper discusses the optimization of neural network based algorithms for object detection based on camera, radar, or lidar data in order to deploy them on an embedded system on a vehicle.

2. Fusion Point Pruning for Optimized 2D Object Detection with Radar-Camera Fusion
Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2022

Lukas Stefan Stäcker, Juncong Fei, Philipp Heidenreich, Frank Bonarens, Jason Rambach, Didier Stricker, Christoph Stiller

This paper introduces fusion point pruning, a new method to optimize the selection of fusion points within the deep learning network architecture for radar-camera fusion.

Please view the abstract here: Fusion Point Pruning for Optimized 2D Object Detection with Radar-Camera Fusion (dfki.de)
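To illustrate the general idea behind pruning fusion points (a toy sketch only, not the algorithm from the paper; the layer names and scores below are hypothetical), candidate fusion locations in a network can be ranked by a quality score and only the best ones kept:

```python
def prune_fusion_points(candidates, score_fn, keep=2):
    """Keep the `keep` best-scoring candidate fusion points."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:keep]

# Hypothetical validation scores per candidate fusion layer
val_map = {"backbone_c3": 0.61, "backbone_c4": 0.72,
           "backbone_c5": 0.69, "fpn_p3": 0.55}

best = prune_fusion_points(val_map, val_map.get, keep=2)
print(best)  # ['backbone_c4', 'backbone_c5']
```

In the actual work the selection operates inside a trained radar-camera detection network; the sketch only shows the rank-and-keep pattern of choosing where to fuse.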

Contact: Dr. Jason Rambach

GreifbAR Project – Tangible Reality – Interaction with Real Tools in Mixed-Reality Worlds

The research project GreifbAR started on October 1st, 2021, under the leadership of DFKI (Augmented Reality research department). The goal of GreifbAR is to make mixed-reality (MR) worlds, including virtual reality (VR) and augmented reality (AR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are of utmost importance for performing precise tasks in many fields, but the capture of hand-object interaction in current MR systems is entirely insufficient. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR solves this limitation by introducing a capture system that detects both the full hand pose, including the hand surface, and the object pose when users interact with real objects or tools. This capture system will be integrated into a mixed-reality training simulator that will be demonstrated in two relevant use cases: industrial assembly and surgical skills training. Usability and applicability, as well as the added value for training situations, will be thoroughly analyzed through user studies.

Funding body

Federal Ministry of Education and Research (BMBF)

Grant number

16SV8732

Project duration

01.10.2021 – 30.09.2023

Consortium coordination

Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)

Project partners

  • DFKI – Augmented Reality research department
  • NMY – Mixed Reality Communication GmbH
  • Charité – Universitätsmedizin Berlin
  • University of Passau – Chair of Psychology with a focus on Human-Machine Interaction

Funding volume

€1,179,494 (total), €523,688 (DFKI)

Contact: Dr. Jason Rambach

VIZTA Project Time-of-Flight Camera Datasets Released

As part of the research activities of DFKI Augmented Vision in the VIZTA project (https://www.vizta-ecsel.eu/), two datasets have been publicly released and are available for download. The TIMo dataset is an indoor building monitoring dataset for person detection, person counting and anomaly detection. The TICaM dataset is an automotive in-cabin monitoring dataset with a wide field of view for person detection, segmentation and activity recognition. Both real and synthetic images are provided, which also allows for benchmarking of transfer learning algorithms. Both datasets are available here: https://vizta-tof.kl.dfki.de/. The publications describing the datasets in detail are available as preprints:

TICaM: https://arxiv.org/pdf/2103.11719.pdf

TIMo: https://arxiv.org/pdf/2108.12196.pdf

Video: https://www.youtube.com/watch?v=xWCor9obttA

Contacts: Dr. Jason Rambach, Dr. Bruno Mirbach

Three papers accepted at ISMAR 2021


We are happy to announce that three papers from our department have been accepted at the ISMAR 2021 conference.

ISMAR, the International Symposium on Mixed and Augmented Reality, is the leading international academic conference in the field of Augmented Reality and Mixed Reality. The symposium will be held as a hybrid conference from October 4th to 8th, 2021, with its main location in the city of Bari, Italy.

The accepted papers of our department are the following:

Visual SLAM with Graph-Cut Optimized Multi-Plane Reconstruction
Fangwen Shu, Yaxu Xie, Jason Raphael Rambach, Alain Pagani, Didier Stricker

Comparing Head and AR Glasses Pose Estimation
Ahmet Firintepe, Oussema Dhaouadi, Alain Pagani, Didier Stricker

A Study of Human-Machine Teaming For Single Pilot Operation with Augmented Reality
Nareg Minaskan Karabid, Alain Pagani, Charles-Alban Dormoy, Jean-Marc Andre, Didier Stricker

We congratulate all authors for their publication!

Contact: Dr. Alain Pagani



XR for nature and environment survey

On July 29th, 2021, Dr. Jason Rambach presented the survey paper “A Survey on Applications of Augmented, Mixed and Virtual Reality for Nature and Environment” at the 23rd International Conference on Human-Computer Interaction (HCI International). The article is the result of a collaboration between DFKI, the Worms University of Applied Sciences and the University of Kaiserslautern.

Abstract: Augmented, virtual and mixed reality (AR/VR/MR) are technologies of great potential due to the engaging and enriching experiences they are capable of providing. However, the possibilities that AR/VR/MR offer in the area of environmental applications are not yet widely explored. In this paper, we present the outcome of a survey meant to discover and classify existing AR/VR/MR applications that can benefit the environment or increase awareness on environmental issues. We performed an exhaustive search over several online publication access platforms and past proceedings of major conferences in the fields of AR/VR/MR. Identified relevant papers were filtered based on novelty, technical soundness, impact and topic relevance, and classified into different categories. Referring to the selected papers, we discuss how the applications of each category are contributing to environmental protection and awareness. We further analyze these approaches as well as possible future directions in the scope of existing and upcoming AR/VR/MR enabling technologies.

Authors: Jason Rambach, Gergana Lilligreen, Alexander Schäfer, Ramya Bankanal, Alexander Wiebel, Didier Stricker

Paper: https://av.dfki.de/publications/a-survey-on-applications-of-augmented-mixed-and-virtual-reality-for-nature-and-environment/

Contact: Jason.Rambach@dfki.de

Press reports on our project “KI-Rebschnitt”
Advancing sports analytics to coach athletes through Deep Learning research

Recent advancements in deep learning have led to interesting new applications such as analyzing human motion and activities in recorded videos. The analysis ranges from simple motions, such as humans walking or performing exercises, to complex motions such as playing sports.

An athlete’s performance can easily be captured with a fixed camera for sports like tennis, badminton or diving. The wide availability of low-cost cameras in handheld devices has made it commonplace to record videos and analyze an athlete’s performance. Although sports trainers can provide visual feedback by playing back recorded videos, it is still hard to measure and monitor an athlete’s improvement. Moreover, the manual analysis of the obtained footage is a time-consuming task that involves isolating actions of interest and categorizing them using domain-specific knowledge. The automatic interpretation of performance parameters in sports has therefore attracted keen interest.

Competitive diving is a well-recognized Olympic aquatic sport in which a person dives from a platform or a springboard and performs different classes of acrobatics before descending into the water. These classes are standardized by the international governing body, the Fédération Internationale de Natation (FINA). The differences between the acrobatics performed in the various diving classes are very subtle, and they arise within the short interval that starts with the diver standing on the platform or springboard and ends the moment he or she enters the water. This is a challenging task to model, especially because of the rapid changes involved, and it requires an understanding of long-term human dynamics. Further, the model must be sensitive to subtle changes in body pose over a large number of frames to determine the correct classification.

In order to automate this kind of task, three challenging sub-problems must be solved: 1) temporally cropping events/actions of interest from continuous video; 2) tracking the person of interest even though other divers and bystanders may be in view; and 3) classifying the events/actions of interest.
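The three sub-problems above can be thought of as stages of a single pipeline. A minimal sketch with placeholder stages (all function names and the toy data are hypothetical, not the system under development):

```python
def temporal_crop(frames, is_action):
    """Stage 1: keep only the frames in which an action is detected."""
    return [f for f in frames if is_action(f)]

def track_person(frames, person_id):
    """Stage 2: keep detections belonging to the person of interest."""
    return [f for f in frames if f["id"] == person_id]

def classify(frames):
    """Stage 3: toy classifier -- label by the number of action frames."""
    return "somersault" if len(frames) >= 3 else "plain_dive"

# Toy video: each frame carries a detected person id and an action flag
video = [{"id": 1, "action": True}, {"id": 2, "action": True},
         {"id": 1, "action": True}, {"id": 1, "action": False},
         {"id": 1, "action": True}]

cropped = temporal_crop(video, lambda f: f["action"])
tracked = track_person(cropped, person_id=1)
print(classify(tracked))  # somersault
```

In a real system each stage would be a learned model (action detector, tracker, classifier); the sketch only illustrates how their outputs feed into one another.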

We are developing a solution in cooperation with the Institut für Angewandte Trainingswissenschaft (IAT) in Leipzig to tackle these three sub-problems. We are working towards a complete parameter-tracking solution based on monocular markerless human body motion tracking, using only a mobile device (tablet or phone) as a training support tool for the overall diving action analysis. The proposed techniques can be generalized to video footage recorded from other sports.

Contact person: Dr. Bertram Taetz, Pramod Murthy