News Archive

Paper accepted at CVPR 2020

Our paper “HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation from a Single Depth Map” has been accepted for publication at the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020 (CVPR 2020), which will take place from June 14th to 19th, 2020 in Seattle, Washington, USA. CVPR is widely regarded as the premier conference in the field of computer vision. Our paper was one of 1,470 accepted from 6,656 submissions (acceptance rate: 22%).

Abstract 
We propose a novel architecture with 3D convolutions for simultaneous 3D hand shape and pose estimation, trained in a weakly-supervised manner. The input to our architecture is a 3D voxelized depth map. For shape estimation, our architecture produces two different hand shape representations. The first is the 3D voxelized grid of the shape, which is accurate but does not preserve the mesh topology and the number of mesh vertices. The second representation is the 3D hand surface, which is less accurate but does not suffer from the limitations of the first representation. To combine the advantages of these two representations, we register the hand surface to the voxelized hand shape. In extensive experiments, the proposed approach improves over the state of the art for hand shape estimation on the SynHand5M dataset by 47.8%. Moreover, our 3D data augmentation on voxelized depth maps allows us to further improve the accuracy of 3D hand pose estimation on real datasets. Our method produces visually more reasonable and realistic hand shapes on the NYU and BigHand2.2M datasets compared to existing approaches.
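As a rough illustration of the voxelized input described in the abstract, a depth map can be back-projected into 3D points and binned into a binary occupancy grid. The sketch below is our own, not the paper’s code; the camera intrinsics, depth range (in mm) and 64³ grid resolution are illustrative assumptions.

```python
# Minimal sketch: convert a depth map into a binary 3D voxel grid, as used as
# network input in voxel-based pipelines such as HandVoxNet. All parameter
# values are illustrative assumptions, not the paper's configuration.
import numpy as np

def voxelize_depth(depth, fx=475.0, fy=475.0, cx=160.0, cy=120.0,
                   grid=64, z_near=200.0, z_far=800.0):
    """Back-project a depth map (mm) to 3D points and bin them into a
    grid x grid x grid binary occupancy volume."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = (depth > z_near) & (depth < z_far)
    z = depth[valid]
    x = (us[valid] - cx) * z / fx   # pinhole back-projection
    y = (vs[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    # Normalize points to [0, 1) inside their bounding box, then quantize.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    idx = ((pts - lo) / (hi - lo + 1e-6) * grid).astype(int)
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol
```

A TSDF or point-density grid could replace the binary occupancy; the binary variant simply keeps the example minimal.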

Please find our paper here.

Authors
Muhammad Jameel Nawaz Malik, Ibrahim Abdelaziz, Ahmed Elhayek, Soshi Shimada, Sk Aziz Ali, Vladislav Golyanik, Christian Theobalt, Didier Stricker

Please also check out our video on YouTube.

Please contact Didier Stricker for more information.

Three PhDs successfully finished in 2019

We are very happy to announce that three of our PhD students successfully defended their PhD theses in 2019!

Mr. Aditya Tewari defended his thesis with the title “Prior-Knowledge Addition to Spatial and Temporal Classification Models with Demonstration on Hand Shape and Gesture Classification” on October 25th in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Paul Lukowicz (TU Kaiserslautern and DFKI) and Prof. Dr. Dr. h. c. Dieter Rombach (Fraunhofer IESE, Kaiserslautern).

Mr. Aditya Tewari during his PhD defense on October 25th, 2019

Mr. Vladislav Golyanik defended his thesis with the title “Robust Methods for Dense Monocular Non-Rigid 3D Reconstruction and Alignment of Point Clouds” on November 20th in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Antonio Agudo (Universitat Politècnica de Catalunya, Spain) and Prof. Dr. Reinhard Koch (Christian-Albrechts-Universität zu Kiel).

Mr. Vladislav Golyanik during his PhD defense on November 20th, 2019

Mr. Christian Bailer defended his thesis with the title “New Data Based Matching Strategies for Visual Motion Estimation” on November 22nd in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Michael Felsberg (Linköping University, Sweden) and Dr. Margret Keuper (Max-Planck-Institut für Informatik, Saarbrücken).

Mr. Christian Bailer during his PhD defense on November 22nd, 2019

All three left our Augmented Vision department shortly after their defense to pursue careers outside of DFKI.

Two Papers at VISAPP 2020

Our team is presenting two papers at VISAPP 2020 (15th International Conference on Computer Vision Theory and Applications), taking place from February 27th to 29th in Valletta, Malta.

The two papers are:

Iterative Color Equalization for Increased Applicability of Structured Light Reconstruction
Torben Fetzer, Gerd Reis, Didier Stricker

AutoPOSE: Large-Scale Automotive Driver Head Pose and Gaze Dataset with Deep Head Pose Baseline
Mohamed Selim, Ahmet Firintepe, Alain Pagani, Didier Stricker

The AutoPOSE dataset can be downloaded from the website at autopose.dfki.de.

Paper published in IJCV

The International Journal of Computer Vision (IJCV) is considered one of the top journals in Computer Vision. It details the science and engineering of this rapidly growing field. Regular articles present major technical advances of broad general interest. Survey articles offer critical reviews of the state of the art and/or tutorial presentations of pertinent topics.

We are proud to announce that our paper “SceneFlowFields++: Multi-frame Matching, Visibility Prediction, and Robust Interpolation for Scene Flow Estimation” has been published in the IJCV (for more information click here). It is an extension of our earlier WACV paper “SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences“.

IFA 2019: Intelligent sensor technology for a better posture at the workplace

At the Internationale Funkausstellung (IFA 2019) – the world’s leading trade fair for consumer electronics and home appliances – held at the beginning of September in Berlin, researchers from the TU Kaiserslautern (TUK) and the DFKI research department Augmented Vision presented a sensor system that reduces incorrect posture at the workplace. Sensors attached to different parts of the body, such as the arms, spine and legs, record the movement sequences. Software evaluates the data and calculates movement parameters such as joint angles at the arm and knee or the degree of flexion or twisting of the spine. The system immediately detects if a movement is carried out incorrectly or an incorrect posture is adopted and gives the user direct feedback via a smartwatch, so that they can correct their movement or posture. The sensors could be integrated into work clothing and shoes.

The response at the IFA was great and the project prototype was received very positively throughout. The press was also represented in large numbers. Please find the links to the articles below.
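The joint-angle calculation mentioned above can be illustrated with a small sketch: given the direction vectors of two adjacent body segments (as body-worn orientation sensors might report them), the enclosed angle follows from the dot product. The code and the alert thresholds below are hypothetical, not the project’s software.

```python
# Hypothetical sketch of computing a joint angle (e.g. at the elbow) from the
# direction vectors of two adjacent body segments, plus a simple threshold
# check of the kind that could trigger smartwatch feedback.
import math

def joint_angle_deg(seg_a, seg_b):
    """Angle in degrees between two 3D segment direction vectors."""
    dot = sum(a * b for a, b in zip(seg_a, seg_b))
    na = math.sqrt(sum(a * a for a in seg_a))
    nb = math.sqrt(sum(b * b for b in seg_b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos_t))

def posture_alert(angle, lo=60.0, hi=160.0):
    """Flag the posture if the joint angle leaves an acceptable range
    (the range here is an arbitrary placeholder)."""
    return not (lo <= angle <= hi)
```

In a real system the segment vectors would be derived from fused IMU orientations rather than passed in directly.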

Video coverage in German:
SWR Landesschau Aktuell Rheinlandpfalz: Video from 06.09.2019, starting 7:45h

Media coverage in German:
Rheinpfalz: Sensoren in der Arbeitskleidung
Ärztezeitung: Digital Health darf bei der IFA nicht fehlen
Elektroniknet: Neue Sensortechnik verspricht bessere Haltung am Arbeitsplatz
Esanum: Haltungsschäden mit Sensortechnik vermeiden
Industrie.de: Sensoren für eine bessere Körperhaltung
Medica: Bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Maschinenwerkzeug.de: IFA 2019: Bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Mobile-zeitgeist.de: IFA 2019: Eine bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Nachrichten-kl.de: IFA 2019: Eine bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Nachrichten.idw-online.de: IFA 2019: Eine bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Smarterworld: Sensoren gegen Haltungsschäden
Uni-kl.de: Neue Sensortechnik: Bessere Haltung am Arbeitsplatz

Media coverage in English:
Alphagalileo: IFA 2019: Intelligent sensor technology for a better posture at the workplace
Dailymail.co: Sit up straight! Smartwatch that sounds an alarm every time you slump at your desk is being developed by scientists to combat bad posture
Expressdigest: Smartwatch sounds an alarm every time you slump to correct posture
eandt.theiet: Wearables could deter slouching at work, researchers suggest
Elektroniknet: New sensor technology promises better posture at the workplace
France.timesofnews: Smartwatch sounds an alarm every time you slump to correct posture
Longroom: Smartwatch sounds an alarm every time you slump to correct posture
Nachrichten.idw-online.de: IFA 2019: Intelligent sensor technology for a better posture at the workplace
uni-kl.de: IFA 2019: Intelligent sensor technology for a better posture at the workplace
wsbuss: Smartwatch sounds an alarm every time you slump to correct posture
Newsoneplace: Smartwatch sounds an alarm every time you slump to correct posture


Paper at EuroVR Conference

Our team is presenting the paper

Augmented Reality in Physics education: Motion understanding using an Augmented Airtable
Nareg Minaskan Karabid, Jason Raphael Rambach, Alain Pagani, Didier Stricker

at the 16th EuroVR International Conference 2019 in Tallinn, Estonia on October 23rd-25th.

The paper presents part of the group’s work within the research project BeGreifen.

Two papers at ISMAR 2019

Our team is presenting two papers at the ISMAR 2019 conference in Beijing, China. ISMAR (International Symposium on Mixed and Augmented Reality) is the leading international conference in AR/VR/MR technologies. The two papers on the topics of Object Pose Estimation and SLAM are:

A Shape Completion Component for Monocular Non-Rigid SLAM
Yongzhi Su, Vladislav Golyanik, Nareg Minaskan Karabid, Sk Aziz Ali, Didier Stricker

Deep Multi-State Object Pose Estimation for Augmented Reality Assembly
Yongzhi Su, Jason Raphael Rambach, Nareg Minaskan Karabid, Paul Lesur, Alain Pagani, Didier Stricker

Kick-Off meeting of the EU project HyperCOG

Dr. Alain Pagani and Mohamed Selim (Augmented Vision department) and Ludger van Elst (Smart Data and Services department) participated in the kick-off meeting of the EU project HyperCOG on September 24th, 2019, held at IK4-LORTEK in Spain.

The HyperCOG project “Hyperconnected Architecture for High Cognitive Production Plants” addresses the full digital transformation of the process industry and cognitive process production plants through an innovative Industrial Cyber-Physical System (ICPS). It is based on commercially available advanced technologies that will enable the development of a hyper-connected network of digital nodes. The nodes capture large streams of data in real time, which, together with the high computing capabilities available nowadays, provide sensing, knowledge and cognitive reasoning to the industrial business. The project will demonstrate how data technologies embedded in a CPS platform applied in the process industry can streamline processes, achieve a step change in efficiency, sustainability and resource utilization, and act as a basis for the provision of new services.

In HyperCOG, 14 international partners have joined forces, bringing together the necessary expertise and resources to ensure the achievement of the project goals. At DFKI Kaiserslautern, the departments “Augmented Vision” and “Smart Data and Services” are involved.

The Augmented Vision department is responsible for developing smart sensing technologies based on visual perception and visual interpretation for different use cases in active industrial plants. Moreover, with a growing number of sensors and data in industrial plants, the information load might increase and hinder the monitoring task. The Augmented Vision department will therefore carry out the work on Augmented Reality (AR) to present contextualized information in a simple and direct way.

Project partners:
1. SmartFactory Kaiserslautern (GERMANY)
2. IK4-LORTEK (SPAIN)
3. Tecnalia Research and Innovation (SPAIN)
4. Estia – École Supérieure des Technologies Industrielles Avancées (FRANCE)
5. SIDENOR Aceros Especiales SL (SPAIN)
6. Çimsa Çimento Sanayi Ve Ticaret A.Ş. (TURKEY)
7. SOLVAY (FRANCE)
8. MSI – Mondragon Sistemas de Información (SPAIN)
9. U-PEC – Université Paris-Est Créteil (FRANCE)
10. CBS – Cybers Services Zrt (HUNGARY)
11. Ekodenge (TURKEY)
12. 2.0 LCA consultants (DENMARK)
13. Insights Publishers (UNITED KINGDOM).

Contact person: Dr. Alain Pagani, Mohamed Selim

Picture: The HyperCOG partners at the kick-off meeting.

Talk of Dr. Alain Pagani at the HE2Net Symposium 2019


On September 12th, 2019, Dr. Alain Pagani was invited to give a talk on the subject “User monitoring, Human-machine collaboration and Artificial Intelligence” at the 10th HE2Net Symposium.

The HE2Net Symposium, jointly organized by THALES and the Ecole Nationale Supérieure de Cognitique (ENSC) is the annual conference of the Human Engineering and Expertise Network, an association of institutions and companies addressing the topic of human factors in engineering.

The 10th HE2Net Symposium took place in Arcachon, France and was devoted to the topic of Human Factor of Artificial Intelligence. Experts of the Human Factors field and the Artificial Intelligence field could exchange views on questions related to the inclusion of AI methods in human-machine collaboration.

Dr. Alain Pagani presented recent advances on Artificial Intelligence at DFKI, including work on non-invasive human monitoring techniques developed at the Department Augmented Vision.

Photo: Sylvain Hourlier (THALES), co-organizer of the HE2Net Symposium and Alain Pagani (DFKI GmbH).

Papers at IEEE Intelligent Vehicles Conference 2019

The following papers were presented at the Intelligent Vehicles Symposium (IV) 2019 from June 9th to 12th in Paris, France:

PWOC-3D: Deep Occlusion-Aware End-to-End Scene Flow Estimation
Rohan Saxena, René Schuster, Oliver Wasenmüller, Didier Stricker

DeLiO: Decoupled LiDAR Odometry
Queens Maria Thomas, Oliver Wasenmüller, Didier Stricker

The 2019 IEEE Intelligent Vehicles Symposium (IV’19) is a premier annual technical forum sponsored by the IEEE Intelligent Transportation Systems Society (ITSS). It brings together researchers and practitioners from universities, industry, and government agencies worldwide to share and discuss the latest advances in theory and technology related to intelligent vehicles. For more information click here.

Oral Paper at CVPR 2019!

Our team members presented the following paper at the Conference on Computer Vision and Pattern Recognition 2019 (CVPR 2019), which took place from June 16th to 20th, 2019 in Long Beach, California, USA. CVPR is widely regarded as the premier conference in the field of computer vision.

SDC – Stacked Dilated Convolution: A Unified Descriptor Network for Dense Matching Tasks
René Schuster, Oliver Wasenmüller, Christian Unger, Didier Stricker

Content: The paper introduces a new design element for deep neural networks (SDC, stacked dilated convolutions) and applies it in a network for dense feature description. With the new descriptor, we were able to improve matching for stereo disparity, optical flow, and scene flow on different datasets by up to 50%.

Our paper was one of 1,294 accepted from 5,160 submissions (acceptance rate: 25.1%) and one of only 288 oral papers (oral presentation in addition to the poster session; oral rate: 5.6% of submissions, 22.3% of accepted papers). The presentation can be seen here.
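The core SDC idea can be sketched as a set of parallel convolutions with identical kernel size but different dilation rates, whose outputs are concatenated into one feature descriptor. The channel counts and dilation rates below are illustrative assumptions, not the exact configuration from the paper.

```python
# Minimal sketch of stacked dilated convolutions (SDC): parallel 3x3
# convolutions with different dilation rates applied to the same input,
# concatenated along the channel dimension. Illustrative sizes only.
import torch
import torch.nn as nn

class SDCLayer(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch=16, dilations=(1, 2, 3, 4)):
        super().__init__()
        # For a 3x3 kernel, padding == dilation keeps the spatial
        # resolution unchanged across all branches.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)
```

Because every branch preserves resolution, such layers can be stacked to grow the receptive field without downsampling, which is what makes the element attractive for dense matching.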

New European Project “BIONIC”

BIONIC is a European research project aiming to develop an unobtrusive, autonomous and privacy-preserving platform for real-time risk alerting and continuous persuasive coaching, enabling the design of workplace interventions adapted to the needs and fitness levels of a specific ageing workforce. Gamification strategies adapted to the needs and wishes of elderly workers will ensure optimal engagement for prevention and self-management of musculoskeletal health in any working/living environment.

For more Information visit https://bionic-h2020.eu.

Picture: Official project kick-off meeting at DFKI’s headquarters in Kaiserslautern, Germany.

Article in “Nature” – The AlterEgo Project

The results of our European project AlterEgo (https://av.dfki.de/projects/alterego/) have been published in the prestigious journal “Nature”. Please follow this link to read the complete article: https://www.nature.com/articles/s41598-018-35813-6

European project Co2Team takes off!

During flights, pilots have to manage difficult situations while facing increasing system complexity due to the amount and the nature of available information. The idea of Co2Team (Cognitive Collaboration for Teaming) is that a system based on artificial intelligence can provide efficient support to the pilot through cognitive computing. The main objective of the project is to propose a technological and methodological transition to more autonomous air transport based on a progressive crew reduction. Co2Team will develop a roadmap for cognitive computing to assist the pilot in future air transport. This transition will be based on an innovative bidirectional communication paradigm and an optimized shared authority (human-machine) exploiting the potential of cognitive computing (pilot monitoring, environment and situation understanding, enhanced assistance, adaptive automation).

The project partners are the department Augmented Vision at DFKI, Deutsche Lufthansa AG and the Institut Polytechnique de Bordeaux (INP Bordeaux).

The first meeting of the project took place on January 8th, 2019, in the premises of the INP Bordeaux. During the meeting, the participants could test an existing flight simulator and discuss possible developments of simulators for the project.

Contact person: Dr. Alain Pagani

Picture: The Co2Team partners at the kick-off event.

Start of the European Erasmus+ project ARinfuse

ARinfuse is a European project funded under Erasmus+, the EU’s programme to support education, training, youth and sport in Europe. The objective of ARinfuse is to support individuals in acquiring and developing basic skills and key competences in the field of geoinformatics and utility infrastructure, in order to foster employability. This objective is addressed through the development of new learning modules in which Augmented Reality technologies are merged with geoinformatics and applied within the utility infrastructure sector. The developed digital learning content and tools will be implemented in university programs as well as in vocational training programs, and will be made available as Open Educational Resources, open textbooks and Open Source Educational Software.

The Augmented Vision department will contribute to the ARinfuse project by sharing its knowledge and expertise in Augmented Reality technologies for the energy and utilities sector, gained mainly during the European project LARA. Besides DFKI, the following partners are collaborating in the project: GeoImaging Ltd (Cyprus), Novogit AB (Sweden), the Cyprus University of Technology (Cyprus), the GISIG association (Italy), the Sewerage Board of Nicosia (Cyprus), and the Flanders Environment Agency (VMM, Belgium).

On December 19th, 2018, the project was officially launched during a kick-off meeting in Nicosia, Cyprus, where the partners started to work on educational and training material and on the specification of the software modules.

Contact person: Dr. Alain Pagani

Picture: The ARinfuse partners at the kick-off event. From left to right: Alain Pagani (DFKI), Elena Valari (GeoImaging), Diofantos Hadjimitsis (CUT), Kiki Charalambus (Sewage Board Nicosia), Konstantinos Smagas (GeoImaging), Katleen Miserez (VMM), Mario Tzouvaras (CUT), Andreas Christofe (CUT), Anders Ostman (Novogit), Aristodemos Anastasiades (GeoImaging), Giorgio Saio (GISIG).

Talk of Dr. Alain Pagani on Augmented Reality for Education in Riga, Latvia

On November 27th, 2018, Dr. Alain Pagani was invited by the State Fire and Rescue Service of the Republic of Latvia to give a talk on the use of Augmented Reality for education and awareness raising at the International Conference “Societal Security in the Baltic Sea Region: Challenges and Solutions”.

The main focus of the conference, organized jointly by the State Fire and Rescue Service of Latvia (VUGD) and Riga Stradins University, in cooperation with the Permanent Secretariat of the Council of the Baltic Sea States, the Swedish Institute and the Swedish Civil Contingencies Agency, was on introducing civil safety in education and the promotion of community awareness of safety in the Baltic Sea region. Dr. Alain Pagani introduced several aspects of Augmented Reality, including recent works from the Department Augmented Vision, and presented advantages of Augmented Reality for education, training and awareness raising.

During the panel discussion on the subject “education as a core element of societal security culture”, he shared his views on the use of novel technologies in education together with Mr Martins Baltmanis, Deputy Chief of the State Fire and Rescue Service of Latvia, Ms Ruta Silina, Head of Division of Communication and International Cooperation at Riga Stradins University, and Ms Elisabeth Braw, Associate Fellow at the Royal United Services Institute for Defence and Security Studies, United Kingdom.

Eyes of Things demonstrator “AudioGuide3.0” highlighted as “Creation Innovation” by the EU Innovation Radar

The Augmented Museum Guide developed in the EU project Eyes of Things has been selected by the EU Innovation Radar as a “Creation Innovation” and is presented on the Innovation Radar website. The Innovation Radar platform builds on the information and data gathered by independent experts involved in reviewing ongoing projects funded by H2020, FP7 or CIP. Its aim is to make information about EU-funded innovations from high-quality projects visible and accessible to the public in one place: the EU’s new Innovation Radar platform.

The Museum Audio Guide 3.0 is a new type of audio guide for museums, in which the headset used for audio information is equipped with a miniature camera and an image-processing chip. The chip runs image analysis software that has been trained for a specific exhibit and is able to recognize artworks such as paintings while the user is simply passing by. The painting recognition module runs constantly but consumes little power thanks to the dedicated hardware module and the efficient implementation, so the camera-equipped headset can be used for an entire day without the need to change the batteries.

From the visitor’s perspective, this new type of audio guide feels completely natural, as audio information is provided only in the right context and without any need to operate the technology. An artificial-intelligence-based algorithm detects when the visitor might be interested in audio information and delivers a soft sound notification, informing the user that audio information is available. The visitor can then decide to play the recorded audio by pressing a single button. This technology was developed as an output of a former European project (“Eyes of Things”), and several prototypes were successfully tested at the Albertina Museum in Vienna.
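The context-aware notification behavior described above can be sketched as a small state machine: each camera frame, the recognizer proposes an artwork with a confidence score, and a notification fires only once per artwork, after the visitor has lingered on it. Everything here (function name, thresholds, dwell counting) is hypothetical, not the actual Fluxguide/DFKI implementation.

```python
# Hypothetical sketch of the audio guide's notification logic. State is a
# tuple (current_artwork, dwell_frames, already_notified_set); one call per
# camera frame returns the new state and whether to play the soft sound.
def guide_step(state, artwork_id, confidence,
               threshold=0.8, dwell_needed=3):
    current, dwell, notified = state
    # Low-confidence or empty detections reset the dwell counter.
    if artwork_id is None or confidence < threshold:
        return (None, 0, notified), False
    # Count consecutive frames on the same artwork.
    dwell = dwell + 1 if artwork_id == current else 1
    # Notify exactly once per artwork, after the dwell threshold is reached.
    notify = dwell == dwell_needed and artwork_id not in notified
    if notify:
        notified = notified | {artwork_id}
    return (artwork_id, dwell, notified), notify
```

Keeping the state explicit and per-frame like this matches an "always on" low-power device, where the recognizer output is the only input available at each step.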

The main developers of this demonstrator were the Austrian company Fluxguide and the department Augmented Vision at DFKI.

Contact person: Dr. Alain Pagani

Learning 6DoF Object Poses from Synthetic Single Channel Images

The paper

Learning 6DoF Object Poses from Synthetic Single Channel Images
by Jason Raphael Rambach, Chengbiao Deng, Alain Pagani, Didier Stricker

has been accepted at ISMAR 2018 and will be presented in a poster session. The conference will take place in Munich from October 16th to 20th, 2018.

IEEE ISMAR is the leading international academic conference in the fields of Augmented Reality and Mixed Reality. The symposium is organized and supported by the IEEE Computer Society and IEEE VGTC.

Project „Eyes of Things“ concludes with best achievements

The Augmented Vision Group of the German Research Center for Artificial Intelligence (DFKI) presented the results of the European project “Eyes of Things” (grant number 643924) during a final review in September 2018, together with seven partners from across Europe. Following the overall goal of the project, the consortium delivered a miniature, independent computer vision system based on a low-power dedicated vision processor from the company Intel Movidius, with interfaces to three different energy-efficient cameras (AMS-Awaiba NanEye, Sony, Himax). The battery-powered device has a very small form factor and is able to run computer vision tasks continuously (“always on”) for several days. The developed wireless interface allows processed data to be exchanged with companion devices (tablets, smartphones) or stored in the cloud. The versatility of the Eyes of Things device was successfully demonstrated through the development of four different applications: a doorbell surveillance application which notifies the house owner about visits on his smartphone, a therapy doll which analyses the emotion profile of young patients through vision-based emotion recognition, a multi-purpose camera which synchronizes user-defined events in a cloud-based virtual memory, and an intelligent audio guide for museums which recognizes artworks and automatically plays contextual audio comments. During the review, the partners presented the results of pilot tests for all demonstrators. The consortium received the mention “all project aims have been successfully achieved”, which is the highest possible review outcome. Two of the project’s demonstrators – the multi-purpose surveillance camera and the Augmented Museum Guide – will be developed and commercialized as user products in partnership with DFKI in the upcoming months.

Contact: Dr. Alain Pagani

 

AV group co-organized ACM CSCS’18

The Augmented Vision Group of the German Research Center for Artificial Intelligence (DFKI) co-organized the 2nd ACM Computer Science in Cars Symposium (CSCS’18). CSCS is an annual symposium aiming to build bridges between academic research on the one hand and practitioners from the automotive industry on the other. The two-day event combined a single-track program with invited keynotes, academic oral presentations, poster presentations and a panel discussion, providing a platform for industry and academia to exchange ideas and meet future challenges jointly. The focus of the 2018 symposium was Artificial Intelligence & Security for Autonomous Vehicles.

In the panel discussion, the current challenges of Artificial Intelligence for ADAS and Autonomous Vehicles were discussed with experts in this domain. Topics included, amongst others, the validation of AI systems, the required vision sensor setup and training data, user acceptance, and legal challenges. The panel was moderated by AV member Dr. Oliver Wasenmüller.

CSCS panel discussion: Dr. Oliver Wasenmüller (Team Leader Machine Vision, DFKI), Georg Kuschk (Team Leader Machine Learning, Astyx), Karl Leiss (CEO, BIT-TS), Prof. Dr. Christoph Sorge (Professor Legal Informatics, UdS), Prof. Dr. Christoph Stiller (Director Institute MRT, KIT), Dr. Shervin Raafatnia (AI Validation Engineer, Bosch).
