Papers at IEEE Intelligent Vehicles Symposium 2019

The following papers were presented at the IEEE Intelligent Vehicles Symposium (IV) 2019, held from June 9th to 12th in Paris, France:

PWOC-3D: Deep Occlusion-Aware End-to-End Scene Flow Estimation
Rohan Saxena, René Schuster, Oliver Wasenmüller, Didier Stricker

DeLiO: Decoupled LiDAR Odometry
Queens Maria Thomas, Oliver Wasenmüller, Didier Stricker

The 2019 IEEE Intelligent Vehicles Symposium (IV’19) is a premier annual technical forum sponsored by the IEEE Intelligent Transportation Systems Society (ITSS). It brings together researchers and practitioners from universities, industry, and government agencies worldwide to share and discuss the latest advances in theory and technology related to intelligent vehicles. Papers concerning all aspects of intelligent vehicles, as well as proposals for workshops and special sessions, were invited for IV’19. For more information, click here.

Oral Paper at CVPR 2019!

Our team members presented the following paper at the Conference on Computer Vision and Pattern Recognition (CVPR) 2019, the premier conference in the field of computer vision, which took place from June 16th to 20th, 2019, in Long Beach, California, USA.

SDC – Stacked Dilated Convolution: A Unified Descriptor Network for Dense Matching Tasks
René Schuster, Oliver Wasenmüller, Christian Unger, Didier Stricker

Content: The paper introduces a new design element for deep neural networks (SDC – Stacked Dilated Convolutions) and applies this element in a network for dense feature description. With the new descriptor, we were able to improve matching for stereo disparity, optical flow, and scene flow on different data sets by up to 50%.
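The core idea behind stacked dilated convolutions is to apply the same small filter in parallel at several dilation rates and concatenate the responses, so each output position carries features from multiple receptive-field sizes. The following is a minimal illustrative sketch of that principle only: a toy 1-D version with a fixed hand-chosen filter, not the actual SDC network (which uses learned 2-D convolutions inside a descriptor CNN); the function names and the dilation rates (1, 2, 4) are our own assumptions for illustration.

```python
def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution (correlation) of signal x with filter w at the given dilation.

    With dilation d, the filter taps are spaced d samples apart, so the
    effective receptive field grows without adding filter weights."""
    span = (len(w) - 1) * dilation + 1  # effective filter span
    return [
        sum(w[k] * x[i + k * dilation] for k in range(len(w)))
        for i in range(len(x) - span + 1)
    ]

def sdc_features(x, w, dilations=(1, 2, 4)):
    """Stacked dilated convolution sketch: run parallel branches of the same
    filter at different dilation rates and concatenate their responses.

    Each branch is zero-padded symmetrically so all branches produce one
    response per input position; the per-position feature vector is the
    tuple of branch responses (multi-scale context at constant resolution)."""
    branches = []
    for d in dilations:
        pad = (len(w) - 1) * d // 2  # 'same' padding for an odd-length filter
        xp = [0.0] * pad + list(x) + [0.0] * pad
        branches.append(dilated_conv1d(xp, w, d))
    return [tuple(b[i] for b in branches) for i in range(len(x))]
```

For example, a box filter `[1, 1, 1]` over `range(10)` yields, at each interior position, the sum of three samples spaced 1, 2, and 4 apart, i.e. a three-component multi-scale feature per position.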

Our paper was one of 1294 accepted from 5160 submissions (acceptance rate: 25.1%) and was also among the 288 oral papers (oral presentation in addition to the poster session; oral rate: 5.6% of submissions, 22.3% of accepted papers). The presentation can be seen here.

New European Project “BIONIC”

BIONIC is a European research project that aims to develop an unobtrusive, autonomous and privacy-preserving platform for real-time risk alerting and continuous persuasive coaching, enabling the design of workplace interventions adapted to the needs and fitness levels of a specific ageing workforce. Gamification strategies adapted to the needs and wishes of elderly workers will ensure optimal engagement for the prevention and self-management of musculoskeletal health in any working/living environment.

For more information, visit the project website.

Picture: Official project kick-off meeting at DFKI’s headquarters in Kaiserslautern, Germany.
Article in “Nature” – The AlterEgo Project

The results of our European project AlterEgo have been published in the prestigious journal “Nature”. Please follow this link to read the complete article.

European project Co2Team takes off!

During flights, pilots have to manage difficult situations while facing increasing system complexity due to the amount and nature of the available information. The idea behind Co2Team (Cognitive Collaboration for Teaming) is that a system based on artificial intelligence can provide efficient support to the pilot through cognitive computing. The main objective of the project is to propose a technological and methodological transition towards more autonomous air transport based on a progressive crew reduction. Co2Team will develop a roadmap for cognitive computing to assist the pilot in future air transport. This transition will be based on an innovative bidirectional communication paradigm and an optimized shared authority between human and machine, using the potential of cognitive computing (pilot monitoring, environment and situation understanding, enhanced assistance, adaptive automation).

The project partners are the department Augmented Vision at DFKI, Deutsche Lufthansa AG and the Institut Polytechnique de Bordeaux (INP Bordeaux).

The first meeting of the project took place on January 8th, 2019, in the premises of the INP Bordeaux. During the meeting, the participants tested an existing flight simulator and discussed possible developments of simulators for the project.

Contact person: Dr. Alain Pagani

Picture: The Co2Team partners at the kick-off event.

Start of the European Erasmus+ project ARinfuse

ARinfuse is a European project funded under Erasmus+, the EU’s programme to support education, training, youth and sport in Europe. The objective of ARinfuse is to support individuals in acquiring and developing basic skills and key competences within the field of geoinformatics and utility infrastructure, in order to foster employability. This objective is addressed through the development of new learning modules in which Augmented Reality technologies are merged with geoinformatics and applied within the utility infrastructure sector. The developed digital learning content and tools will be implemented in university programs as well as in vocational training programs, and will be made available as Open Educational Resources, open textbooks and Open Source Educational Software.

The Augmented Vision department will contribute to the ARinfuse project by sharing its knowledge and expertise in Augmented Reality technologies for the energy and utilities sector, gained mainly during the European project LARA. Besides DFKI, the following partners are collaborating in the project: GeoImaging Ltd (Cyprus), Novogit AB (Sweden), the Cyprus University of Technology (Cyprus), the GISIG association (Italy), the Sewerage Board of Nicosia (Cyprus), and the Flanders Environment Agency (VMM, Belgium).

On December 19th, 2018, the project was officially launched during a kick-off meeting in Nicosia, Cyprus, where the partners started to work on educational and training material and on the specification of the software modules.

Contact person: Dr. Alain Pagani

Picture: The ARinfuse partners at the kick-off event. From left to right: Alain Pagani (DFKI), Elena Valari (GeoImaging), Diofantos Hadjimitsis (CUT), Kiki Charalambus (Sewage Board Nicosia), Konstantinos Smagas (GeoImaging), Katleen Miserez (VMM), Mario Tzouvaras (CUT), Andreas Christofe (CUT), Anders Ostman (Novogit), Aristodemos Anastasiades (GeoImaging), Giorgio Saio (GISIG).

Talk by Dr. Alain Pagani on Augmented Reality for Education in Riga, Latvia

On November 27th, 2018, Dr. Alain Pagani was invited by the State Fire and Rescue Service of the Republic of Latvia to give a talk on the use of Augmented Reality for education and awareness raising at the International Conference “Societal Security in the Baltic Sea Region: Challenges and Solutions”.

The main focus of the conference, organized jointly by the State Fire and Rescue Service of Latvia (VUGD) and Riga Stradins University, in cooperation with the Permanent Secretariat of the Council of the Baltic Sea States, the Swedish Institute and the Swedish Civil Contingencies Agency, was on introducing civil safety into education and promoting community awareness of safety in the Baltic Sea region. Dr. Alain Pagani introduced several aspects of Augmented Reality, including recent works from the Department Augmented Vision, and presented the advantages of Augmented Reality for education, training and awareness raising.

During the panel discussion on the subject “education as a core element of societal security culture”, he shared his views on the use of novel technologies in education, together with Mr Martins Baltmanis, Deputy Chief of the State Fire and Rescue Service of Latvia, Ms Ruta Silina, Head of Division of Communication and International Cooperation at Riga Stradins University, and Ms Elisabeth Braw, Associate Fellow at the Royal United Services Institute for Defence and Security Studies, United Kingdom.

Eyes of Things demonstrator “AudioGuide 3.0” highlighted as “Creation Innovation” by the EU Innovation Radar

The Augmented Museum Guide developed in the EU project Eyes of Things has been selected by the EU Innovation Radar as a “Creation Innovation” and is presented on the Innovation Radar website. The Innovation Radar platform builds on information and data gathered by independent experts involved in reviewing ongoing projects funded under H2020, FP7 or CIP. The aim is to make information about EU-funded innovations from high-quality projects visible and accessible to the public in one place (the EU’s new Innovation Radar platform).

The Museum Audio Guide 3.0 is a new type of audio guide for museums, in which the headset used for audio information is equipped with a miniature camera and an image-processing chip. The chip runs image analysis software that has been trained for a specific exhibit and is able to recognize artworks such as paintings while the user is simply passing by. The painting recognition module runs constantly but does not consume much power, thanks to the dedicated hardware module and the efficient implementation. Thus, the camera-equipped headset can be used for an entire day without the need to change the batteries. From the visitor’s perspective, this new type of audio guide feels completely natural, as audio information is provided only in the right context and without the need to think about the technology. An artificial-intelligence-based algorithm detects when the visitor might be interested in audio information and delivers a soft sound notification, informing the user that audio information is available. The visitor can then decide to play the recorded audio by pressing a single button. This technology was developed as an output of the former European project Eyes of Things, and several prototypes were successfully tested at the Albertina Museum in Vienna.

The main developers of this demonstrator were the Austrian company Fluxguide and the department Augmented Vision at DFKI.

Contact person: Dr. Alain Pagani