We are happy to announce that our paper “Generative View Synthesis: From Single-view Semantics to Novel-view Images” has been accepted for publication at the Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020), which will take place online from December 6th to 12th. NeurIPS is the top conference in the field of Machine Learning. Our paper was one of 1900 accepted out of 9454 submissions (acceptance rate: 20.1%).
Abstract: Content creation, central to applications such as virtual reality, can be tedious and time-consuming. Recent image synthesis methods simplify this task by offering tools to generate new views from as little as a single input image, or by converting a semantic map into a photorealistic image. We propose to push the envelope further, and introduce Generative View Synthesis (GVS), which can synthesize multiple photorealistic views of a scene given a single semantic map. We show that the sequential application of existing techniques, e.g., semantics-to-image translation followed by monocular view synthesis, fails to capture the scene’s structure. In contrast, we solve the semantics-to-image translation in concert with the estimation of the 3D layout of the scene, thus producing geometrically consistent novel views that preserve semantic structures. We first lift the input 2D semantic map onto a 3D layered representation of the scene in feature space, thereby preserving the semantic labels of 3D geometric structures. We then project the layered features onto the target views to generate the final novel-view images. We verify the strengths of our method and compare it with several advanced baselines on three different datasets. Our approach also allows for style manipulation and image editing operations, such as the addition or removal of objects, with simple manipulations of the input style images and semantic maps respectively.
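To make the lift-then-project idea more concrete, here is a minimal, purely illustrative numpy sketch of a layered scene representation (not the authors' code: all function names and tensor shapes are our assumptions, and the warping of each layer into the target camera is omitted for brevity). A 2D feature map is lifted into D depth layers with predicted opacities, and the layers are then alpha-composited front to back to form an output view.

```python
import numpy as np

def lift_to_layers(feat, alpha_logits):
    """Lift a 2D feature map into D depth layers (illustrative only).
    feat: (H, W, C) features encoded from the input semantic map.
    alpha_logits: (D, H, W) predicted per-layer opacities, pre-sigmoid."""
    alphas = 1.0 / (1.0 + np.exp(-alpha_logits))               # (D, H, W) in (0, 1)
    layers = np.repeat(feat[None], alpha_logits.shape[0], 0)   # (D, H, W, C)
    return layers, alphas

def composite(layers, alphas):
    """Standard front-to-back over-compositing of the layered features."""
    out = np.zeros(layers.shape[1:])
    transmittance = np.ones(layers.shape[1:3] + (1,))
    for d in range(layers.shape[0]):
        a = alphas[d][..., None]
        out += transmittance * a * layers[d]   # add what this layer contributes
        transmittance *= (1.0 - a)             # light blocked for layers behind
    return out
```

In the actual method the layered features would be reprojected into each target view before compositing; this sketch only shows why a layered representation lets a single semantic map yield geometrically consistent outputs.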
Authors: Tewodros Amberbir Habtegebrial, Varun Jampani, Orazio Gallo, Didier Stricker
Please find our paper here.
Please also check out our video on YouTube.
Please contact Didier Stricker for more information.
On July 10th, 2020, Mr. Jason Rambach successfully defended his PhD thesis entitled “Learning Priors for Augmented Reality Tracking and Scene Understanding” in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Guillaume Moreau (Ecole Centrale de Nantes) and Prof. Dr. Christoph Grimm (TU Kaiserslautern).
In his thesis, Jason Rambach addressed the combination of geometry-based computer vision techniques with machine learning in order to advance the state of the art in tracking and mapping systems for Augmented Reality. His scientific contributions in the fields of model-based object tracking and SLAM were published in high-ranking international peer-reviewed conferences and journals such as IEEE ISMAR and MDPI Computers. His “Augmented Things” paper, proposing the concept of IoT objects that can store and share their AR information, received the Best Poster Paper Award at the ISMAR 2017 conference.
Jason Rambach holds a Diploma in Computer Engineering from the University of Patras, Greece, and an M.Sc. in Information and Communication Engineering from TU Darmstadt, Germany. Since 2015, he has been with the Augmented Vision group at DFKI, where he was responsible for the BMBF-funded research projects ProWiLan and BeGreifen as well as several industry projects with leading automotive companies in Germany. Jason Rambach will remain at DFKI AV as Team Leader of the newly formed team “Spatial Sensing and Machine Perception”, focused on depth sensing devices and scene understanding using Machine Learning.
After operations on the bladder, prostate, or kidneys, patients routinely receive continuous bladder irrigation to prevent complications caused by blood clots. The irrigation should be monitored constantly, which, however, is not feasible in everyday clinical practice.
The goal of VisIMon is to achieve better patient care while relieving staff, through automated monitoring of the irrigation. The project is developing a small, body-worn module that monitors the irrigation process with different sensors. The system is designed to integrate seamlessly into existing clinical workflows. By bringing together interdisciplinary partners from industry and research, the necessary sensors and interfaces are to be developed and combined into an effective system. Modern communication technology is to enable new concepts in which the components of the system communicate wirelessly, make data available via user-friendly, interactive interfaces, and can be controlled by their users.
Sensors, the evaluation electronics, and the accompanying system software for determining hemoglobin as well as for measuring the irrigation rate and monitoring fill levels have now been successfully developed at DFKI and handed over to the partner DITABIS for integration. The system uses embedded artificial intelligence to derive its measurements and can thus react actively and robustly to technical challenges such as bubble formation or mechanical shocks.
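As a rough illustration of what robustness to bubble formation can mean for a flow signal, here is a hypothetical sketch (not the project's actual software, whose details are not public here): a rolling-median filter that suppresses short spikes, such as those an air bubble might cause in a flow-sensor reading.

```python
import numpy as np

def robust_flow(samples, window=5):
    """Rolling-median filter over a 1D flow-sensor signal.
    Short outlier spikes (e.g. from air bubbles) are suppressed,
    while slow changes in the irrigation rate pass through."""
    out = np.empty(len(samples), dtype=float)
    half = window // 2
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out[i] = np.median(samples[lo:hi])   # median ignores isolated spikes
    return out
```

A learned, embedded model can of course do much more than a fixed filter, but the principle of rejecting transient artifacts rather than reacting to them is the same.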
Contact: Dr. Gerd Reis
Our paper with the title “HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation from a Single Depth Map” has been accepted for publication at the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2020 (CVPR 2020), which will take place from June 14th to 19th, 2020 in Seattle, Washington, USA. It is the premier conference in the field of Computer Vision. Our paper was one of 1470 accepted out of 6656 submissions (acceptance rate: 22%).
We propose a novel architecture with 3D convolutions for simultaneous 3D hand shape and pose estimation, trained in a weakly-supervised manner. The input to our architecture is a 3D voxelized depth map. For shape estimation, our architecture produces two different hand shape representations. The first is the 3D voxelized grid of the shape, which is accurate but does not preserve the mesh topology and the number of mesh vertices. The second representation is the 3D hand surface, which is less accurate but does not suffer from the limitations of the first representation. To combine the advantages of these two representations, we register the hand surface to the voxelized hand shape. In extensive experiments, the proposed approach improves over the state of the art for hand shape estimation on the SynHand5M dataset by 47.8%. Moreover, our 3D data augmentation on voxelized depth maps allows us to further improve the accuracy of 3D hand pose estimation on real datasets. Compared to existing approaches, our method produces visually more plausible and realistic hand shapes on the NYU and BigHand2.2M datasets.
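To illustrate why registering the surface to the voxel grid combines the strengths of the two representations, here is a deliberately naive numpy sketch (not the paper's learned registration; the function name, the nearest-voxel rule, and the `step` parameter are all our illustrative assumptions): each mesh vertex is moved toward the center of the nearest occupied voxel, so the topology-preserving surface inherits accuracy from the voxelized shape.

```python
import numpy as np

def register_to_voxels(vertices, voxel_grid, voxel_size=1.0, step=1.0):
    """Pull each hand-surface vertex toward the accurate voxelized shape.
    vertices: (N, 3) mesh vertices (fixed topology, less accurate).
    voxel_grid: (D, H, W) occupancy grid (accurate, no mesh topology)."""
    occupied = np.argwhere(voxel_grid > 0.5).astype(float)
    centers = occupied * voxel_size + voxel_size / 2.0   # voxel centers
    out = vertices.astype(float).copy()
    for i, v in enumerate(out):
        dists = np.linalg.norm(centers - v, axis=1)
        nearest = centers[dists.argmin()]
        out[i] = v + step * (nearest - v)   # move (part of the way) toward it
    return out
```

The result keeps the mesh's vertex count and connectivity while its geometry is corrected by the voxel branch; a learned registration can additionally keep the deformation smooth and self-consistent, which this nearest-voxel snap does not attempt.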
Please find our paper here.
Authors: Muhammad Jameel Nawaz Malik, Ibrahim Abdelaziz, Ahmed Elhayek, Soshi Shimada, Sk Aziz Ali, Vladislav Golyanik, Christian Theobalt, Didier Stricker
Please also check out our video on YouTube.
Please contact Didier Stricker for more information.
We are very happy to announce that three of our PhD students successfully defended their PhD theses during 2019!
Mr. Aditya Tewari defended his thesis with the title “Prior-Knowledge Addition to Spatial and Temporal Classification Models with Demonstration on Hand Shape and Gesture Classification” on October 25th in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Paul Lukowicz (TU Kaiserslautern and DFKI) and Prof. Dr. Dr. h. c. Dieter Rombach (Fraunhofer IESE, Kaiserslautern).
Mr. Vladislav Golyanik defended his thesis with the title “Robust Methods for Dense Monocular Non-Rigid 3D Reconstruction and Alignment of Point Clouds” on November 20th in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Antonio Aguado (Universitat Politècnica de Catalunya, Spain) and Prof. Dr. Reinhard Koch (Christian-Albrechts-Universität zu Kiel).
Mr. Christian Bailer defended his thesis with the title “New Data Based Matching Strategies for Visual Motion Estimation” on November 22nd in front of the examination commission consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Michael Felsberg (Linköping University, Sweden) and Dr. Margret Keuper (Max-Planck-Institut für Informatik, Saarbrücken).
All three PhD graduates left our Augmented Vision department shortly after their defenses to pursue careers outside of DFKI.
Our team is presenting two papers at VISAPP 2020 (15th International Conference on Computer Vision Theory and Applications), taking place from February 27th to 29th in Valletta, Malta.
The two papers are:
Iterative Color Equalization for Increased Applicability of Structured Light Reconstruction
Torben Fetzer, Gerd Reis, Didier Stricker
AutoPOSE: Large-Scale Automotive Driver Head Pose and Gaze Dataset with Deep Head Pose Baseline
Mohamed Selim, Ahmet Firintepe, Alain Pagani, Didier Stricker
The AutoPOSE dataset can be downloaded from the website at autopose.dfki.de.
The International Journal of Computer Vision (IJCV) is considered one of the top journals in Computer Vision. It details the science and engineering of this rapidly growing field. Regular articles present major technical advances of broad general interest. Survey articles offer critical reviews of the state of the art and/or tutorial presentations of pertinent topics.
We are proud to announce that our paper “SceneFlowFields++: Multi-frame Matching, Visibility Prediction, and Robust Interpolation for Scene Flow Estimation” has been published in the IJCV (for more information click here). It is an extension of our earlier WACV paper “SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences”.
At the Internationale Funkausstellung (IFA 2019), the world’s leading trade fair for consumer electronics and home appliances, held at the beginning of September in Berlin, researchers from TU Kaiserslautern (TUK) and the DFKI research department Augmented Vision presented a sensor system that reduces poor posture at the workplace. Sensors attached to different parts of the body, such as the arms, spine, and legs, capture the movement sequences. Software evaluates the data and calculates movement parameters such as the joint angles at the arm and knee or the degree of flexion or twisting of the spine. The system immediately detects when a movement is carried out incorrectly or an unhealthy posture is adopted, and gives the user direct feedback via a smartwatch so that they can correct their movement or posture. The sensors could be integrated into work clothing and shoes. The turnout at IFA was great, and the project prototype was received very positively throughout. The press was also present in large numbers. Please find links to the articles below.
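The core computation described above, deriving a joint angle from the orientations of two adjacent body segments and flagging values outside a healthy range, can be sketched in a few lines. This is an illustrative simplification, not the project's software: the function names and the example threshold range are our assumptions.

```python
import numpy as np

def joint_angle(bone_a, bone_b):
    """Angle in degrees between two 3D segment direction vectors,
    e.g. upper arm and forearm as measured by body-worn sensors."""
    cosang = np.dot(bone_a, bone_b) / (np.linalg.norm(bone_a) * np.linalg.norm(bone_b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def posture_alert(angle_deg, safe_range=(60.0, 160.0)):
    """True if the joint angle leaves a (hypothetical) healthy range,
    i.e. the smartwatch should notify the wearer."""
    lo, hi = safe_range
    return not (lo <= angle_deg <= hi)
```

In practice the system fuses orientation estimates from multiple inertial sensors over time; the threshold check above only illustrates how immediate feedback can be derived from the computed movement parameters.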
Video coverage in German:
SWR Landesschau Aktuell Rheinland-Pfalz: Video from 06.09.2019, starting at 7:45
Media coverage in German:
Rheinpfalz: Sensoren in der Arbeitskleidung
Ärztezeitung: Digital Health darf bei der IFA nicht fehlen
Elektroniknet: Neue Sensortechnik verspricht bessere Haltung am Arbeitsplatz
Esanum: Haltungsschäden mit Sensortechnik vermeiden
Industrie.de: Sensoren für eine bessere Körperhaltung
Medica: Bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Maschinenwerkzeug.de: IFA 2019: Bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Mobile-zeitgeist.de: IFA 2019: Eine bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Nachrichten-kl.de: IFA 2019: Eine bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Nachrichten.idw-online.de: IFA 2019: Eine bessere Haltung am Arbeitsplatz dank neuer Sensortechnik
Smarterworld: Sensoren gegen Haltungsschäden
Uni-kl.de: Neue Sensortechnik: Bessere Haltung am Arbeitsplatz
Media coverage in English:
Alphagalileo: IFA 2019: Intelligent sensor technology for a better posture at the workplace
Dailymail.co: Sit up straight! Smartwatch that sounds an alarm every time you slump at your desk is being developed by scientists to combat bad posture
Expressdigest: Smartwatch sounds an alarm every time you slump to correct posture
eandt.theiet: Wearables could deter slouching at work, researchers suggest
Elektroniknet: New sensor technology promises better posture at the workplace
France.timesofnews: Smartwatch sounds an alarm every time you slump to correct posture
Longroom: Smartwatch sounds an alarm every time you slump to correct posture
Nachrichten.idw-online.de: IFA 2019: Intelligent sensor technology for a better posture at the workplace
uni-kl.de: IFA 2019: Intelligent sensor technology for a better posture at the workplace
wsbuss: Smartwatch sounds an alarm every time you slump to correct posture
Newsoneplace: Smartwatch sounds an alarm every time you slump to correct posture
Our team is presenting the paper
at the 16th EuroVR International Conference 2019 in Tallinn, Estonia, on October 23rd to 25th.
The paper presents part of the group’s work within the research project BeGreifen.