Read the interesting article on Augmented Reality from normal videos, published in the online issue of September 11th, 2017:
News
On 1st September 2017, Oliver Wasenmüller successfully defended his PhD thesis “Towards an Accurate RGB-D Benchmark, Mapping and Odometry as well as their Applications” before the examination commission, consisting of Prof. Dr. Didier Stricker (TU Kaiserslautern and DFKI), Prof. Dr. Dieter Fellner (TU Darmstadt and FhG IGD), and Prof. Dr. Karsten Berns (TU Kaiserslautern).
In his thesis, Oliver Wasenmüller addressed and enhanced several algorithms for more accurate RGB-D mapping and odometry. For that purpose, he researched methods for the precise mapping of depth images as well as the determination of their positions in odometry algorithms. He provided a public benchmark, which, for the first time, allows the objective evaluation of such algorithms. Furthermore, he transferred the algorithms into an industrial application within the scope of a 3D discrepancy check. He conducted his research in the context of the projects ARVIDA and BodyAnalyzer.
Oliver Wasenmüller studied Computer Science at the University of Kaiserslautern and completed his master’s degree in cooperation with the Sony European Technology Center. Since then, he has been working as a project manager and researcher at the department “Augmented Vision” (AV) of DFKI Kaiserslautern. He is currently managing several projects dealing with image processing for automotive and industrial applications.
We are proud to announce that our research department received one of the highly coveted NVIDIA Tesla V100s. NVIDIA CEO Jensen Huang not only unveiled NVIDIA’s novelty at CVPR 2017, but also presented it to 15 participants of the NVIDIA AI Labs program, including the Augmented Vision department. In the front row, first from the right, you can see our colleague Christian Bailer, who attended the conference and received the gift on behalf of the group.
Check the details on NVIDIA’s blog.
Our research group had two papers accepted at the British Machine Vision Conference (BMVC) 2017:
Fast dense feature extraction with convolutional neural networks that have pooling or striding layers by Christian Bailer, Tewodros Habtegebrial, Kiran Varanasi and Didier Stricker.
Introduction to Coherent Depth Fields for Dense Monocular Surface Recovery by Vladislav Golyanik, Torben Fetzer and Didier Stricker.
Both papers will soon be available on our webpage.
The BMVC is one of the major international conferences on computer vision and related areas. It is organised by the British Machine Vision Association (BMVA) and will take place from 4th to 7th September 2017 at Imperial College London.
Our paper “Towards Scheduling Hard Real-Time Image Processing Tasks on a Single GPU” was accepted at the IEEE International Conference on Image Processing (ICIP) 2017. The paper analyses the applicability of current GPUs to applications with hard real-time constraints and provides a solution for a single dedicated GPU. This work was done in collaboration with the Max Planck Institute for Software Systems (MPI-SWS).
The IEEE International Conference on Image Processing (ICIP) 2017 will feature world-class speakers, tutorials, and industry sessions, providing an excellent forum to foster innovation and entrepreneurship and to network with the brightest minds in academia and industry working in this field. The conference will be held at the China National Convention Center in Beijing, China, from 17–20 September 2017.
Our research group had three papers accepted at the 16th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2017:
Paper #1: A Probabilistic Combination of CNN and RNN Estimates for Hand Gesture Based Interaction in Car by Aditya Tewari, Bertram Taetz, Frederic Grandidier, Didier Stricker
Paper #2: Augmented Things: Enhancing AR Applications leveraging the Internet of Things and Universal 3D Object Tracking by Jason Rambach, Alain Pagani, Didier Stricker
Paper #3: Fusion of unsynchronized optical tracker and inertial sensor in EKF framework for in-car Augmented Reality delay reduction by Jason Rambach, Alain Pagani, Sebastian Lampe, Ruben Reiser, Manthan Pancholi, Didier Stricker
The IEEE ISMAR is the leading international academic conference in the fields of Augmented Reality and Mixed Reality. The conference, jointly organized by Centrale Nantes and Inria, will be held at La Cité, Nantes Events Center, in Nantes, France, on October 9–13, 2017.
Our paper on deep learning and optical flow will be presented @ CVPR 2017.
Please find the details here:
CNN-based Patch Matching for Optical Flow with Thresholded Hinge Embedding Loss
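As the title suggests, the patch descriptors are trained with a hinge embedding loss whose matching term is cut off below a distance threshold. The following minimal sketch shows the general idea of such a loss; the threshold t, margin m, and function name are illustrative assumptions, not necessarily the paper’s exact formulation:

```python
import numpy as np

def thresholded_hinge_embedding_loss(dist, is_match, t=0.3, m=1.0):
    """Generic thresholded hinge embedding loss for patch-descriptor pairs.

    dist     -- array of feature-space distances between patch pairs
    is_match -- boolean array, True where the patches correspond
    t, m     -- assumed threshold and margin (illustrative values)

    Matching pairs incur loss only when their distance exceeds t, so the
    network is not forced to collapse descriptors of matches to a single
    point; non-matching pairs are pushed beyond the margin m.
    """
    pos = np.maximum(0.0, dist - t)   # penalize matches that are too far apart
    neg = np.maximum(0.0, m - dist)   # penalize non-matches that are too close
    return np.where(is_match, pos, neg).mean()
```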
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
Low-cost, extremely small and energy-efficient: New camera matrix provides precise depth images for automated driving and industrial applications
No bigger than a 1-cent coin: a new, ultra-compact and adaptive camera that delivers not only pictures but also accurate depth information in real time. As a novel sensor, it offers a wide range of applications in automated driving and manual assembly processes. The system is being developed in the project “DAKARA” – design and application of an ultra-compact, energy-efficient and configurable camera matrix for spatial analysis. Five partners from industry and research are working together in the project, which is funded by the German Federal Ministry of Education and Research (BMBF).
The camera matrix consists of sixteen single cameras arranged in a square, which together function not only as an imaging device but also as a depth camera. They are mounted on a so-called “wafer”, an approximately 1-millimeter-thick slice of polycrystalline semiconductor material. The new camera technology comes from AMS Sensors Germany GmbH. With this technology, the camera will be no larger than ten by ten millimeters and about three millimeters thick.
Stephan Voltz, CEO of AMS Sensors Germany, explains the function of the new technology: “The camera matrix captures the scene from sixteen slightly displaced perspectives in order to calculate the scene geometry (a depth image) using the light field principle. Because such calculations are computationally very intensive, an efficient processor is embedded directly into the periphery of the camera matrix to enable real-time applications.”
The Augmented Vision (AV) research department at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern is developing the algorithms for the depth image calculation. These run in real time directly on the camera system. In addition, various applications for further processing of the generated image data can be run on the embedded chip.
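To illustrate the light field principle described above, the following minimal plane-sweep sketch shows how a depth image can be derived from several slightly displaced views. The camera geometry, function names, and parameters here are illustrative assumptions, not the actual DAKARA implementation:

```python
# Minimal plane-sweep depth sketch for a small camera array, assuming
# rectified views, shared pinhole intrinsics, and known baselines.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def plane_sweep_depth(views, offsets, focal_px, depth_candidates):
    """views: list of HxW grayscale images (float32) from the camera matrix.
    offsets: per-view (bx, by) baseline relative to the reference view, in meters.
    focal_px: focal length in pixels.
    depth_candidates: 1-D array of hypothesized depths in meters.
    Returns the per-pixel depth that maximizes photo-consistency."""
    h, w = views[0].shape
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    depth_map = np.zeros((h, w), dtype=np.float32)
    for z in depth_candidates:
        # Warp every view onto the reference using the disparity d = f * b / z
        # induced by a fronto-parallel plane at depth z.
        warped = [
            subpixel_shift(img, (focal_px * by / z, focal_px * bx / z),
                           order=1, mode='nearest')
            for img, (bx, by) in zip(views, offsets)
        ]
        # Photo-consistency cost: variance across views (low = agreement).
        cost = np.stack(warped).var(axis=0)
        better = cost < best_cost
        best_cost[better] = cost[better]
        depth_map[better] = z
    return depth_map
```

In the real system, such a depth search would of course run on the embedded processor with a heavily optimized cost computation; the sketch only conveys why sixteen displaced perspectives suffice to recover depth passively.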
Professor Didier Stricker, head of the Augmented Vision department at DFKI, says: “The depth information provided by the camera system alongside the color information allows a wide range of new applications. The ultra-compact design makes it possible to integrate the new camera into very small, delicate components and use it as a non-contact sensor.”
The structure of the camera matrix is reconfigurable. Depending on the application, a more suitable layout can therefore be used, for example an L-shaped arrangement of the cameras. The depth image computation can likewise be adapted to application-specific requirements on the depth information.
Cameras that provide depth information already exist. However, these are active systems that emit light to measure depth; their disadvantages are high energy consumption, large size, and high costs. Passive systems consume far less energy, but they are still at the research stage and generally have large form factors and low frame rates.
“The DAKARA camera matrix will be the first passive system to provide both color and depth images in real time, with high frame rates, adaptive features, low energy consumption, and a very compact design,” said Oliver Wasenmüller, DFKI project manager and co-initiator of the project. The new system will be put to use by well-known users from different domains.
Two application scenarios will be used to validate and demonstrate the developments of DAKARA. A rear-view camera from the partner ADASENS Automotive GmbH is designed to better interpret the environment behind the vehicle, so that fine structures such as curbs or posts can also be detected during automated parking. In addition, the system is designed to recognize people and issue warning signals in an emergency. This promises a substantial increase in the safety of automated or partially automated driving.

Bosch Rexroth AG and DFKI (Innovative Factory Systems department), together with the Living Lab SmartFactory KL e.V., will set up a workplace assistant for a manual assembly process. The camera matrix captures objects as well as the worker’s hands by means of the algorithms of the partner CanControls GmbH. The particular challenge is to reliably distinguish objects such as tools or workpieces from the operator’s hands; the depth information of the DAKARA camera is intended to make this separation easier and more precise.
Over the next three years, the new camera matrix will be designed, developed, and extensively tested in these scenarios. A first prototype is expected by late summer 2018.
The project “DAKARA” is funded by the Federal Ministry of Education and Research (BMBF) within the framework of the “Photonics Research Germany – Digital Optics” program. The project volume totals 3.8 million euros, almost half of which is contributed by the industry partners involved.
For more details, please visit the project page: DAKARA