Our paper on deep learning and optical flow will be presented at CVPR 2017.
Please find the details here:
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
Low-cost, extremely small and energy-efficient: New camera matrix provides precise depth images for automated driving and industrial applications
It is no bigger than a 1-cent coin: a new, ultra-compact and adaptive camera that delivers not only pictures but also accurate depth information in real time. As a novel sensor, it offers a wide range of applications in automated driving and manual assembly processes. The system is being developed in the project “DAKARA” – design and application of an ultra-compact, energy-efficient and configurable camera matrix for spatial analysis. Five partners from industry and research are collaborating on the project, which is funded by the German Federal Ministry of Education and Research (BMBF).
The camera matrix consists of sixteen square-shaped single cameras, which together function not only as an imaging device but also as a depth camera. They are arranged on a so-called “wafer”, an approximately 1-millimeter-thick structure of polycrystalline semiconductor blanks. The new camera technology comes from AMS Sensors Germany GmbH. With this technology, the camera will measure no more than ten by ten millimeters and be about three millimeters thick.
Stephan Voltz, CEO of AMS Sensors Germany, explains the function of the new technology: “The camera matrix captures the scene from sixteen slightly displaced perspectives in order to calculate the scene geometry (a depth image) using the light field principle. Because such calculations are computationally intensive, an efficient processor is embedded directly into the periphery of the camera matrix to enable real-time applications.”
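As a rough illustration of the light field principle mentioned above, depth can be recovered from two slightly displaced views by finding, for each pixel, the horizontal displacement (disparity) at which the views best align; depth is then inversely proportional to disparity. The following is a simplified sketch, not the DAKARA implementation; the function names and parameters are hypothetical, and a real system would match across all sixteen views with far more efficient algorithms:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Estimate per-pixel disparity between two horizontally displaced
    grayscale views by minimizing the sum of absolute differences (SAD)
    over a small block around each pixel."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            # Cost of shifting the right view by each candidate disparity d.
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """Depth is inversely proportional to disparity: Z = f * b / d."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

For example, two views where one is a 3-pixel horizontal shift of the other yield a disparity of 3 across the overlapping region, which `disparity_to_depth` then converts to metric depth given the focal length and camera baseline.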
The Augmented Vision (AV) research area at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern is developing the algorithms for the depth image calculations. They are executed in real time, directly on the camera system. In addition, various applications can run on the embedded chip for further processing of the generated image data.
Professor Didier Stricker, Head of the Augmented Vision department at DFKI, said: “The depth information provided by the camera system alongside the color information allows a wide range of new applications. The ultra-compact design makes it possible to integrate the new camera into very small, filigree components and use it as a non-contact sensor.”
The structure of the camera matrix is reconfigurable. Depending on the application, a more specific layout can be used, for example an L-shaped arrangement of the cameras. The depth image computation can likewise be adapted to application-specific requirements for the depth information.
Cameras that provide depth information already exist. However, these are active systems that emit light to calculate depth; their disadvantages are high energy consumption, a bulky design, and high cost. Other, passive systems consume far less energy, but they are still at the research stage and generally have large form factors and low frame rates.
“The DAKARA camera matrix will be the first passive system to provide both color and depth images in real time, with high frame rates, adaptive features, low energy consumption, and a very compact design,” said Oliver Wasenmüller, DFKI Project Manager and co-initiator of the project. The new system will be put to use by well-known users from various domains.
Two application scenarios will be used to evaluate and demonstrate the developments of DAKARA.

A rear-view camera from the partner ADASENS Automotive GmbH is designed to better interpret the environment behind the vehicle. Finer structures such as curbs or posts can thus be detected during automated parking. In addition, the system is designed to recognize people and issue warning signals in an emergency. A considerable increase in the safety of automated or partially automated driving can therefore be expected.

Bosch Rexroth AG and DFKI (department Innovative Factory Systems), together with the Living Lab SmartFactory KL e.V., will set up a manual assembly workstation with a workplace assistant. The camera matrix captures objects as well as the worker’s hands by means of the algorithms of the partner CanControls GmbH. The particular challenge is to clearly distinguish objects such as tools or workpieces from the operator’s hands. The depth information of the DAKARA camera is intended to make this separation easier and more precise.
Over the next three years the new camera matrix will be designed, developed, and extensively tested in the scenarios described above. A first prototype is to be realized by late summer 2018.
The project “DAKARA” is funded by the Federal Ministry of Education and Research (BMBF) within the framework of the “Photonics Research Germany – Digital Optics” program. The project volume totals 3.8 million euros, almost half of which is contributed by the industry partners involved.
For more details, please visit the project page: DAKARA
After the successful completion of his Software Campus project BodyAnalyzer, Oliver Wasenmüller of the Augmented Vision research department at DFKI Kaiserslautern was ceremonially recognized as a graduate of the Software Campus program on March 27, 2017. Oliver Wasenmüller presented his project BodyAnalyzer – the capture and analysis of 3D body models, developed in cooperation with SIEMENS AG – in a final presentation at the 4th Software Campus Summit in Berlin, before being honored by Prof. Dr. Wahlster and Dr. Harald Schöning, Head of Research at Software AG.
The Software Campus seeks outstanding doctoral candidates in computer science and related disciplines with a strong interest in leadership roles in industry or in founding a company. A total of 18 partners from industry and research support this leadership development program, which is funded by the Federal Ministry of Education and Research (BMBF). Within the Software Campus, participants implement their own IT project. They manage the entire process of their IT project independently, with the support of the research and industry partners: from project planning, applying for funding, and management, to coordinating their own research team and completing the project. Each project is funded with up to 100,000 euros over its duration. In addition, the Software Campus specifically develops the participants’ potential: the industry partners contribute their best leadership training programs, and in six modules the participants can further develop their leadership, methodological, social, and personal skills.
We warmly congratulate Mr. Wasenmüller on his admission to the Software Campus and on the successful completion of his own research project BodyAnalyzer.
From 20 to 24 March 2017 we will be presenting our demonstrator “Augmented Things” at CeBIT, hall 6, booth B48.
Although machines are designed to simplify our lives, it is often difficult to understand at first glance how to use them. Their use would be much simpler if they could explain to us themselves how they are meant to be used. This is now possible with Augmented Reality and the “Augmented Things” concept. Thanks to a dedicated object recognition and tracking algorithm, any object can instantly deliver its instruction manual to the user on demand. To this end, the user needs only a single application on a tablet or smartphone and can scan virtually any object to access meta-information in an AR viewer. This tool scales easily to many new objects and offers an extremely intuitive and natural way of seeking additional information.
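At its core, the concept pairs a recognition step with a registry that maps recognized objects to their meta-information. The following is a minimal sketch under assumed names and example data (the actual recognition and tracking algorithm, and the AR overlay, are of course far more involved):

```python
from dataclasses import dataclass, field

@dataclass
class ManualEntry:
    """Meta-information attached to a recognizable object."""
    object_id: str
    title: str
    steps: list = field(default_factory=list)

# Hypothetical registry: recognized object IDs -> instruction manuals.
MANUALS = {
    "coffee_machine": ManualEntry(
        object_id="coffee_machine",
        title="Coffee Machine",
        steps=["Fill the water tank",
               "Insert a capsule",
               "Press the brew button"],
    ),
}

def lookup_manual(object_id):
    """Return the manual for a recognized object, or None if the object
    is not yet registered (new objects are added by extending MANUALS)."""
    return MANUALS.get(object_id)
```

Scaling to new objects then amounts to registering another `ManualEntry`, which mirrors how the tool can be extended to many objects without changing the viewer application.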
We are looking forward to welcoming you at CeBIT 2017!
Two papers have been accepted at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
The papers will shortly be available on our web page.
The WACV is one of the most renowned conferences covering all areas of Computer Vision. It will be held March 27-30, 2017 at the Hyatt Vineyard Creek Hotel, Santa Rosa, CA, USA.
The program schedule is available at http://pamitc.org/wacv2017/program-schedule/.
Our research group got two papers accepted at the 15th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2016:
The IEEE ISMAR is the leading international academic conference in the fields of Augmented Reality and Mixed Reality. The symposium is organized and supported by the IEEE Computer Society and IEEE VGTC. The 15th ISMAR will be held from 19 to 23 September in Merida, Mexico.
A paper titled NRSfM-Flow: Recovering Non-Rigid Scene Flow from Monocular Image Sequences has been accepted at the British Machine Vision Conference (BMVC) 2016.
The British Machine Vision Conference is one of the major international conferences on computer vision and related areas. It is organised by the British Machine Vision Association (BMVA). The 27th BMVC will be held at the University of York, 19th-22nd September 2016.