DAKARA

Contact person: Oliver Wasenmüller
Funding by: BMBF
Grant agreement no.: 13N14318
Funding program: Photonik Forschung Deutschland – Digitale Optik
Begin: March 2017
End: February 2020

Design and application of an ultra-compact, energy-efficient and reconfigurable camera matrix for spatial analysis

Within the DAKARA project, an ultra-compact, energy-efficient and reconfigurable camera matrix is being developed. In addition to standard color images, it provides accurate depth information in real time, forming the basis for various applications in the automotive sector (autonomous driving), in production (Industry 4.0) and many more.

Real-time depth image calculations with camera matrix

The ultra-compact camera matrix, which is composed of 4×4 single cameras, functions not only as a camera for color images but also as a provider of depth information.

The ultra-compact camera matrix is composed of 4×4 single cameras on a wafer and is equipped with wafer-level optics, resulting in an extremely compact design no bigger than a cent coin. This is made possible by the innovative camera technology of AMS Sensors Germany GmbH.

The configuration as a camera matrix captures the scene from sixteen slightly displaced perspectives, which allows the scene geometry (a depth image) to be calculated by means of the light-field principle. Because these calculations are computationally intensive, close integration of the camera matrix with an efficient embedded processor is required to enable real-time applications. The depth image calculations, researched and developed by DFKI (Department Augmented Vision), are carried out in the electronic functional layer of the camera system in a resource-conserving, real-time manner. Applications benefit significantly from the fact that the depth information is made available to them, in addition to the color information, without further calculations on the user side. Thanks to the ultra-compact design, the new camera can be integrated into very small and/or filigree components and used as a non-contact sensor. The structure of the camera matrix is reconfigurable, so that a layout suited to the specific application can be used. In addition, the depth image computation itself can be reconfigured to respond to particular requirements on the depth information.
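The light-field principle behind the depth computation can be sketched as a plane sweep: for each candidate disparity, all views are shifted back towards a common reference, and the hypothesis under which the views agree best wins. The following Python sketch with integer pixel shifts and a variance-based photo-consistency cost is purely illustrative; the function name and parameters are hypothetical, and the actual DAKARA pipeline on the embedded processor is far more sophisticated.

```python
import numpy as np

def depth_from_camera_grid(views, offsets, disparities):
    """Estimate per-pixel disparity from a grid of displaced views.

    views       -- list of H x W grayscale images (NumPy arrays)
    offsets     -- list of (dy, dx) camera positions, in units of the
                   inter-camera baseline
    disparities -- iterable of candidate disparities (pixels per baseline)
    """
    h, w = views[0].shape
    best_cost = np.full((h, w), np.inf)
    best_disp = np.zeros((h, w))
    for d in disparities:
        # Shift every view back towards the reference position.  Where the
        # disparity hypothesis matches the true scene depth, all shifted
        # views agree, so the variance across views is minimal.
        stack = [np.roll(img, (int(round(d * dy)), int(round(d * dx))),
                         axis=(0, 1))
                 for img, (dy, dx) in zip(views, offsets)]
        cost = np.var(np.stack(stack), axis=0)  # photo-consistency cost
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp  # disparity is proportional to inverse depth
```

A real implementation would use sub-pixel interpolation instead of `np.roll`, handle occlusions, and regularize the cost volume, but the winner-takes-all structure above is the core of the light-field approach.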

Ultra-compact, reconfigurable, low energy consumption
The innovation of the DAKARA project lies in the overall system, which provides both color and depth images. Comparable systems that are already available as products are generally active systems, which emit light in order to calculate depth. Major disadvantages of such systems are their large designs, high energy consumption and high costs. Passive systems with much lower energy consumption exist, but they are still in the research stage and generally have large designs and low frame rates.

For the first time, DAKARA offers a passive camera that combines an ultra-compact design, high frame rates, reconfigurability and low energy consumption, and that leaves the research stage to enter the market with well-known users from different domains.

Three-Dimensional Geometry for Automotive and Industry

In order to demonstrate the performance and innovative potential of the DAKARA concept, the new camera is used in two different application scenarios: an intelligent rear-view camera in the automotive field and a workplace assistant in manual production.

In contrast to currently used systems consisting of ultrasonic sensors and a mono color camera, the intelligent rear-view camera planned by the partner ADASENS Automotive GmbH is capable of interpreting the rear vehicle environment spatially, metrically and semantically. As a result, even finer structures such as curbs or poles can be recognized and taken into account during automated parking maneuvers. In addition, the system is able to detect people semantically and to trigger warning signals in an emergency. The DAKARA camera thus contributes significantly to increasing the safety of autonomous or semi-automated driving.

Color image of a rear-view camera in 2D and without depth information
The color image of the rear-view camera is replaced by a depth image, in which every pixel states the distance to the scene.

The workplace assistant is demonstrated on a manual assembly process at Bosch Rexroth AG and DFKI (Department Innovative Factory Systems). The aim is to support and safeguard the operator in his tasks. For this purpose, the new camera matrix is mounted above the workplace, and both objects and hands are tracked spatially and temporally by the algorithms of the partner CanControls GmbH. A particular challenge is that objects such as tools or workpieces held in the hand are very difficult to separate from the hand itself. This separation is made possible by the additional depth information provided by the DAKARA camera. In this scenario, a gripping-path analysis, removal and fill-level monitoring, interaction with a dialog system, and tool position detection are implemented. The camera is designed to replace a large number of sensors currently used in various manual production systems by the project partner Bosch Rexroth, thus achieving a new level of quality and cost.

Color image of a work place without depth information
Image of the scene with depth information. In addition, a clear separation of tool and hand is possible thanks to DAKARA technology.

Over the next three years, the new camera matrix will be designed, developed and extensively tested in the scenarios described. A first prototype is to be realized by late summer 2018. The DAKARA project is funded by the Federal Ministry of Education and Research (BMBF) within the framework of the “Photonics Research Germany – Digital Optics” program. The project volume totals 3.8 million euros, almost half of which is provided by the participating industry partners.


Partners:

  • AMS Sensors Germany GmbH, Nürnberg (consortium leader)
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Kaiserslautern (technical consortium leader)
  • ADASENS Automotive GmbH, Lindau
  • Bosch Rexroth AG, Stuttgart
  • CanControls, Aachen

LiSA

Contact person: Dr. Gerd Reis
Funding by: BMWi
Grant agreement no.: 03ET1416A-F
Funding program: EnOB
Begin: 01.11.2016
End: 31.10.2019

EnOB-BMWi

Light and solar management using active and model-predictively controlled components

Project partners: Universität Kaiserslautern, DFKI GmbH Kaiserslautern, ebök Planung und Entwicklung GmbH, Dresden Elektronik Ingenieurtechnik GmbH, Agentilo GmbH, Herbert Waldmann GmbH & Co. KG

HDR Sensor image (left) and simulated scene (right)

Simulated reduction of glare

The research project LiSA is a broad-based joint project in the area of façade, lighting and control technology. The aim is to enable the most energy-efficient operation possible of office and administrative buildings while taking user satisfaction into account. Exemplary system integration promotes the understanding of how individual components interact and why system solutions are necessary.

At the component level, technologies are being developed that enable the efficient use of daylight, provide energy-saving artificial lighting, and reduce the cooling load in summer by means of shading. At the sensor level, a cost-effective sensor is being developed that measures the lighting conditions as well as the heat input into the room from solar radiation. A model-predictive control approach optimizes the operation of the components, which can be managed and controlled via wireless communication.
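The model-predictive idea, simulating candidate control schedules over a short horizon against a building model and applying only the first action, can be sketched as follows. This is a toy receding-horizon search over blind transmittance levels, assuming a drastically simplified thermal and lighting model; all names, coefficients and units are illustrative and not the project's actual controller.

```python
from itertools import product

def mpc_step(temp, daylight_forecast, horizon=3, target_lux=500.0,
             temp_max=26.0, blind_levels=(0.0, 0.5, 1.0),
             gain_coeff=0.004, leak=0.1, outside_temp=30.0,
             light_power=0.01, cool_power=1.0):
    """One receding-horizon step: enumerate all blind schedules over the
    horizon, simulate a toy room model, and return the first action of
    the cheapest schedule (the hallmark of model-predictive control)."""
    best_cost, best_first = float("inf"), None
    for schedule in product(blind_levels, repeat=horizon):
        t, cost = temp, 0.0
        for trans, lux_out in zip(schedule, daylight_forecast):
            daylight = trans * lux_out
            # Artificial light is dimmed to just reach the target level.
            artificial = max(0.0, target_lux - daylight)
            # Toy thermal model: solar gain heats the room, the envelope
            # exchanges heat with the outside, cooling removes the excess.
            t = t + gain_coeff * daylight + leak * (outside_temp - t)
            cooling = max(0.0, t - temp_max)
            t -= cooling
            cost += light_power * artificial + cool_power * cooling
        if cost < best_cost:
            best_cost, best_first = cost, schedule[0]
    return best_first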

By implementing the project in a Living Lab Smart Office Space, which is subject to detailed monitoring and in which people use the space for its actual purpose, it is ensured that the developments are continuously validated empirically and that the results are perceived by users as added value. The people working in the Living Lab can interact with the technology and are thus an essential part of the investigations.

Anna C-trus

Contact person: Gerd Reis

ANNA – Artificial Neural Network Analyzer

A Framework for the Automatic Inspection of TRUS Images

The project ANNA (Artificial Neural Network Analyzer) aims at the design of a framework to analyze ultrasound images by means of signal processing combined with methods from the field of artificial intelligence (neural networks, self-organizing maps, etc.). Although it is not obvious, ultrasound images contain information that cannot be recognized by the human visual system and yet characterizes the underlying tissue. Conversely, the human visual system recognizes virtual structures in ultrasound images that are not related to the underlying tissue at all. Especially interesting in this regard is the fact that a careful combination of several texture-descriptor-based filters is well suited for analysis by artificial neural networks, so that suspicious regions can be marked reliably.

The specific aim of the framework is to automatically analyze transrectal ultrasound (TRUS) images of the prostate in order to detect suspicious regions that are likely to relate to a primary cancer focus. These regions are marked for a subsequent biopsy procedure. The advantages of such an analysis are the significantly reduced number of biopsies compared to a random or systematic biopsy procedure to detect a primary cancer, and the significantly enhanced success rate of extracting primary cancer tissue with a single biopsy procedure. On the one hand, this results in a faster and more reliable diagnosis with significantly decreased intra-examiner variability; on the other hand, the patient's discomfort due to multiple biopsy sessions is dramatically reduced.
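The pipeline of texture descriptors computed per image patch and fed to a neural network that flags suspicious regions can be illustrated with a deliberately small sketch. The descriptors below (mean, variance, gradient energy) and the one-hidden-layer network are generic stand-ins, not the actual filters or network architecture used in ANNA.

```python
import numpy as np

def texture_features(patch):
    """Three generic texture descriptors for a grayscale patch:
    mean intensity, local variance and gradient energy."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.var(), (gx ** 2 + gy ** 2).mean()])

class TinyMLP:
    """One-hidden-layer network trained with plain gradient descent on
    the binary cross-entropy loss."""

    def __init__(self, n_in, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)  # hidden activations
        return 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))

    def train(self, X, y, lr=0.1, epochs=1000):
        for _ in range(epochs):
            p = self.forward(X)
            d2 = p - y                                    # dL/dlogit
            dh = np.outer(d2, self.W2) * (1.0 - self.h ** 2)
            self.W2 -= lr * (self.h.T @ d2) / len(X)
            self.b2 -= lr * d2.mean()
            self.W1 -= lr * (X.T @ dh) / len(X)
            self.b1 -= lr * dh.mean(axis=0)
```

In a TRUS setting, each pixel would receive a score from the features of its surrounding patch, and high-scoring regions would be marked as biopsy candidates; the sketch omits the spatial aggregation and the self-organizing-map components mentioned above.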