VisIMon

“Networked, intelligent, and interactive system for continuous perioperative monitoring and control of an irrigation device and for functional monitoring of the lower urinary tract”

Continuous bladder irrigation is used as standard after operations on the bladder, prostate, or kidneys to prevent complications caused by blood clots. The irrigation should be monitored constantly, which cannot be managed in everyday clinical practice. The motivation of VisIMon is therefore to enable automated monitoring, which leads to improved patient care while at the same time relieving the staff.

The goal of the VisIMon project is therefore the development of a small, body-worn module that monitors the irrigation process with the help of different sensors. The system is designed to fit seamlessly into the procedure established as the standard. By joining interdisciplinary partners from industry and research, the necessary sensors are to be developed and combined into an effective monitoring system. Modern communication technology enables the development of entirely new concepts for how medical devices interact with a hospital. With more than 200,000 applications per year in Germany, the development is highly attractive not only from a medical but also from an economic point of view.

 

Partners:

  • Albert-Ludwigs-Universität Freiburg
  • Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
  • Lohmann & Birkner Health Care Consulting GmbH
  • Digital Biomedical Imaging Systems AG

 

Contact: Dipl.-Inf. Dr. Gerd Reis
E-Mail: reis@dfki.uni-kl.de
Phone: +49 631 20575 2090
VIDETE

Generating prior knowledge with the help of learning systems for the 4D analysis of complex scenes

Motivation

Artificial intelligence is currently influencing many fields, including machine vision. Applications in the areas of autonomous systems, medicine, and industry face fundamental challenges: 1) the generation of prior knowledge to solve highly underdetermined problems, 2) the verification and explanation of the answers computed by the AI, and 3) the provision of AI in scenarios with limited computing power.

Goals and Approach

The goal of VIDETE is to use machine learning methods to generate prior knowledge and thus make previously unsolvable tasks, such as the reconstruction of dynamic objects with only a single camera, practically manageable. Suitable prior knowledge will make it easier to analyze and interpret general scenes algorithmically, for example in the field of autonomous systems. Furthermore, methods are to be developed that make it possible to justify the computed results before they are used further; in the field of medicine, this would be comparable to a colleague's reasoned opinion, in contrast to the blanket answer of current AI methods. The modularization of the algorithms is regarded as a key technique, which will in particular also increase the availability of AI. Modular components can be realized efficiently in hardware, so computations (e.g., the recognition of a gesture) can be performed close to the generating sensor. This in turn makes it possible to communicate semantically enriched information with little overhead, making AI available even on mobile devices with limited resources.

Innovations and Perspectives

Artificial intelligence is finding its way into almost all areas of daily life and work. The results expected from the VIDETE project will be independent of the defined research scenarios and can contribute to progress in many application areas (private life, industry, medicine, autonomous systems, etc.).

 

Contact: Dipl.-Inf. Dr. Gerd Reis
E-Mail: reis@dfki.uni-kl.de
Phone: +49 631 20575 2090
IMCVO

Inertial Motion Capture System with Visual Odometry

The IMCVO project proposes a wearable sensory system, based on an inertial motion capture device and visual odometry, that can easily be mounted on a robot as well as on a human, delivers 3D kinematics in all environments, and additionally provides a 3D reconstruction of the surroundings.

Its objective is to develop this platform for benchmarking both exoskeletons and bipedal robots.

The project will develop a scenario-generic sensory system for humans and bipedal robots; two benchmarking platforms will therefore be delivered and integrated into the Eurobench facilities in Spain and Italy for validation tests.

The plan is to use recent advances in 3D kinematics estimation based on inertial measurement units (IMUs) that work without magnetometers and are therefore robust against magnetic interference induced by the environment or the robot.

This allows drift-free 3D joint angle estimation, e.g., of a lower-body configuration or a robotic leg, in a body-attached coordinate system.

To map the environment and to correct possible global heading drift (relative to an external coordinate frame) of the magnetometer-free IMU system, the visual odometry is to be fused stochastically with the IMU system. The 3D point cloud recorded by the stereo camera is used in a post-processing phase to generate a 3D reconstruction of the environment. The result should be a magnetometer-free wearable motion capture system with approximate environment mapping that works for humans and bipedal robots in any environment, i.e., indoors and outdoors.
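
The exact fusion is part of the project's research; purely as an illustration of the heading-drift correction, a complementary-filter-style update that blends the IMU heading toward the visual-odometry heading could look like the following sketch (all names and the gain value are assumptions, not the project's algorithm):

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def fuse_heading(yaw_imu, yaw_vo, gain=0.02):
    """Blend the slowly drifting global heading of a magnetometer-free
    IMU toward the visual-odometry heading.

    yaw_imu : heading integrated from the IMU (rad), drifts over time
    yaw_vo  : heading from visual odometry (rad), anchored to the
              external coordinate frame
    gain    : small blending weight; a stochastic fusion would derive
              this from the uncertainties of both estimates instead
    """
    innovation = wrap_angle(yaw_vo - yaw_imu)
    return wrap_angle(yaw_imu + gain * innovation)

# Toy usage: the IMU heading drifts linearly, VO stays near the truth.
true_yaw = 0.5
yaw = 0.5
for _ in range(500):
    yaw_imu = yaw + 0.001          # simulated per-step heading drift
    yaw_vo = true_yaw + np.random.normal(0.0, 0.01)
    yaw = fuse_heading(yaw_imu, yaw_vo)
print(f"fused heading after 500 steps: {yaw:.3f} rad (true: {true_yaw} rad)")
```

A stochastic fusion in the project's sense would replace the fixed gain with a Kalman-style weight computed from the uncertainty of each heading estimate.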

To improve localization and to measure gait events, wireless foot pressure insoles will be integrated to measure the ground interaction. Together with the foot insoles, all data necessary to reconstruct kinetics and kinematics will be delivered and fully integrated into the Robot Operating System (ROS). A user interface will be developed for possible modifications of the skeleton. We also provide validation recordings with a compliant robot leg and with humans, including the computation of key gait parameters.
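
As a minimal sketch of what such a ROS integration might look like, the following ROS 1 (rospy) node publishes per-cell insole pressure readings; the topic name, message layout, sampling rate, and driver stub are illustrative assumptions, not the project's actual interface:

```python
#!/usr/bin/env python
# Minimal sketch: publish wireless foot-pressure insole readings into ROS.
import rospy
from std_msgs.msg import Float32MultiArray

def read_insole():
    """Placeholder for the insole driver; returns one pressure value
    per sensor cell (here: 16 cells, all zero)."""
    return [0.0] * 16

def main():
    rospy.init_node("insole_publisher")
    pub = rospy.Publisher("/insole/left/pressure", Float32MultiArray,
                          queue_size=10)
    rate = rospy.Rate(100)  # assumed ~100 Hz insole sampling rate
    while not rospy.is_shutdown():
        pub.publish(Float32MultiArray(data=read_insole()))
        rate.sleep()

if __name__ == "__main__":
    main()
```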

 

Contact: Dr. Bertram Taetz
E-Mail: Bertram.Taetz@dfki.uni-kl.de
AuRoRaS

“Automotive Robust Radar Sensing”

The main objective of the AuRoRaS project is the research and development of intelligent methods that, for the first time, enable high-resolution automotive radar sensors of the latest generation, algorithmically and on the software side, to meet the purpose and requirements of highly automated to autonomous driving (levels 4-5). The innovation consists in using artificial intelligence methods to resolve the various artifacts that radar sensors exhibit for physical reasons.

Partners:

  • ASTYX GmbH (Dr. Georg Kuschk), Lise-Meitner-Straße 2a, 85521, Ottobrunn, DE
  • BIT Technology Solutions GmbH (management), Gewerbering 3, 83539 Pfaffing OT Forsting, DE

 

Contact: Jason Rambach; Mahdi Chamseddine
E-Mail: Jason.Rambach@dfki.uni-kl.de
Phone: +49 631 20575 3740
You in 3D
duin3d

Contact person: Onorina Kovalenko

E-Mail: Onorina.Kovalenko@dfki.de

Tel: +49 (0) 631 20575 3607

Real-time motion capture of multiple persons in community videos

Tracking multiple persons in 3D with high accuracy and temporal stability, in real-time and with a monocular RGB camera, is a challenging task with many practical applications, such as 3D human character animation, motion analysis in sports, and modeling of human body movements. Optical human tracking methods often require multi-view video recordings or depth cameras. Systems that work with monocular RGB cameras mostly do not run in real-time, track only a single person, and require additional data such as an initial human pose. All this implies many practical limitations and is one of the major reasons why optical motion capture systems have not yet seen more widespread use in commercial products. The DFKI research department Augmented Vision presents a novel, fully automatic multi-person motion tracking system. The presented system works in real-time on monocular RGB video and tracks multiple people in 3D. It does not require any manual work or a specific human pose to start the tracking process: the system automatically estimates a personalized 3D skeleton and an initial 3D location of each person. The system has been tested for tracking multiple persons in outdoor scenes, community videos, and low-quality videos captured with mobile phone cameras.
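
Purely as an architectural illustration (not the actual DFKI method), such a fully automatic pipeline can be pictured as a per-frame loop of 2D keypoint detection, 3D lifting, and track association; the detector and lifter below are hypothetical stand-ins:

```python
import numpy as np

def track_frame(frame, detect_2d, lift_3d, tracks, max_dist=0.5):
    """One step of a monocular multi-person 3D tracking loop (sketch).

    detect_2d : callable, frame -> list of 2D keypoint sets (one per person)
    lift_3d   : callable, 2D keypoints -> 3D joint positions (meters)
    tracks    : dict track_id -> last 3D root position
    """
    next_tracks, new_id = {}, max(tracks, default=0) + 1
    for kp2d in detect_2d(frame):
        pose3d = lift_3d(kp2d)        # personalized skeleton fit goes here
        root = pose3d.mean(axis=0)    # crude root: mean of all joints
        # Greedy nearest-neighbour association (no exclusivity; fine
        # for a sketch, a real system would solve an assignment problem).
        best, best_d = None, max_dist
        for tid, prev_root in tracks.items():
            d = np.linalg.norm(root - prev_root)
            if d < best_d:
                best, best_d = tid, d
        if best is None:              # unseen person: start a new track
            best, new_id = new_id, new_id + 1
        next_tracks[best] = root
    return next_tracks

# Toy usage with stub models (two static "people"):
detect = lambda f: [np.zeros((17, 2)), np.ones((17, 2))]
lift = lambda kp: np.hstack([kp, np.ones((17, 1))])  # fake 3D lift
tracks = {}
for frame in range(3):
    tracks = track_frame(None, detect, lift, tracks)
print(tracks)  # two stable track ids across frames
```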

Be-greifen

Contact person: Jason Raphael Rambach
Funding program: German BMBF
Begin: 01.07.2016
End: 30.06.2019

Comprehensible, interactive experiments: practice and theory in STEM (MINT) studies

The project is funded by the Federal Ministry of Education and Research (BMBF). It combines tangible, manipulable objects (“tangibles”) with advanced technologies (“augmented reality”) to develop new, intuitive user interfaces. Interactive experiments will make it possible to actively support the learning process during STEM studies and to provide the learner with additional theoretical information about physics.

The project uses interfaces of smartphones, smartwatches, and smart glasses. One example is a data gadget that allows users to select content through a combination of subtle head movements, eyebrow gestures, and voice commands, and to view it on a display mounted above the eye. Through this casual information processing, students are not distracted from carrying out the experiment and can still reach for and manipulate the objects.

A research prototype developed as a preliminary study demonstrates these developments. For this purpose, scientists at DFKI and at the Technical University of Kaiserslautern developed an app that supports pupils and students in determining the relationship between the fill level of a glass and the pitch of its sound. The gPhysics application captures the amount of water, measures the sound frequency, and transfers the results into a diagram. The app can be operated by head gestures alone, without manual interaction. In gPhysics, the water quantity is recorded with a camera, and the determined value can be corrected by head gestures or voice commands if required. The microphone of the Google Glass measures the sound frequency. Both pieces of information are displayed in a graph that is continuously updated on the display of the Google Glass. In this way, learners can follow the frequency curve in relation to the water level directly while filling the glass. Since the curve is generated comparatively quickly, learners can test different hypotheses directly during the interaction by varying various parameters of the experiment.
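
The frequency measurement at the core of such an app can be sketched generically as an FFT peak pick over a short microphone buffer (an illustrative sketch, not the gPhysics implementation):

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the dominant frequency (Hz) of a mono audio buffer
    via an FFT magnitude peak."""
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Toy usage: a synthetic 440 Hz tone sampled at 44.1 kHz.
sr = 44100
t = np.arange(sr // 10) / sr                        # 100 ms buffer
tone = np.sin(2 * np.pi * 440.0 * t)
print(f"{dominant_frequency(tone, sr):.1f} Hz")     # ~440 Hz
```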

In the project, further experiments on the physical basics of mechanics and thermodynamics are being built. In addition, the consortium is developing technologies that enable learners to discuss video and sensor recordings, to analyze their experiments in a cloud, and to exchange ideas with fellow students or compare results.

DFKI coordinates five further partners from research and practice: the Technical University of Kaiserslautern, studio klv GmbH & Co. KG from Berlin, the University of Stuttgart, Con Partners GmbH from Bremen, and Embedded Systems Academy GmbH from Barsinghausen.

PROWILAN

Contact person: Jason Raphael Rambach
Funding by: BMBF
Grant agreement no.: 16KIA0243K
Begin: 01.02.2015
End: 31.01.2018

Professional Wireless Industrial LAN

Due to the rising requirements of industry for flexible and cost-efficient production, secure and robust wireless solutions are steadily gaining interest. The BMBF project “Professional Wireless Industrial LAN – proWiLAN” brings together experts from a consortium of eight German organizations to develop the next generation of wireless radio technology, which will meet the rapidly growing requirements of future industrial applications.

The aim of the project is to improve the robustness, bandwidth, and latency of wireless solutions so that even sophisticated or safety-critical applications, such as augmented reality or a radio-based emergency stop button, can be supported efficiently and in a user-friendly way. Common wireless technologies allow stable execution of cooperative augmented reality applications only to a limited extent. Moreover, in hard-to-access environments where assembly and maintenance work must be performed, present-day wireless technologies cannot satisfy the growing requirements.

Necessary and planned innovations include, among others, a multi-band-capable radio interface that is not sensitive to interference in any single band and is thus always immediately available. This makes very fast application response times possible. Short system response times are important, e.g., to achieve the guaranteed shutdown time of a machine in case of an emergency stop. Another key innovation of proWiLAN is the integration of a powerful 60 GHz module, which brings a significant increase in transmission data rates. Furthermore, a localization method for industrial environments is to be integrated so that mobile devices can determine their location and orientation in space. For high customer acceptance, the novel Plug & Trust process developed in proWiLAN, which allows quick and easy commissioning, retrofitting, and security, is of key importance.

proWiLAN is funded by the research program “ICT 2020 – Research for Innovations” of the Federal Ministry of Education and Research (BMBF) with a total of 4.6 million euros. The project started in February 2015 and runs until the beginning of 2018. In addition to DFKI as project coordinator, the consortium includes ABB AG, IHP – Leibniz Institute for Innovative Microelectronics, IMST GmbH, NXP Semiconductors Germany GmbH, Bosch Rexroth AG, Robert Bosch GmbH, and the Technische Universität Dresden.

Marmorbild

Contact person: Dr. Gerd Reis
Funding by: BMBF
Grant agreement no.: 03VP00293
Funding program: VIP+
Begin: 01.10.2016
End: 30.09.2019

© S. Siegesmund

Marble has long been used as the preferred material for representative buildings and sculptures. Yet, due to its chemical composition and its porosity, marble is prone to natural deterioration in outdoor environments, with an accelerating rate since the beginning of industrialization, mainly due to increasing pollution. A basic requirement for successful restoration and conservation is a regularly repeated assessment of the current condition of the object and knowledge about prior restoration actions. Ideally, the assessment is non-destructive. This requirement is fulfilled both by the optical digitization of an object's shape and appearance and by the ultrasound examination used to acquire properties related to material quality.

The goal of the joint research project Marmorbild of the University of Kaiserslautern, the Fraunhofer Institute IBMT, and the Georg-August-University Göttingen is the validation of modern ultrasound technologies and digital reconstruction methods for the non-destructive testing of façades, constructions, and sculptures made of marble. The proof of concept has been provided by prior research.

The planned portable assessment system holds high potential for innovation. In the future, more objects can be examined cost-effectively within short time periods, and damage can be identified at an early stage, allowing for a targeted investment of effort and financial resources.

Dresdner Knabe
DAKARA

Contact person: Oliver Wasenmüller
Funding by: BMBF
Grant agreement no.: 13N14318
Funding program: Photonik Forschung Deutschland – Digitale Optik
Begin: March 2017
End: February 2020

Design and application of an ultra-compact, energy-efficient and reconfigurable camera matrix for spatial analysis

Within the DAKARA project, an ultra-compact, energy-efficient and reconfigurable camera matrix is being developed. In addition to standard color images, it provides accurate depth information in real-time, forming the basis for various applications in the automotive sector (autonomous driving), in production (Industry 4.0), and many more.

 

Real-time depth image calculations with camera matrix

The ultra-compact camera matrix, which is composed of 4×4 single cameras, functions not only as a camera for color images but also as a provider of depth information.

The ultra-compact camera matrix is composed of 4×4 single cameras on a wafer and is equipped with wafer-level optics, resulting in an extremely compact design no bigger than a cent coin. This is made possible by the innovative camera technology of AMS Sensors Germany GmbH.

The configuration as a camera matrix captures the scene from sixteen slightly displaced perspectives and thus allows the scene geometry (a depth image) to be calculated from these views by means of the light-field principle. Because such calculations are very compute-intensive, close integration of the camera matrix with an efficient embedded processor is required to enable real-time applications. The depth image calculations, which are researched and developed by DFKI (Department Augmented Vision), can be carried out in the electronic functional layer of the camera system in a resource-conserving manner and in real-time. Potential applications benefit significantly from the fact that the depth information is made available to them, in addition to the color information, without further calculations on the user side. Thanks to the ultra-compact design, the new camera can be integrated into very small and/or filigree components and used as a non-contact sensor. The structure of the camera matrix is reconfigurable, so that a more specific layout can be used depending on the application. In addition, the depth image computation can also be reconfigured and thus respond to specific requirements for the depth information.
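
The light-field computation itself is the subject of the project's research; its geometric core, however, is the standard triangulation of the disparity observed between neighboring cameras of the matrix. The sketch below shows only this relation, with an assumed focal length and baseline (not DAKARA specifications):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation Z = f * B / d for two neighboring
    cameras; a light-field approach aggregates such evidence over
    all sixteen views, which this sketch omits."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Toy usage with assumed parameters: 500 px focal length,
# 5 mm baseline between adjacent cameras of the 4x4 matrix.
disparities = np.array([1.0, 2.5, 10.0])                # pixels
print(depth_from_disparity(disparities, 500.0, 0.005))  # depths in meters
```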

Ultra-compact, reconfigurable, low energy consumption
The innovation of the DAKARA project is the overall system, which provides both color and depth images. Similar systems that have reached product stage are generally active systems that emit light in order to calculate depth. Major disadvantages of such systems are large designs, high energy consumption, and high costs. Passive systems with much lower energy consumption exist, but they are still at the research stage and generally have large designs and low frame rates.

For the first time, DAKARA offers a passive camera that combines an ultra-compact design, high frame rates, reconfigurable properties, and low energy consumption, thereby leaving the research stage and entering the market with well-known users from different domains.

 

Three-Dimensional Geometry for Automotive and Industry

To demonstrate the power and innovative potential of the DAKARA concept, the new camera is used in two different application scenarios: an intelligent rear-view camera in the automotive field and a workplace assistant in manual production.

The intelligent rear-view camera planned by the partner ADASENS Automotive GmbH is capable of interpreting the rear vehicle environment spatially, metrically, and semantically, in contrast to currently used systems consisting of ultrasonic sensors and a mono color camera. As a result, even finer structures such as curbs or poles can be recognized and taken into account during automated parking maneuvers. In addition, the system is able to detect people and to trigger warning signals in an emergency. The DAKARA camera thus contributes significantly to increasing the safety of autonomous or semi-automated driving.

Color image of a rear-view camera in 2D and without depth information
The color image of the rear-view camera is replaced by a depth image in which every pixel states the distance to the scene.

The workplace assistant is demonstrated in a manual assembly process at Bosch Rexroth AG and DFKI (Department Innovative Factory Systems). The aim is to support the operator in his tasks and to assure their quality. For this purpose, the new camera matrix is mounted above the workplace, and both objects and hands are detected spatially and over time by the algorithms of the partner CanControls GmbH. A particular challenge is that objects such as tools or workpieces held in the hand are very difficult to separate from it; this separation is made possible by the additional depth information provided by the DAKARA camera. In this scenario, a gripping-path analysis, a removal and fill-level control, the interaction with a dialog system, and tool position detection are implemented. The camera is designed to replace a large number of sensors currently used in various manual production systems of the project partner Bosch Rexroth, thus achieving a new level of quality and cost.

Color image of a workplace without depth information
Image of the scene with depth information. In addition, a clear separation of tool and hand is possible thanks to the DAKARA technology.

Over the next three years, the new camera matrix will be designed, developed, and extensively tested in the scenarios mentioned. A first prototype will be realized by late summer 2018. The DAKARA project is funded by the Federal Ministry of Education and Research (BMBF) within the framework of the “Photonics Research Germany – Digital Optics” program. The project volume totals 3.8 million euros, almost half of which is provided by the industry partners involved.


Partners:

  • AMS Sensors Germany GmbH, Nürnberg (consortium lead)
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Kaiserslautern (technical consortium lead)
  • ADASENS Automotive GmbH, Lindau
  • Bosch Rexroth AG, Stuttgart
  • CanControls, Aachen
LiSA

Contact person: Dr. Gerd Reis
Funding by: BMWi
Grant agreement no.: 03ET1416A-F
Funding program: EnOB
Begin: 01.11.2016
End: 31.10.2019


Light and solar management using active and model-predictively controlled components

Project partners: Universität Kaiserslautern, DFKI GmbH Kaiserslautern, ebök Planung und Entwicklung GmbH, Dresden Elektronik Ingenieurtechnik GmbH, Agentilo GmbH, Herbert Waldmann GmbH & Co. KG

HDR sensor image (left) and simulated scene (right)

Simulated reduction of glare

The research project LiSA is a broad-based joint project in the areas of façade, lighting, and control technology. The aim is to enable the most energy-efficient operation of office and administrative buildings while taking user satisfaction into account. Exemplary system integration promotes an understanding of the interaction of individual components and of the necessity of system solutions.

At the component level, technologies are being developed that enable the efficient use of daylight, provide energy-saving artificial lighting, and reduce the cooling load in summer by means of shading. At the sensor level, a cost-effective sensor is being developed that measures the lighting conditions as well as the heat input into the room from solar radiation. A model-predictive control approach optimizes the operation of the components, which can be managed and controlled via wireless communication paths.
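
As a rough illustration of the model-predictive idea (a toy sketch, not the project's controller or building model), the following picks, at each step, the first action of the cheapest plan over a short horizon of discrete shading positions:

```python
import itertools
import numpy as np

def mpc_step(temp, horizon=3, positions=(0.0, 0.5, 1.0),
             t_out=30.0, comfort=(21.0, 25.0)):
    """Pick the next shading position (0 = open, 1 = closed) by brute-force
    search over all plans of length `horizon`. Toy room model: the room
    temperature relaxes toward the outdoor temperature, attenuated by the
    shading; artificial-light cost rises as the shading closes."""
    best_plan, best_cost = None, np.inf
    for plan in itertools.product(positions, repeat=horizon):
        t, cost = temp, 0.0
        for shade in plan:
            t += 0.1 * (t_out - t) * (1.0 - 0.8 * shade)  # solar heat gain
            cost += shade * 0.2                           # lighting penalty
            if not (comfort[0] <= t <= comfort[1]):
                cost += 10.0                              # comfort violation
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan[0]   # apply only the first action, then re-plan

# Toy usage: warm afternoon, room already near the upper comfort bound.
print(mpc_step(24.5))  # likely closes the shading to avoid overheating
```

A real controller would replace the toy room model with an identified thermal and daylight model of the office and weigh energy use, glare, and comfort in the cost function.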

By implementing the project in a Living Lab Smart Office Space, which is subject to detailed monitoring and in which people use the space according to its actual purpose, it is ensured that the developments are continuously validated empirically and that the results are perceived by users as added value. The people working in the Living Lab have the opportunity to interact with the technology and are thus an essential part of the investigations.