CONTACT

Control and Animation of Cognitive Characters

The CONTACT project takes a first step towards the broad goal of building a general animation and simulation framework for cognitive, human-like, autonomous characters. The objective of this research is to develop a working platform for future autonomous characters in which users define high-level goals and virtual characters determine appropriate actions based on domain knowledge and AI techniques. The user can also overrule a character's decision, for example forcing it to fall back on a predefined behaviour. If an action involves movement, the corresponding motion sequence is created by adapting reference motions provided by a motion database.
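
As an illustration of this layered control scheme, the following Python sketch (all names are hypothetical, not the project's actual API) lets a user goal drive action selection, allows a user override, and falls back to a predefined behaviour when no reference motion is available.

    # Hypothetical sketch of the envisioned control stack: goal -> action -> motion.
    # All class and method names are illustrative, not CONTACT project code.

    from dataclasses import dataclass

    @dataclass
    class ReferenceMotion:
        name: str
        duration: float  # seconds

    MOTION_DATABASE = {
        "walk": ReferenceMotion("walk_cycle", 1.2),
        "sit": ReferenceMotion("sit_down", 2.0),
    }

    class CognitiveCharacter:
        def __init__(self):
            self.forced_action = None  # set when the user overrules the planner

        def plan_action(self, goal: str) -> str:
            # Domain knowledge would map goals to actions; here a trivial lookup.
            return {"reach_chair": "walk", "rest": "sit"}.get(goal, "idle")

        def act(self, goal: str) -> str:
            action = self.forced_action or self.plan_action(goal)
            clip = MOTION_DATABASE.get(action)
            if clip:
                # A real system would retarget/time-warp the reference motion here.
                return f"playing adapted clip '{clip.name}' ({clip.duration}s)"
            return "falling back to predefined idle behaviour"

    character = CognitiveCharacter()
    print(character.act("reach_chair"))   # planner decides: walk
    character.forced_action = "sit"       # the user overrides the decision
    print(character.act("reach_chair"))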

VIDETE

Generation of prior knowledge with the help of learning systems for 4D analysis of complex scenes

Motivation

Artificial intelligence currently influences many areas, including machine vision. For applications in the fields of autonomous systems, medicine, and industry, there are three fundamental challenges: 1) generating prior knowledge to solve severely under-determined problems, 2) verifying and explaining the responses computed by an AI, and 3) deploying AI in scenarios with limited computing power.

Goals and Procedure

The goal of VIDETE is to generate prior knowledge by means of machine learning, thus making previously unsolvable tasks, such as the reconstruction of dynamic objects with just a single camera, practically manageable. With suitable prior knowledge it becomes easier to analyze and interpret general scenes algorithmically, for example in the area of autonomous systems. Furthermore, methods will be developed to justify computed results before they are used further; in the field of medicine this would be comparable to a colleague's second opinion, in contrast to the unexplained answers of current AI methods. Modularization of algorithms is considered a key technique, especially for increasing the availability of AI: modular components can be realized efficiently in hardware, so that computations (e.g. the recognition of a gesture) can be performed close to the generating sensor. This, in turn, enables semantically enriched information to be communicated with low overhead, which means that AI can also be used on mobile devices with limited resources.
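
The following minimal Python sketch (function names and numbers are illustrative, not project code) shows the intent of this modularization: a module near the sensor reduces raw frames to a compact semantic event before anything is transmitted.

    # Illustrative sketch (not VIDETE code) of the "compute near the sensor" idea:
    # a module classifies a gesture locally and transmits a compact semantic event
    # instead of the raw video frames.

    import json

    def classify_gesture(frames) -> str:
        # Placeholder for a learned module realized efficiently in hardware.
        return "swipe_left"

    def semantic_event(frames) -> bytes:
        label = classify_gesture(frames)
        # A few bytes of enriched semantics instead of megabytes of pixels.
        return json.dumps({"event": "gesture", "label": label}).encode()

    frames = [bytes(640 * 480) for _ in range(30)]  # ~9 MB of raw grayscale input
    event = semantic_event(frames)
    print(f"raw input: {sum(len(f) for f in frames)} bytes -> transmitted: {len(event)} bytes")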

Innovations and Perspectives

Artificial intelligence is finding its way into almost all areas of daily life and work. The results expected from the VIDETE project are independent of the defined research scenarios and can contribute to progress in many application areas (private life, industry, medicine, autonomous systems, etc.).

Contact

Dr. Dipl.-Inf. Gerd Reis

SYNERGIE

System for Optimizing the Energy Efficiency of Municipal Wastewater Treatment Plants through Intelligent Knowledge Management

Within the SYNERGIE project, an intelligent energy management system for wastewater treatment plants is being developed. It captures and links all information relevant to the topic of energy on wastewater treatment plants and at the same time offers mechanisms that enable cross-system optimization of energy generation and use (including the heat produced).

One component of SYNERGIE is a knowledge and information database (ontology) which provides information on plant components (e.g. pumps and blowers/compressors and their characteristic curves; key figures of the combined heat and power unit generating electrical energy and heat from biogas), on the running processes (e.g. optimal conditions for the production of biogas and digester gas, such as temperature and pH values in the digester), and on energy costs (e.g. tariff information from the respective energy supplier).
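
A minimal Python sketch of what such a knowledge and information base could look like (component names, process values and tariffs are invented for illustration):

    # Hypothetical sketch of the kind of knowledge base described above:
    # plant components, process conditions and tariff data in one queryable store.

    knowledge_base = {
        "components": {
            "pump_1": {"type": "pump", "power_kw": 15.0},
            "chp_1": {"type": "CHP", "el_efficiency": 0.38, "th_efficiency": 0.45},
        },
        "processes": {
            "digestion": {"optimal_temp_c": (35, 38), "optimal_ph": (6.8, 7.4)},
        },
        "tariffs": {"peak_eur_per_kwh": 0.32, "offpeak_eur_per_kwh": 0.21},
    }

    def digester_in_optimal_range(temp_c: float, ph: float) -> bool:
        cond = knowledge_base["processes"]["digestion"]
        lo_t, hi_t = cond["optimal_temp_c"]
        lo_p, hi_p = cond["optimal_ph"]
        return lo_t <= temp_c <= hi_t and lo_p <= ph <= hi_p

    print(digester_in_optimal_range(36.5, 7.1))  # True: biogas production conditions met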

Specially adapted interfaces enable data exchange between the SYNERGIE system and both the process control system and the modelling and simulation software. The aim is to allow the systems to be combined with one another as freely as possible.

Scalable visualization and interaction techniques are used to display the information and to interact with it. In this context, modern multitouch techniques are investigated: they offer fast and intuitive control and analysis without the usual combination of keyboard and mouse. This task requires research into new user-interface concepts that provide optimal multitouch support, detached from old designs and simple 1:1 transfers of common mouse interactions.

Partners

  • ifak system GmbH
  • Technische Universität Kaiserslautern – Fachgebiet Siedlungswasserwirtschaft

Virtual Try-On

Interactive Custom Clothing Catalog

The goal of the Virtual Try-On subproject Interactive Custom Clothing Catalog is to create the technological foundations for a synergetic combination of the innovative offering of made-to-measure garments with e-commerce using VR methods, and to put these techniques into practice in the form of a virtual, customer-specific Internet catalog.

After a one-time acquisition of customer-specific data (e.g. body measurements), this novel web-based system is used essentially in two steps. First, the customer combines individual garments from the offering (model type, colour and features) and receives, as immediate feedback, predefined two-dimensional views of himself in the selected clothing. With a mouse click he can then switch to a 3D mode: using special morphing methods, the system generates a 3D model of the clothed figurine that can be viewed interactively from any perspective. To enable fast, interactive rendering and thereby increase customer acceptance of the system, a physically based simulation of the clothing is deliberately omitted at this stage.
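
The morphing idea can be illustrated with a minimal Python sketch (the precomputed template meshes and blend weights are hypothetical; this is not the project's algorithm): the clothed figure is blended from templates using weights derived once from the customer's measurements, so no cloth physics is needed at viewing time.

    # Minimal sketch of measurement-driven morphing: the clothed figurine is a
    # linear blend of precomputed template meshes (e.g. slim/average/tall).

    import numpy as np

    rng = np.random.default_rng(0)
    # Three template meshes of the clothed figurine, each with 1000 vertices.
    templates = np.stack([rng.random((1000, 3)) for _ in range(3)])

    def morph(weights):
        w = np.asarray(weights, dtype=float)
        w /= w.sum()  # convex combination keeps the mesh plausible
        return np.tensordot(w, templates, axes=1)

    # Weights derived once from the customer's body measurements.
    customer_mesh = morph([0.2, 0.7, 0.1])
    print(customer_mesh.shape)  # (1000, 3) -> ready for interactive 3D viewing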

  • Synergetic combination of the innovative offering of made-to-measure garments with e-commerce using VR methods.
  • Putting these techniques into practice in the form of a virtual, customer-specific Internet catalog.
  • Development of intelligent morphing methods for clothing visualization without physically based simulation.

VisIMon

Networked, Intelligent and Interactive System for Continuous, Perioperative Monitoring and Control of an Irrigation Device, as well as for Functional Monitoring of the Lower Urinary Tract

Continuous bladder irrigation is the standard after operations on the bladder, prostate, or kidneys to prevent complications caused by blood clots. The irrigation should be monitored constantly, but this is not feasible in everyday clinical practice. The motivation of VisIMon is therefore to enable automated monitoring, which improves patient care while relieving the strain on staff.

The aim of the VisIMon project is the development of a small module worn on the body that monitors the irrigation process with the aid of various sensors. The system should integrate seamlessly with the established standard process. Through the cooperation of interdisciplinary partners from industry and research, the necessary sensors are to be developed and combined into an effective monitoring system. Modern communication technology enables completely new concepts for how medical devices interact with a hospital. With more than 200,000 procedures per year in Germany, the development is highly attractive not only from a medical but also from an economic point of view.
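
As a rough illustration of what automated irrigation monitoring could look like, the following Python sketch compares inflow and outflow rates and raises alerts; the signal names and thresholds are invented for the example, not VisIMon specifications.

    # Hedged sketch of automated irrigation monitoring (illustrative thresholds).

    def check_irrigation(inflow_ml_min: float, outflow_ml_min: float) -> str:
        balance = inflow_ml_min - outflow_ml_min
        if outflow_ml_min < 5 and inflow_ml_min > 20:
            return "ALERT: outflow blocked (possible clot) - notify staff"
        if balance > 50:
            return "WARNING: fluid retention exceeds threshold"
        return "OK"

    for sample in [(100, 95), (100, 2), (120, 60)]:
        print(sample, "->", check_irrigation(*sample))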

Partners

  • Albert-Ludwigs-Universität Freiburg
  • Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung E.V.
  • Lohmann & Birkner Health Care Consulting GmbH
  • Digital Biomedical Imaging Systems AG

Contact

Dr. Dipl.-Inf. Gerd Reis

SUDPLAN

Sustainable Urban Development Planner for Climate Change Adaptation

The SUDPLAN project aims at developing an easy-to-use, web-based planning, prediction, decision-support and training tool for use in an urban context, based on a what-if scenario execution environment. The tool will help to assure the population's health, comfort, safety and quality of life, as well as the sustainability of investments in utilities and infrastructure, within a changing climate. It rests on an innovative and visionary capacity to link, in an ad-hoc fashion, existing environmental simulation models, information and sensor infrastructures, spatial data infrastructures and climate scenario information in a service-oriented approach, as part of the Single Information Space in Europe for the Environment (SISE). It will provide end users with 3D modelling and simulation as well as cutting-edge, highly interactive 3D/4D visualization, including visualization on real 3D hardware.

The tool includes the SUDPLAN Scenario Management System and three so-called Common Services, which “downscale” regional climate change models using local knowledge and which will be available for use throughout Europe. Both components will contribute to improved assessment of urban climate change impact. Vital aspects of climate change are considered in four carefully selected urban pilot applications located in Austria, the Czech Republic, Germany and Sweden. They cover applications as diverse as: a) extreme rainfall episodes causing problems with uncontrollable, extremely localized runoff and with drainage and sewage systems, b) hazardous air pollution and high ambient temperature episodes causing health risks, and c) social dynamics (the movement of people) as a function of climate change and quality of living.

DFKI's role in this European project is the interactive 3D/4D visualization of simulation input and result data on standard 2D as well as on 3D hardware. Furthermore, DFKI develops interaction methods for intuitive manipulation and analysis of these data.
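
The role of a downscaling Common Service can be illustrated with a deliberately simplified Python sketch (the correction factor and data are invented; real downscaling uses far richer local knowledge than a single scale factor):

    # Purely illustrative sketch of the service-oriented idea: a "Common Service"
    # downscales a regional climate time series to city scale using local knowledge.

    def downscale(regional_series_mm_h, local_factor=1.3):
        # Local knowledge (e.g. urban rain-gauge statistics) corrects the
        # coarse regional climate model toward the city scale.
        return [round(v * local_factor, 1) for v in regional_series_mm_h]

    regional_rainfall = [2.0, 15.0, 40.0]  # mm/h from a regional climate scenario
    print(downscale(regional_rainfall))    # input for an urban drainage model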

Partners

  • SMHI – Swedish Meteorological and Hydrological Institute, SE
  • AIT – Austrian Institute of Technology GmbH, AT
  • cismet GmbH, DE
  • CENIA – Czech Environmental Information Agency, CZ
  • Apertum IT AB, SE
  • Stockholm Uppsala Air Quality Management Association, SE
  • City of Wuppertal, DE
  • Technische Universität Graz, AT

VES

Virtual Echocardiography System

The objective of the Virtual Echocardiography System project is the research and development of innovative techniques and solutions for realizing a virtual examination environment for educational purposes in echocardiography.

The visualisation of a beating human heart initially requires the elaboration of an ontological framework for detailed heart-beat descriptions at the medical and the geometrical level. This framework is important for future virtual tutoring work and also, more generally, for connecting visualisation technology with core Artificial Intelligence technologies at DFKI. In addition to the geometrical heart-beat model, it will be of further interest to develop a detailed model based on the physiological mechanisms underlying the heart beat, for both the healthy and the diseased heart. In combination with artificial ultrasound image generation, a virtual examination environment will be established.
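
A minimal Python sketch of how such an ontological framework could link the medical and the geometrical level (the phases, durations and scale factors are illustrative placeholders, not validated physiology):

    # Hypothetical sketch: each heart-cycle phase carries both a medical label
    # and a geometric parameter that a renderer could animate.

    HEART_CYCLE = [
        # (medical phase, fraction of cycle, left-ventricle volume scale)
        ("diastole_filling", 0.5, 1.00),
        ("atrial_systole", 0.1, 1.05),
        ("ventricular_systole", 0.3, 0.65),
        ("relaxation", 0.1, 0.90),
    ]

    def lv_scale(t: float) -> float:
        """Geometric scale of the left ventricle at normalized cycle time t in [0,1)."""
        acc = 0.0
        for _, frac, scale in HEART_CYCLE:
            acc += frac
            if t < acc:
                return scale
        return HEART_CYCLE[-1][2]

    print(lv_scale(0.7))  # ventricular systole -> contracted geometry (0.65)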

DAKARA

Design and application of an ultra-compact, energy-efficient and reconfigurable camera matrix for spatial analysis

Within the DAKARA project, an ultra-compact, energy-efficient and reconfigurable camera matrix is being developed. In addition to standard colour images, it provides accurate depth information in real time, forming the basis for various applications in the automotive industry (autonomous driving), in production, and beyond.

The camera matrix is composed of 4×4 single cameras on a wafer and is equipped with wafer-level optics, resulting in an extremely compact design of approx. 10 × 10 × 3 mm. This is made possible by the innovative camera technology of AMS Sensors Germany GmbH. The matrix configuration captures the scene from sixteen slightly displaced perspectives, which allows the scene geometry (a depth image) to be computed by means of the light-field principle. Because these computations are very demanding, close integration of the camera matrix with an efficient embedded processor is required to enable real-time applications. The depth image computation, researched and developed by DFKI (Augmented Vision department), can thus be carried out in real time and in a resource-conserving manner within the camera system's electronics. Applications benefit significantly from receiving depth information alongside the colour information without further computation on the user side. Thanks to the ultra-compact design, the new camera can be integrated into very small and/or filigree components and used as a contactless sensor. The structure of the camera matrix is reconfigurable, so that a layout suited to the specific application can be used; the depth image computation can likewise be reconfigured to meet particular requirements on the depth information.
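
The light-field principle behind the depth computation can be illustrated with a minimal Python sketch (focal length and baseline are assumed values roughly consistent with a ~10 mm module, not DAKARA specifications): a scene point shifts between neighbouring cameras of the matrix, and this disparity yields depth.

    # Minimal sketch of depth from disparity between neighbouring matrix cameras.

    def depth_from_disparity(disparity_px, focal_px=500.0, baseline_mm=2.5):
        # Neighbouring cameras in a 4x4 matrix of ~10 mm width are ~2.5 mm apart;
        # a scene point shifts by `disparity_px` between their images.
        return focal_px * baseline_mm / disparity_px  # depth in mm

    for d in (25.0, 5.0, 1.0):
        print(f"disparity {d:4.1f}px -> depth {depth_from_disparity(d):7.1f} mm")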

The innovation of the DAKARA project is the ultra-compact, energy-efficient and reconfigurable overall system that provides both colour and depth images. Comparable systems that have reached product maturity are generally active systems, which emit light in order to compute depth; their major disadvantages are high energy consumption, large form factors and high costs. Passive systems have much lower energy consumption, but they are still at the research stage and generally have large designs and low frame rates. DAKARA offers, for the first time, a passive camera that combines an ultra-compact design, high frame rates, reconfigurability and low energy consumption, leaving the research stage and entering the market with well-known users from different domains.

To demonstrate the capability and innovative potential of the DAKARA concept, the new camera is used in two different application scenarios: an intelligent rear-view camera in the automotive field and a workplace assistant in manual production.

The planned intelligent rear-view camera of the partner ADASENS Automotive GmbH can interpret the rear vehicle environment spatially, metrically and semantically, in contrast to currently used systems consisting of ultrasonic sensors and a mono colour camera. As a result, even fine structures such as curbs or poles can be recognized and taken into account during automated parking manoeuvres. In addition, the system can detect people and trigger warning signals in an emergency. The DAKARA camera thus contributes significantly to the safety of autonomous or semi-automated driving.

The workplace assistant is demonstrated on a manual assembly process at Bosch Rexroth AG and DFKI (Innovative Factory Systems department). The aim is to support the operator in his or her tasks and to verify their execution. For this purpose, the new camera matrix is mounted above the workplace, and both objects and hands are tracked in space and time by the algorithms of the partner CanControls GmbH. A particular challenge is that objects held in the hand, such as tools or workpieces, are very difficult to separate from the hand itself; this separation is made possible by the depth information the DAKARA camera additionally provides. In this scenario, grip-path analysis, removal and fill-level monitoring, interaction with a dialogue system, and tool position detection are implemented. The camera is designed to replace a large number of sensors currently used in various manual production systems of the project partner Bosch Rexroth, thus achieving a new level of quality and cost.

Over the next three years, the new camera matrix will be designed, developed and extensively tested in the scenarios described above. A first prototype is to be realized by late summer 2018. The DAKARA project is funded by the Federal Ministry of Education and Research (BMBF) within the framework of the “Photonics Research Germany – Digital Optics” programme. The project volume totals 3.8 million euros, almost half of which is provided by the industry partners involved.

Partners

  • AMS Sensors Germany GmbH, Nürnberg (consortium lead)
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Kaiserslautern (technical consortium lead)
  • ADASENS Automotive GmbH, Lindau
  • Bosch Rexroth AG, Stuttgart
  • CanControls, Aachen

COGNITO

Cognitive Workflow Capturing and Rendering with On-Body Sensor-Networks

Augmented and virtual reality are becoming more and more common in systems for user assistance, educational simulators, novel games, and the whole range of applications in between. Technology to automatically capture, recognize, and render human activities is essential for all these applications. The aim of COGNITO is to bring this technology a big step forward.

COGNITO is a European project with activities covering the whole chain from low-level sensor fusion to workflow analysis and assistive visualization. Novel techniques are developed for analyzing, learning, and recording workflows, and for using the acquired information in the way best suited to the user.

The project emphasizes how the hands are used to interact with objects and tools in the environment, an important component for making the technology useful in industrial applications. Workflow capturing in COGNITO is built upon an on-body sensor network of miniature inertial and vision sensors. With this sensor network it is possible to accurately track limb motions, and even the fine motor skills of the hands using a wrist-mounted camera. This information is then used to identify and classify workflow patterns in the captured movements, which in turn serves user monitoring and the development of new interaction paradigms for user-adaptive information presentation.

The focus of the Augmented Vision department is to develop the visual-inertial sensor network and to provide the first level of information abstraction from it. This involves developing sensor fusion algorithms to estimate limb motions and using the wrist camera to provide detailed hand reconstructions. Augmented Vision also takes part in classifying the workflows needed for user monitoring.
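
A classic, deliberately simplified example of such sensor fusion is a complementary filter, sketched below in Python (this illustrates the principle only; COGNITO's estimators fusing vision and inertial data are more elaborate): gyroscope integration is smooth but drifts, while an absolute reference is noisy but drift-free, so the filter blends both.

    # Hedged sketch of inertial sensor fusion with a complementary filter.

    def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
        # Blend integrated gyroscope rate (smooth, drifting) with an absolute
        # angle reference (noisy, drift-free), e.g. from accelerometer or vision.
        return alpha * (angle + gyro_rate * dt) + (1 - alpha) * ref_angle

    angle = 0.0
    for step in range(100):
        gyro_rate = 0.5                       # rad/s, simulated forearm rotation
        ref_angle = 0.5 * (step + 1) * 0.01   # noise-free reference for the demo
        angle = complementary_filter(angle, gyro_rate, ref_angle, dt=0.01)
    print(f"estimated angle after 1 s: {angle:.3f} rad")  # ~0.500 rad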

Partners

  • UNIVERSITY OF BRISTOL
  • UNIVERSITY OF LEEDS
  • Centre National de la Recherche Scientifique (CNRS)
  • Trivisio Prototyping GmbH
  • Center for Computer Graphics (CCG)
  • Technology-Initiative SmartFactory KL

Contact

Prof. Dr.-Ing. Dipl.-Inf. Gabriele Bleser-Taetz

4DUS

4-Dimensional Ultrasound

The main objective of this project is the improvement of real ultrasound data of the human heart. Spatial representation of the heart in vivo using ultrasound imaging is currently rather limited and of little diagnostic use due to motion artefacts and registration errors in tracking the position of the ultrasound head. Furthermore, it is technically impossible to scan all medically relevant regions of the heart from a single transducer position.

For this reason, traditional 2D examinations are performed from several positions, acquiring the standard slices. Our approach to improving image quality is to merge ultrasound data from different transducer positions intelligently. The motion is recorded by 6-DOF position sensors, allowing completely free positioning of the transducer to obtain the best beam direction for each region of interest. With specialized techniques for merging ultrasound data from different positions using digital imaging algorithms, we are confident that image quality can be improved to such an extent that sensitivity and diagnostic possibilities are significantly enhanced.
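
The compounding idea can be sketched in a few lines of Python (a toy example with invented poses and intensities, not the project's algorithms): each scan is mapped into a common frame via its tracked 6-DOF pose, and overlapping voxels are averaged to suppress noise and fill shadowed regions.

    # Illustrative sketch of merging scans from different transducer positions.

    import numpy as np

    def compound(scans):
        """scans: list of (points Nx3 in transducer frame, intensities N, 4x4 pose)."""
        grid, counts = {}, {}
        for points, intensities, pose in scans:
            homog = np.c_[points, np.ones(len(points))]
            world = (pose @ homog.T).T[:, :3]          # map into common frame
            for p, v in zip(np.round(world).astype(int), intensities):
                key = tuple(int(x) for x in p)          # voxel index
                grid[key] = grid.get(key, 0.0) + v
                counts[key] = counts.get(key, 0) + 1
        return {k: grid[k] / counts[k] for k in grid}   # average overlaps

    pose_a = np.eye(4)
    pose_b = np.eye(4); pose_b[0, 3] = 1.0  # second position, shifted one voxel in x
    pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    volume = compound([(pts, [10.0, 20.0], pose_a), (pts, [14.0, 24.0], pose_b)])
    print(volume)  # voxel (1,0,0) averages samples from both transducer positions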

Partners

  • Klinikum der Bayerischen Julius-Maximilians-Universität Würzburg: http://www.medizin.uni-wuerzburg.de/