Within the project „KI Absicherung" ("AI safeguarding"), methods and measures for safeguarding AI-based functions for automated driving are being developed across the industry for the first time. The project aims at an initial, standardization-ready industry consensus that establishes a uniform and broadly accepted approach to safeguarding AI-based perception functions in the automotive industry.


Automotive manufacturers: Volkswagen AG (consortium leader), AUDI AG, BMW Group, Opel Automobile GmbH

Suppliers: Continental Automotive GmbH, Hella Aglaia Mobile Vision GmbH, Robert Bosch GmbH, Valeo Schalter und Sensoren GmbH, Visteon Electronics Germany GmbH, ZF Friedrichshafen AG

Technology providers: AID Autonomous Intelligent Driving GmbH, Automotive Safety Technologies GmbH, Intel Deutschland GmbH, Mackevision Medien Design GmbH, Merantix AG, Luxoft GmbH, umlaut systems GmbH, QualityMinds GmbH

Research partners: Fraunhofer IAIS (deputy consortium leader and scientific coordinator), Bergische Universität Wuppertal, Deutsches Forschungszentrum für Künstliche Intelligenz, Deutsches Zentrum für Luft- und Raumfahrt, FZI Forschungszentrum Informatik, TU München, Universität Heidelberg

External technology partners: BIT Technology Solutions GmbH, neurocat GmbH, understand ai GmbH

Project management: European Center for Information and Communication Technologies – EICT GmbH


Applied Reference Architecture for Virtual Services and Applications (ARVIDA)


ARVIDA is a project funded by the German Federal Ministry of Education and Research (BMBF) with currently 23 partners from research and industry. The main goal of the project is the creation of a service-oriented reference architecture for virtual technologies (VT). The service orientation and the use, or rather adaptation, of established internet and VT standards ensure interoperability between different modules and VT applications. A broad cross-company evaluation of the reference architecture in selected industrial scenarios ensures that the results can serve as a future standard. In this project, the Augmented Vision department at DFKI works on a target-actual comparison for virtual product verification: a real object is captured in real time and compared to a CAD model of the object. To this end, algorithms for the high-precision reconstruction of small and medium-sized objects using low-cost depth cameras are investigated and developed.
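The target-actual comparison at the core of this work can be sketched in a few lines. The function and the toy inputs below are illustrative assumptions, a minimal sketch rather than the project's actual pipeline:

```python
import math

def deviations(scan, reference):
    """Target-actual comparison: for every captured 3D point, the distance
    to the nearest point sampled from the CAD reference model."""
    return [min(math.dist(p, q) for q in reference) for p in scan]

# Hypothetical toy data: one scanned point checked against two CAD samples.
scan = [(0.0, 0.0, 0.0)]
cad_samples = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(deviations(scan, cad_samples))  # smallest deviation per scanned point
```

A real system would replace the brute-force inner minimum with a spatial index (e.g. a k-d tree) to handle dense point clouds in real time.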


Dr. Hilko Hoffmann


Acoustic and Visual Environmental Simulation for Robot Control

The project aims at simulating the spreading of sounds within the close-up range of an autonomous mobile robot that is situated in an office environment. This simulation is used for evaluating the robot’s control strategies for identifying sound events and reacting to them. In the project’s extension, the developed techniques will be applied to human speech and moving humans as sound sources.

The main objective of this project is the development of new methods for simulating the optical and acoustic properties of an indoor scene in order to use these data for evaluating the control algorithms of autonomous mobile robots.

A robot orientates itself inside a room by using the information provided by its sensor systems. Besides distance sensors, optical and acoustic sensors provide these important data. This defines the core tasks in the collaboration of the research groups involved in this project: in order to enable a robot to interact with its environment and to permit a context-sensitive execution of its tasks, the robot has to be able to interpret the information provided by its sensors. However, appropriate environments and stimuli for testing these capabilities are not always available. In order to test the control algorithms of a robot, this project therefore aims at providing a realistic simulation of the acoustic and visual properties of indoor environments. For this purpose, the project will use technologies that have been developed by the research groups collaborating in this project, the groups "Robot Systems" and "Computer Graphics", as well as DFKI's Competence Center "Human Centered Visualization".

It is especially envisioned to build our work upon the audio-visual Virtual-Reality presentation system that has been developed in cooperation between the University of Kaiserslautern's Computer Graphics group, the Fraunhofer Institute for Industrial Mathematics (ITWM), and DFKI's research lab "Intelligent Visualization and Simulation" in the context of the research project "Acoustic Simulated Reality".

While the work in the first part of the project focuses on static sound sources emitting one characteristic signal, the project's extension aims at applying the techniques developed in the first part to humans in office environments. This implies the modeling and simulation of moving sound sources as well as the dynamic aspects of speech. The techniques developed here are a central building block for enabling robots to interact with humans. As a platform for integrating and evaluating these techniques, the humanoid robot head ROMAN is available at the Robotics Laboratory of the Department of Computer Science.
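The acoustic core of such a simulation, propagation delay and free-field attenuation of a point source, can be sketched as follows. The function and constants are illustrative assumptions, not the project's actual acoustic model, which also covers reflections and moving sources:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def direct_path(source, listener):
    """Delay (seconds) and 1/r free-field amplitude factor for the direct
    path from a point source to a listener (positions in metres)."""
    r = math.dist(source, listener)
    return r / SPEED_OF_SOUND, 1.0 / max(r, 1e-9)

delay, gain = direct_path((0.0, 0.0, 0.0), (343.0, 0.0, 0.0))
print(delay)  # 1.0 second of propagation delay at 343 m
```

Room acoustics additionally require reflected paths (e.g. via the image-source method) and frequency-dependent absorption; this direct-path term is only the first contribution in such a simulation.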


Research Group Robot Systems of the University of Kaiserslautern


Vision, Identification, with Z-sensing Technologies and key Applications


The VIZTA project, coordinated by STMicroelectronics, aims at developing innovative technologies in the field of optical sensors and laser sources for short- to long-range 3D imaging, and at demonstrating their value in several key applications including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0. The key differentiating 12-inch silicon sensing technologies developed during VIZTA are:

  • innovative SPAD and lock-in pixels for time-of-flight sensor architectures,
  • unprecedented and cost-effective NIR and RGB-Z on-chip filter solutions,
  • complex RGB+Z pixel architectures for multimodal 2D/3D imaging.

For short-range sensors, advanced VCSEL sources are developed, including wafer-level GaAs optics and the associated high-speed drivers. These differentiating technologies allow the development and validation of innovative 3D imaging sensor products with the following highly integrated prototype demonstrators:

  • a high-resolution (>77,000 points) time-of-flight ranging sensor module with integrated VCSEL, drivers, filters, and optics,
  • a very high-resolution (VGA minimum) depth camera sensor with integrated filters and optics.
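As background on the two sensing principles named above: a direct (SPAD-based) time-of-flight sensor derives depth from photon round-trip time, while an indirect (lock-in) sensor derives it from the phase shift of a modulated illumination signal. A minimal sketch of both standard formulas, with purely illustrative values:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def direct_tof_depth(round_trip_s):
    """Direct ToF (e.g. SPAD): depth from the photon round-trip time."""
    return C * round_trip_s / 2.0

def indirect_tof_depth(phase_rad, f_mod_hz):
    """Indirect (lock-in) ToF: depth from the phase shift of a signal
    modulated at f_mod_hz; unambiguous only up to C / (2 * f_mod_hz)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(round(direct_tof_depth(20e-9), 3))            # ≈ 3 m for a 20 ns round trip
print(round(indirect_tof_depth(math.pi, 20e6), 3))  # ≈ 3.75 m at 20 MHz modulation
```

The limited unambiguous range of the indirect method is why such sensors often combine several modulation frequencies.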

For medium- and long-range sensing, VIZTA also addresses new LiDAR systems with dedicated sources, optics, and sensors. Technology development of sensors and emitters is carried out by leading semiconductor product suppliers (STMicroelectronics, Philips, III-V Lab) with the support of equipment suppliers (Amat, Semilab) and the RTO CEA-Leti.

The VIZTA project also includes the development of six demonstrators for key applications including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0, with a good mix of industrial and academic partners (Ibeo, Veoneer, Ficosa, Beamagine, IEE, DFKI, UPC, Idemia, CEA-List, ISD, BCB, IDE, Eurecat). The VIZTA consortium brings together 23 partners from 9 European countries: France, Germany, Spain, Greece, Luxembourg, Latvia, Sweden, Hungary, and the United Kingdom.




Dr.-Ing. Jason Raphael Rambach


AlterEgo - Enhancing social interactions using information technology


Social pathologies, including schizophrenia, autism and social phobia, are mainly characterized by difficulties in interacting with others. This causes much suffering both for the patients and for those around them. The AlterEgo European project aims to develop and test, over a three-year term, an innovative rehabilitation method to improve such relational deficits, using humanoid robotics and virtual reality.

The project is rooted in a new transdisciplinary theory emerging in movement neuroscience and cognitive science: the theory of similarity. This theory suggests that it is easier to socially interact with someone who looks like us. This resemblance can be morphological (the form of my alter ego), behavioural (his/her actions), or kinematic (the way he/she moves).

AlterEgo foresees real-time manipulation of these similarity cues. The patient will be placed in interactive situations with a virtual agent. In the early stages of rehabilitation, the virtual agent, displayed on a screen, will be the alter ego of the patient, more reassuring because similar. In later stages, the patient will face a humanoid robot (the European iCub robot) or the clinician. Changes in appearance and behaviour during the interaction will be introduced very gradually. We will thus test, over periods of six months, a new rehabilitation method that reduces the interaction deficits of these patients by virtue of more or less socially neutral artificial agents.

The AlterEgo project is one of the 17 laureates (out of 250 submissions) of the last European call, ICT 2.9 (Cognitive Sciences and Robotics), launched in 2012 by the European Commission. It is coordinated by Prof. Benoît Bardy, director of the EuroMov centre (Movement & Health research unit) at Montpellier 1 University in France. In synergy with the French movement scientists, the project involves computer science experts from DFKI (Germany), mathematicians from the University of Bristol (UK), roboticists from the École Polytechnique Fédérale de Lausanne (CH), as well as clinicians, psychologists and psychiatrists from the Academic Hospital of Montpellier (CHRU, FR).

More information can be found on the project site:




Common approaches to HCI (Human-Computer Interaction) largely consist of stressing the points in which computers exceed human performance. They are based on the assumption that users working with such a system have to adapt themselves to their working environment, and not vice versa. These approaches do not sufficiently take into account that while computers may be very efficient at quickly searching and ordering large amounts of data, humans are much more adept at visually arranging and manipulating data, as well as at recognizing relations between different sets of data (meta-data).

Human thinking and knowledge work depend heavily on sensing the outside world. One important part of this perception-oriented sensing is the human visual system. It is well known that our visual knowledge disclosure, that is, our ability to think, abstract, remember, and understand visually, and our skills to organize visually, are extremely powerful. The overall vision of @VISOR is to realize an individually customizable virtual world which inspires the user's thinking, enables the economical use of his perceptual power, and adheres to a multiplicity of personal details with respect to his thought process and knowledge work.

The logical conclusion is that by creating a framework that emphasizes the strengths of both humans and machines in an immersive virtual environment, @VISOR can achieve great improvements in the effectiveness of knowledge workers and analysts. The @VISOR project strives to realize this vision by designing methods to present and visualize data in a way that integrates the user into his artificial surroundings seamlessly and gives him/her the opportunity to interact with them in a natural way. In this connection, a holistic context- and content-sensitive approach to information retrieval, visualization, and navigation in manipulative virtual environments is introduced. @VISOR addresses this promising and comprehensive vision of efficient man-machine interaction in future manipulative virtual environments with the term "immersion": a frictionless sequence of operations and a smooth operational flow, integrated with multi-sensory interaction possibilities, which allows an integral interaction of human work activities and machine support. When implemented to perfection, this approach enables a powerful immersion experience: the user has the illusion of actually being situated in the artificial surroundings, the barrier between human activities and their technical reflection vanishes, and the communication with the artificial environment is seamless and homogeneous. As a result, not only are visually driven thinking, understanding, and organizing promoted, but the identification and recognition of new relations and knowledge are facilitated.

As a matter of course, the study of non-specific, general, real-world information spaces is far too complex to be the aim of @VISOR. Therefore, @VISOR will dedicate its study of virtual environments to personal (virtual) information spaces which are, to a high degree, based on documents, i.e. personal document-based information spaces. In this specific context, the above considerations and questions will be concretized and focused.


Magnetometer-free Inertial Motion Capture System with Visual Odometry


The IMCV project proposes a wearable sensory system, based on an inertial motion capture device and visual odometry, that can easily be mounted on a robot as well as on a human, delivers 3D kinematics in any environment, and additionally provides a 3D reconstruction of the surroundings.

Its objective is to develop this platform for benchmarking both exoskeletons and bipedal robots. The project will develop a scenario-generic sensory system for humans and bipedal robots; two benchmarking platforms will therefore be delivered and integrated into the Eurobench facilities in Spain and Italy for validation tests.

It is planned to use recent advances in 3D kinematics estimation based on inertial measurement units (IMUs) that does not rely on magnetometers and is hence robust against magnetic disturbances induced by the environment or the robot.

This allows a drift-free 3D joint-angle estimation of, e.g., a lower-body configuration or a robotic leg in a body-attached coordinate system.

To map the environment and to correct for a possible global heading drift (relative to an external coordinate frame) of the magnetometer-free IMU system, the visual odometry is to be fused stochastically with the IMU system. The 3D point cloud recorded by the stereo camera is used in a post-processing phase to generate a 3D reconstruction of the environment. The result should be a magnetometer-free wearable motion capture system with approximate environment mapping that works for humans and bipedal robots in any environment, i.e. indoors and outdoors.
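The idea of correcting heading drift with visual odometry can be illustrated by a single complementary-filter step. The function and gain below are illustrative assumptions, not the project's actual stochastic fusion:

```python
import math

def fuse_heading(psi_imu, psi_vo, k=0.02):
    """One complementary-filter step: pull the drifting IMU heading toward
    the visual-odometry heading by a small gain k (all angles in radians)."""
    # Wrap the innovation to (-pi, pi] so the correction takes the short way round.
    err = math.atan2(math.sin(psi_vo - psi_imu), math.cos(psi_vo - psi_imu))
    return psi_imu + k * err

print(fuse_heading(0.0, 0.1, k=0.5))  # halfway between the two headings: 0.05
```

A stochastic fusion (e.g. an extended Kalman filter) would additionally weight the correction by the uncertainties of both heading estimates rather than using a fixed gain.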

To improve localization and to measure gait events, wireless foot pressure insoles will be integrated for measuring ground interaction. Together with the insoles, all data necessary to reconstruct kinetics and kinematics will be delivered, fully integrated into the Robot Operating System (ROS). A user interface will be developed for possible modifications of the skeleton. Validation recordings with a compliant robot leg and with humans will also be provided, including the computation of key gait parameters.


Technische Universität Kaiserslautern, Department of Computer Science


Dr. Bertram Taetz


Software Innovations For the Digital Company


The joint project "Software Innovations For the Digital Company" (SINNODIUM) links to the four ongoing projects within the framework of the Cluster Excellence Competition, connects the various fields of research, and guides the overall project to an integrated conclusion. In practice, initial prototype solutions for the next generation of business software ("emergent software") are developed that dynamically and flexibly make components from different manufacturers combinable, thus triggering a wave of innovation in digital companies across all sectors.

In SINNODIUM, medium-sized and large software companies work together with research partners on general application scenarios for emergent business software in the areas of Smart Retail (trade), Smart Production (industry), and Smart Services (services and logistics).




Embedded Neural Networks for Optical Sensors for Flexible and Networked Production (ENNOS)


Within the ENNOS project, a compact and energy-efficient color and depth camera is being developed, i.e. a camera that delivers color images and, at the same time, three-dimensional information about the distance of objects. Color and 3D data are combined by means of so-called "deep neural networks", which are strongly simplified "artificial brains": "artificial intelligence" is thus used for computer-aided decision-making.

The goal is a particularly flexible and powerful optical system that opens up many new applications in production.

Processing takes place on Field Programmable Gate Array (FPGA) chips, i.e. programmable integrated circuits that can be adapted to different tasks. Such processors are particularly flexible and powerful, but of limited capacity.

The challenge lies in efficiently mapping the complex structure and size of modern neural networks onto a suitable, compact hardware architecture. This is made possible by preliminary work of the consortium coordinator Bosch, which plays a pioneering role in such embedded solutions.

Bosch is supported by the German Research Center for Artificial Intelligence (DFKI), which will work on decision algorithms as well as on the simplification ("pruning") of neural networks.
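One common form of such network simplification is one-shot magnitude pruning, which zeroes the weights with the smallest magnitude so the remaining network fits a smaller hardware budget. A minimal, illustrative sketch, not ENNOS's actual method:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of weights with the smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest-magnitude weights.
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

print(magnitude_prune([0.5, -0.1, 0.9, 0.05], sparsity=0.5))
# the two smallest-magnitude weights are zeroed: [0.5, 0.0, 0.9, 0.0]
```

In practice, pruning is applied per layer or per channel and is usually followed by fine-tuning to recover accuracy; on an FPGA, structured (channel-wise) pruning is preferred because it directly shrinks the required arithmetic units.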

A further key innovation of the ENNOS project is the introduction of ultra-compact 3D cameras by project partner PMD Technologies AG, which was the first vendor to successfully integrate a 3D camera into a smartphone. For ENNOS, a new illumination unit and optical components for industrial use will be designed. This is intended to compensate for difficult lighting conditions and other disturbances from the production environment (e.g. calibration inaccuracies and noise).

To demonstrate the high expected performance of the ENNOS concept, the new (intelligent) camera platform will be deployed in three different application scenarios at the consortium partners:

Bosch and DFKI jointly realize the applications "remote diagnosis with automatic anonymization of persons" (Fig. 1a) and "intelligent image recognition and analysis aiming at purely machine-based production" (Fig. 1b). The third application, an "assistance system for stocktaking" (Fig. 2) in large plants, is realized by the partners ioxp GmbH and KSB AG.

Each of these scenarios addresses existing problems that previous technologies solve only partially or not at all, and thus offers high innovation potential.


  • Robert Bosch GmbH, Gerlingen-Schillerhöhe (coordinator)
  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Kaiserslautern
  • KSB SE & Co. KGaA, Frankenthal
  • ioxp GmbH, Mannheim
  • pmdtechnologies ag, Siegen (associated partner)
  • ifm electronic GmbH, Tettnang (associated partner)