Acoustic and Visual Environmental Simulation for Robot Control

The project aims to simulate the propagation of sound in the immediate vicinity of an autonomous mobile robot situated in an office environment. This simulation is used to evaluate the robot's control strategies for identifying sound events and reacting to them. In the project's extension, the developed techniques will be applied to human speech and to moving humans as sound sources.
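To illustrate what such a near-field sound simulation involves in its simplest form, the following sketch models only free-field propagation: inverse-distance (1/r) attenuation and propagation delay to a simulated microphone. This is a hypothetical, heavily simplified illustration; a realistic office simulation as targeted by the project would additionally account for reflections, diffraction, and absorption.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def free_field_signal(distance_m, source_level_db):
    """Simplified free-field model of a static sound source.

    Returns the sound pressure level (dB) arriving at a listener at the
    given distance, plus the propagation delay in seconds. Ignores
    reflections, diffraction, and air absorption, which a full
    room-acoustics simulation would have to add.
    """
    # The 1/r law: level drops about 6 dB per doubling of distance,
    # relative to a reference distance of 1 m (clamped below 1 m).
    level_db = source_level_db - 20.0 * math.log10(max(distance_m, 1.0))
    delay_s = distance_m / SPEED_OF_SOUND
    return level_db, delay_s

# Example: a 70 dB source heard by a robot microphone 4 m away.
level, delay = free_field_signal(distance_m=4.0, source_level_db=70.0)
```

At 4 m this yields a level of roughly 58 dB and a delay of about 12 ms; delays of this magnitude are what a robot would exploit for sound-source localization with multiple microphones.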

The main objective of this project is the development of new methods for simulating the optical and acoustic properties of an indoor scene in order to use these data for evaluating the control algorithms of autonomous mobile robots.

A robot orients itself inside a room using the information provided by its sensor systems. Besides distance sensors, optical and acoustic sensors supply these important data. This defines the core tasks in the collaboration of the research groups involved in this project: in order to enable a robot to interact with its environment and to execute its tasks in a context-sensitive manner, the robot must be able to interpret the information provided by its sensors. However, suitable environments and stimuli for testing these capabilities are not always available. This project therefore aims to provide a realistic simulation of the acoustic and visual properties of indoor environments, so that a robot's control algorithms can be tested against it. For this purpose, the project will use technologies developed by the collaborating research groups "Robot Systems" and "Computer Graphics", as well as DFKI's Competence Center "Human Centered Visualization".

In particular, the work will build upon the audio-visual Virtual-Reality presentation system developed in cooperation between the University of Kaiserslautern's Computer Graphics group, the Fraunhofer Institute for Industrial Mathematics (ITWM), and DFKI's research lab "Intelligent Visualization and Simulation" in the context of the research project "Acoustic Simulated Reality".

While the first part of the project focuses on static sound sources, each emitting one characteristic signal, the project's extension aims to apply the developed techniques to humans in office environments. This entails modeling and simulating moving sound sources as well as the dynamic aspects of speech. The techniques developed here are a central building block for enabling robots to interact with humans. As a platform for integrating and evaluating these techniques, the humanoid robot head ROMAN is available at the Robotics Laboratory of the Department of Computer Science.
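One concrete effect that distinguishes moving sound sources from static ones is the Doppler shift of the perceived frequency. As an illustrative sketch (not part of the project's actual implementation), the classical Doppler formula for a source moving along the line of sight toward a stationary listener can be written as:

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def doppler_shifted_frequency(f_source_hz, radial_speed_mps):
    """Classical Doppler shift for a moving source and stationary listener.

    radial_speed_mps is the speed of the source along the line of sight:
    positive when approaching the listener, negative when receding.
    """
    return f_source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed_mps)

# Example: a 440 Hz source approaching at walking speed (about 1.4 m/s)
# is perceived slightly above 440 Hz; receding, slightly below.
approaching = doppler_shifted_frequency(440.0, 1.4)
receding = doppler_shifted_frequency(440.0, -1.4)
```

At walking speeds the shift is small (under 2 Hz for a 440 Hz tone), but a simulation of moving human speakers would still need to model it, along with the time-varying position and directivity of the source, for the rendered audio to be physically plausible.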


Research Group Robot Systems of the University of Kaiserslautern