COGNITO

Cognitive Workflow Capturing and Rendering with On-Body Sensor-Networks

Augmented and virtual reality are becoming increasingly common in systems for user assistance, educational simulators, novel games, and the whole range of applications in between. Technology to automatically capture, recognize, and render human activities is essential for all of these applications. The aim of COGNITO is to take this technology a significant step forward.

COGNITO is a European project whose activities cover the whole chain from low-level sensor fusion to workflow analysis and assistive visualization. Novel techniques are developed for recording, analyzing, and learning workflows, and for presenting the acquired information in the way best suited to the user.

The project emphasizes how the hands are used to interact with objects and tools in the environment, an important component for making the technology useful in industrial applications. The workflow capturing in COGNITO is built upon an on-body network of miniature inertial and vision sensors. The sensor network makes it possible to accurately track limb motions and, with a wrist-mounted camera, even the fine motor movements of the hands. This information is used to identify and classify workflow patterns in the captured movements, which in turn serve for user monitoring and for developing new interaction paradigms for user-adaptive information presentation.

The focus of the Augmented Vision department is to develop the visual-inertial sensor network and to provide the first level of information abstraction from it. This involves developing sensor fusion algorithms to estimate limb motions and using the wrist camera to provide detailed hand reconstructions. Augmented Vision also contributes to the workflow classification needed for user monitoring.
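
As an illustration of this first level of abstraction, the sketch below fuses gyroscope and accelerometer readings into an orientation estimate with a complementary filter, one of the simplest inertial fusion schemes. It is a simplified stand-in for the project's sensor fusion algorithms, not the actual COGNITO implementation; the function name, data layout, and fixed blending weight are assumptions.

    import numpy as np

    def complementary_filter(gyro, accel, dt, alpha=0.98):
        """Fuse gyroscope and accelerometer data into roll/pitch estimates.

        gyro  : (N, 3) angular rates in rad/s
        accel : (N, 3) accelerations in m/s^2
        dt    : sample period in seconds
        alpha : blending weight; the gyro integral dominates short-term,
                the gravity direction corrects long-term drift
        """
        roll, pitch = 0.0, 0.0
        estimates = []
        for w, a in zip(gyro, accel):
            # Short-term estimate: integrate the angular rate.
            roll_gyro = roll + w[0] * dt
            pitch_gyro = pitch + w[1] * dt
            # Long-term reference: tilt from the measured gravity direction.
            roll_acc = np.arctan2(a[1], a[2])
            pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
            # Blend the two sources.
            roll = alpha * roll_gyro + (1 - alpha) * roll_acc
            pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
            estimates.append((roll, pitch))
        return np.array(estimates)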

Partners

  • University of Bristol
  • University of Leeds
  • Centre National de la Recherche Scientifique (CNRS)
  • Trivisio Prototyping GmbH
  • Center for Computer Graphics (CCG)
  • Technology-Initiative SmartFactory KL

Contact

Prof. Dr.-Ing. Dipl.-Inf. Gabriele Bleser-Taetz

4DUS

4-Dimensional Ultrasound

The main objective of this project is to improve real ultrasound data of the human heart. The spatial representation of the heart in vivo using ultrasound imaging is currently rather limited and of little diagnostic use due to motion artefacts and registration errors in tracking the ultrasound head position. Furthermore, it is technically impossible to scan all medically relevant regions of the heart from a single transducer position.

This is why traditional 2D examinations are performed from several positions, acquiring the standard slices. Our approach to improving the imaging quality is to intelligently merge ultrasound data from different transducer positions. The motion is recorded by 6-DOF position sensors, allowing completely free positioning of the transducer to obtain the best beam direction for each region of interest. With specialised techniques for merging ultrasound data from different positions using digital image processing algorithms, we are confident that the image quality can be improved to such an extent that the sensitivity and the diagnostic possibilities are significantly enhanced.
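
The core merging step can be pictured as follows: each scan is resampled into a common volume using the tracked 6-DOF transducer pose, and overlapping samples are compounded. The sketch below illustrates this with plain averaging; it is an illustrative simplification, not the project's actual compounding algorithm, and all names and data layouts are assumptions.

    import numpy as np

    def compound_scans(scans, poses, volume_shape, voxel_size):
        """Merge tracked ultrasound scans into one volume by averaging.

        scans        : list of (points, values) pairs; points is (N, 3)
                       in the transducer frame, values the (N,) echoes
        poses        : list of 4x4 transducer-to-world transforms from
                       the 6-DOF position sensor
        volume_shape : target volume dimensions in voxels
        voxel_size   : edge length of one voxel
        """
        accum = np.zeros(volume_shape)
        counts = np.zeros(volume_shape)
        for (points, values), pose in zip(scans, poses):
            # Transform the sample points into the common world frame.
            homog = np.c_[points, np.ones(len(points))]
            world = (pose @ homog.T).T[:, :3]
            # Nearest-voxel binning.
            idx = np.round(world / voxel_size).astype(int)
            valid = np.all((idx >= 0) & (idx < volume_shape), axis=1)
            for (i, j, k), v in zip(idx[valid], values[valid]):
                accum[i, j, k] += v
                counts[i, j, k] += 1
        # Average where scans overlap; empty voxels stay zero.
        return np.where(counts > 0, accum / np.maximum(counts, 1), 0.0)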

Partners

  • Klinikum der Bayerischen Julius-Maximilians-Universität Würzburg: http://www.medizin.uni-wuerzburg.de/

LiSA

Light and solar management using active and model-predictively controlled components

The research project LiSA is a broad-based joint project in the area of facade, lighting, and control technology. The aim is to enable the most energy-efficient operation possible of office and administrative buildings while taking user satisfaction into account. Exemplary system integration promotes an understanding of how the individual components interact and of why integrated system solutions are necessary.

At the component level, technologies are being developed that enable the efficient use of daylight, provide energy-saving artificial lighting, and reduce the cooling load in summer by means of shading. At the sensor level, a cost-effective sensor is being developed that measures both the lighting conditions and the heat input into the room from solar radiation. A model-predictive control approach optimizes the operation of the components, which can be managed and controlled via wireless communication paths.
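
To make the model-predictive control idea concrete, the sketch below sets up a toy controller that chooses shading and lamp levels over a short horizon so as to minimize lighting energy while holding the workplane illuminance near a comfort setpoint. The linear room model, the cost weights, and all names are illustrative assumptions, not the project's actual controller.

    import numpy as np
    from scipy.optimize import minimize

    HORIZON = 6           # number of control steps to look ahead
    DAYLIGHT = np.array([800, 900, 950, 900, 700, 400.0])  # lux forecast
    SETPOINT = 500.0      # desired workplane illuminance in lux

    def predicted_illuminance(u):
        """Toy room model: attenuated daylight plus artificial light."""
        shade, lamp = u[:HORIZON], u[HORIZON:]
        return DAYLIGHT * (1 - shade) + 600.0 * lamp

    def cost(u):
        """Penalize lamp energy and deviation from the comfort setpoint."""
        lamp = u[HORIZON:]
        comfort = np.sum((predicted_illuminance(u) - SETPOINT) ** 2)
        energy = 1e4 * np.sum(lamp ** 2)
        return comfort + energy

    # Decision variables: shading position and lamp dimming level per
    # step, each constrained to [0, 1].
    u0 = np.full(2 * HORIZON, 0.5)
    result = minimize(cost, u0, bounds=[(0.0, 1.0)] * (2 * HORIZON))

    # Receding horizon: apply only the first action, then re-plan.
    shade_now, lamp_now = result.x[0], result.x[HORIZON]
    print(f"apply shading={shade_now:.2f}, lamp={lamp_now:.2f}")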

Implementing the project in a Living Lab Smart Office Space, which is subject to detailed monitoring and in which people use the space for its actual purpose, ensures that the developments are continuously validated empirically and that users perceive the results as added value. The people working in the Living Lab can interact with the technology and are thus an essential part of the investigations.

Partners

  • Technische Universität Kaiserslautern
  • DFKI GmbH
  • ebök Planung und Entwicklung GmbH
  • Dresden Elektronik Ingenieurtechnik GmbH
  • Agentilo GmbH
  • Herbert Waldmann GmbH & Co. KG

Contact

Dr. Dipl.-Inf. Gerd Reis

CAPTURE

CAPTURE - 3D-scene reconstruction with high resolution and high dynamic range spherical images

The reconstruction of 3D scenes from camera images is an essential technology for many applications, such as 3D digital cities, digital cultural heritage, games, tele-cooperation, tactical training, and forensics. The objective of the project CAPTURE is to develop a novel approach to 3D scene acquisition, together with the corresponding theory and practical methods.

Instead of processing a large number of standard low-resolution perspective video images, we use a few full spherical high-resolution, high dynamic range (HDR) images as input data. Currently available spherical high-resolution cameras can record fine texture details and the complete scene from a single point in space. Additionally, such cameras provide HDR images with consistent color and photometric information. We exploit this new technology with a focus on dense, high-quality 3D reconstruction of both indoor and outdoor environments.

The fundamental challenge of the project is to develop novel algorithms that take the properties of these images into account and thus push forward the current state of the art in 3D scene acquisition and viewing. In particular, we develop novel stable and illumination-invariant image feature detectors, robust assignment methods for image matching, and novel 3D reconstruction and viewing algorithms that exploit the properties of the images.

The multiple spherical view geometry provides a large amount of redundant information about the underlying environment. Combined with the consistent color and photometric information of the HDR images, this allows us to develop new methods for robust, high-precision image matching and 3D structure estimation, resulting in a high-fidelity textured model of the real scene.
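
The basic geometric building block behind such multi-view estimation is mapping each spherical image pixel to a viewing ray and intersecting rays from different viewpoints. The sketch below illustrates this for equirectangular images with simple midpoint triangulation; it is a minimal geometric illustration under assumed conventions, not the project's reconstruction pipeline.

    import numpy as np

    def pixel_to_ray(u, v, width, height):
        """Convert an equirectangular pixel to a unit viewing ray."""
        lon = (u / width - 0.5) * 2 * np.pi   # longitude in [-pi, pi]
        lat = (0.5 - v / height) * np.pi      # latitude in [-pi/2, pi/2]
        return np.array([np.cos(lat) * np.sin(lon),
                         np.sin(lat),
                         np.cos(lat) * np.cos(lon)])

    def triangulate_midpoint(c1, d1, c2, d2):
        """Midpoint of the shortest segment between two viewing rays.

        c1, c2 : camera centers; d1, d2 : unit ray directions, all in
        a common world frame.
        """
        # Normal equations for the ray parameters minimizing the
        # distance between the two closest points.
        b = c2 - c1
        a11, a12, a22 = d1 @ d1, -(d1 @ d2), d2 @ d2
        t1, t2 = np.linalg.solve([[a11, a12], [a12, a22]],
                                 [d1 @ b, -(d2 @ b)])
        return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))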

The project CAPTURE makes extensive use of our computer vision development framework ARGOS. On the software side, it is necessary to work with large images and to merge information from multiple sources simultaneously. We therefore pay special attention to the parallel processing of large amounts of data as well as to clustering capabilities.

The application of this project is the accurate reconstruction of large scenes, including industrial facilities, touristic and cultural heritage sites, and urban environments.

Contact

Dr.-Ing. Alain Pagani

FUMOS

Fusion of multimodal optical sensors for 3D motion capture in dense, dynamic scenes for mobile, autonomous systems

Autonomous vehicles will be an indispensable component of future mobility systems. They can significantly increase driving safety while simultaneously increasing traffic density. Autonomously operating vehicles must be able to continuously and accurately perceive their environment and the movements of other road users. To this end, new types of real-time capable sensor systems must be researched. Cameras and laser scanners operate according to different principles and offer different advantages in capturing the environment.

The aim of this project is to investigate whether and how the two sensor systems can be combined to reliably detect motion in traffic in real time. The challenge is to suitably combine the heterogeneous data of both systems and to find suitable representations for the geometric and visual features of a traffic scene. These must be optimized to the point where reliable information can be provided for vehicle control in real time. If such a hybrid sensor system can be designed and successfully built, it could represent a breakthrough in sensor equipment for autonomous vehicles and a decisive step toward the implementation of this technology.
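
A natural first step in such a combination is to bring both modalities into a common frame by projecting the laser-scanner points into the camera image, so that geometric and visual features can be associated per pixel. The sketch below shows this standard projection; the calibration inputs and all names are assumptions for illustration, not the project's fusion method.

    import numpy as np

    def project_lidar_to_image(points, T_cam_lidar, K, image_size):
        """Project lidar points into the camera image plane.

        points      : (N, 3) points in the lidar frame
        T_cam_lidar : 4x4 extrinsic transform (lidar -> camera frame)
        K           : 3x3 camera intrinsic matrix
        image_size  : (width, height) of the image
        Returns pixel coordinates and depths of the visible points.
        """
        # Transform the points into the camera frame.
        homog = np.c_[points, np.ones(len(points))]
        cam = (T_cam_lidar @ homog.T).T[:, :3]
        # Keep only points in front of the camera.
        cam = cam[cam[:, 2] > 0.1]
        # Perspective projection with the intrinsics.
        pix = (K @ cam.T).T
        pix = pix[:, :2] / pix[:, 2:3]
        w, h = image_size
        inside = (pix[:, 0] >= 0) & (pix[:, 0] < w) & \
                 (pix[:, 1] >= 0) & (pix[:, 1] < h)
        return pix[inside], cam[inside, 2]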

Contact

Ramy Battrawy, M.Sc.

Dr.-Ing. René Schuster

DYNAMICS

Consistent dynamic scene reconstruction and property transfer using priors and constraints

The objective of DYNAMICS is to develop a new methodology for 4D reconstruction of real world scenes with a small number of cameras, as well as to learn statistical models from the captured data sets. A 4D reconstruction refers to a sequence of accurate 3D reconstructions (including geometry, topology and surface properties) of a dynamic (evolving in time) real-world scene. We aim to build a robust lightweight capture system that can be easily installed and used (e.g. in the living room of a house, in outdoor environments, and broadly under various spatial and temporal constraints).

We are developing a novel interactive software system for motion estimation capitalizing on our experience from the predecessor project DENSITY and exploring new directions (new hardware and machine learning methods).

Specifically, the project DYNAMICS can be subdivided into several work packages according to the target scenarios and the areas of computer vision concerned:

1) Software for interactive monocular 4D reconstruction of non-rigid scenes. The main components are modules for the non-rigid structure from motion (NRSfM) pipeline and for non-rigid registration. The underlying technology will make it possible to reconstruct non-rigidly deforming scenes from a single RGB camera with a minimal number of assumptions. Target scenarios include endoscopy, the capture of facial expressions, small motion, and post-factum reconstructions.

2) Software for robust 4D reconstruction from multiple views incorporating optical flow and scene flow with additional assumptions. We plan to assemble a capture studio with five Emergent HT-4000C high-speed cameras (a multi-view setting). Here, we aim at the highest precision and richness of detail in the reconstructions.

3) 3D shape templates with attributes derived from real data using deep learning techniques. The main objective of this work package is to provide statistical models as prior knowledge in order to increase the robustness and accuracy of the reconstructions. Furthermore, the shape templates will allow for more accurate reconstructions of articulated motion (e.g. skeleton poses) from uncalibrated multi-view settings.
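
A common minimal form of such a statistical prior is a linear shape model: a shape is the learned mean plus a weighted sum of deformation modes, and the weights are penalized during fitting to keep reconstructions plausible. The sketch below shows this generic form; it is not the project's learned templates, and all names are assumptions.

    import numpy as np

    class LinearShapeModel:
        """Shape = mean + basis @ coefficients, a generic shape prior."""

        def __init__(self, mean_shape, basis, stddevs):
            self.mean = mean_shape    # (V, 3) mean vertex positions
            self.basis = basis        # (V*3, M) learned deformation modes
            self.stddevs = stddevs    # (M,) per-mode standard deviations

        def synthesize(self, coeffs):
            """Generate a shape from M low-dimensional coefficients."""
            offset = self.basis @ (coeffs * self.stddevs)
            return self.mean + offset.reshape(self.mean.shape)

        def prior_penalty(self, coeffs):
            """Fitting regularizer: penalize statistically implausible
            shapes, i.e. large mode coefficients."""
            return np.sum(coeffs ** 2)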

DYNAMICS is a BMBF project with an emphasis on development of core technologies applicable in other ongoing and forthcoming projects in the Augmented Vision Lab.

PAMAP

Physical Activity Monitoring for Aging People

PAMAP is a European project in the area of ambient assisted living. The project aims to help the growing elderly population in our society to age well, or simply to make their lives as healthy and comfortable as possible. This is to be achieved by providing physicians with the means to help and encourage people to maintain a healthy activity level and to diagnose problems at an early stage.

PAMAP is an interdisciplinary project bringing together developers of sensors, biomechanics experts and medical specialists. State-of-the-art sensors are integrated with innovative software in order to monitor human physical activity.

The basis of the PAMAP system is a mobile network of miniature inertial sensors. Attached to the body and connected in a network, these sensors provide kinematic measurements of the body's movements.

We work in close cooperation with sensor providers to build a miniature sensor network based on modern micro-electro-mechanical systems (MEMS) technology, from which the data is acquired. In parallel, we develop a model of the human body together with our biomechanics partners, who help us relate the measurements to appropriate physiological indicators. The measurements and the model are then brought together using innovative statistical sensor fusion solutions.

By providing the sensor fusion know-how, the Augmented Vision department acts as a keystone of the development. We provide the software that transforms the raw sensor readings into high-level information suitable for analysis by medical personnel, be it the overall physical activity level or specific limb motions. The goal is to provide physicians with relevant and precise information about the vitality of the wearer of the PAMAP system.
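
As a simple example of such an abstraction step, the sketch below computes the signal magnitude area (SMA), a standard accelerometry-based indicator of overall physical activity intensity, and maps it onto coarse activity levels. This is a textbook measure shown for illustration, not the PAMAP processing chain; the thresholds and names are assumptions.

    import numpy as np

    def signal_magnitude_area(accel, fs):
        """Signal magnitude area of a body-acceleration recording.

        accel : (N, 3) body accelerations in g, gravity removed
        fs    : sampling rate in Hz
        Returns the summed absolute acceleration per second.
        """
        duration = len(accel) / fs
        return np.sum(np.abs(accel)) / duration

    def classify_activity_level(sma, rest=0.1, moderate=0.5):
        """Map an SMA value onto coarse levels (assumed thresholds)."""
        if sma < rest:
            return "resting"
        if sma < moderate:
            return "light activity"
        return "vigorous activity"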

Partners

  • INTRACOM TELECOM
  • University of Compiegne
  • TRIVISIO Prototyping GmbH
  • Centre Hospitalier Universitaire de Rennes

Contact

Prof. Dr.-Ing. Dipl.-Inf. Gabriele Bleser-Taetz

You in 3D

Real-time motion capture of multiple persons in community videos

Tracking multiple persons in 3D with high accuracy and temporal stability, in real time and with a monocular RGB camera, is a challenging task with many practical applications, such as 3D human character animation, motion analysis in sports, and the modeling of human body movements. Optical human tracking methods often require multi-view video recordings or depth cameras. Systems that work with monocular RGB cameras mostly do not run in real time, track only a single person, and require additional data such as an initial human pose. All of this imposes practical limitations and is one of the major reasons why optical motion capture systems have not yet seen more widespread use in commercial products.

The DFKI research department Augmented Vision presents a novel, fully automatic multi-person motion tracking system. The system works in real time on monocular RGB video and tracks multiple people in 3D. It requires neither manual intervention nor a specific human pose to start the tracking process, and it automatically estimates a personalized 3D skeleton and an initial 3D location for each person. The system has been tested on tracking multiple persons in outdoor scenes, community videos, and low-quality videos captured with mobile-phone cameras.


Contact

Onorina Kovalenko

Be-greifen

Comprehensible, interactive experiments: practice and theory in STEM (MINT) degree programs

The project is funded by the Federal Ministry of Education and Research (BMBF). It combines tangible, manipulable objects (“tangibles”) with advanced technologies such as augmented reality to develop new, intuitive user interfaces. Interactive experiments will make it possible to actively support the learning process in STEM degree programs and to provide learners with more theoretical information about physics.

The project uses the interfaces of smartphones, smartwatches, and smart glasses, for example a head-worn device that lets users select content through a combination of subtle head movements, eyebrow gestures, and voice commands and view it on a display mounted above the eye. Through this casual form of information processing, the students are not distracted from carrying out the experiment and can still reach for and manipulate the objects.

A research prototype developed as a preliminary study demonstrates these developments. Scientists at DFKI and the Technical University of Kaiserslautern have developed an app that supports students in determining the relationship between the fill level of a glass and the pitch of the sound it produces. The gPhysics application captures the amount of water, measures the sound frequency, and transfers the results into a diagram. The app can be operated by head gestures alone, without manual interaction. In gPhysics, the water quantity is recorded with a camera, and the determined value can be corrected by head gestures or voice commands if required. The microphone of the Google Glass measures the sound frequency. Both pieces of information are shown in a graph that is continuously updated on the display of the Google Glass. In this way, learners can follow the frequency curve in relation to the water level directly while filling the glass. Since the curve is generated comparatively quickly, learners can test different hypotheses directly during the interaction by varying various parameters of the experiment.
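
The frequency measurement at the heart of gPhysics can be approximated with a standard technique: window a short microphone buffer, take its Fourier transform, and pick the strongest spectral peak. The sketch below shows this generic approach; it is not the app's actual signal processing, and all names are assumptions.

    import numpy as np

    def dominant_frequency(samples, fs):
        """Estimate the dominant pitch in a mono audio buffer.

        samples : 1-D array of microphone samples
        fs      : sampling rate in Hz
        """
        # Window to reduce spectral leakage, then take the real FFT.
        windowed = samples * np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
        # Skip the DC bin and return the strongest frequency.
        return freqs[1 + np.argmax(spectrum[1:])]

    # Example: a 440 Hz tone sampled at 44.1 kHz is recovered correctly.
    fs = 44100
    t = np.arange(fs // 10) / fs
    print(dominant_frequency(np.sin(2 * np.pi * 440 * t), fs))  # ~440.0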

In the project, further experiments on the physical basics of mechanics and thermodynamics are being built. In addition, the consortium is developing technologies that enable learners to discuss video and sensor recordings, to analyze their experiments in a cloud, and to exchange ideas with fellow students or compare results.

Partners

DFKI coordinates five further partners from research and practice: the Technical University of Kaiserslautern, studio klv GmbH & Co. KG (Berlin), the University of Stuttgart, Con Partners GmbH (Bremen), and Embedded Systems Academy GmbH (Barsinghausen).

Funding programme: German BMBF

  • Begin: 01.07.2016
  • End: 30.06.2019

Contact

Dr. Jason Raphael Rambach

Marmorbild

Marble has long been a preferred material for representative buildings and sculptures. Yet, due to its chemical composition and its porosity, marble is prone to natural deterioration in outdoor environments, at a rate that has been accelerating since the beginning of industrialization, mainly due to increasing pollution. A basic requirement for successful restoration and conservation is a regularly repeated assessment of the object's current condition, together with knowledge about prior restoration actions. Ideally, the assessment is non-destructive. This requirement is fulfilled both by the optical digitization of an object's shape and appearance and by the ultrasound examination used to acquire material-quality properties.

The goal of the joint research project Marmorbild of the University of Kaiserslautern, the Fraunhofer Institute (IBMT), and the Georg-August-University Göttingen is to validate modern ultrasound technologies and digital reconstruction methods for the non-destructive testing of facades, structures, and sculptures made of marble. The proof of concept was provided by prior research.
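
One widely used non-destructive indicator of marble condition is the ultrasound pulse velocity: the travel time of a pulse between two transducers is measured, and lower velocities indicate more advanced deterioration. The sketch below estimates this velocity from a received signal via simple onset detection; it is a generic illustration, not the project's measurement system, and all names are assumptions.

    import numpy as np

    def ultrasound_velocity(signal, fs, distance, threshold=0.1):
        """Estimate the ultrasound pulse velocity through a sample.

        signal    : received A-scan, pulse emitted at sample 0
        fs        : sampling rate in Hz
        distance  : transducer separation in meters
        threshold : fraction of the peak used to detect pulse onset
        """
        envelope = np.abs(signal)
        # First sample where the envelope rises above the threshold.
        onset = np.argmax(envelope > threshold * envelope.max())
        time_of_flight = onset / fs
        return distance / time_of_flight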

The planned portable assessment system holds high potential for innovation. In the future, more objects can be examined cost-effectively within short time periods, and damage can be identified at an early stage, allowing effort and financial resources to be invested in a targeted manner.

(Image: Dresdner Knabe)

Partners

Funding by: BMBF

  • Funding programme: VIP+
  • Grant agreement no.: 03VP00293
  • Begin: 01.10.2016
  • End: 30.09.2019

Contact

Dr. Gerd Reis