
HYSOCIATEA

Hybrid Social Teams for Long-Term Collaboration in Cyber-Physical Environments

The project HySociaTea (Hybrid Social Teams for Long-Term Collaboration in Cyber-Physical Environments), funded by the German Federal Ministry of Education and Research, realizes and examines the collaboration of technologically augmented humans with autonomous robots, virtual characters and softbots, which work together in a team to solve common tasks.

In the context of Industry 4.0, an example application for these hybrid social teams is highly flexible production, in which the team reacts to unplanned events by autonomous reorganization. Besides research on technical feasibility, a key aspect of the project is the development of robotic team competencies as well as intelligent multi-agent behavior, both of which are also important in purely human teams. The technical systems developed in HySociaTea are mainly meant to be used as assistance systems for humans working in production plants; the robots should therefore be perceived as partners in the overall working process.

In the long run, the team organization developed and examined in HySociaTea can be used in various real-world scenarios, e.g. in modularized production facilities in the factory of the future, in rescue teams for emergency situations, or to realize the division of labor between humans and machines needed to safely dismantle nuclear power plants.

To realize the vision of HySociaTea, research departments from all DFKI locations (Bremen, Kaiserslautern, Saarbrücken) combine their competencies:

  • RIC (Robotics Innovations Center, DFKI Bremen): autonomous and cooperative robotic systems, mobile manipulation
  • CPS (Cyber-Physical Systems, DFKI Bremen): safe human-robot interaction
  • EI (Embedded Intelligence, DFKI Kaiserslautern): augmented humans, wearable sensors
  • AV (Augmented Vision, DFKI Kaiserslautern): perception modules using image processing and sensor fusion
  • KM (Knowledge Management, DFKI Kaiserslautern): gaze-controlled perception recognition, real-time object recognition
  • IUI (Intelligent User Interfaces, DFKI Saarbrücken): emosocial virtual characters, multimodal dialog platform
  • LT (Language Technology Lab, DFKI Saarbrücken): autonomous team-reorganization, natural speech interaction
  • ASR (Agents and Simulated Reality, DFKI Saarbrücken): communication middleware, Dual Reality

Contact

Dr. Sirko Straube

iACT

Multi-Channel Interaction in Virtual Environments

The vision of iACT is to research, develop and evaluate tools, strategies and metaphors to better understand and support effective interaction with large, immersive displays. The project thus aims at providing users with new, intuitive ways to make sense of normally unstructured and hardly comprehensible information, to share their insights with others, and to develop new insights by working as a team. To achieve this goal, a large part of the research will be to develop intuitive visualization metaphors for large displays and to explore more natural techniques for interacting with information represented on a large-scale display.

AuRoRas

Automotive Robust Radar Sensing

Radar sensors are very important in the automotive industry because they can directly measure the speed of other road users. DFKI is working with its partners to develop intelligent software solutions that improve the performance of high-resolution radar sensors. We are using machine learning and deep neural networks to detect ghost targets in radar data, thereby improving the sensors' reliability and opening up a wide range of possibilities for highly automated driving.
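To illustrate the idea (this is a hypothetical sketch, not the project's actual model or data): classifying radar detections as real targets versus multipath "ghosts" can be framed as a supervised learning problem over simple per-detection features. Here a minimal logistic-regression "network" is trained on synthetic detections, where ghosts are assumed to show weaker radar cross-section (RCS) and noisier Doppler readings.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_detections(n=200):
    """Toy detections with features [range_m, doppler_mps, rcs_dbsm].
    Assumption (illustrative only): ghosts reflect weakly and have
    less consistent Doppler than real targets."""
    real = np.column_stack([
        rng.uniform(5, 100, n),   # range (m)
        rng.normal(10, 2, n),     # Doppler (m/s), fairly consistent
        rng.normal(5, 2, n),      # RCS (dBsm), strong reflections
    ])
    ghost = np.column_stack([
        rng.uniform(5, 100, n),
        rng.normal(10, 8, n),     # noisier Doppler
        rng.normal(-5, 2, n),     # weak reflections
    ])
    X = np.vstack([real, ghost])
    y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = real, 0 = ghost
    return X, y

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal single-layer classifier trained by gradient descent."""
    mu, sd = X.mean(0), X.std(0)
    Xn = (X - mu) / sd                      # normalise features
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))   # sigmoid output
        w -= lr * (Xn.T @ (p - y)) / len(y)        # cross-entropy gradient
        b -= lr * np.mean(p - y)
    return w, b, mu, sd

X, y = make_synthetic_detections()
w, b, mu, sd = train_logistic(X, y)
pred = 1.0 / (1.0 + np.exp(-(((X - mu) / sd) @ w + b))) > 0.5
accuracy = np.mean(pred == y)
```

A production system would of course use a deep network over raw radar tensors rather than three hand-picked features, but the labeling setup (real vs. ghost) is the same.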

Partners

ASTYX GmbH (Dr. Georg Kuschk), Lise-Meitner-Straße 2a, 85521, Ottobrunn, DE

BIT Technology Solutions gmbH (Geschäftsleitung), Gewerbering 3, 83539 Pfaffing OT Forsting, DE

Contact

Dr.-Ing. Jason Raphael Rambach

EASY-IMP

Collaborative Development of Intelligent Wearable Meta-Products in the Cloud

EASY is a large initiative launched in September 2013. It brings together 12 partners from the areas of IT and intelligent clothing. The approach is to turn garments into Meta-Products. The basic idea and rationale of the project is the following: recent developments in mobile, ubiquitous and cloud computing, as well as in intelligent wearable sensors, impose a new understanding of products and production models. A (Meta-)Product is now a customer-driven, customisable entity that integrates sensory/computing units, which are in turn connected to the cloud, leading to a paradigm shift from mass production to intelligent, over-the-web configurable products. This evolution triggers an essential change of the whole design/production life cycle and opens totally new perspectives towards person-oriented production models that allow agile, small-scale, and distributed production, with a considerable impact on cost-effectiveness and ecology. However, product design and production thereby become highly complex and require interdisciplinary expertise. The lack of relevant and appropriate methodologies and tools constitutes a barrier to wider adoption of the new production paradigm. Smartphones can be considered the prime example of Meta-Products. Another emerging market with significant economic and societal influence, and a high demand for customization, is the market of intelligent wearable products (clothes, shoes, accessories, etc.). Although new materials and intelligent on-body sensors have been developed in recent years, an integrated Meta-Product approach is still lacking. The customizable Meta-Products consist of intelligent wearables (clothing, footwear, accessories) equipped with embedded networks of sensors.
Sensor data will be communicated in real time to the new generation of smartphones via Bluetooth LE (Low Energy) and/or wireless (Wi-Fi) for active data flows, and via NFC (Near Field Communication) mostly for passive object identification (Internet of Things). The required functionality will be configured by the end users (co-design); the design, selection of components, sourcing of materials and sensors, virtual prototyping, as well as production planning and services integration are a collaborative process of all involved companies, designers, sensor producers, software developers and experts (e.g. trainers, medical professionals, game developers, etc.). The EASY infrastructure will enable all interested third parties to offer new services to smartphone and EASY wearable users, thus evolving into an open platform of virtually infinite applications in many target markets. EASY focuses on the following pilot applications: outdoor activity and games, personal training, functional rehabilitation, and assisted living for the elderly and handicapped. The selected application scenarios will include, among others, miniature motion, skin-temperature and skin-conductivity sensors. The project will build on existing games, sports and training applications for testing and demonstration purposes, while focusing on the development of the methodological Meta-Product framework. It will develop guidelines and standards, methodologies and reference architectures for the design and production of smart garments.

Partners

  • ATOS SPAIN SA
  • INTERACTIVE WEAR AG
  • UNIVERSITE LUMIERE LYON 2
  • ATHENS TECHNOLOGY CENTER SA
  • FLORENTIA FOURLI KARTSOUNI KAI SIAEE
  • INSTITUTO DE BIOMECANICA DE VALENCIA
  • SMART SOLUTIONS TECHNOLOGIES S.L
  • TIMOCCO LTD
  • SYLVIA LAWRY CENTRE FOR MULTIPLE SCLEROSIS RESEARCH E.V.
  • UNIVERZITETNI REHABILITACIJSKI INSTITUT REPUBLIKE SLOVENIJE-SOCA
  • FEDERATION OF THE EUROPEAN SPORTING GOODS INDUSTRY

IVHM

Integrated Vehicle Health Management

In the context of the ESA-Study “Health Management System for Reusable Space Transportation”, within an international consortium, led by Astrium Space Transportation, DFKI’s research lab “Intelligent Visualization and Simulation” is contributing to a number of work packages in the domains of specification, design, test, and performance evaluation.

KI-Absicherung

KI-Absicherung

In the project "KI Absicherung" ("AI safeguarding"), methods and measures for safeguarding AI-based functions for automated driving are being developed for the first time across the industry. The aim is an initial industry consensus, suitable for standardization, that establishes a uniform and broadly accepted safeguarding of AI-based perception functions in the automotive industry.

Partners

Automobile manufacturers: Volkswagen AG (consortium leader), AUDI AG, BMW Group, Opel Automobile GmbH

Suppliers: Continental Automotive GmbH, Hella Aglaia Mobile Vision GmbH, Robert Bosch GmbH, Valeo Schalter und Sensoren GmbH, Visteon Electronics Germany GmbH, ZF Friedrichshafen AG

Technology providers: AID Autonomous Intelligent Driving GmbH, Automotive Safety Technologies GmbH, Intel Deutschland GmbH, Mackevision Medien Design GmbH, Merantix AG, Luxoft GmbH, umlaut systems GmbH, QualityMinds GmbH

Research partners: Fraunhofer IAIS (deputy consortium leader and scientific coordinator), Bergische Universität Wuppertal, Deutsches Forschungszentrum für Künstliche Intelligenz, Deutsches Zentrum für Luft- und Raumfahrt, FZI Forschungszentrum Informatik, TU München, Universität Heidelberg

External technology partners: BIT Technology Solutions GmbH, neurocat GmbH, understand ai GmbH

Project management: European Center for Information and Communication Technologies – EICT GmbH

ARVIDA

Angewandte Referenzarchitektur für virtuelle Dienste und Anwendungen (Applied Reference Architecture for Virtual Services and Applications)

ARVIDA is a project funded by the German Federal Ministry of Education and Research (BMBF) with currently 23 partners from research and industry. The main goal of the project is the creation of a service-oriented reference architecture for virtual technologies (VT). The service orientation and the use or adaptation of available internet and VT standards ensure interoperability between different modules and VT applications. A broad cross-company evaluation of the reference architecture in selected industrial scenarios ensures that the results can be used as a future standard. In this project, the department Augmented Vision at DFKI works on a target-actual comparison for virtual product verification: a real object is captured in real time and compared to a CAD model of the object. For this purpose, algorithms for the high-precision reconstruction of small and medium-sized objects using low-cost depth cameras are investigated and developed.
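At its core, such a target-actual comparison measures, for every captured surface point, its deviation from the CAD reference geometry. The following is a minimal, hypothetical sketch (not the project's algorithm): the CAD surface is represented as a sampled point set, and each scanned point is assigned its nearest-neighbour distance to that set.

```python
import numpy as np

def target_actual_deviation(scan_pts, cad_pts):
    """Brute-force nearest-neighbour distance from each scanned point
    to a point sampling of the CAD surface. Returns one deviation
    value per scanned point, in the scan's units."""
    # (N, 1, 3) - (1, M, 3) broadcasts to an (N, M) distance matrix.
    d = np.linalg.norm(scan_pts[:, None, :] - cad_pts[None, :, :], axis=2)
    return d.min(axis=1)

# Toy example: the CAD model is a flat unit-square grid, and the
# simulated "scan" floats exactly 2 mm above it.
g = np.linspace(0.0, 1.0, 20)
gx, gy = np.meshgrid(g, g)
cad = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
scan = cad + np.array([0.0, 0.0, 0.002])   # 2 mm systematic offset

dev = target_actual_deviation(scan, cad)
max_dev = dev.max()   # worst-case deviation; here the 2 mm offset
```

A real pipeline would first register the scan to the CAD frame (e.g. with ICP) and use a spatial index or point-to-mesh distances instead of this O(N·M) loop; the deviation map itself is what gets visualized for verification.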

Contact

Dr. Hilko Hoffmann

AVES/DyMoSiHR

Acoustic and Visual Environmental Simulation for Robot Control

The project aims at simulating the spreading of sounds within the close-up range of an autonomous mobile robot that is situated in an office environment. This simulation is used for evaluating the robot’s control strategies for identifying sound events and reacting to them. In the project’s extension, the developed techniques will be applied to human speech and moving humans as sound sources.

The main objective of this project is the development of new methods for simulating the optical and acoustic properties of an indoor scene in order to use these data for evaluating the control algorithms of autonomous mobile robots.

A robot orients itself inside a room by using the information provided by its sensor systems. Besides distance sensors, optical and acoustic sensors provide these important data. This leads to the core tasks in the collaboration of the research groups involved in this project: in order to enable a robot to interact with its environment and to permit a context-sensitive execution of its tasks, the robot has to be able to interpret the information provided by its sensors. However, appropriate environments and stimuli for testing these capabilities are not always available. In order to test the control algorithms of a robot, this project therefore aims at providing a realistic simulation of the acoustic and visual properties of indoor environments. For this purpose, the project will use technologies developed by the collaborating research groups "Robot Systems" and "Computer Graphics", as well as DFKI's Competence Center "Human Centered Visualization".

It is especially envisioned to build our work upon the audio-visual Virtual-Reality presentation system that has been developed in cooperation between the University of Kaiserslautern’s Computer Graphics group, the Fraunhofer Institute for Technical and Economical Mathematics (ITWM), and DFKI’s research lab “Intelligent Visualization and Simulation” in the context of the research project “Acoustic Simulated Reality”.

While in the first part of the project the work is focussing on static sound sources emitting one characteristic signal, the project’s extension aims at applying the techniques developed in the first part to humans in office environments. This implies the modeling and simulation of moving sound sources as well as the dynamic aspects of speech. The techniques that will be developed here provide a central aspect for enabling robots to interact with humans. As a platform for integrating and evaluating these techniques, the humanoid robot head ROMAN is available at the Robotics Laboratory of the Department of Computer Science.
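As a purely illustrative sketch of what simulating a sound source for a robot's sensors involves (hypothetical, not the project's simulator): in the simplest free-field approximation, the signal received at a microphone is the emitted signal delayed by the travel time r / c and attenuated by 1 / r. Moving sources then amount to making r time-dependent.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air at ~20 degrees C (m/s)

def received_signal(emit, fs, distance_m):
    """Delay-and-attenuate a mono signal `emit` (sampled at `fs` Hz)
    as heard at `distance_m` from a static point source
    (free-field approximation, no reflections)."""
    delay_samples = int(round(distance_m / C_SOUND * fs))
    out = np.zeros(len(emit) + delay_samples)
    out[delay_samples:] = emit / max(distance_m, 1e-6)  # 1/r attenuation
    return out

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)        # 1 s, 440 Hz test signal
near = received_signal(tone, fs, 1.0)     # source 1 m away
far = received_signal(tone, fs, 4.0)      # source 4 m away
# The farther signal arrives later and is weaker.
```

Realistic indoor simulation adds what this sketch omits: wall reflections and reverberation (e.g. via the image-source method), frequency-dependent absorption, and Doppler shift for moving sources.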

Partners

Research Group Robot Systems of the University of Kaiserslautern

VIZTA

Vision, Identification, with Z-sensing Technologies and key Applications

The VIZTA project, coordinated by STMicroelectronics, aims at developing innovative technologies in the field of optical sensors and laser sources for short- to long-range 3D imaging, and at demonstrating their value in several key applications including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0. The key differentiating 12-inch silicon sensing technologies developed during VIZTA are:

  • innovative SPAD and lock-in pixels for time-of-flight sensor architectures;
  • unprecedented and cost-effective NIR and RGB-Z on-chip filter solutions;
  • complex RGB+Z pixel architectures for multimodal 2D/3D imaging.

For short-range sensors, advanced VCSEL sources are developed, including wafer-level GaAs optics and an associated high-speed driver. These differentiating technologies allow the development and validation of innovative 3D imaging sensor products, with the following highly integrated prototype demonstrators:

  • a high-resolution (>77,000 points) time-of-flight ranging sensor module with integrated VCSEL, drivers, filters and optics;
  • a very high resolution (VGA minimum) depth camera sensor with integrated filters and optics.

For medium- and long-range sensing, VIZTA also addresses new LiDAR systems with dedicated sources, optics and sensors. Technology developments of sensors and emitters are carried out by leading semiconductor product suppliers (STMicroelectronics, Philips, III-V Lab) with the support of equipment suppliers (Amat, Semilab) and the research and technology organisation CEA-Leti.
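The two sensing principles named above differ in what they measure. Direct time-of-flight (SPAD-based) times individual photon round trips, giving d = c·Δt/2; indirect time-of-flight (lock-in pixels) measures the phase shift φ of an amplitude-modulated signal, giving d = c·φ/(4π·f_mod), unambiguous up to c/(2·f_mod). A small numeric sketch (the modulation frequency and timings are illustrative values, not VIZTA specifications):

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def direct_tof_distance(round_trip_s):
    """Direct ToF (SPAD): distance from a measured photon round-trip time."""
    return C * round_trip_s / 2.0

def indirect_tof_distance(phase_rad, f_mod_hz):
    """Indirect ToF (lock-in pixel): distance from the phase shift of an
    amplitude-modulated signal. Unambiguous range is C / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# A 10 ns photon round trip corresponds to roughly 1.5 m of range.
d_direct = direct_tof_distance(10e-9)

# A pi/2 phase shift at 20 MHz modulation is about 1.87 m,
# out of an unambiguous range of C / (2 * 20 MHz) ~ 7.5 m.
d_indirect = indirect_tof_distance(math.pi / 2, 20e6)
```

This is also why indirect ToF suits short-range, high-resolution depth cameras (fine phase resolution, limited unambiguous range), while direct ToF scales naturally to longer ranges.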

The VIZTA project also includes the development of six demonstrators for key applications including automotive, security, smart buildings, mobile robotics for smart cities, and Industry 4.0, with a good mix of industrial and academic partners (Ibeo, Veoneer, Ficosa, Beamagine, IEE, DFKI, UPC, Idemia, CEA-List, ISD, BCB, IDE, Eurecat). The VIZTA consortium brings together 23 partners from 9 European countries: France, Germany, Spain, Greece, Luxembourg, Latvia, Sweden, Hungary, and the United Kingdom.

Partners

  • Universidad Politecnica Catalunya
  • Commisariat a l Energie Atomique et aux Energies Alternatives (CEA Paris)
  • Fundacio Eurecat
  • STMICROELECTRONICS SA
  • BCB Informática y Control
  • Alter Technology TÜV Nord SA
  • FICOMIRRORS SA
  • Philips Photonics GmbH
  • Applied Materials France SARL
  • SEMILAB FELVEZETO FIZIKAI LABORATORIUM RESZVENYTARSASAG
  • ELEKTRONIKAS UN DATORZINATNU INSTITUTS
  • LUMIBIRD
  • IEE S.A.
  • IBEO Automotive Systems GmbH
  • STMICROELECTRONICS RESEARCH & DEVELOPMENT LTD
  • STMICROELECTRONICS SA
  • IDEMIA IDENITY & SECURITY FRANCE
  • Beamagine S.L.
  • Integrated Systems Development S.A.
  • VEONEER SWEDEN AB
  • III-V Lab
  • STMICROELECTRONICS (ALPS) SAS
  • STMICROELECTRONICS GRENOBLE 2 SAS

Contact

Dr.-Ing. Jason Raphael Rambach

AlterEgo

AlterEgo – Enhancing social interactions using information technology

Social pathologies, including schizophrenia, autism and social phobia, are mainly characterized by difficulties in interacting with others. This causes much suffering both for the patients and for those around them. The European AlterEgo project aims to develop and test, over a three-year term, an innovative rehabilitation method that uses humanoid robotics and virtual reality to improve such relational deficits.

The project is rooted in a new transdisciplinary theory emerging in movement neuroscience and cognitive science: the theory of similarity. This theory suggests that it is easier to socially interact with someone who looks like us. This resemblance can be morphological (the form of the alter ego), behavioural (his/her actions), or kinematic (the way he/she moves).

AlterEgo foresees real-time manipulation of these similarity cues. The patient will be placed in interactive situations with a virtual agent. In the early stage of rehabilitation, the virtual agent, displayed on a screen, will be the alter ego of the patient, which is more reassuring because of the similarity. In later stages, the patient will face a humanoid robot – the European iCub robot – or the clinician. Changes in appearance and behaviour during the interaction will be introduced very gradually. We will thus test, over periods of six months, a new rehabilitation method that reduces the interaction deficits of these patients by virtue of more or less socially neutral artificial agents.

The AlterEgo project is one of the 17 laureates (out of 250 submissions) of the latest European call – ICT 2.9, Cognitive Sciences and Robotics – launched in 2012 by the European Commission. It is coordinated by Prof. Benoît Bardy, director of the EuroMov centre (Movement & Health research unit) at Montpellier 1 University in France. In synergy with the French movement scientists, the project involves computer science experts from DFKI (Germany), mathematicians from the University of Bristol (UK), roboticists from the Ecole Polytechnique Fédérale de Lausanne (CH), as well as clinicians, psychologists and psychiatrists from the Academic Hospital of Montpellier (CHRU, FR).

More information can be found on the project site: http://www.euromov.eu/alterego/homepage