FAIRe

Frugal Artificial Intelligence in Resource-limited environments

Artificial intelligence (AI) is finding increasingly diverse applications in the physical world, especially in embedded, cyber-physical devices with limited resources and under demanding conditions. This type of AI is referred to as “Frugal AI” and is characterised by low memory requirements, reduced computing power and the use of less data. The FAIRe (Frugal Artificial Intelligence in Resource-limited environments) project of DFKI and the French computer science institute Inria is developing a comprehensive approach for all abstraction layers of AI applications at the edge.

Edge devices such as driver assistance and infotainment systems in cars, medical devices, manufacturing or service robots, and mobile phones have nowhere near the resources of the huge cloud data centres that modern machine learning applications require. The challenge is to cope with limited computing power, limited storage space and a limited power budget.

FAIRe aims to enable the deployment of AI applications on mobile devices through an innovative approach that reduces model size and computational overhead by quantising the network, optimising the network architecture, optimising the computations and, finally, executing on specialised hardware (e.g. RISC-V-based processors or FPGAs).
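To make the quantisation step concrete, here is a minimal sketch of post-training dynamic quantisation in PyTorch. The toy model, layer sizes and library choice are illustrative assumptions, not the project's actual toolchain:

```python
import torch
import torch.nn as nn

# Toy model standing in for an edge AI workload (illustrative only).
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Post-training dynamic quantisation: Linear-layer weights are stored
# as int8, shrinking the model and speeding up inference on supported CPUs.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Storing weights as 8-bit integers cuts their memory footprint to roughly a quarter of the 32-bit-float baseline, which is exactly the kind of saving frugal edge deployments depend on.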

The project combines expertise from several DFKI research areas: the actual AI algorithms, the hardware on which they run, and the compiler layer in between, which translates AI algorithms as efficiently as possible for a specific piece of hardware. To demonstrate this approach in practice, the project team led by Prof. Dr. Christoph Lüth is conducting a case study on human-robot interaction (HRI) that covers all of these aspects.

Edge AI projects such as FAIRe contribute to making AI applications widely usable on mobile devices and open up new potential for applications.

Partners

  • Inria Taran
  • Inria Cash
  • Inria Corse

Contact

Prof. Dr. Christoph Lüth

Luminous

Language Augmentation for Humanverse

LUMINOUS aims at creating the next generation of language-augmented XR systems, in which natural language-based communication and Multimodal Large Language Models (MLLMs) enable adaptation to individual user needs that are not predefined, and to unseen environments. This will enable future XR users to interact fluently with their environment while having instant access to constantly updated global as well as domain-specific knowledge sources to accomplish novel tasks. We aim to exploit MLLMs injected with domain-specific knowledge to describe novel tasks on user demand. These descriptions are then communicated through a speech interface and/or a task-adaptable avatar (e.g. a coach or teacher) in terms of different visual aids and procedural steps for accomplishing the task.

Language-driven specification of the style, facial expressions and specific attitudes of virtual avatars will facilitate generalisable and situation-aware communication in multiple use cases and different sectors. In parallel, MLLMs will benefit by identifying new objects that were not part of their training data and describing them in a way that makes them visually recognisable. Our results will be prototyped and tested in three pilots, focusing on neurorehabilitation (support of stroke patients with language impairments), immersive industrial safety training, and 3D architectural design review. A consortium of six leading R&D institutes, expert in six different disciplines (AI, Augmented Vision, NLP, Computer Graphics, Neurorehabilitation, Ethics), will follow a challenging work plan aiming to bring about a new era at the crossroads of two of the most promising current technological developments (LLM/AI and XR), made in Europe.

Partners

  1. Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
  2. Ludus Tech SL
  3. Mindesk Societa a Responsabilita Limitata
  4. Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V.
  5. Universidad del Pais Vasco/Euskal Herriko Unibertsitatea
  6. Fundación Centro de Tecnologias de Interacción Visual y Comunicaciones Vicomtech
  7. University College Dublin, National University of Ireland
  8. Hypercliq IKE
  9. Ricoh International B.V. – Niederlassung Deutschland
  10. MindMaze SA
  11. Centre Hospitalier Universitaire Vaudois
  12. University College London

Contact

Muhammad Zeshan Afzal

Prof. Dr. Didier Stricker

BERTHA

BEhavioural Replication of Human drivers for CCAM

The Horizon Europe project BERTHA kicked off on November 22nd–24th in Valencia, Spain. The project has been granted €7,981,799.50 by the European Commission to develop a Driver Behavioral Model (DBM) that can be used in connected autonomous vehicles to make them safer and more human-like. The resulting DBM will be available on an open-source HUB to validate its feasibility, and it will also be implemented in CARLA, an open-source autonomous driving simulator.

The Connected, Cooperative and Automated Mobility (CCAM) industry presents important opportunities for the European Union. However, its deployment requires new tools that enable the design and analysis of autonomous vehicle components, together with their digital validation, and a common language between tier suppliers and OEMs.

One of the shortcomings is the lack of a validated, scientifically grounded Driver Behavioral Model (DBM) covering the aspects of human driving performance, which would make it possible to understand and test the interaction of connected autonomous vehicles (CAVs) with other cars in a safer and more predictable way from a human perspective.

A Driver Behavioral Model could therefore underpin the digital validation of autonomous vehicle components and, if incorporated into the ECU software, could generate a more human-like response from such vehicles, thus increasing their acceptance.

To address this need in the CCAM industry, the BERTHA project will develop a scalable, probabilistic Driver Behavioral Model (DBM), based largely on a Bayesian belief network, which will be key to achieving safer and more human-like autonomous vehicles.
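To make the modelling idea concrete, here is a minimal sketch of a probabilistic driver model expressed as a discrete Bayesian network with the pgmpy library. The variables (Traffic, Stress, Headway) and all probabilities are invented for illustration; this is not BERTHA's actual model:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: traffic density influences driver stress,
# and stress influences the chosen headway to the car ahead.
model = BayesianNetwork([("Traffic", "Stress"), ("Stress", "Headway")])

cpd_traffic = TabularCPD("Traffic", 2, [[0.7], [0.3]])  # light / heavy
cpd_stress = TabularCPD(
    "Stress", 2,
    [[0.9, 0.4],   # low stress given light / heavy traffic
     [0.1, 0.6]],  # high stress
    evidence=["Traffic"], evidence_card=[2],
)
cpd_headway = TabularCPD(
    "Headway", 2,
    [[0.8, 0.3],   # long headway given low / high stress
     [0.2, 0.7]],  # short headway
    evidence=["Stress"], evidence_card=[2],
)
model.add_cpds(cpd_traffic, cpd_stress, cpd_headway)

# Query: how likely is a short headway in heavy traffic?
print(VariableElimination(model).query(["Headway"], evidence={"Traffic": 1}))
```

Inference over such a network yields probabilities for driver actions given observed context, which is what makes the modelled behaviour both human-like and testable.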

The new DBM will be implemented on an open-source HUB, a repository that will allow industrial validation of its technological and practical feasibility and serve as a unique approach to the model's worldwide scalability.

The resulting DBM will be translated into CARLA, an open-source simulator for autonomous driving research developed by the Spanish partner Computer Vision Center. The implementation of BERTHA's DBM will use diverse demos that allow new driving models to be built in the simulator. These can be embedded in different immersive driving simulators, such as HAV from IBV.

BERTHA will also develop a methodology which, thanks to the HUB, will share the model with the scientific community to ease its growth. Moreover, its results will include a set of interrelated demonstrators to show the DBM approach as a reference to design human-like, easily predictable, and acceptable behaviour of automated driving functions in mixed traffic scenarios.

Partners

  • Instituto de Biomecanica de Valencia (ES)
  • Institut Vedecom (FR)
  • Universite Gustave Eiffel (FR)
  • German Research Center for Artificial Intelligence (DE)
  • Computer Vision Center (ES)
  • Altran Deutschland (DE)
  • Continental Automotive France (FR)
  • CIDAUT Foundation (ES)
  • Austrian Institute of Technology (AT)
  • Universitat de València (ES)
  • Europcar International (FR)
  • FI Group (PT)
  • Panasonic Automotive Systems Europe (DE)
  • Korea Transport Institute (KOTI)

Contact

Dr.-Ing. Christian Müller

Dr.-Ing. Jason Raphael Rambach

SocialWear

SocialWear – Socially Interactive Smart Fashion

In wearable computing, the focus has traditionally been on using garments as platforms for on-body sensing. The functionality of such systems is defined by sensing and computation. At present, fashion-design considerations are merely a means to an end: optimising sensing and computing performance while minimising discomfort for the user. In other words, within the traditional wearable-computing approach, the garment is essentially a simple container for sophisticated digital intelligence, but it does not bridge the gap between that functionality and the user's actual needs.

In parallel, the high-tech fashion community has been looking for ways to integrate electronics into new design concepts. Here the emphasis has been on design aspects, while the digital functionality is often rather simple: typically some kind of light effect controlled by simple signals such as the amount of movement, pulse, or ambient conditions (light, sound, temperature), with little intelligent processing. In other words, in the traditional high-tech fashion approach, the digital part is a simple add-on to sophisticated design.

Building on a unique set of competencies across the DFKI groups involved, we want to develop a new generation of smart fashion that combines sophisticated artificial intelligence with sophisticated design. To achieve this, we must rethink the entire classical process of developing both garments and the associated wearable electronics: fashion and electronics design criteria as well as implementation processes must be seamlessly integrable. We will develop signal processing and learning methods that enable such smart garments to understand and react to complex social environments, and we will design new interaction paradigms to enhance and mediate social interaction in new, subtle and rich ways. In doing so, we will consider a broad spectrum along the size of the social group and the transition between implicit and explicit interaction.
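As a small illustration of the on-body signal-processing side, the sketch below classifies coarse activity/social context from windows of 3-axis accelerometer data using scikit-learn. The labels, window length, features and synthetic data are invented for illustration and are not the project's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def features(window):
    """Simple per-axis statistics of one accelerometer window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic dataset with invented labels:
# 0 = sitting alone, 1 = conversing, 2 = walking in a group.
X = np.array([features(rng.normal(scale=s, size=(100, 3)))
              for s in (0.1, 0.3, 0.8) for _ in range(50)])
y = np.repeat([0, 1, 2], 50)

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict([features(rng.normal(scale=0.3, size=(100, 3)))]))
```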

Partners

n/a

Contact

Dr. Patrick Gebhard

Dr.-Ing. Bo Zhou

RACKET

Rare Class Learning and Unknown Events Detection for Flexible Production

The RACKET project addresses the problem of detecting rare and unknown faults by combining model-based and machine learning methods. The approach is based on the assumption that a physical or procedural model of a manufacturing plant is available, which is not fully specified and has uncertainties in structure, parameters and variables. Gaps and errors in this model are detected by machine learning and corrected, resulting in a more realistic process model (nominal model). This model can be used to simulate system behavior and estimate the future characteristics of a product.
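A minimal numpy sketch of this residual idea, with invented numbers: a nominal model predicts the expected signal, and observations that deviate beyond a noise-based threshold are flagged, without any model of the specific fault:

```python
import numpy as np

rng = np.random.default_rng(0)

def nominal_model(t):
    """Placeholder for the corrected physical/procedural process model."""
    return np.sin(0.1 * t)

# Simulated sensor readings: nominal behaviour plus noise, with one fault.
t = np.arange(500)
signal = nominal_model(t) + rng.normal(0, 0.05, t.size)
signal[300:310] += 0.8  # an unknown fault event

# Residual between observation and nominal prediction.
residual = signal - nominal_model(t)

# Threshold from fault-free statistics (here: 4 sigma of the noise model).
threshold = 4 * 0.05
anomalies = np.flatnonzero(np.abs(residual) > threshold)
print("anomalous samples:", anomalies)
```

Because the test compares observations against the nominal model rather than against known fault signatures, it can flag events that were never seen before.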

Actual product defects can thus be attributed to anomalies in the output signal and to inconsistencies in the process variables, without the need for a known failure event or an accurate failure model. Faults can take many forms and can occur at both the product and the process level: geometric defects such as scratches, out-of-tolerance dimensions, or dynamic errors such as deviations between the estimated and actual product position on a conveyor belt, skipped process steps, or incorrect path assignment in the production flow.

Contact

Carsten Harms, M.Sc.

Dr.-Ing. Alain Pagani

I-Nergy

Artificial Intelligence for Next Generation Energy

The spread of AI in the energy sector is expected to dramatically reshape the energy value chain in the coming years, improving business-process performance while increasing environmental sustainability, strengthening social relationships and propagating high social value among citizens. However, uncertain business cases, fragmented regulation, immature standards and the limited AI skills of SME workforces are currently hampering the full exploitation of AI along the energy value chain. I-NERGY will deliver: (a) financing support through open calls to third-party SMEs for validating new energy use cases and technology building blocks, as well as for developing new AI-based energy services, in full alignment with AI4EU service requirements and strengthening SME competitiveness in AI for energy; (b) an open modular framework for supporting AI-on-demand in the energy sector, capitalising on state-of-the-art AI, IoT, semantics, federated learning and analytics tools that leverage edge-level, AI-based, cross-sector, multi-stakeholder, sovereignty- and regulation-preserving interoperable data handling.

I-NERGY aims to evolve, scale up and demonstrate an innovative energy-tailored AI-as-a-Service (AIaaS) toolbox, AI energy analytics and digital-twin services, validated in 9 pilots which: (a) span the full energy value chain, ranging from optimised management of grid and non-grid RES assets, improved efficiency and reliability of electricity network operation and optimal risk assessment for energy-efficiency investment planning, to optimising the involvement of local and virtual energy communities in flexibility and green-energy marketplaces; (b) deliver further energy and non-energy services to realise synergies among energy commodities (district heating, buildings), with non-energy sectors (e.g. e-mobility, personal safety/security, AAL) and with non-technical or low-technical end users (e.g. elderly people).
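Among the building blocks listed above, federated learning lends itself to a compact illustration. The following is a minimal sketch of federated averaging on a toy least-squares task, assuming numpy; the client data and update rule are invented and are not I-NERGY's actual framework:

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's local least-squares data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_w, client_data, rounds=20):
    """Federated averaging: clients train locally, server averages weights."""
    for _ in range(rounds):
        local_ws = [local_update(global_w.copy(), d) for d in client_data]
        global_w = np.mean(local_ws, axis=0)  # only weights are exchanged
    return global_w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # e.g. three pilots, each holding private local data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

print(fed_avg(np.zeros(2), clients))  # approaches [2, -1]
```

Only model parameters are exchanged, so each pilot's raw data stays on-premises; this is the sovereignty-preserving property the project description refers to.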

Partners

  • ENGINEERING – INGEGNERIA INFORMATICA SPA
  • FUNDACION ASTURIANA DE LA ENERGIA
  • RIGA MUNICIPAL AGENCY
  • FUNDACION CARTIF PQ TECNOLOGICO BOECILLO
  • Rheinisch-Westfälische Technische Hochschule Aachen
  • COMSENSUS, KOMUNIKACIJE IN SENZORIKA, DOO
  • SONCE ENERGIJA D.O.O.
  • VEOLIA SERVICIOS LECAM SOCIEDAD ANONIMA UNIPERSONAL
  • STUDIO TECNICO BFP SOCIETA A RESPONSABILITA LIMITATA
  • ZELENA ENERGETSKA ZADRUGA ZA USLUGE
  • Iron Thermoilektriki Anonymi Etaireia
  • ASM TERNI SPA
  • CENTRO DE INVESTIGACAO EM ENERGIA REN – STATE GRID SA
  • PARITY PLATFORM IDIOTIKI KEFALAIOUXIKI ETAIREIA
  • Institute of Communication & Computer Systems
  • Fundingbox Accelerator SP. Z O.O.

Contact

Prof. Dr. Didier Stricker

DECODE

Continual learning for visual and multi-modal encoding of human surrounding and behavior

Machine learning, and in particular deep learning, has revolutionised computer vision in almost all areas, including motion estimation, object recognition, semantic segmentation (dividing an image into parts and classifying them), pose estimation of people and hands, and many more. A major problem with these methods is the distribution of the data: training data often differs greatly from that of real applications and does not cover them adequately. Even when suitable data are available, extensive retraining is time-consuming and costly. Adaptive methods that learn continuously (lifelong learning) are therefore the central challenge for the development of robust, realistic AI applications. Building on the rich history of general continual learning, the topic of continual learning for machine vision under real-world conditions has recently gained interest. The goal of the DECODE project is to explore continuously adaptive models for reconstructing and understanding human motion and the environment in application-oriented settings. To this end, mobile visual and inertial sensors (accelerometers and angular-rate sensors) will be used. For these different types of sensors and data, different approaches from the field of continual learning will be researched and developed to ensure a smooth transfer from laboratory conditions to everyday, realistic scenarios. The work will concentrate on image and video segmentation, the estimation of the kinematics and pose of the human body, and the representation of movements and their context. The field of potential applications for the methods developed in DECODE is wide-ranging and includes detailed ergonomic analysis of human-machine interactions, for example in the workplace, in factories, or in vehicles.
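A common baseline for continual learning is experience replay. The PyTorch sketch below shows the idea with invented dimensions and data; it illustrates the general technique, not DECODE's specific method:

```python
import random
import torch
import torch.nn as nn

# Minimal experience-replay loop: a small buffer of past samples is
# rehearsed alongside new data to reduce catastrophic forgetting.
model = nn.Linear(16, 4)           # stands in for a vision/IMU encoder
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
buffer, BUFFER_SIZE = [], 200

def observe(x, y):
    """Train on an incoming sample mixed with replayed old samples."""
    batch = [(x, y)] + random.sample(buffer, min(8, len(buffer)))
    xs = torch.stack([b[0] for b in batch])
    ys = torch.tensor([b[1] for b in batch])
    opt.zero_grad()
    loss_fn(model(xs), ys).backward()
    opt.step()
    if len(buffer) < BUFFER_SIZE:
        buffer.append((x, y))
    else:  # random replacement once the buffer is full
        buffer[random.randrange(BUFFER_SIZE)] = (x, y)

# Stream of samples whose distribution may drift over time.
for step in range(1000):
    observe(torch.randn(16), random.randrange(4))
```

Rehearsing a few stored samples with each new one keeps earlier skills from being overwritten, at the cost of a small, bounded memory footprint.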

Contact

Dr.-Ing. Nadia Robertini

Dr.-Ing. René Schuster

Open6GHub

6G for Society and Sustainability

The project “Open6GHub” develops a 6G vision for sovereign citizens in a hyper-connected world from 2030 onwards. The aim of the Open6GHub is to contribute to a global 6G harmonisation process and standard in the European context. We consider German interests in terms of our societal needs (sustainability, climate protection, data protection, resilience, …) while maintaining the competitiveness of our companies and our technological sovereignty. Another interest is the position of Germany and Europe in the international competition for 6G. The Open6GHub will contribute to the development of an overall 6G architecture, as well as end-to-end solutions in areas including: advanced network topologies with highly agile organic networking; security and resilience; THz and photonic transmission methods; sensor functionality in the network and its intelligent use; and processing and application-specific radio protocols.

Partners

  • Deutsches Forschungszentrum für Künstliche Intelligenz DFKI GmbH (DFKI)
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
  • Fraunhofer FOKUS (FOKUS)
  • Fraunhofer IAF (IAF)
  • Fraunhofer SIT (SIT)
  • Leibniz-Institut für innovative Mikroelektronik (IHP)
  • Karlsruher Institut für Technologie (KIT)
  • Hasso-Plattner-Institut Potsdam (HPI)
  • RWTH Aachen University (RWTH)
  • Technische Universität Berlin (TUB)
  • Technische Universität Darmstadt (TUDa)
  • Technische Universität Ilmenau (ILM)
  • Technische Universität Kaiserslautern (TUK)
  • Universität Bremen (UB)
  • Universität Duisburg-Essen (UDE)
  • Albert-Ludwigs-Universität Freiburg (ALU)
  • Universität Stuttgart (UST)

Contact

Prof. Dr.-Ing. Hans Dieter Schotten

GreifbAR

Tangible Reality – dexterous interaction of users' hands and fingers with real tools in mixed-reality worlds

On 1 October 2021, the research project GreifbAR started under the leadership of DFKI (research area Augmented Reality). The goal of the GreifbAR project is to make mixed reality (MR) worlds, including virtual reality (VR) and augmented reality (AR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are paramount for performing precise tasks in many fields, but the capture of hand-object interaction in current MR systems is woefully inadequate. Current systems rely on hand-held controllers or on capture devices that are limited to hand gestures without contact with real objects. GreifbAR overcomes this limitation with a sensing system that captures the full hand grip, including the hand surface and the object pose, when users interact with real objects or tools. This sensing system will be integrated into a mixed-reality training simulator that will be demonstrated in two relevant use cases: industrial assembly and surgical skills training. The usability and applicability, as well as the added value for training situations, will be thoroughly analysed through user studies.
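For orientation, the snippet below shows markerless hand-landmark tracking with the off-the-shelf MediaPipe and OpenCV libraries. It illustrates the kind of bare-hand capture GreifbAR builds on, but only as a baseline: the project's own sensing system goes further, additionally estimating the hand surface and the pose of grasped objects:

```python
import cv2
import mediapipe as mp

# Markerless hand-landmark tracking from a webcam.
hands = mp.solutions.hands.Hands(max_num_hands=2)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        for hand in result.multi_hand_landmarks:
            mp.solutions.drawing_utils.draw_landmarks(
                frame, hand, mp.solutions.hands.HAND_CONNECTIONS)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```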

Partners

  • Berliner Charité (University Medicine Berlin)
  • NMY (mixed-reality applications for industrial and communication customers)
  • Uni Passau (Chair of Psychology with a focus on human-machine interaction)

Contact

Dr. Dipl.-Inf. Gerd Reis

Dr.-Ing. Nadia Robertini

HumanTech

Human Centered Technologies for a Safer and Greener European Construction Industry

The European construction industry faces three major challenges: improving its productivity, increasing the safety and wellbeing of its workforce, and shifting towards a green, resource-efficient industry. To address these challenges adequately, HumanTech proposes a human-centered approach involving breakthrough technologies such as wearables for worker safety and support, and intelligent robotic technology that can harmoniously co-exist with human workers while also contributing to the green transition of the industry.

Our aim is to achieve major advances beyond the current state of the art in all of these technologies, advances that can have a disruptive effect on the way construction is conducted.

These advances will include:

  • Robotic devices equipped with vision and intelligence that enable them to navigate autonomously and safely in a highly unstructured environment, collaborate with humans and dynamically update a semantic digital twin of the construction site.
  • Intelligent, unobtrusive worker protection and support equipment, ranging from exoskeletons triggered by wearable body-pose and strain sensors to wearable cameras and XR glasses that provide real-time worker localisation and guidance for the efficient and accurate fulfilment of their tasks.
  • An entirely new breed of Dynamic Semantic Digital Twins (DSDTs) of construction sites, simulating the current state of a construction site in detail at the geometric and semantic level, based on an extended BIM formulation (BIMxD).

Partners

  • Hypercliq IKE
  • Technische Universität Kaiserslautern
  • Scaled Robotics SL
  • Bundesanstalt für Arbeitsschutz und Arbeitsmedizin
  • Sci-Track GmbH
  • SINTEF Manufacturing AS
  • Acciona Construccion SA
  • STAM SRL
  • Holo-Industrie 4.0 Software GmbH
  • Fundacion Tecnalia Research & Innovation
  • Catenda AS
  • Technological University of the Shannon: Midlands Midwest
  • Ricoh International BV
  • Australo Interinnov Marketing Lab SL
  • Prinstones GmbH
  • Universita degli Studi di Padova
  • European Builders Confederation
  • Palfinger Structural Inspection GmbH
  • Züricher Hochschule für Angewandte Wissenschaften
  • Implenia Schweiz AG
  • Kajima Corporation

Contact

Dr. Bruno Walter Mirbach

Dr.-Ing. Jason Raphael Rambach