ExtremeXP

EXPeriment driven and user eXPerience oriented analytics for eXtremely Precise outcomes and decisions

A new framework for experimentation-driven analytics

Extreme data characteristics represent a challenge for advanced data-driven analytics and decision-making in critical domains such as crisis management, predictive maintenance, mobility, public safety and cyber-security. Data-driven insights must be timely, accurate, precise, fit-for-purpose and reliable, considering and learning from user intents and preferences. The EU-funded ExtremeXP project will create a next-generation decision support framework that integrates novel research from big data management, machine learning, visual analytics, explainable AI, decentralised trust, and knowledge engineering. The framework will aim at optimising the properties of complex analytics processes (e.g. accuracy, time-to-answer, specificity, recall, precision, resource consumption) by associating different user profiles with computation variants, promoting a human-centered, experimentation-based approach to AI and complex analytics. The project will perform five pilot demonstrations.
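
As a toy illustration of the variant-selection idea described above, a scoring function might match a user profile's preference weights against the measured properties of each computation variant. All variant names, property values and weights below are invented for the example:

```python
# Toy sketch (not ExtremeXP code): pick the analytics variant whose
# measured properties best match a user profile's preference weights.
# All variant names, property values and weights are invented.

VARIANTS = {
    "fast_approximate": {"accuracy": 0.82, "time_to_answer_s": 0.5, "resource_cost": 0.2},
    "balanced":         {"accuracy": 0.91, "time_to_answer_s": 2.0, "resource_cost": 0.5},
    "high_precision":   {"accuracy": 0.97, "time_to_answer_s": 9.0, "resource_cost": 0.9},
}

def score(props, weights):
    """Weighted utility: reward accuracy, penalise latency and cost."""
    return (weights["accuracy"] * props["accuracy"]
            - weights["latency"] * props["time_to_answer_s"]
            - weights["cost"] * props["resource_cost"])

# A crisis-management profile that values fast answers over peak accuracy.
crisis_profile = {"accuracy": 1.0, "latency": 0.3, "cost": 0.1}
best = max(VARIANTS, key=lambda name: score(VARIANTS[name], crisis_profile))
print(best)  # -> fast_approximate under these weights
```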

Contact

Dr.-Ing. Alain Pagani

Dr.-Ing. Mohamed Selim

FAIRe

Frugal Artificial Intelligence in Resource-limited environments

Artificial intelligence (AI) is finding increasingly diverse applications in the physical world, especially in embedded, cyber-physical devices with limited resources and under demanding conditions. This type of AI is referred to as “Frugal AI” and is characterised by low memory requirements, reduced computing power and the use of less data. The FAIRe (Frugal Artificial Intelligence in Resource-limited environments) project of DFKI and the French computer science institute Inria is developing a comprehensive approach for all abstraction layers of AI applications at the edge.

Edge devices such as driver assistance and infotainment systems in cars, medical devices, manufacturing or service robots and mobile phones have nowhere near the resources of huge cloud data centres that modern machine learning applications require. The challenge is to deal with limited computing power, limited storage space and limited power consumption.

FAIRe aims to enable the deployment of AI applications on mobile devices through an innovative approach that reduces model size and computational overhead by quantising the network, optimising the network architecture, optimising the computations, and finally executing on specialised hardware (e.g. RISC-V-based processors or FPGAs).
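
As a minimal, hedged sketch of one of these steps, the following uses PyTorch's post-training dynamic quantisation to shrink a small stand-in model from float32 to int8 weights. The model is hypothetical, and the project's actual RISC-V/FPGA toolchain is not shown:

```python
# Minimal sketch of one FAIRe-style step: post-training dynamic
# quantisation with PyTorch. The model is a hypothetical stand-in;
# the project's actual RISC-V/FPGA toolchain is not shown.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Replace float32 Linear weights with int8 equivalents, cutting weight
# storage roughly 4x and reducing compute cost on supported backends.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(quantized(x).shape)  # torch.Size([1, 10])
```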

This combines expertise from several DFKI research areas: the AI algorithms themselves, the hardware on which they run, and the compiler layer in between, which translates AI algorithms as efficiently as possible for a specific hardware platform. To demonstrate this approach in practice, the project team led by Prof. Dr. Christoph Lüth is conducting a case study on human-robot interaction (HRI) that covers all of these aspects.

Edge AI projects such as FAIRe contribute to making AI applications widely usable on mobile devices and open up new potential for applications.

Partners

  • Inria Taran
  • Inria Cash
  • Inria Corse

Contact

Prof. Dr. Christoph Lüth

LUMINOUS

Language Augmentation for Humanverse

LUMINOUS aims at the creation of the next generation of Language Augmented XR systems, where natural language-based communication and Multimodal Large Language Models (MLLMs) enable adaptation to individual, not predefined, user needs and unseen environments. This will enable future XR users to interact fluently with their environment while having instant access to constantly updated global as well as domain-specific knowledge sources to accomplish novel tasks.

We aim to exploit MLLMs injected with domain-specific knowledge to describe novel tasks on user demand. These descriptions are then communicated through a speech interface and/or a task-adaptable avatar (e.g., coach or teacher) in terms of different visual aids and procedural steps for accomplishing the task. Language-driven specification of the style, facial expressions, and specific attitudes of virtual avatars will facilitate generalisable and situation-aware communication in multiple use cases and different sectors. In parallel, the LLMs will benefit by identifying new objects that were not part of their training data and describing them in a way that makes them visually recognisable.

Our results will be prototyped and tested in three pilots, focussing on neurorehabilitation (support of stroke patients with language impairments), immersive industrial safety training, and 3D architectural design review. A consortium of six leading R&D institutes, expert in six different disciplines (AI, Augmented Vision, NLP, Computer Graphics, Neurorehabilitation, Ethics), will follow a challenging work plan, aiming to bring about a new era at the crossroads of two of the most promising current technological developments (LLM/AI and XR), made in Europe.

Partners

  1. Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
  2. Ludus Tech SL
  3. Mindesk Societa a Responsabilita Limitata
  4. Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V.
  5. Universidad del Pais Vasco/Euskal Herriko Unibertsitatea
  6. Fundación Centro de Tecnologias de Interacción Visual y Comunicaciones Vicomtech
  7. University College Dublin, National University of Ireland
  8. Hypercliq IKE
  9. Ricoh International B.V. – Niederlassung Deutschland
  10. MindMaze SA
  11. Centre Hospitalier Universitaire Vaudois
  12. University College London

Contact

Muhammad Zeshan Afzal

Prof. Dr. Didier Stricker

BERTHA

BEhavioural Replication of Human drivers for CCAM

The Horizon Europe project BERTHA kicked off on November 22nd-24th in Valencia, Spain. The project has been granted €7,981,799.50 by the European Commission to develop a Driver Behavioral Model (DBM) that can be used in connected autonomous vehicles to make them safer and more human-like. The resulting DBM will be made available on an open-source HUB to validate its feasibility, and it will also be implemented in CARLA, an open-source autonomous driving simulator.

The industry of Connected, Cooperative, and Automated Mobility (CCAM) presents important opportunities for the European Union. However, its deployment requires new tools that enable the design and analysis of autonomous vehicle components, together with their digital validation, and a common language between tier suppliers and OEMs.

One of the shortcomings is the lack of a validated and scientifically grounded Driver Behavioral Model (DBM) covering the aspects of human driving performance, which would make it possible to understand and test the interaction of connected autonomous vehicles (CAVs) with other cars in a safer and more predictable way from a human perspective.

Therefore, a Driver Behavioral Model could guarantee the digital validation of autonomous vehicle components and, if incorporated into ECU software, could generate a more human-like response from such vehicles, thus increasing their acceptance.

To cover this need in the CCAM industry, the BERTHA project will develop a scalable and probabilistic Driver Behavioral Model (DBM), based largely on Bayesian Belief Networks, which will be key to achieving safer and more human-like autonomous vehicles.
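
For illustration only, a driver-behaviour fragment can be expressed as a small Bayesian network, sketched here with the pgmpy library; the variables and probabilities below are invented and are not BERTHA's actual DBM:

```python
# Invented toy fragment of a driver model as a Bayesian network, using
# the pgmpy library (import path as of pgmpy 0.x). The variables and
# probabilities are illustrative only, not BERTHA's actual DBM.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Traffic density and driver aggressiveness jointly influence overtaking.
dbm = BayesianNetwork([("traffic", "overtake"), ("aggressive", "overtake")])

cpd_traffic = TabularCPD("traffic", 2, [[0.7], [0.3]])        # 0 = light, 1 = dense
cpd_aggressive = TabularCPD("aggressive", 2, [[0.8], [0.2]])  # 0 = calm, 1 = aggressive
cpd_overtake = TabularCPD(
    "overtake", 2,
    # P(overtake | traffic, aggressive); columns over parent combinations
    [[0.6, 0.2, 0.9, 0.5],   # overtake = no
     [0.4, 0.8, 0.1, 0.5]],  # overtake = yes
    evidence=["traffic", "aggressive"], evidence_card=[2, 2],
)
dbm.add_cpds(cpd_traffic, cpd_aggressive, cpd_overtake)
assert dbm.check_model()

# P(overtake | dense traffic, calm driver)
print(VariableElimination(dbm).query(["overtake"],
                                     evidence={"traffic": 1, "aggressive": 0}))
```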

The new DBM will be implemented on an open-source HUB, a repository that will allow industrial validation of its technological and practical feasibility and will serve as a unique approach to the model's worldwide scalability.

The resulting DBM will be translated into CARLA, an open-source simulator for autonomous driving research developed by the Spanish partner Computer Vision Center. The implementation of BERTHA's DBM will use diverse demos that allow new driving models to be built in the simulator. These can be embedded in different immersive driving simulators, such as IBV's HAV.

BERTHA will also develop a methodology which, thanks to the HUB, will share the model with the scientific community to ease its growth. Moreover, its results will include a set of interrelated demonstrators to show the DBM approach as a reference to design human-like, easily predictable, and acceptable behaviour of automated driving functions in mixed traffic scenarios.

Partners

  • Instituto de Biomecanica de Valencia (ES)
  • Institut Vedecom (FR)
  • Universite Gustave Eiffel (FR)
  • German Research Center for Artificial Intelligence (DE)
  • Computer Vision Center (ES)
  • Altran Deutschland (DE)
  • Continental Automotive France (FR)
  • CIDAUT Foundation (ES)
  • Austrian Institute of Technology (AT)
  • Universitat de València (ES)
  • Europcar International (FR)
  • FI Group (PT)
  • Panasonic Automotive Systems Europe (DE)
  • Korea Transport Institute (KOTI)

Contact

Dr.-Ing. Christian Müller

Dr.-Ing. Jason Raphael Rambach

SHARESPACE

Embodied Social Experiences in Hybrid Shared Spaces

SHARESPACE will demonstrate a radically new technology for promoting ethical and social interaction in eXtended Reality (XR) Shared Hybrid Spaces (SHS), anchored in human sensorimotor communication. Our core concept is to identify and segment social sensorimotor primitives and reconstruct them in hybrid settings to build continuous, embodied, and rich human-avatar experiences.

To achieve this, three interconnected science-towards-technology breakthroughs will be delivered:

  • novel computational cognitive architectures,
  • a unique self-calibrating body sensor network, and
  • fully mobile spatial Augmented Reality (AR) and virtual human rendering.

We will create a library of social motion primitives and use them to design AI-based architectures for our artificial agents. SHARESPACE mobile capturing technologies combine loosely-coupled visual-inertial tracking of full-body kinematics, hand pose and facial expression, incorporating novel neural encoding/decoding functionalities, together with local context-aware animations and highly realistic neural rendering.
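
As a loose sketch of what such a library of motion primitives could look like in code, the following stores short labelled pose-sequence clips and retrieves the closest one to a captured query; the data layout, labels and retrieval rule are assumptions for illustration, not project specifications:

```python
# Loose sketch of a motion-primitive library: short labelled pose-sequence
# clips retrieved by similarity to a captured query. Data layout, labels
# and the retrieval rule are assumptions for illustration.
import numpy as np

class PrimitiveLibrary:
    def __init__(self):
        self.labels, self.clips = [], []

    def add(self, label, clip):
        """clip: array of shape (frames, joints, 3) for one segmented primitive."""
        self.labels.append(label)
        self.clips.append(np.asarray(clip, dtype=float))

    def nearest(self, query):
        """Return the label of the clip closest to the query (same shape)."""
        q = np.asarray(query, dtype=float)
        dists = [np.linalg.norm(q - c) for c in self.clips]
        return self.labels[int(np.argmin(dists))]

lib = PrimitiveLibrary()
rng = np.random.default_rng(0)
lib.add("wave", rng.normal(size=(30, 25, 3)))  # 30 frames, 25 joints
lib.add("nod", rng.normal(size=(30, 25, 3)))
print(lib.nearest(lib.clips[0] + 0.01))  # -> wave
```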

Our technology will be iteratively tested in two proof-of-principle studies involving human and artificial agents interacting in SHS, and in three real-world use-case scenarios in Health, Sport and Art. We will demonstrate a fully functional prototype of SHARESPACE tailored to the agents' personalised characteristics (gender, culture, and social dispositions). SHARESPACE will support community building and exploitation with concrete initiatives, including (i) public engagement around our research and innovation, and (ii) promoting high-tech innovation and early transfer to our deep-tech companies, as premises for the consolidation of human-centric and sovereign European market areas such as industrial AR and SHS, eHealth and tele-health. Our long-term vision is to bring XR to a radically new level of presence and sociality by reconstructing sensorimotor primitives that enable ethical, trusted and inclusive modes of social interaction.

Partners

  • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany
  • UNIVERSITE DE MONTPELLIER (UM), France
  • CRDC NUOVE TECNOLOGIE PER LE ATTIVITA PRODUTTIVE SCARL (CRdC), Italy
  • UNIVERSITAETSKLINIKUM HAMBURG-EPPENDORF (UKE), Germany
  • ALE INTERNATIONAL (ALE), France
  • UNIVERSITAT JAUME I DE CASTELLON (UJI), Spain
  • GOLAEM SA (GOLAEM), France
  • SIA LIGHTSPACE TECHNOLOGIES (LST), Latvia
  • CYENS CENTRE OF EXCELLENCE (CYENS), Cyprus
  • RICOH INTERNATIONAL BV (RICOH), Netherlands
  • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
  • ARS ELECTRONICA LINZ GMBH & CO KG (AE), Austria
  • FUNDACIO HOSPITAL UNIVERSITARI VALL D’HEBRON -INSTITUT DE RECERCA (VHIR), Spain

Contact

Prof. Dr. Didier Stricker

OrthoSuPer

Secure data platform and intelligent sensor technology for the orthopaedic care of the future

Musculoskeletal disorders, above all back and knee complaints, are among the most frequently diagnosed illnesses in Germany. Partly work-related, they account for a large share of incapacity to work, causing considerable costs for companies and the healthcare system. In this context, the largest volume of health insurance services falls on the prescription of physiotherapy. For the patients affected, this means a considerable loss of quality of life: pain, as well as lengthy diagnostic and therapy processes that are tied to numerous treatment appointments and referrals to the respective treating physicians and physiotherapists.

OrthoSuPer aims to develop an intelligent wearable and a computer vision technology for orthopaedic cases such as knee rehabilitation and orthopaedic-technical care. A shared data platform in the form of an app will address physicians, physiotherapists and orthopaedic technicians as well as patients. Integrating mobile motion analysis via the wearable, together with a markerless camera-based system, into the digital process chain offers enormous advantages for diagnostics and for monitoring the course of the condition, treatment progress and therapy cycles through to the end of therapy. Aftercare in the form of rehabilitation, check-ups and prevention can also be improved substantially, considerably easing the workload of treating physicians and therapists.

In Germany, around 60,000 patients per year receive a total knee endoprosthesis. According to health insurers, roughly 30% of these cases, about 20,000 patients per year, could be avoided or postponed through a smart wearable. The expertise of the individual process participants is linked through digitalised, automated processes, for example via secure communication and documentation in the mobile application. Digitalisation saves time in patient care and makes the care process more transparent for everyone involved; the data help to objectify outcomes while improving orthopaedic care. Patients benefit from considerably less effort and from higher-quality diagnostics, therapy and aftercare, and they gain substantially in control and independence. For health insurers and the healthcare system, OrthoSuPer can contribute significantly to optimising resources through networked, transparent medical processes across the entire patient journey.
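
As a hedged example of one building block such markerless motion analysis needs, the following computes a knee flexion angle from three pose keypoints; the keypoint source and the example coordinates are hypothetical:

```python
# Hedged sketch of one building block of markerless motion analysis:
# knee flexion from three pose keypoints. Keypoint source and the example
# coordinates are hypothetical.
import numpy as np

def knee_angle_deg(hip, knee, ankle):
    """Angle between thigh and shank vectors at the knee, in degrees.
    About 180 deg = fully extended leg; smaller values = more flexion."""
    thigh = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    shank = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cos = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Slightly bent knee from camera-based 3D keypoints (metres).
print(knee_angle_deg([0.0, 1.0, 0.0], [0.0, 0.55, 0.05], [0.0, 0.1, 0.0]))
# -> roughly 167 degrees
```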

Partners

  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
  • Ottobock SE & Co. KGaA
  • SIMI Reality Motion Systems GmbH
  • wearHEALTH, TU Kaiserslautern (TUK)
  • Routine Health GmbH
  • Orthopädisches Krankenhaus Schloss Werneck

Contact

Michael Lorenz, M.Sc.

Daniela Wittmann

AI-Observer

Artificial Intelligence for Earth Observation Twinning

Artificial Intelligence for Earth Observation Twinning

Artificial Intelligence (AI) has a major impact on many sectors, and its influence is predicted to expand rapidly in the coming years. One area with considerable untapped potential for AI is Earth Observation (EO), where it can be used to manage large datasets, find new insights in data and generate new products and services. AI is one of the missing core areas that need to be integrated into the EO capabilities of the ERATOSTHENES Centre of Excellence (ECoE). The AI-OBSERVER project aims to significantly strengthen and stimulate the scientific excellence and innovation capacity of the ECoE, as well as its research management and administrative skills, through several capacity-building activities on AI for EO applications in the Disaster Risk Reduction thematic area. The project will upgrade and modernise the ECoE's existing Resilient Society department, as well as its research management and administration departments, assisting the ECoE to reach its long-term objective of raised excellence in AI for EO on environmental hazards. A close and strategic partnership between the ECoE from Cyprus (a Widening country) and two internationally leading research institutions, the German Research Centre for Artificial Intelligence (DFKI) from Germany and the University of Rome Tor Vergata (UNITOV) from Italy, will lead to a research exploratory project on the application of AI on EO for multi-hazard monitoring and assessment in Cyprus. Moreover, CELLOCK Ltd. (CLK), the project's industrial partner, will lead the commercialisation, exploitation and product development aspects of AI-OBSERVER and its exploratory project outputs. All outputs will be disseminated and communicated to stakeholders, the research community and the public, assisting the ECoE to accomplish its exploitation goals by creating strong links with various stakeholders from academia and industry, in Cyprus and beyond, that the ECoE will capitalise on long after the end of the project.

Partners

  • ERATOSTHENES Centre of Excellence (ECoE), Cyprus (coordinator)
  • DFKI, Germany
  • University of Rome Tor Vergata, Italy
  • CELLOCK Ltd., Cyprus

Contact

Dr. Dipl.-Inf. Gerd Reis

TWIN4TRUCKS

TWIN4TRUCKS – Digital Twin and AI in the Networked Factory for Integrated Commercial Vehicle Production, Logistics and Quality Assurance

The Twin4Trucks (T4T) research project started on 1 September 2022. It combines scientific research and industrial implementation in a unique way. The project consortium consists of six partners from research and industry. Daimler Truck AG (DTAG), the world's largest commercial vehicle manufacturer, leads the consortium; with the help of Twin4Trucks, its production is to be optimised through the implementation of new technologies such as digital twins and a Digital Foundation Layer (DFL). The Technologie-Initiative SmartFactory Kaiserslautern (SF-KL) and the German Research Center for Artificial Intelligence (DFKI), as visionary research institutions, set the direction of development with Production Level 4. The IT service provider Atos is responsible for data exchange via Gaia-X, for quality assurance using AI methods and for the implementation concept of the DFL. Infosys is responsible for the network architecture, 5G networks and integration services. PFALZKOM is building a regional edge cloud and a data centre, along with the Gaia-X implementation and operating concepts for networks.

Contact

Simon Bergweiler

Dr.-Ing. Jason Raphael Rambach

HERON

Self-referenced Mobile Collaborative Robotics applied to collaborative and flexible production systems

Self-referenced Mobile Collaborative Robotics applied to collaborative and flexible production systems

The project will deliver a complete, novel vision-guided mobile robotic solution to automate assembly and screwdriving operations in final assembly, which are currently performed manually. The solution will include a robotic cell integrating real-time process control to guarantee process quality, together with a digital twin platform for accurate process simulation and trajectory optimization to minimize setup time and increase flexibility. A demonstrator performing quality control procedures and screwing of automotive parts onto a vehicle chassis will be built for system validation.
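
As a simplified illustration of the trajectory-optimization idea, the sketch below orders screwdriving points with a greedy nearest-neighbour heuristic to shorten the tool path; the coordinates are invented, and the project's digital twin would use far more sophisticated simulation-based optimization:

```python
# Simplified illustration of ordering screwdriving points to shorten the
# tool path (greedy nearest-neighbour). Coordinates are invented; the
# project's digital twin would use simulation-based trajectory optimisation.
import numpy as np

def order_points(points, start=0):
    pts = np.asarray(points, dtype=float)
    remaining = set(range(len(pts)))
    path = [start]
    remaining.remove(start)
    while remaining:
        last = pts[path[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(pts[i] - last))
        path.append(nxt)
        remaining.remove(nxt)
    return path

screws = [(0, 0), (4, 0), (0, 1), (4, 1), (2, 3)]  # hypothetical positions
print(order_points(screws))  # -> [0, 2, 4, 3, 1]
```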

Partners

Aldakin S.L (Spain)

Simumatik A.B (Sweden)

Visometry GmbH (Germany)

Contact

Dr.-Ing. Alain Pagani

HAIKU

Human AI teaming Knowledge and Understanding for aviation safety

It is essential, both for safe operations and for society in general, that the people who currently keep aviation so safe can work with, train and supervise these AI systems, and that future autonomous AI systems make judgements and decisions that would be acceptable to humans. HAIKU will pave the way for human-centric AI by developing new AI-based ‘Digital Assistants’, and associated Human-AI Teaming practices, guidance and assurance processes, via the exploration of interactive AI prototypes in a wide range of aviation contexts.

Therefore, HAIKU will:

  1. Design and develop a set of AI assistants, demonstrated in the different use cases.
  2. Develop a comprehensive Human Factors design guidance and methods capability (‘HF4AI’) for developing safe, effective and trustworthy Digital Assistants for aviation, integrating and expanding on existing state-of-the-art guidance.
  3. Conduct controlled experiments with high operational relevance, illustrating the tasks, roles, autonomy and team performance of the Digital Assistants in a range of normal and emergency scenarios.
  4. Develop new safety and validation assurance methods for Digital Assistants, to facilitate their early integration into aviation systems by aviation stakeholders and regulatory authorities.
  5. Deliver guidance on socially acceptable AI in safety-critical operations and on maintaining aviation's strong safety record.

Partners

  1. Deep Blue (DBL), Italy
  2. EUROCONTROL (ECTL), Belgium
  3. FerroNATS Air Traffic Services (FerroNATS), Spain
  4. Center for Human Performance Research (CHPR), Netherlands
  5. Linköping University (LiU), Sweden
  6. Thales AVS (TAVS), France
  7. Institut Polytechnique de Bordeaux (Bordeaux INP), France
  8. Centre Aquitain des Technologies de l'Information Electroniques (CATIE), France
  9. Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Germany
  10. Engineering Ingegneria Informatica SpA (ENG), Italy
  11. Luftfartsverket, Air Navigation Service Provider Sweden (LFV), Sweden
  12. Ecole Nationale de l'Aviation Civile (ENAC), France
  13. TUI Airways Ltd (TUI), United Kingdom
  14. Suite5 Data Intelligence Solutions Limited (Suite5), Cyprus
  15. Airholding SA (EMBRT), Portugal
  16. Embraer SA (EMBSA), Brazil
  17. Ethniko Kentro Erevnas Kai Technologikis Anaptyxis (CERTH), Greece
  18. London Luton Airport Operations Ltd (LLA), United Kingdom

Contact

Nareg Minaskan Karabid

Dr.-Ing. Alain Pagani