FAIRe

Frugal Artificial Intelligence in Resource-limited environments

Artificial intelligence (AI) is finding increasingly diverse applications in the physical world, especially in embedded, cyber-physical devices with limited resources and under demanding conditions. This type of AI is referred to as “Frugal AI” and is characterised by low memory requirements, reduced computing power and the use of less data. The FAIRe (Frugal Artificial Intelligence in Resource-limited environments) project of DFKI and the French computer science institute Inria is developing a comprehensive approach for all abstraction layers of AI applications at the edge.

Edge devices such as driver assistance and infotainment systems in cars, medical devices, manufacturing or service robots and mobile phones have nowhere near the resources of huge cloud data centres that modern machine learning applications require. The challenge is to deal with limited computing power, limited storage space and limited power consumption.

FAIRe aims to enable the deployment of AI applications on mobile devices through an innovative approach to reduce model size and computational overhead by quantising the network, optimising the network architecture, optimising the computations and finally executing on specialised hardware (e.g. RISC-V based or FPGAs).
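The first of these reduction steps, quantisation, can be sketched in a few lines. The example below is a generic illustration of symmetric int8 post-training quantisation, not FAIRe's actual toolchain: weights stored as 8-bit integers take a quarter of the memory of 32-bit floats, at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantisation of a weight tensor to int8.

    Returns the quantised weights and the scale needed to map them
    back to floating point (dequantisation).
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# A toy weight matrix: int8 storage needs 4x less memory than float32,
# and the round-trip error is bounded by half a quantisation step.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```

In practice, frameworks apply such schemes per layer or per channel and pair them with integer kernels on the target hardware, which is where the compiler and hardware layers of the project come in.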

This combines expertise from several DFKI research areas: the actual AI algorithms, the hardware on which they run, and the compiler layer in between, which translates AI algorithms as efficiently as possible for a specific hardware. To demonstrate this approach in practice, the project team led by Prof. Dr. Christoph Lüth is conducting a case study on human-robot interaction (HRI) that covers all of these aspects.

Edge AI projects such as FAIRe contribute to making AI applications widely usable on mobile devices and open up new potential for applications.

Partners

  • Inria Taran
  • Inria Cash
  • Inria Corse

Contact

Prof. Dr. Christoph Lüth

Luminous

Language Augmentation for Humanverse

LUMINOUS aims at creating the next generation of Language Augmented XR systems, in which natural language-based communication and Multimodal Large Language Models (MLLMs) enable adaptation to individual, not predefined, user needs and to unseen environments. This will enable future XR users to interact fluently with their environment, while having instant access to constantly updated global as well as domain-specific knowledge sources to accomplish novel tasks.

We aim to exploit MLLMs injected with domain-specific knowledge to describe novel tasks on user demand. These descriptions are then communicated through a speech interface and/or a task-adaptable avatar (e.g., coach/teacher) in terms of different visual aids and procedural steps for accomplishing the task. Language-driven specification of the style, facial expressions, and specific attitudes of virtual avatars will facilitate generalisable and situation-aware communication in multiple use cases and different sectors. In parallel, LLMs will benefit by identifying new objects that were not part of their training data and then describing them in a way that makes them visually recognizable.

Our results will be prototyped and tested in three pilots, focussing on neurorehabilitation (support of stroke patients with language impairments), immersive industrial safety training, and 3D architectural design review. A consortium of six leading R&D institutes, experts in six different disciplines (AI, Augmented Vision, NLP, Computer Graphics, Neurorehabilitation, Ethics), will follow a challenging workplan, aiming to bring about a new era at the crossroads of two of the most promising current technological developments (LLM/AI and XR), made in Europe.

Partners

  1. Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
  2. Ludus Tech SL
  3. Mindesk Societa a Responsabilita Limitata
  4. Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V.
  5. Universidad del Pais Vasco/Euskal Herriko Universitatea
  6. Fundación Centro de Tecnologias de Interacción visual y comunicaciones Vicomtech
  7. University College Dublin, National University of Ireland
  8. Hypercliq IKE
  9. Ricoh International B.V. – Niederlassung Deutschland
  10. MindMaze SA
  11. Centre Hospitalier Universitaire Vaudois
  12. University College London

Contact

Muhammad Zeshan Afzal

Prof. Dr. Didier Stricker

BERTHA

BEhavioural Replication of Human drivers for CCAM

The Horizon Europe project BERTHA kicked off on November 22nd-24th in Valencia, Spain. The project has been granted €7,981,799.50 from the European Commission to develop a Driver Behavioral Model (DBM) that can be used in connected autonomous vehicles to make them safer and more human-like. The resulting DBM will be available on an open-source HUB to validate its feasibility, and it will also be implemented in CARLA, an open-source autonomous driving simulator.

The industry of Connected, Cooperative, and Automated Mobility (CCAM) presents important opportunities for the European Union. However, its deployment requires new tools that enable the design and analysis of autonomous vehicle components, together with their digital validation, and a common language between Tier vendors and OEM manufacturers.

One key shortcoming is the lack of a validated, scientifically based Driver Behavioral Model (DBM) covering the aspects of human driving performance. Such a model would make it possible to understand and test the interaction of connected autonomous vehicles (CAVs) with other cars in a way that is safer and more predictable from a human perspective.

A Driver Behavioral Model could therefore underpin the digital validation of autonomous vehicle components and, if incorporated into the ECU software, could generate a more human-like response from such vehicles, thus increasing their acceptance.

To cover this need in the CCAM industry, the BERTHA project will develop a scalable and probabilistic Driver Behavioral Model (DBM), based mainly on Bayesian belief networks, which will be key to achieving safer and more human-like autonomous vehicles.
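As a toy illustration of the Bayesian-network idea behind such a model (all variable names and probabilities below are invented for this sketch and not taken from BERTHA), a discrete network chains a prior with conditional probability tables and marginalises over hidden variables to predict a driver action:

```python
# Toy discrete Bayesian network for driver behaviour.
# Structure: TrafficDensity -> GapAccepted -> LaneChange
# All variable names and probabilities are illustrative only.

p_density = {"low": 0.6, "high": 0.4}           # prior P(density)
p_gap_given_density = {                         # CPT P(gap_accepted | density)
    "low":  {"yes": 0.8, "no": 0.2},
    "high": {"yes": 0.3, "no": 0.7},
}
p_change_given_gap = {                          # CPT P(lane_change | gap_accepted)
    "yes": {"change": 0.7,  "stay": 0.3},
    "no":  {"change": 0.05, "stay": 0.95},
}

def p_lane_change(action):
    """Marginal P(lane_change = action), summing out the hidden variables."""
    total = 0.0
    for density, pd in p_density.items():
        for gap, pg in p_gap_given_density[density].items():
            total += pd * pg * p_change_given_gap[gap][action]
    return total

print(round(p_lane_change("change"), 4))  # ≈ 0.44
```

Conditioning the same tables on observed evidence (e.g. a known traffic density) yields the probabilistic, scenario-dependent behaviour a DBM needs, which is what makes the approach scalable to richer driving contexts.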

The new DBM will be implemented on an open-source HUB, a repository that will allow industrial validation of its technological and practical feasibility, and become a unique approach for the model’s worldwide scalability.

The resulting DBM will be translated into CARLA, an open-source simulator for autonomous driving research developed by the Spanish partner Computer Vision Center. The implementation of BERTHA’s DBM will use diverse demos that allow new driving models to be built in the simulator. These can be embedded in different immersive driving simulators, such as HAV from IBV.

BERTHA will also develop a methodology which, thanks to the HUB, will share the model with the scientific community to ease its growth. Moreover, its results will include a set of interrelated demonstrators to show the DBM approach as a reference to design human-like, easily predictable, and acceptable behaviour of automated driving functions in mixed traffic scenarios.

Partners

  • Instituto de Biomecanica de Valencia (ES)
  • Institut Vedecom (FR)
  • Universite Gustave Eiffel (FR)
  • German Research Center for Artificial Intelligence (DE)
  • Computer Vision Center (ES)
  • Altran Deutschland (DE)
  • Continental Automotive France (FR)
  • CIDAUT Foundation (ES)
  • Austrian Institute of Technology (AT)
  • Universitat de València (ES)
  • Europcar International (FR)
  • FI Group (PT)
  • Panasonic Automotive Systems Europe (DE)
  • Korea Transport Institute (KOTI)

Contact

Dr.-Ing. Christian Müller

Dr.-Ing. Jason Raphael Rambach

HERON

Self-referenced Mobile Collaborative Robotics applied to collaborative and flexible production systems

The project will deliver a complete, novel vision-guided mobile robotic solution to automate the assembly and screwdriving of final assembly operations, which are currently performed manually. The solution will include a robotic cell integrating real-time process control to guarantee process quality, as well as a digital twin platform for accurate process simulation and trajectory optimization to minimize setup time and increase flexibility. A demonstrator will be built for system validation, performing quality control procedures and the screwing of automotive parts onto the chassis of a vehicle.

Partners

Aldakin S.L (Spain)

Simumatik A.B (Sweden)

Visometry GmbH (Germany)

Contact

Dr.-Ing. Alain Pagani

HAIKU

Human AI teaming Knowledge and Understanding for aviation safety

It is essential both for safe operations, and for society in general, that the people who currently keep aviation so safe can work with, train and supervise these AI systems, and that future autonomous AI systems make judgements and decisions that would be acceptable to humans. HAIKU will pave the way for human-centric-AI by developing new AI-based ‘Digital Assistants’, and associated Human-AI Teaming practices, guidance and assurance processes, via the exploration of interactive AI prototypes in a wide range of aviation contexts.

Therefore, HAIKU will:

  1. Design and develop a set of AI assistants, demonstrated in the different use cases.
  2. Develop a comprehensive Human Factors design guidance and methods capability (‘HF4AI’) on how to develop safe, effective and trustworthy Digital Assistants for aviation, integrating and expanding on existing state-of-the-art guidance.
  3. Conduct controlled experiments with high operational relevance, illustrating the tasks, roles, autonomy and team performance of the Digital Assistants in a range of normal and emergency scenarios.
  4. Develop new safety and validation assurance methods for Digital Assistants, to facilitate their early integration into aviation systems by aviation stakeholders and regulatory authorities.
  5. Deliver guidance on socially acceptable AI in safety-critical operations, and for maintaining aviation’s strong safety record.

Partners

  1. Deep Blue (DBL), Italy
  2. EUROCONTROL (ECTL), Belgium
  3. FerroNATS Air Traffic Services (FerroNATS), Spain
  4. Center for Human Performance Research (CHPR), Netherlands
  5. Linköping University (LiU), Sweden
  6. Thales AVS (TAVS), France
  7. Institut Polytechnique de Bordeaux (Bordeaux INP), France
  8. Centre Aquitain des Technologies de l’Information et Electroniques (CATIE), France
  9. Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Germany
  10. Engineering Ingegneria Informatica SpA (ENG), Italy
  11. Luftfartsverket, Air Navigation Service Provider Sweden (LFV), Sweden
  12. Ecole Nationale de l’Aviation Civile (ENAC), France
  13. TUI Airways Ltd (TUI), United Kingdom
  14. Suite5 Data Intelligence Solutions Limited (Suite5), Cyprus
  15. Airholding SA (EMBRT), Portugal
  16. Embraer SA (EMBSA), Brazil
  17. Ethniko Kentro Erevnas Kai Technologikis Anaptyxis (CERTH), Greece
  18. London Luton Airport Operations Ltd (LLA), United Kingdom

Contact

Nareg Minaskan Karabid

Dr.-Ing. Alain Pagani

CORTEX2

Cooperative Real-Time Experience with Extended reality

The consortium of CORTEX2 — “COoperative Real-Time EXperiences with EXtended reality” — is proud to announce the official start of this European initiative, funded by the European Commission under the Horizon Europe research and innovation programme.

The COVID-19 pandemic pushed individuals and companies worldwide to work primarily from home or completely change their work model in order to stay in business. The share of employees who usually or sometimes work from home rose from 14.6% to 24.4% between 2019 and 2021. In Europe, the proportion of people who work remotely went from 5% to 40% as a result of the pandemic. Today, all the signs are that remote work is here to stay: 72% of employees say their organization is planning some form of permanent teleworking in the future, and 97% would like to work remotely, at least part of their working day, for the rest of their career. But not all organizations are ready to adapt to this new reality, where team collaboration is vital.

Existing services and applications aimed at facilitating remote team collaboration — from video conferencing systems to project management platforms — are not yet ready to efficiently and effectively support all types of activities. And extended reality (XR)-based tools, which can enhance remote collaboration and communication, present significant challenges for most businesses.

The mission of CORTEX2 is to democratize access to the remote collaboration offered by next-generation XR experiences across a wide range of industries and SMEs.

To this aim, CORTEX2 will provide:

  • Full support for AR experiences as an extension of video conferencing systems when using heterogeneous service end devices, through a novel Mediation Gateway platform.
  • Resource-efficient teleconferencing tools through innovative transmission methods and automatic summarization of shared long documents.
  • Easy-to-use and powerful XR experiences with instant 3D reconstruction of environments and objects, and simplified use of natural gestures in collaborative meetings.
  • Fusion of vision and audio for multichannel semantic interpretation and enhanced tools such as virtual conversational agents and automatic meeting summarization.
  • Full integration of internet of things (IoT) devices into XR experiences to optimize interaction with running systems and processes.
  • Optimal extension possibilities and broad adoption by delivering the core system with open APIs and launching open calls to enable further technical extensions, more comprehensive use cases, and deeper evaluation and assessment.

Overall, we will invest a total of 4 million Euros in two open calls, which will be aimed at recruiting tech startups/SMEs to co-develop CORTEX2; engaging new use-cases from different domains to demonstrate CORTEX2 replication through specific integration paths; assessing and validating the social impact associated with XR technology adoption in internal and external use cases.

The first call will be published in October 2023 and will collect two types of applications: Co-development and Use-case. The second will be published in April 2024, targeting only Co-development projects.

The CORTEX2 consortium is formed by 10 organizations in 7 countries, which will work together for 36 months. The German Research Center for Artificial Intelligence (DFKI) leads the consortium.

More information on the Project Website: https://cortex2.eu

Partners

  1. DFKI – Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Germany
  2. LINAGORA – GSO, France
  3. ALE – Alcatel-Lucent Entreprise International, France
  4. ICOM – Intracom SA Telecom Solutions, Greece
  5. AUS – AUSTRALO Alpha Lab MTÜ, Estonia
  6. F6S – F6S Network Limited, Ireland
  7. KUL – Katholieke Universiteit Leuven, Belgium
  8. CEA – Commissariat à l’énergie atomique et aux énergies alternatives, France
  9. ACT – Actimage GmbH, Germany
  10. UJI – Universitat Jaume I De Castellon

Contact

Dr.-Ing. Alain Pagani

FLUENTLY

Fluently - the essence of human-robot interaction

Fluently leverages the latest advancements in AI-driven decision-making to achieve true social collaboration between humans and machines while coping with extremely dynamic manufacturing contexts. The Fluently Smart Interface unit features: 1) interpretation of speech content, speech tone and gestures, automatically translated into robot instructions, making industrial robots accessible to any skill profile; 2) assessment of the operator’s state through a dedicated sensor infrastructure that complements persistent context awareness to enrich an AI-based behavioural framework in charge of triggering the generation of specific robot strategies; 3) modelling of products and production changes in a way that robots, in cooperation with humans, can recognize, interpret and match. Robots equipped with Fluently will not only constantly adapt to humans’ physical and cognitive loads, but will also learn and build experience with their human teammates to establish a manufacturing practice built on quality and wellbeing.

FLUENTLY targets three large-scale industrial value chains playing an instrumental role in the present and future manufacturing industry in Europe: 1) lithium cell battery dismantling and recycling (fully manual); 2) inspection and repair of aerospace engines (partially automated); 3) laser-based multi-techs for complex metal component manufacturing, from joining and cutting to additive manufacturing and surface functionalization (fully automated in the equipment but strongly dependent upon human process assessment).

Partners

  • REPLY DEUTSCHLAND SE (Reply), Germany,
  • STMICROELECTRONICS SRL (STM), Italy,
  • BIT & BRAIN TECHNOLOGIES SL (BBR), Spain,
  • MORPHICA SOCIETA A RESPONSABILITA LIMITATA (MOR), Italy,
  • IRIS SRL (IRIS), Italy,
  • SYSTHMATA YPOLOGISTIKIS ORASHS IRIDA LABS AE (IRIDA), Greece,
  • GLEECHI AB (GLE), Sweden,
  • FORENINGEN ODENSE ROBOTICS (ODE), Denmark,
  • TRANSITION TECHNOLOGIES PSC SPOLKA AKCYJNA (TT), Poland,
  • MALTA ELECTROMOBILITY MANUFACTURING LIMITED (MEM), Malta,
  • POLITECNICO DI TORINO (POLITO), Italy,
  • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany,
  • TECHNISCHE UNIVERSITEIT EINDHOVEN (TUe), Netherlands,
  • SYDDANSK UNIVERSITET (SDU), Denmark,
  • COMPETENCE INDUSTRY MANUFACTURING 40 SCARL (CIM), Italy,
  • PRIMA ADDITIVE SRL (PA), Italy,
  • SCUOLA UNIVERSITARIA PROFESSIONALE DELLA SVIZZERA ITALIANA (SUPSI), Switzerland,
  • MCH-TRONICS SAGL (MCH),Switzerland,
  • FANUC SWITZERLAND GMBH (FANUC Europe), Switzerland,
  • UNIVERSITY OF BATH (UBAH), United Kingdom
  • WASEDA UNIVERSITY (WUT), Japan

Contact

Dipl.-Inf. Bernd Kiefer

Dr.-Ing. Alain Pagani

HumanTech

Human Centered Technologies for a Safer and Greener European Construction Industry

The European construction industry faces three major challenges: improving its productivity, increasing the safety and wellbeing of its workforce, and shifting towards a green, resource-efficient industry. To address these challenges adequately, HumanTech proposes a human-centered approach, involving breakthrough technologies such as wearables for worker safety and support, and intelligent robotic technology that can harmoniously co-exist with human workers while also contributing to the green transition of the industry.

Our aim is to achieve major advances beyond the current state of the art in all of these technologies, advances that can have a disruptive effect on the way construction is conducted.

These advances will include:

  • Robotic devices equipped with vision and intelligence, enabling them to navigate autonomously and safely in a highly unstructured environment, collaborate with humans and dynamically update a semantic digital twin of the construction site.
  • Intelligent, unobtrusive worker protection and support equipment, ranging from exoskeletons triggered by wearable body-pose and strain sensors to wearable cameras and XR glasses that provide real-time worker localisation and guidance for the efficient and accurate fulfilment of their tasks.
  • An entirely new breed of Dynamic Semantic Digital Twins (DSDTs) of construction sites, simulating the current state of a construction site in detail at the geometric and semantic level, based on an extended BIM formulation (BIMxD).

Partners

  • Hypercliq IKE
  • Technische Universität Kaiserslautern
  • Scaled Robotics SL
  • Bundesanstalt für Arbeitsschutz und Arbeitsmedizin
  • Sci-Track GmbH
  • SINTEF Manufacturing AS
  • Acciona Construccion SA
  • STAM SRL
  • Holo-Industrie 4.0 Software GmbH
  • Fundacion Tecnalia Research & Innovation
  • Catenda AS
  • Technological University of the Shannon: Midlands Midwest
  • Ricoh International BV
  • Australo Interinnov Marketing Lab SL
  • Prinstones GmbH
  • Universita degli Studi di Padova
  • European Builders Confederation
  • Palfinger Structural Inspection GmbH
  • Züricher Hochschule für Angewandte Wissenschaften
  • Implenia Schweiz AG
  • Kajima Corporation

Contact

Dr. Bruno Walter Mirbach

Dr.-Ing. Jason Raphael Rambach

GreifbAR

Tangible Reality – dexterous interaction of users’ hands and fingers with real tools in mixed reality worlds

On 1 October 2021, the research project GreifbAR started under the leadership of DFKI (research area Augmented Reality). The goal of the GreifbAR project is to make mixed reality (MR) worlds, including virtual reality (VR) and augmented reality (AR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are paramount for performing precise tasks in many fields, but the capture of hand-object interaction in current MR systems is woefully inadequate. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR overcomes this limitation by introducing a sensing system that captures both the full hand grip, including the hand surface, and the object pose when users interact with real objects or tools. This sensing system will be integrated into a mixed reality training simulator and demonstrated in two relevant use cases: industrial assembly and surgical skills training. The usability and applicability, as well as the added value for training situations, will be thoroughly analysed through user studies.

Partners

  • Berliner Charite (University Medicine Berlin)
  • NMY (mixed reality applications for industrial and communication customers)
  • Uni Passau (Chair of Psychology with a focus on human-machine interaction)

Contact

Dr. Dipl.-Inf. Gerd Reis

Dr.-Ing. Nadia Robertini

Open6GHub

6G for Society and Sustainability

The project “Open6GHub” develops a 6G vision for sovereign citizens in a hyper-connected world from 2030 onwards. The aim of the Open6GHub is to contribute to a global 6G harmonization process and standard in the European context. We consider German interests in terms of our societal needs (sustainability, climate protection, data protection, resilience, …) while maintaining the competitiveness of our companies and our technological sovereignty. A further concern is the position of Germany and Europe in the international competition for 6G. The Open6GHub will contribute to the development of an overall 6G architecture, as well as end-to-end solutions in areas including, but not limited to: advanced network topologies with highly agile organic networking, security and resilience, THz and photonic transmission methods, sensor functionalities in the network and their intelligent use, and processing and application-specific radio protocols.

Partners

  • Deutsches Forschungszentrum für Künstliche Intelligenz DFKI GmbH (DFKI)
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
  • Fraunhofer FOKUS (FOKUS)
  • Fraunhofer IAF (IAF)
  • Fraunhofer SIT (SIT)
  • Leibniz-Institut für innovative Mikroelektronik (IHP)
  • Karlsruher Institut für Technologie (KIT)
  • Hasso-Plattner-Institut Potsdam (HPI)
  • RWTH Aachen University (RWTH)
  • Technische Universität Berlin (TUB)
  • Technische Universität Darmstadt (TUDa)
  • Technische Universität Ilmenau (ILM)
  • Technische Universität Kaiserslautern (TUK)
  • Universität Bremen (UB)
  • Universität Duisburg-Essen (UDE)
  • Albert-Ludwigs-Universität Freiburg (ALU)
  • Universität Stuttgart (UST)

Contact

Prof. Dr.-Ing. Hans Dieter Schotten