SocialWear

SocialWear – Socially Interactive Smart Fashion

In wearable computing, the focus has traditionally been on using garments as platforms for on-body sensing. The functionality of such systems is defined by sensing and computation. At present, fashion-design considerations are merely a means to an end: optimizing sensing and computing performance while minimizing discomfort for the user. In other words, within the traditional wearable-computing approach, the garment is essentially a simple container for sophisticated digital intelligence, but it does not bridge the gap between function and the user's actual needs. In parallel, the high-tech fashion community has been looking for ways to integrate electronics into new design concepts. Here, the focus has been on design aspects, while the digital function is often quite simple: typically some kind of light effect controlled by basic signals such as the amount of motion, pulse, or ambient conditions (light, sound, temperature), with little intelligent processing. In other words, in the traditional high-tech fashion approach, the digital part is a simple "add-on" to sophisticated design. Building on a unique set of competencies of the various DFKI groups involved, we aim to develop a new generation of smart fashion that combines sophisticated artificial intelligence with sophisticated design. To achieve this, we must rethink the entire classical process of developing both garments and the associated wearable electronics: fashion and electronics design criteria as well as implementation processes must be seamlessly integrable.
We will develop signal processing and learning methods that enable such smart garments to understand and respond to complex social environments, and design new interaction paradigms to enhance and mediate social interaction in new, subtle and rich ways. In doing so, we will consider a broad spectrum along the size of the social group and the transition between implicit and explicit interaction.

Partners

n/a

Contact

Dr. Patrick Gebhard

Dr.-Ing. Bo Zhou

RACKET

Rare Class Learning and Unknown Events Detection for Flexible Production

The RACKET project addresses the problem of detecting rare and unknown faults by combining model-based and machine learning methods. The approach is based on the assumption that a physical or procedural model of the manufacturing plant is available which is not fully specified and has uncertainties in its structure, parameters and variables. Gaps and errors in this model are detected and corrected by machine learning, resulting in a more realistic process model (the nominal model). This model can be used to simulate system behavior and estimate the future characteristics of a product.

Actual product defects can thus be attributed to anomalies in the output signal and to inconsistencies in the process variables, without the need for a known failure event or an accurate failure model. Faults can take many forms, e.g., geometric defects such as scratches, out-of-tolerance dimensional variables, or dynamic errors such as deviations between the estimated and actual product position on a conveyor belt, missed process steps, or incorrect path assignment in the production flow, and they can occur at both the product and the process level.
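The residual idea described above can be sketched in a few lines: a nominal model predicts the expected signal, and deviations beyond a tolerance are flagged as anomalies, with no fault model required. This is a minimal illustration under assumed values, not RACKET's actual implementation; the linear conveyor model and all numbers are invented.

```python
def nominal_position(t: float, speed: float = 0.5) -> float:
    """Toy nominal model: expected product position on a conveyor at time t."""
    return speed * t

def detect_anomalies(measurements, speed=0.5, tolerance=0.1):
    """Flag time steps where the measured position deviates from the
    nominal model by more than the tolerance -- no failure model needed."""
    anomalies = []
    for t, measured in measurements:
        residual = abs(measured - nominal_position(t, speed))
        if residual > tolerance:
            anomalies.append((t, residual))
    return anomalies

# A product slips on the belt from t=4 onwards:
data = [(0, 0.00), (1, 0.51), (2, 0.98), (3, 1.52), (4, 1.70), (5, 2.21)]
print(detect_anomalies(data))  # flags t=4 and t=5
```

In a real plant the nominal model would be the learned, corrected process model described above, and the tolerance would reflect measured process noise rather than a hand-picked constant.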

Contact

Carsten Harms, M.Sc.

Dr.-Ing. Alain Pagani

I-Nergy

Artificial Intelligence for Next Generation Energy

The spread of AI in the energy sector is expected to dramatically reshape the energy value chain in the coming years by improving the performance of business processes while increasing environmental sustainability, strengthening social relationships and propagating high social value among citizens. However, uncertain business cases, fragmented regulations, immature standards and the limited AI skills of SME workforces are currently hampering the full exploitation of AI along the energy value chain. I-NERGY will deliver: (a) financing support through Open Calls to third-party SMEs for validating new energy use cases and technology building blocks, as well as for developing new AI-based energy services, while fully aligning with AI4EU service requirements and strengthening SME competitiveness in AI for energy; (b) an open modular framework for supporting AI-on-demand in the energy sector by capitalising on state-of-the-art AI, IoT, semantics, federated learning and analytics tools, which leverage edge-level, AI-based, cross-sector, multi-stakeholder, sovereignty- and regulation-preserving interoperable data handling. I-NERGY aims at evolving, scaling up and demonstrating an innovative energy-tailored AI-as-a-Service (AIaaS) toolbox, AI energy analytics and digital-twin services that will be validated in 9 pilots, which: (a) span the full energy value chain, ranging from optimised management of grid and non-grid RES assets, improved efficiency and reliability of electricity network operation and optimal risk assessment for energy-efficiency investment planning, to optimising the involvement of local and virtual energy communities in flexibility and green-energy marketplaces; (b) deliver other energy and non-energy services to realise synergies among energy commodities (district heating, buildings), with non-energy sectors (e.g. e-mobility, personal safety/security, AAL), and with non-technical or low-technical end users (e.g. elderly people).

Partners

  • ENGINEERING – INGEGNERIA INFORMATICA SPA
  • FUNDACION ASTURIANA DE LA ENERGIA
  • RIGA MUNICIPAL AGENCY
  • FUNDACION CARTIF PQ TECNOLOGICO BOECILLO
  • Rheinisch-Westfälische Technische Hochschule Aachen
  • COMSENSUS, KOMUNIKACIJE IN SENZORIKA, DOO
  • SONCE ENERGIJA D.O.O.
  • VEOLIA SERVICIOS LECAM SOCIEDAD ANONIMA UNIPERSONAL
  • STUDIO TECNICO BFP SOCIETA A RESPONSABILITA LIMITATA
  • ZELENA ENERGETSKA ZADRUGA ZA USLUGE
  • Iron Thermoilektriki Anonymi Etaireia
  • ASM TERNI SPA
  • CENTRO DE INVESTIGACAO EM ENERGIA REN – STATE GRID SA
  • PARITY PLATFORM IDIOTIKI KEFALAIOUXIKI ETAIREIA
  • Institute of Communication & Computer Systems
  • Fundingbox Accelerator SP. Z O.O.

Contact

Prof. Dr. Didier Stricker

DECODE

Continual learning for visual and multi-modal encoding of human surrounding and behavior

Machine Learning, and in particular Deep Learning, has revolutionized Computer Vision in almost all areas. These include topics such as motion estimation, object recognition, semantic segmentation (the division of an image into parts and their classification), pose estimation of people and hands, and many more. A major problem with these methods is the distribution of the data: training data often differs greatly from real applications and does not adequately cover them. Even if suitable data are available, extensive retraining is time-consuming and costly. Adaptive methods that learn continuously (lifelong learning) are therefore the central challenge for the development of robust, realistic AI applications. Building on the rich history of general continual learning, the topic of continual learning for machine vision under real-world conditions has recently gained interest. The goal of the DECODE project is to explore continuously adaptive models for reconstructing and understanding human motion and the environment in application-related settings. For this purpose, mobile visual and inertial sensors (accelerometers and angular-rate sensors) will be used. For these different types of sensors and data, different approaches from the field of continual learning will be researched and developed to ensure a smooth transfer from laboratory conditions to everyday, realistic scenarios. The work will concentrate on image and video segmentation, the estimation of the kinematics and pose of the human body, and the representation of movements and their context. The field of potential applications for the methods developed in DECODE is wide-ranging and includes detailed ergonomic analysis of human-machine interactions, for example in the workplace, in factories, or in vehicles.
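As a toy illustration of one family of continual-learning strategies, the sketch below uses rehearsal with a reservoir-sampled replay buffer to fit a streaming one-dimensional regression across a domain shift without forgetting earlier data. The replay approach is only one of several strategies the project could draw on; the model, names and data here are illustrative assumptions, not DECODE's actual methods.

```python
import random

class ReplayBuffer:
    """Reservoir-sampled memory of past (x, y) samples."""
    def __init__(self, capacity=100):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # reservoir sampling keeps a uniform subsample of the whole stream
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = sample

def train_continually(stream, lr=0.005, replay_k=4):
    """Fit y = w * x online; every new sample is interleaved with a few
    replayed old samples so earlier data is not catastrophically forgotten."""
    w, buffer = 0.0, ReplayBuffer()
    for x, y in stream:
        batch = [(x, y)] + random.sample(buffer.data, min(replay_k, len(buffer.data)))
        for bx, by in batch:
            w -= lr * 2.0 * (w * bx - by) * bx  # gradient of the squared error
        buffer.add((x, y))
    return w

random.seed(0)
# Two "domains" (small x, then large x) drawn from the same relation y = 2x:
task_a = [(x / 10, 2 * x / 10) for x in range(1, 50)]
task_b = [(x / 10 + 5, 2 * (x / 10 + 5)) for x in range(1, 50)]
w = train_continually(task_a + task_b)
print(round(w, 2))  # -> 2.0: the shared relation survives the domain shift
```

Real continual-learning systems for vision replace the scalar weight with a deep network and combine rehearsal with regularization or parameter-isolation techniques, but the mechanism of mixing new and stored samples is the same.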

Contact

Dr.-Ing. Nadia Robertini

Dr.-Ing. René Schuster

Open6GHub

6G for Society and Sustainability

The project “Open6GHub” develops a 6G vision for sovereign citizens in a hyper-connected world from 2030 onwards. The aim of the Open6GHub is to contribute to a global 6G harmonization process and standard in the European context. We consider German interests in terms of our societal needs (sustainability, climate protection, data protection, resilience, …) while maintaining the competitiveness of our companies and our technological sovereignty. Another concern is the position of Germany and Europe in the international competition for 6G. The Open6GHub will contribute to the development of an overall 6G architecture as well as end-to-end solutions in areas including, but not limited to: advanced network topologies with highly agile organic networking, security and resilience, THz and photonic transmission methods, sensor functionalities in the network and their intelligent use, as well as processing and application-specific radio protocols.

Partners

  • Deutsches Forschungszentrum für Künstliche Intelligenz DFKI GmbH (DFKI)
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
  • Fraunhofer FOKUS (FOKUS)
  • Fraunhofer IAF (IAF)
  • Fraunhofer SIT (SIT)
  • Leibniz-Institut für innovative Mikroelektronik (IHP)
  • Karlsruher Institut für Technologie (KIT)
  • Hasso-Plattner-Institut Potsdam (HPI)
  • RWTH Aachen University (RWTH)
  • Technische Universität Berlin (TUB)
  • Technische Universität Darmstadt (TUDa)
  • Technische Universität Ilmenau (ILM)
  • Technische Universität Kaiserslautern (TUK)
  • Universität Bremen (UB)
  • Universität Duisburg-Essen (UDE)
  • Albert-Ludwigs-Universität Freiburg (ALU)
  • Universität Stuttgart (UST)

Contact

Prof. Dr.-Ing. Hans Dieter Schotten

HumanTech

Human Centered Technologies for a Safer and Greener European Construction Industry

The European construction industry faces three major challenges: improving its productivity, increasing the safety and wellbeing of its workforce, and shifting towards a green, resource-efficient industry. To address these challenges adequately, HumanTech proposes a human-centered approach, involving breakthrough technologies such as wearables for worker safety and support, and intelligent robotic technology that can harmoniously co-exist with human workers while also contributing to the green transition of the industry.

Our aim is to achieve major advances beyond the current state of the art in all these technologies that can have a disruptive effect on the way construction is conducted.

These advances will include:

  • Robotic devices equipped with vision and intelligence that enable them to navigate autonomously and safely in a highly unstructured environment, collaborate with humans and dynamically update a semantic digital twin of the construction site.
  • Intelligent, unobtrusive worker protection and support equipment, ranging from exoskeletons triggered by wearable body-pose and strain sensors to wearable cameras and XR glasses that provide real-time worker localisation and guidance for the efficient and accurate fulfilment of their tasks.
  • An entirely new breed of Dynamic Semantic Digital Twins (DSDTs) of construction sites, simulating the current state of a construction site in detail at the geometric and semantic level, based on an extended BIM formulation (BIMxD).

Partners

  • Hypercliq IKE
  • Technische Universität Kaiserslautern
  • Scaled Robotics SL
  • Bundesanstalt für Arbeitsschutz und Arbeitsmedizin
  • Sci-Track GmbH
  • SINTEF Manufacturing AS
  • Acciona construccion SA
  • STAM SRL
  • Holo-Industrie 4.0 Software GmbH
  • Fundacion Tecnalia Research & Innovation
  • Catenda AS
  • Technological University of the Shannon: Midlands Midwest
  • Ricoh international BV
  • Australo Interinnov Marketing Lab SL
  • Prinstones GmbH
  • Universita degli Studi di Padova
  • European Builders Confederation
  • Palfinger Structural Inspection GmbH
  • Züricher Hochschule für Angewandte Wissenschaften
  • Implenia Schweiz AG
  • Kajima corporation

Contact

Dr. Bruno Walter Mirbach

Dr.-Ing. Jason Raphael Rambach

FUMOS

Fusion of Multimodal Optical Sensors for 3D Motion Capture in Dense, Dynamic Scenes for Mobile, Autonomous Systems

Autonomous vehicles will be an indispensable component of future mobility systems. Autonomous vehicles can significantly increase the safety of driving while simultaneously increasing traffic density. Autonomously operating vehicles must be able to continuously and accurately detect their environment and the movements of other road users. To this end, new types of real-time capable sensor systems must be researched. Cameras and laser scanners operate according to different principles and offer different advantages in capturing the environment. The aim of this project is to investigate whether and how the two sensor systems can be combined to reliably detect movements in traffic in real time. The challenge in this case is to suitably combine the heterogeneous data of both systems and to find suitable representations for the geometric and visual features of a traffic scene. These must be optimized to the extent that reliable information can be provided for vehicle control in real time. If such a hybrid sensor system can be designed and successfully built, this could represent a breakthrough for sensor equipment for autonomous vehicles and a decisive step for the implementation of this technology.
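One common way to combine the two modalities, shown here only as a minimal sketch under assumed values, is to project 3D laser-scanner points into the camera image with a pinhole model and pair each point with its image location; the camera intrinsics and point coordinates below are invented for illustration and do not describe the project's actual sensor setup.

```python
def project_point(point, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a 3-D point (x, y, z) in camera coordinates to pixel (u, v)
    using a pinhole camera model with made-up intrinsics."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera: no image correspondence
    return (fx * x / z + cx, fy * y / z + cy)

def fuse(lidar_points, image_width=640, image_height=480):
    """Pair each laser-scanner point with its pixel location, keeping only
    points that fall inside the image. A real system would sample visual
    features (e.g. a CNN feature map) at (u, v) instead of the raw pixel."""
    fused = []
    for p in lidar_points:
        uv = project_point(p)
        if uv and 0 <= uv[0] < image_width and 0 <= uv[1] < image_height:
            fused.append({"xyz": p, "pixel": uv})
    return fused

points = [(0.0, 0.0, 10.0), (1.0, 0.5, 5.0), (0.0, 0.0, -2.0)]
print(len(fuse(points)))  # -> 2: the point behind the camera is dropped
```

The hard research questions mentioned above start where this sketch ends: choosing joint representations for the fused geometric and visual features and making the pipeline fast enough for real-time vehicle control.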

Contact

Ramy Battrawy, M.Sc.

Dr.-Ing. René Schuster

FLUENTLY

Fluently – the essence of human-robot interaction

Fluently leverages the latest advancements in AI-driven decision-making to achieve true social collaboration between humans and machines while matching extremely dynamic manufacturing contexts. The Fluently Smart Interface unit features: 1) interpretation of speech content, speech tone and gestures, automatically translated into robot instructions, making industrial robots accessible to any skill profile; 2) assessment of the operator’s state through a dedicated sensor infrastructure that complements persistent context awareness to enrich an AI-based behavioural framework in charge of triggering the generation of specific robot strategies; 3) modelling of products and production changes in a way that they can be recognized, interpreted and matched by robots in cooperation with humans. Robots equipped with Fluently will not only constantly adapt to humans’ physical and cognitive loads, but will also learn and build experience with their human teammates to establish a manufacturing practice relying upon quality and wellbeing.

FLUENTLY targets three large-scale industrial value chains playing an instrumental role in the present and future manufacturing industry in Europe: 1) lithium cell battery dismantling and recycling (fully manual); 2) inspection and repair of aerospace engines (partially automated); 3) laser-based multi-techs for complex metal components manufacturing, from joining and cutting to additive manufacturing and surface functionalization (fully automated in the equipment but strongly dependent upon human process assessment).
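Point 1 of the Smart Interface description (speech interpreted into robot instructions) can be caricatured in a few lines. A real system would use speech recognition and language-understanding models; the keyword table and command names below are purely hypothetical illustrations, not the project's interface.

```python
# Hypothetical keyword -> robot-command table (illustrative only).
COMMANDS = {
    "pick": "PICK_UP",
    "place": "PLACE",
    "stop": "EMERGENCY_STOP",
    "slower": "REDUCE_SPEED",
}

def speech_to_instruction(utterance: str) -> str:
    """Map a recognized utterance to a robot instruction; fall back to a
    clarification request so operators of any skill profile stay in control."""
    for keyword, command in COMMANDS.items():
        if keyword in utterance.lower():
            return command
    return "ASK_CLARIFICATION"

print(speech_to_instruction("Please pick that cell up"))  # -> PICK_UP
```

The point of the sketch is the fallback: when the interpretation is uncertain, the robot asks rather than acts, which is one simple way to keep the interaction accessible and safe.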

Partners

  • REPLY DEUTSCHLAND SE (Reply), Germany,
  • STMICROELECTRONICS SRL (STM), Italy,
  • BIT & BRAIN TECHNOLOGIES SL (BBR), Spain,
  • MORPHICA SOCIETA A RESPONSABILITA LIMITATA (MOR), Italy,
  • IRIS SRL (IRIS), Italy,
  • SYSTHMATA YPOLOGISTIKIS ORASHS IRIDA LABS AE (IRIDA), Greece,
  • GLEECHI AB (GLE), Sweden,
  • FORENINGEN ODENSE ROBOTICS (ODE), Denmark,
  • TRANSITION TECHNOLOGIES PSC SPOLKA AKCYJNA (TT), Poland,
  • MALTA ELECTROMOBILITY MANUFACTURING LIMITED (MEM), Malta,
  • POLITECNICO DI TORINO (POLITO), Italy,
  • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany,
  • TECHNISCHE UNIVERSITEIT EINDHOVEN (TUe), Netherlands,
  • SYDDANSK UNIVERSITET (SDU), Denmark,
  • COMPETENCE INDUSTRY MANUFACTURING 40 SCARL (CIM), Italy,
  • PRIMA ADDITIVE SRL (PA), Italy,
  • SCUOLA UNIVERSITARIA PROFESSIONALE DELLA SVIZZERA ITALIANA (SUPSI), Switzerland,
  • MCH-TRONICS SAGL (MCH),Switzerland,
  • FANUC SWITZERLAND GMBH (FANUC Europe), Switzerland,
  • UNIVERSITY OF BATH (UBAH), United Kingdom
  • WASEDA UNIVERSITY (WUT), Japan

Contact

Dipl.-Inf. Bernd Kiefer

Dr.-Ing. Alain Pagani

CORTEX2

Cooperative Real-Time Experience with Extended reality

The consortium of CORTEX2 — “COoperative Real-Time EXperiences with EXtended reality” — is proud to announce the official start of this European initiative, funded by the European Commission under the Horizon Europe research and innovation programme.

The COVID-19 pandemic pushed individuals and companies worldwide to work primarily from home or completely change their work model in order to stay in business. The share of employees who usually or sometimes work from home rose from 14.6% to 24.4% between 2019 and 2021. In Europe, the proportion of people who work remotely went from 5% to 40% as a result of the pandemic. Today, all the signs are that remote work is here to stay: 72% of employees say their organization is planning some form of permanent teleworking in the future, and 97% would like to work remotely, at least part of their working day, for the rest of their career. But not all organizations are ready to adapt to this new reality, where team collaboration is vital.

Existing services and applications aimed at facilitating remote team collaboration — from video conferencing systems to project management platforms — are not yet ready to efficiently and effectively support all types of activities. And extended reality (XR)-based tools, which can enhance remote collaboration and communication, present significant challenges for most businesses.

The mission of CORTEX2 is to democratize access to the remote collaboration offered by next-generation XR experiences across a wide range of industries and SMEs.

To this aim, CORTEX2 will provide:

  • Full support for AR experiences as an extension of video conferencing systems when using heterogeneous service end devices, through a novel Mediation Gateway platform.
  • Resource-efficient teleconferencing tools through innovative transmission methods and automatic summarization of shared long documents.
  • Easy-to-use and powerful XR experiences with instant 3D reconstruction of environments and objects, and simplified use of natural gestures in collaborative meetings.
  • Fusion of vision and audio for multichannel semantic interpretation and enhanced tools such as virtual conversational agents and automatic meeting summarization.
  • Full integration of Internet of Things (IoT) devices into XR experiences to optimize interaction with running systems and processes.
  • Optimal extension possibilities and broad adoption by delivering the core system with open APIs and launching open calls to enable further technical extensions, more comprehensive use cases, and deeper evaluation and assessment.

Overall, we will invest a total of 4 million Euros in two open calls, which will be aimed at recruiting tech startups/SMEs to co-develop CORTEX2; engaging new use-cases from different domains to demonstrate CORTEX2 replication through specific integration paths; assessing and validating the social impact associated with XR technology adoption in internal and external use cases.

The first call will be published in October 2023 and will collect two types of applications: Co-development and Use-case. The second will be published in April 2024, targeting only Co-development projects.

The CORTEX2 consortium is formed by 10 organizations in 7 countries, which will work together for 36 months. The German Research Center for Artificial Intelligence (DFKI) leads the consortium.

More information on the Project Website: https://cortex2.eu

Partners

  1. DFKI – Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Germany
  2. LINAGORA – GSO, France
  3. ALE – Alcatel-Lucent Entreprise International, France
  4. ICOM – Intracom SA Telecom Solutions, Greece
  5. AUS – AUSTRALO Alpha Lab MTÜ, Estonia
  6. F6S – F6S Network Limited, Ireland
  7. KUL – Katholieke Universiteit Leuven, Belgium
  8. CEA – Commissariat à l’énergie atomique et aux énergies alternatives, France
  9. ACT – Actimage GmbH, Germany
  10. UJI – Universitat Jaume I De Castellon

Contact

Dr.-Ing. Alain Pagani

HAIKU

Human AI teaming Knowledge and Understanding for aviation safety

It is essential, both for safe operations and for society in general, that the people who currently keep aviation so safe can work with, train and supervise these AI systems, and that future autonomous AI systems make judgements and decisions that would be acceptable to humans. HAIKU will pave the way for human-centric AI by developing new AI-based ‘Digital Assistants’, and associated Human-AI Teaming practices, guidance and assurance processes, via the exploration of interactive AI prototypes in a wide range of aviation contexts.

Therefore, HAIKU will:

  1. Design and develop a set of AI assistants, demonstrated in the different use cases.
  2. Develop a comprehensive Human Factors design guidance and methods capability (‘HF4AI’) on how to develop safe, effective and trustworthy Digital Assistants for Aviation, integrating and expanding on existing state-of-the-art guidance.
  3. Conduct controlled experiments with high operational relevance, illustrating the tasks, roles, autonomy and team performance of the Digital Assistants in a range of normal and emergency scenarios.
  4. Develop new safety and validation assurance methods for Digital Assistants, to facilitate early integration into aviation systems by aviation stakeholders and regulatory authorities.
  5. Deliver guidance on socially acceptable AI in safety-critical operations, and for maintaining aviation’s strong safety record.

Partners

  1. Deep Blue (DBL), Italy
  2. EUROCONTROL (ECTL), Belgium
  3. FerroNATS Air Traffic Services (FerroNATS), Spain
  4. Center for Human Performance Research (CHPR), Netherlands
  5. Linköping University (LiU), Sweden
  6. Thales AVS (TAVS), France
  7. Institute Polytechnique de Bordeaux (Bordeaux INP), France
  8. Centre Aquitain des Technologies de l’Information Electroniques (CATIE), France
  9. Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Germany
  10. Engineering Ingegneria Informatica SpA (ENG), Italy
  11. Luftfartsverket, Air Navigation Service Provider Sweden (LFV), Sweden
  12. Ecole Nationale De L’aviation Civile (ENAC), France
  13. TUI Airways Ltd (TUI), United Kingdom
  14. Suite5 Data Intelligence Solutions Limited (Suite5), Cyprus
  15. Airholding SA (EMBRT), Portugal
  16. Embraer SA (EMBSA), Brazil
  17. Ethniko Kentro Erevnas Kai Technologikis Anaptyxis (CERTH), Greece
  18. London Luton Airport Operations Ltd (LLA), United Kingdom

Contact

Nareg Minaskan Karabid

Dr.-Ing. Alain Pagani