News Archive

CORTEX2

Cooperative Real-Time Experience with Extended reality

The consortium of CORTEX2 — “COoperative Real-Time EXperiences with EXtended reality” — is proud to announce the official start of this European initiative, funded by the European Commission under the Horizon Europe research and innovation programme.

The COVID-19 pandemic pushed individuals and companies worldwide to work primarily from home or completely change their work model in order to stay in business. The share of employees who usually or sometimes work from home rose from 14.6% to 24.4% between 2019 and 2021. In Europe, the proportion of people who work remotely went from 5% to 40% as a result of the pandemic. Today, all the signs are that remote work is here to stay: 72% of employees say their organization is planning some form of permanent teleworking in the future, and 97% would like to work remotely, at least part of their working day, for the rest of their career. But not all organizations are ready to adapt to this new reality, where team collaboration is vital.

Existing services and applications aimed at facilitating remote team collaboration — from video conferencing systems to project management platforms — are not yet ready to efficiently and effectively support all types of activities. And extended reality (XR)-based tools, which can enhance remote collaboration and communication, present significant challenges for most businesses.

The mission of CORTEX2 is to democratize access to the remote collaboration offered by next-generation XR experiences across a wide range of industries and SMEs.

To this aim, CORTEX2 will provide:

  • Full support for AR experiences as an extension of video conferencing systems when using heterogeneous service end devices, through a novel Mediation Gateway platform.
  • Resource-efficient teleconferencing tools through innovative transmission methods and automatic summarization of shared long documents.
  • Easy-to-use and powerful XR experiences with instant 3D reconstruction of environments and objects, and simplified use of natural gestures in collaborative meetings.
  • Fusion of vision and audio for multichannel semantic interpretation and enhanced tools such as virtual conversational agents and automatic meeting summarization.
  • Full integration of Internet of Things (IoT) devices into XR experiences to optimize interaction with running systems and processes.
  • Optimal extension possibilities and broad adoption by delivering the core system with open APIs and launching open calls to enable further technical extensions, more comprehensive use cases, and deeper evaluation and assessment.

Overall, we will invest a total of 4 million Euros in two open calls, aimed at: recruiting tech startups/SMEs to co-develop CORTEX2; engaging new use cases from different domains to demonstrate CORTEX2 replication through specific integration paths; and assessing and validating the social impact associated with XR technology adoption in internal and external use cases.

The first call will be published in October 2023 and will collect two types of applications: Co-development and Use-case. The second will be published in April 2024, targeting only Co-development projects.

The CORTEX2 consortium comprises 10 organizations from 7 countries, which will work together for 36 months. The German Research Center for Artificial Intelligence (DFKI) leads the consortium.

More information on the Project Website: https://cortex2.eu

Partners

  • DFKI – Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Germany
  • LINAGORA – GSO, France
  • ALE – Alcatel-Lucent Entreprise International, France
  • ICOM – Intracom SA Telecom Solutions, Greece
  • AUS – AUSTRALO Alpha Lab MTÜ, Estonia
  • F6S – F6S Network Limited, Ireland
  • KUL – Katholieke Universiteit Leuven, Belgium
  • CEA – Commissariat à l’énergie atomique et aux énergies alternatives, France
  • ACT – Actimage GmbH, Germany
  • UJI – Universitat Jaume I De Castellon, Spain

Contact

Dr.-Ing. Alain Pagani

FLUENTLY

Fluently – the essence of human-robot interaction

Fluently leverages the latest advancements in AI-driven decision-making to achieve true social collaboration between humans and machines in extremely dynamic manufacturing contexts. The Fluently Smart Interface unit features:

  • interpretation of speech content, speech tone and gestures, automatically translated into robot instructions, making industrial robots accessible to any skill profile;
  • assessment of the operator’s state through a dedicated sensor infrastructure that complements persistent context awareness and feeds an AI-based behavioural framework in charge of triggering specific robot strategies;
  • modelling of products and production changes so that they can be recognized, interpreted and matched by robots in cooperation with humans.

Robots equipped with Fluently will not only continuously adapt to humans’ physical and cognitive loads, but will also learn and build experience with their human teammates to establish a manufacturing practice built on quality and wellbeing.
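As a rough illustration of the kind of multimodal fusion described above, the following sketch maps a recognized speech intent and gesture to a robot instruction and scales the robot speed with the operator’s estimated load. All class names, thresholds and sensor values are invented for illustration and are not part of the actual Fluently interface.

```python
from dataclasses import dataclass

# Hypothetical, simplified multimodal fusion: names and thresholds are
# illustrative assumptions, not the actual Fluently Smart Interface.

@dataclass
class OperatorState:
    cognitive_load: float   # 0.0 (relaxed) .. 1.0 (overloaded), e.g. from wearables
    physical_strain: float  # 0.0 .. 1.0, e.g. from posture/EMG sensors

@dataclass
class RobotInstruction:
    action: str             # e.g. "hand_over_pointed_tool"
    speed_factor: float     # fraction of nominal robot speed

def fuse_to_instruction(speech_intent: str,
                        gesture: str,
                        state: OperatorState) -> RobotInstruction:
    """Map recognized speech intent and gesture to a robot instruction,
    slowing the robot down when the operator appears loaded."""
    # Resolve the requested action: the gesture disambiguates vague speech.
    if speech_intent == "give_tool" and gesture == "point":
        action = "hand_over_pointed_tool"
    elif speech_intent == "stop" or gesture == "open_palm":
        action = "safe_stop"
    else:
        action = "hold_position"

    # Adapt robot dynamics to the operator's current load.
    load = max(state.cognitive_load, state.physical_strain)
    speed = 1.0 - 0.6 * load  # reduce speed by up to 60% under high load
    return RobotInstruction(action=action, speed_factor=round(speed, 2))

print(fuse_to_instruction("give_tool", "point",
                          OperatorState(cognitive_load=0.7, physical_strain=0.2)))
# RobotInstruction(action='hand_over_pointed_tool', speed_factor=0.58)
```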

FLUENTLY targets three large-scale industrial value chains playing an instrumental role in the present and future manufacturing industry in Europe: 1) lithium cell batteries dismantling and recycling (fully manual); 2) inspection and repairing of aerospace engines (partially automated); 3) laser-based multi-techs for complex metal components manufacturing, from joining and cutting to additive manufacturing and surface functionalization (fully automated in the equipment but strongly dependent upon human process assessment).

Partners

  • REPLY DEUTSCHLAND SE (Reply), Germany,
  • STMICROELECTRONICS SRL (STM), Italy,
  • BIT & BRAIN TECHNOLOGIES SL (BBR), Spain,
  • MORPHICA SOCIETA A RESPONSABILITA LIMITATA (MOR), Italy,
  • IRIS SRL (IRIS), Italy,
  • SYSTHMATA YPOLOGISTIKIS ORASHS IRIDA LABS AE (IRIDA), Greece,
  • GLEECHI AB (GLE), Sweden,
  • FORENINGEN ODENSE ROBOTICS (ODE), Denmark,
  • TRANSITION TECHNOLOGIES PSC SPOLKA AKCYJNA (TT), Poland,
  • MALTA ELECTROMOBILITY MANUFACTURING LIMITED (MEM), Malta,
  • POLITECNICO DI TORINO (POLITO), Italy,
  • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany,
  • TECHNISCHE UNIVERSITEIT EINDHOVEN (TUe), Netherlands,
  • SYDDANSK UNIVERSITET (SDU), Denmark,
  • COMPETENCE INDUSTRY MANUFACTURING 40 SCARL (CIM), Italy,
  • PRIMA ADDITIVE SRL (PA), Italy,
  • SCUOLA UNIVERSITARIA PROFESSIONALE DELLA SVIZZERA ITALIANA (SUPSI), Switzerland,
  • MCH-TRONICS SAGL (MCH), Switzerland,
  • FANUC SWITZERLAND GMBH (FANUC Europe), Switzerland,
  • UNIVERSITY OF BATH (UBAH), United Kingdom
  • WASEDA UNIVERSITY (WUT), Japan

Contact

Dipl.-Inf. Bernd Kiefer

Dr.-Ing. Alain Pagani

HumanTech

Human Centered Technologies for a Safer and Greener European Construction Industry

The European construction industry faces three major challenges: improving its productivity, increasing the safety and wellbeing of its workforce, and shifting towards a green, resource-efficient industry. To address these challenges adequately, HumanTech proposes a human-centered approach involving breakthrough technologies such as wearables for worker safety and support, and intelligent robotic technology that can harmoniously co-exist with human workers while also contributing to the green transition of the industry.

Our aim is to achieve major advances beyond the current state of the art in all these technologies, advances that can have a disruptive effect on the way construction is conducted.

These advances will include:

  • Robotic devices equipped with vision and intelligence that enable them to navigate autonomously and safely in a highly unstructured environment, collaborate with humans, and dynamically update a semantic digital twin of the construction site.
  • Intelligent, unobtrusive worker protection and support equipment, ranging from exoskeletons triggered by wearable body pose and strain sensors to wearable cameras and XR glasses that provide real-time worker localisation and guidance for the efficient and accurate fulfilment of their tasks.
  • An entirely new breed of Dynamic Semantic Digital Twins (DSDTs) of construction sites, simulating in detail the current state of a construction site at the geometric and semantic level, based on an extended BIM formulation (BIMxD).

Partners

  • Hypercliq IKE
  • Technische Universität Kaiserslautern
  • Scaled Robotics SL
  • Bundesanstalt für Arbeitsschutz und Arbeitsmedizin
  • Sci-Track GmbH
  • SINTEF Manufacturing AS
  • Acciona construccion SA
  • STAM SRL
  • Holo-Industrie 4.0 Software GmbH
  • Fundacion Tecnalia Research & Innovation
  • Catenda AS
  • Technological University of the Shannon: Midlands Midwest
  • Ricoh international BV
  • Australo Interinnov Marketing Lab SL
  • Prinstones GmbH
  • Universita degli Studi di Padova
  • European Builders Confederation
  • Palfinger Structural Inspection GmbH
  • Züricher Hochschule für Angewandte Wissenschaften
  • Implenia Schweiz AG
  • Kajima corporation

Contact

Dr. Bruno Walter Mirbach

Dr.-Ing. Jason Raphael Rambach

GreifbAR

Greifbare Realität – geschickte Interaktion von Benutzerhänden und -fingern mit realen Werkzeugen in Mixed-Reality-Welten (Tangible reality – dexterous interaction of users’ hands and fingers with real tools in mixed-reality worlds)

On 01.10.2021, the research project GreifbAR started under the leadership of DFKI (Augmented Reality research area). The goal of the GreifbAR project is to make mixed-reality (MR) worlds, including virtual reality (VR) and augmented reality (AR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are paramount for performing precise tasks in many fields, but the capture of hand-object interaction in current MR systems is woefully inadequate. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR overcomes this limitation by introducing a sensing system that detects both the full hand grip, including the hand surface, and the object pose when users interact with real objects or tools. This sensing system will be integrated into a mixed-reality training simulator that will be demonstrated in two relevant use cases: industrial assembly and surgical skills training. The usability and applicability, as well as the added value for training situations, will be thoroughly analysed through user studies.
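To make the idea of capturing hand-object interaction more concrete, here is a toy sketch of how per-frame capture data (hand surface points plus a 6-DoF object pose) could be represented and queried for contact. The data layout, threshold and values are assumptions for illustration only, not the GreifbAR sensing system.

```python
import numpy as np

# Illustrative only: toy per-frame hand-object data and a naive contact test.

def object_points_in_world(model_points: np.ndarray,
                           rotation: np.ndarray,
                           translation: np.ndarray) -> np.ndarray:
    """Transform object model points into world coordinates via a 6-DoF pose."""
    return model_points @ rotation.T + translation

def contact_vertices(hand_surface: np.ndarray,
                     object_points: np.ndarray,
                     threshold: float = 0.005) -> np.ndarray:
    """Return indices of hand-surface vertices within `threshold` metres of the object."""
    # Pairwise distances between hand vertices and object points.
    dists = np.linalg.norm(hand_surface[:, None, :] - object_points[None, :, :], axis=-1)
    return np.where(dists.min(axis=1) < threshold)[0]

# Toy frame: 5 hand-surface vertices, a 2-point object model, identity pose.
hand = np.array([[0.00, 0.00, 0.00],
                 [0.10, 0.00, 0.00],
                 [0.00, 0.10, 0.00],
                 [0.101, 0.001, 0.0],
                 [0.30, 0.30, 0.30]])
obj = np.array([[0.10, 0.00, 0.00], [0.20, 0.00, 0.00]])
pose_R, pose_t = np.eye(3), np.zeros(3)

touching = contact_vertices(hand, object_points_in_world(obj, pose_R, pose_t))
print(touching)  # vertices 1 and 3 lie within 5 mm of the object
```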

Partners

  • Berliner Charité (University Medicine Berlin)
  • NMY (mixed-reality applications for industrial and communication customers)
  • Uni Passau (Chair of Psychology with a focus on human-machine interaction)

Contact

Dr. Dipl.-Inf. Gerd Reis

Dr.-Ing. Nadia Robertini

Open6GHub

6G for Society and Sustainability

The project “Open6GHub” develops a 6G vision for sovereign citizens in a hyper-connected world from 2030 onwards. The aim of the Open6GHub is to contribute to a global 6G harmonization process and standard in the European context. We consider German interests in terms of our societal needs (sustainability, climate protection, data protection, resilience, …) while maintaining the competitiveness of our companies and our technological sovereignty. Another interest is the position of Germany and Europe in the international competition for 6G. The Open6GHub will contribute to the development of an overall 6G architecture as well as end-to-end solutions in areas including, but not limited to: advanced network topologies with highly agile organic networking, security and resilience, THz and photonic transmission methods, sensor functionalities in the network and their intelligent use, and processing and application-specific radio protocols.

Partners

  • Deutsches Forschungszentrum für Künstliche Intelligenz DFKI GmbH (DFKI)
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
  • Fraunhofer FOKUS (FOKUS)
  • Fraunhofer IAF (IAF)
  • Fraunhofer SIT (SIT)
  • Leibniz-Institut für innovative Mikroelektronik (IHP)
  • Karlsruher Institut für Technologie (KIT)
  • Hasso-Plattner-Institut Potsdam (HPI)
  • RWTH Aachen University (RWTH)
  • Technische Universität Berlin (TUB)
  • Technische Universität Darmstadt (TUDa)
  • Technische Universität Ilmenau (ILM)
  • Technische Universität Kaiserslautern (TUK)
  • Universität Bremen (UB)
  • Universität Duisburg-Essen (UDE)
  • Albert-Ludwigs-Universität Freiburg (ALU)
  • Universität Stuttgart (UST)

Contact

Prof. Dr.-Ing. Hans Dieter Schotten

DECODE

Continual learning for visual and multi-modal encoding of human surrounding and behavior

Machine learning, and in particular deep learning as a branch of Artificial Intelligence (AI), has revolutionized computer vision in almost all areas. These include topics such as motion estimation, object recognition, semantic segmentation (division and classification of parts of an image), pose estimation of people and hands, and many more. A major problem with these methods is the distribution of the data: training data often differ greatly from the data encountered in real applications and do not adequately cover them. Even if suitable data are available, extensive retraining is time-consuming and costly. Adaptive methods that learn continuously (lifelong learning) are therefore the central challenge for the development of robust, realistic AI applications.

In addition to the rich history in the field of general continual learning, the topic of continual learning for machine vision under real-world conditions has recently gained interest. The goal of the DECODE project is to explore continuously adaptive models for reconstructing and understanding human motion and the environment in application-related settings. For this purpose, mobile visual and inertial sensors (accelerometers and angular rate sensors) will be used. For these different types of sensors and data, different approaches from the field of continual learning will be researched and developed to ensure a smooth transfer from laboratory conditions to everyday, realistic scenarios. The work will concentrate on image and video segmentation, the estimation of kinematics and pose of the human body, and the representation of movements and their context. The field of potential applications for the methods developed in DECODE is wide-ranging and includes detailed ergonomic analysis of human-machine interactions, for example in the workplace, in factories, or in vehicles.
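As a minimal, generic illustration of continual learning on a data stream (not the DECODE method itself), the sketch below updates a small PyTorch model on incoming batches while rehearsing samples from a replay buffer, one standard way to mitigate catastrophic forgetting. Model, data and buffer policy are invented for illustration.

```python
import random
import torch
import torch.nn as nn

# Toy continual-learning loop with rehearsal (experience replay).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

replay_buffer: list[tuple[torch.Tensor, torch.Tensor]] = []
BUFFER_SIZE = 512

def adapt_on_stream(x_new: torch.Tensor, y_new: torch.Tensor) -> float:
    """Update the model on one incoming batch, rehearsing stored old samples."""
    x, y = x_new, y_new
    if replay_buffer:
        # Mix current samples with a few stored ones so earlier data is rehearsed.
        old = random.sample(replay_buffer, min(len(replay_buffer), len(x_new)))
        old_x, old_y = zip(*old)
        x = torch.cat([x_new, torch.stack(old_x)])
        y = torch.cat([y_new, torch.stack(old_y)])

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

    # Store the new samples for future rehearsal (simple FIFO buffer).
    for xi, yi in zip(x_new, y_new):
        replay_buffer.append((xi.detach(), yi.detach()))
        if len(replay_buffer) > BUFFER_SIZE:
            replay_buffer.pop(0)
    return loss.item()

# Simulated stream: the input distribution shifts halfway through.
for step in range(20):
    shift = 0.0 if step < 10 else 2.0
    xb = torch.randn(8, 16) + shift
    yb = torch.randint(0, 4, (8,))
    adapt_on_stream(xb, yb)
```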

Contact

Dr.-Ing. Nadia Robertini

Dr.-Ing. René Schuster

I-Nergy

Artificial Intelligence for Next Generation Energy

The spread of AI in the energy sector is expected to dramatically reshape the energy value chain in the coming years by improving the performance of business processes while increasing environmental sustainability, strengthening social relationships and propagating high social value among citizens. However, uncertain business cases, fragmented regulations, immature standards and limited AI skills in the SME workforce are currently hampering the full exploitation of AI along the energy value chain.

I-NERGY will deliver: (i) financing support through open calls to third-party SMEs for validating new energy use cases and technology building blocks, as well as for developing new AI-based energy services, while fully aligning with AI4EU service requirements and strengthening SME competitiveness in AI for energy; (ii) an open, modular framework for supporting AI-on-demand in the energy sector, capitalising on state-of-the-art AI, IoT, semantics, federated learning and analytics tools that leverage edge-level, AI-based, cross-sector, multi-stakeholder, sovereignty- and regulation-preserving interoperable data handling.

I-NERGY aims at evolving, scaling up and demonstrating an innovative energy-tailored AI-as-a-Service (AIaaS) toolbox, AI energy analytics and digital-twin services that will be validated in 9 pilots, which: (a) span the full energy value chain, ranging from optimised management of grid and non-grid RES assets, improved efficiency and reliability of electricity network operation and optimal risk assessment for energy-efficiency investment planning, to optimising the involvement of local and virtual energy communities in flexibility and green energy marketplaces; and (b) deliver other energy and non-energy services to realise synergies among energy commodities (district heating, buildings), with non-energy sectors (e.g. e-mobility, personal safety/security, AAL) and with non- or low-technical end users (e.g. elderly people).

Partners

  • ENGINEERING – INGEGNERIA INFORMATICA SPA
  • FUNDACION ASTURIANA DE LA ENERGIA
  • RIGA MUNICIPAL AGENCY
  • FUNDACION CARTIF PQ TECNOLOGICO BOECILLO
  • Rheinisch-Westfälische Technische Hochschule Aachen
  • COMSENSUS, KOMUNIKACIJE IN SENZORIKA, DOO
  • SONCE ENERGIJA D.O.O.
  • VEOLIA SERVICIOS LECAM SOCIEDAD ANONIMA UNIPERSONAL
  • STUDIO TECNICO BFP SOCIETA A RESPONSABILITA LIMITATA
  • ZELENA ENERGETSKA ZADRUGA ZA USLUGE
  • Iron Thermoilektriki Anonymi Etaireia
  • ASM TERNI SPA
  • CENTRO DE INVESTIGACAO EM ENERGIA REN – STATE GRID SA
  • PARITY PLATFORM IDIOTIKI KEFALAIOUXIKI ETAIREIA
  • Institute of Communication & Computer Systems
  • Fundingbox Accelerator SP. Z O.O.

Contact

Prof. Dr. Didier Stricker

dAIEDGE

A network of excellence for distributed, trustworthy, efficient and scalable AI at the Edge

The dAIEDGE Network of Excellence (NoE) seeks to strengthen and support the development of the dynamic European cutting-edge AI ecosystem under the umbrella of the European AI Lighthouse and to sustain the development of advanced AI.

dAIEDGE will foster a space for the exchange of ideas, concepts, and trends on next generation cutting-edge AI, creating links between ecosystem actors to help the EC and the peripheral AI constituency identify strategies for future developments in Europe.

Partners

Aegis Rider, Bonseyes Community Association, Blekinge Institute of Technology, Commissariat à l’Energie Atomique et aux énergies alternatives, Centre d’excellence en technologies de l’information et de la communication, Centre Suisse d’Electronique et de Microtechnique, Deutsches Forschungszentrum für Künstliche Intelligenz, Deutsches Zentrum für Luft- und Raumfahrt e.V., ETH Zürich, Fraunhofer Gesellschaft, FundingBox Accelerator SP, Foundation for Research and Technology – Hellas, Haute école spécialisée de Suisse, HIPERT SRL, IMEC, Institut national de recherche en informatique et automatique, INSAIT – Institute for Computer Science, Artificial Intelligence and Technology, IoT Digital Innovation Hub, Katholieke Universiteit Leuven, NVISO SA, SAFRAN Electronics and Defense, SINTEF AS, Sorbonne Université, CNRS, ST Microelectronics, Synopsys International Limited, Thales, Ubotica Technologies Limited, University of Castilla-La Mancha, The University of Edinburgh, University of Glasgow, University of Modena and Reggio Emilia, University of Salamanca, Varjo Technologies, VERSES Global B.V., Vicomtech.

Contact

Dr.-Ing. Alain Pagani

RACKET

Rare Class Learning and Unknown Events Detection for Flexible Production

The RACKET project addresses the problem of detecting rare and unknown faults by combining model-based and machine learning methods. The approach is based on the assumption that a physical or procedural model of a manufacturing plant is available, which is not fully specified and has uncertainties in structure, parameters and variables. Gaps and errors in this model are detected by machine learning and corrected, resulting in a more realistic process model (nominal model). This model can be used to simulate system behavior and estimate the future characteristics of a product.

Actual product defects can thus be attributed to anomalies in the output signal and to inconsistencies in the process variables, without the need for a known failure event or an accurate failure model. Faults cover a wide range, from geometric defects such as scratches or out-of-tolerance dimensions to dynamic faults such as deviations between the estimated and actual product position on a conveyor belt, missed process steps or incorrect path assignment in the production flow, and they can occur at both the product and the process level.
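The following sketch illustrates the general residual-based idea under simplified assumptions (not RACKET’s actual models): a nominal model predicts the expected product position on a conveyor belt, and samples whose residual exceeds a noise threshold estimated on a fault-free calibration segment are flagged as anomalies, without any model of the specific fault.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def nominal_model(conveyor_speed: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Expected product position along the belt under fault-free behaviour."""
    return conveyor_speed * t

def detect_anomalies(measured: np.ndarray,
                     predicted: np.ndarray,
                     calibration: slice = slice(0, 100),
                     n_sigma: float = 3.0) -> np.ndarray:
    """Flag samples whose residual deviates by more than n_sigma standard deviations,
    with noise statistics estimated on a known fault-free calibration segment."""
    residual = measured - predicted
    mu, sigma = residual[calibration].mean(), residual[calibration].std()
    return np.abs(residual - mu) > n_sigma * sigma

t = np.linspace(0.0, 10.0, 200)
speed = np.full_like(t, 0.5)                                # 0.5 m/s belt speed (toy value)
expected = nominal_model(speed, t)

measured = expected + rng.normal(0.0, 0.005, size=t.shape)  # sensor noise
measured[150:] -= 0.3                                       # unknown fault: product slips on the belt

flags = detect_anomalies(measured, expected)
print(f"{flags.sum()} of {flags.size} samples flagged, first at t = {t[flags][0]:.2f} s")
```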

Contact

Carsten Harms, M.Sc.

Dr.-Ing. Alain Pagani

Revise-UP

Verbesserung der Prozesseffizienz des werkstofflichen Recyclings von Post-Consumer Kunststoff-Verpackungsabfällen durch intelligentes Stoffstrommanagement (Improving the process efficiency of mechanical recycling of post-consumer plastic packaging waste through intelligent material flow management)

At 3.2 million tonnes per year, post-consumer packaging waste represents the most significant plastic waste stream in Germany. Despite progress to date, mechanical plastics recycling still has significant potential for improvement: in 2021, only about 27% by mass (1.02 million tonnes per year) of post-consumer plastics could be converted into recyclates, and only about 12% by mass (0.43 million tonnes per year) served as substitutes for virgin plastics (Conversio Market & Strategy GmbH, 2022).

So far, mechanical plastics recycling has been limited by the high effort of manual material flow characterisation, which leads to a lack of transparency along the value chain. During the ReVise concept phase, it was shown that post-consumer material flows can be characterised automatically using inline sensor technology. The subsequent four-year ReVise implementation phase (ReVise-UP) will explore the extent to which sensor-based material flow characterisation can be implemented on an industrial scale to increase transparency and efficiency in plastics recycling.
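As a simplified illustration of what automated, sensor-based material flow characterisation can look like (not the ReVise-UP pipeline), the sketch below assigns detected objects to polymer classes via a toy nearest-centroid classifier over invented spectral features and aggregates per-class mass shares into a material-flow report.

```python
import numpy as np

# Toy values throughout: centroids, features and masses are invented for illustration.
POLYMER_CENTROIDS = {            # toy 3-D "spectral fingerprints"
    "PET": np.array([0.9, 0.1, 0.2]),
    "PE":  np.array([0.2, 0.8, 0.1]),
    "PP":  np.array([0.3, 0.3, 0.9]),
}

def classify(feature: np.ndarray) -> str:
    """Assign an object to the polymer class with the nearest centroid."""
    return min(POLYMER_CENTROIDS, key=lambda k: np.linalg.norm(feature - POLYMER_CENTROIDS[k]))

def mass_shares(objects: list[tuple[np.ndarray, float]]) -> dict[str, float]:
    """Aggregate classified objects (feature, mass in kg) into mass shares per polymer."""
    totals: dict[str, float] = {}
    for feature, mass in objects:
        label = classify(feature)
        totals[label] = totals.get(label, 0.0) + mass
    total_mass = sum(totals.values())
    return {k: round(v / total_mass, 3) for k, v in totals.items()}

stream = [(np.array([0.85, 0.15, 0.25]), 0.03),   # likely PET bottle, 30 g
          (np.array([0.25, 0.75, 0.15]), 0.01),   # likely PE film, 10 g
          (np.array([0.35, 0.25, 0.85]), 0.02)]   # likely PP tub, 20 g
print(mass_shares(stream))   # {'PET': 0.5, 'PE': 0.167, 'PP': 0.333}
```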

Three main effects are expected from this increased data transparency. Firstly, positive incentives for improving collection and product qualities should be created in order to increase the quality and use of plastic recyclates. Secondly, sensor-based material flow characteristics are to be used to adapt sorting, treatment and plastics processing processes to fluctuating material flow properties. This promises a considerable increase in the efficiency of the existing technical infrastructure. Thirdly, the improved data situation should enable a holistic ecological and economic evaluation of the entire value chain. As a result, technical investments can be used in a more targeted manner to systematically optimise both ecological and economic benefits.

Our goal is to fundamentally improve the efficiency, cost-effectiveness and sustainability of post-consumer plastics recycling.

Partners

  • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
  • Deutsches Institut für Normung e. V.
  • Human Technology Center der RWTH Aachen University
  • Hündgen Entsorgungs GmbH & Co. KG
  • Krones AG
  • Kunststoff Recycling Grünstadt GmbH
  • SKZ – KFE gGmbH
  • STADLER Anlagenbau GmbH
  • Wuppertal Institut für Klima, Umwelt, Energie gGmbH
  • PreZero Recycling Deutschland GmbH & Co. KG
  • bvse – Bundesverband Sekundärrohstoffe und Entsorgung e. V.
  • cirplus GmbH
  • HC Plastics GmbH
  • Henkel AG
  • Initiative „Mülltrennung wirkt“
  • Procter & Gamble Service GmbH
  • TOMRA Sorting GmbH

Contact

Dr. Bruno Walter Mirbach

Dr.-Ing. Jason Raphael Rambach