
HyperCOG

Innovative Cyber-Physical System (CPS) to cover industrial production needs in the current technological context of Industry 4.0

Partners

  1. LORTEK S COOP (Spain)
  2. FUNDACION TECNALIA RESEARCH & INNOVATION (Spain)
  3. ECOLE SUPERIEURE DES TECHNOLOGIES INDUSTRIELLES AVANCEES (France)
  4. SIDENOR ACEROS ESPECIALES SL (Spain)
  5. CIMSA CIMENTO SANAYI VE TICARET ANONIM SIRKETI (Turkey)
  6. RHODIA OPERATIONS (France)
  7. DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (Germany)
  8. TECHNOLOGIE INITIATIVE SMARTFACTORY KL E.V. (Germany)
  9. MONDRAGON SISTEMAS DE INFORMACION SOCIEDAD COOPERATIVA (Spain)
  10. UNIVERSITE PARIS XII VAL DE MARNE (France)
  11. Cyber Services Plc (Hungary)
  12. EKODENGE MUHENDISLIK MIMARLIK DANISMANLIK TICARET ANONIM SIRKETI (Turkey)
  13. 2.-O LCA CONSULTANTS APS (Denmark)
  14. INSIGHT PUBLISHERS LIMITED (United Kingdom)

Contact

Dr.-Ing. Alain Pagani

Co2Team

Cognitive Collaboration for Teaming

During a flight, pilots must master complicated situations while facing ever-increasing system complexity due to the amount and nature of the information available. Co2Team (Cognitive Collaboration for Teaming) pursues the idea that a system based on artificial intelligence can efficiently support pilots through the use of cognitive computing.

The main objective of the project is to propose a technological and methodological transition towards more autonomous air traffic. Co2Team will develop a roadmap for cognitive computing to support pilots in future air transport. This transition is based on an innovative bidirectional communication paradigm and an optimized shared human-machine competence that exploits the potential of cognitive computing (pilot monitoring, environment and situation understanding, advanced assistance, adaptive automation).

The project partners are the Augmented Vision department of DFKI, Deutsche Lufthansa AG and the Institut Polytechnique de Bordeaux (INP Bordeaux).

Partners

  • Deutsche Lufthansa AG
  • Institut Polytechnique de Bordeaux (INP Bordeaux)
  • DFKI GmbH

Contact

Dr.-Ing. Alain Pagani

ARinfuse

Infusing skills in Augmented Reality for geographical information management in the utility sector

ARinfuse is a European project funded under Erasmus+, the EU’s programme to support education, training, youth and sport in Europe.

The objective of ARinfuse is to support individuals in acquiring and developing basic skills and key competences within the field of geoinformatics and utility infrastructure, in order to foster employability. This objective is addressed through the development of new learning modules where Augmented Reality technologies are merged with geoinformatics and applied within the utility infrastructure sector. The developed digital learning content and tools will be implemented in university programs as well as in vocational training programs, and will be made available as Open Educational Resources, open textbooks and Open Source Educational Software.

Partners

  • GeoImaging Ltd (Cyprus)
  • Novogit AB (Sweden)
  • Cyprus University of Technology (Cyprus)
  • GISIG Association (Italy)
  • Sewerage Board of Nicosia (Cyprus)
  • Flanders Environment Agency (VMM, Belgium)
  • DFKI GmbH

Contact

Dr.-Ing. Alain Pagani

CAPTURE

CAPTURE – 3D-scene reconstruction with high resolution and high dynamic range spherical images

Reconstruction of 3D scenes from camera images represents an essential technology for many applications, such as 3D digital cities, digital cultural heritage, games, tele-cooperation, tactical training or forensics. The objective of the project CAPTURE is to develop a novel approach for 3D scene acquisition, together with the corresponding theory and practical methods.

Instead of processing a large number of standard perspective low-resolution video images, we use as input data a few single but full spherical high-resolution and high-dynamic-range (HDR) images. Currently available spherical high-resolution cameras can record fine texture details and capture the complete scene from a single point in space. Additionally, such cameras provide HDR images that yield consistent color and photometric information. We propose to exploit this new technology, focusing on the dense, high-quality 3D reconstruction of both indoor and outdoor environments.
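The key geometric property of such images can be illustrated with a minimal sketch: every pixel of an equirectangular (full spherical) image maps to a viewing ray on the unit sphere, which is the building block for matching and triangulation. The function name and image conventions below are illustrative assumptions, not part of the project's actual code.

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map a pixel of an equirectangular (full spherical) image to a
    unit-length viewing ray. Longitude covers the full 360 degrees of
    the panorama; latitude runs from +90 (top row) to -90 (bottom row).
    """
    lon = (u / width) * 2.0 * math.pi - math.pi   # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi  # latitude in [pi/2, -pi/2]
    return (math.cos(lat) * math.sin(lon),        # x
            math.sin(lat),                        # y (up)
            math.cos(lat) * math.cos(lon))        # z (forward)
```

Because every pixel corresponds to a ray on the unit sphere, two such images with known relative pose can be matched and triangulated as in classical multi-view geometry, but without any field-of-view limit.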

The fundamental issue of the project is to develop novel algorithms that take into account the properties of these images, and thus to push forward the current state of the art in 3D scene acquisition and viewing. In particular we develop novel stable and light-invariant image feature detectors, as well as robust assignment methods for image matching and novel 3D reconstruction/viewing algorithms, which exploit the properties of the images.

The multiple spherical view geometry provides a high amount of redundant information about the underlying environment. This, combined with the consistency of the color and photometric information from HDR images, allows us to develop new methods for robust high-precision image matching and 3D structure estimation, resulting in a high-fidelity textured model of the real scene.

The project CAPTURE makes extensive use of our Computer Vision Development Framework ARGOS. On the software development side, it is necessary to work with large images and to merge information from multiple sources simultaneously. We therefore also pay special attention to the parallel processing of large amounts of data as well as to clustering capabilities.

The application of this project is the accurate reconstruction of large scenes, including industrial facilities, touristic and cultural heritage sites, and urban environments.

Contact

Dr.-Ing. Alain Pagani

dAIEDGE

A network of excellence for distributed, trustworthy, efficient and scalable AI at the Edge

The dAIEDGE Network of Excellence (NoE) seeks to strengthen and support the development of the dynamic European cutting-edge AI ecosystem under the umbrella of the European AI Lighthouse and to sustain the development of advanced AI.

dAIEDGE will foster a space for the exchange of ideas, concepts, and trends on next generation cutting-edge AI, creating links between ecosystem actors to help the EC and the peripheral AI constituency identify strategies for future developments in Europe.

Partners

  • Aegis Rider
  • Bonseyes Community Association
  • Blekinge Institute of Technology
  • Commissariat à l’Energie Atomique et aux énergies alternatives
  • Centre d’excellence en technologies de l’information et de la communication
  • Centre Suisse d’Electronique et de Microtechnique
  • Deutsches Forschungszentrum für Künstliche Intelligenz
  • Deutsches Zentrum für Luft- und Raumfahrt e.V.
  • ETH Zürich
  • Fraunhofer Gesellschaft
  • FundingBox Accelerator SP
  • Foundation for Research and Technology – Hellas
  • Haute école spécialisée de Suisse
  • HIPERT SRL
  • IMEC
  • Institut national de recherche en informatique et automatique
  • INSAIT – Institute for Computer Science, Artificial Intelligence and Technology
  • IoT Digital Innovation Hub
  • Katholieke Universiteit Leuven
  • NVISO SA
  • SAFRAN Electronics and Defense
  • SINTEF AS
  • Sorbonne Université
  • CNRS
  • ST Microelectronics
  • Synopsys International Limited
  • Thales
  • Ubotica Technologies Limited
  • University of Castilla-La Mancha
  • The University of Edinburgh
  • University of Glasgow
  • University of Modena and Reggio Emilia
  • University of Salamanca
  • Varjo Technologies
  • VERSES Global B.V.
  • Vicomtech

Contact

Dr.-Ing. Alain Pagani

HERON

Self-referenced Mobile Collaborative Robotics applied to collaborative and flexible production systems

The project will deliver a complete, novel vision-guided mobile robotic solution to automate the assembly and screwdriving of final assembly operations, which are currently performed manually. The solution will include a robotic cell integrating real-time process control to guarantee process quality, together with a digital twin platform for accurate process simulation and trajectory optimization to minimize setup time and increase flexibility. A demonstrator will be built for system validation, performing quality control procedures and the screwing of automotive parts onto the chassis of a vehicle.

Partners

  • Aldakin S.L (Spain)
  • Simumatik A.B (Sweden)
  • Visometry GmbH (Germany)

Contact

Dr.-Ing. Alain Pagani

HAIKU

Human AI teaming Knowledge and Understanding for aviation safety

It is essential, both for safe operations and for society in general, that the people who currently keep aviation so safe can work with, train and supervise these AI systems, and that future autonomous AI systems make judgements and decisions that would be acceptable to humans. HAIKU will pave the way for human-centric AI by developing new AI-based ‘Digital Assistants’, and associated Human-AI Teaming practices, guidance and assurance processes, via the exploration of interactive AI prototypes in a wide range of aviation contexts.

Therefore, HAIKU will:

  1. Design and develop a set of AI assistants, demonstrated in the different use cases.
  2. Develop a comprehensive Human Factors design guidance and methods capability (‘HF4AI’) on how to develop safe, effective and trustworthy Digital Assistants for Aviation, integrating and expanding on existing state-of-the-art guidance.
  3. Conduct controlled experiments with high operational relevance, illustrating the tasks, roles, autonomy and team performance of the Digital Assistant in a range of normal and emergency scenarios.
  4. Develop new safety and validation assurance methods for Digital Assistants, to facilitate early integration into aviation systems by aviation stakeholders and regulatory authorities.
  5. Deliver guidance on socially acceptable AI in safety-critical operations, and for maintaining aviation’s strong safety record.

Partners

  1. Deep Blue (DBL), Italy
  2. EUROCONTROL (ECTL), Belgium
  3. FerroNATS Air Traffic Services (FerroNATS), Spain
  4. Center for Human Performance Research (CHPR), Netherlands
  5. Linköping University (LiU), Sweden
  6. Thales AVS (TAVS), France
  7. Institut Polytechnique de Bordeaux (Bordeaux INP), France
  8. Centre Aquitain des Technologies de l’Information Electroniques (CATIE), France
  9. Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Germany
  10. Engineering Ingegneria Informatica SpA (ENG), Italy
  11. Luftfartsverket, Air Navigation Service Provider Sweden (LFV), Sweden
  12. Ecole Nationale De L’aviation Civile (ENAC), France
  13. TUI Airways Ltd (TUI), United Kingdom
  14. Suite5 Data Intelligence Solutions Limited (Suite5), Cyprus
  15. Airholding SA (EMBRT), Portugal
  16. Embraer SA (EMBSA), Brazil
  17. Ethniko Kentro Erevnas Kai Technologikis Anaptyxis (CERTH), Greece
  18. London Luton Airport Operations Ltd (LLA), United Kingdom

Contact

Nareg Minaskan Karabid

Dr.-Ing. Alain Pagani

CORTEX2

Cooperative Real-Time Experience with Extended reality

The consortium of CORTEX2 — “COoperative Real-Time EXperiences with EXtended reality” — is proud to announce the official start of this European initiative, funded by the European Commission under the Horizon Europe research and innovation programme.

The COVID-19 pandemic pushed individuals and companies worldwide to work primarily from home or completely change their work model in order to stay in business. The share of employees who usually or sometimes work from home rose from 14.6% to 24.4% between 2019 and 2021. In Europe, the proportion of people who work remotely went from 5% to 40% as a result of the pandemic. Today, all the signs are that remote work is here to stay: 72% of employees say their organization is planning some form of permanent teleworking in the future, and 97% would like to work remotely, at least part of their working day, for the rest of their career. But not all organizations are ready to adapt to this new reality, where team collaboration is vital.

Existing services and applications aimed at facilitating remote team collaboration — from video conferencing systems to project management platforms — are not yet ready to efficiently and effectively support all types of activities. And extended reality (XR)-based tools, which can enhance remote collaboration and communication, present significant challenges for most businesses.

The mission of CORTEX2 is to democratize access to the remote collaboration offered by next-generation XR experiences across a wide range of industries and SMEs.

To this aim, CORTEX2 will provide:

  • Full support for AR experiences as an extension of video conferencing systems when using heterogeneous service end devices, through a novel Mediation Gateway platform.
  • Resource-efficient teleconferencing tools through innovative transmission methods and automatic summarization of shared long documents.
  • Easy-to-use and powerful XR experiences with instant 3D reconstruction of environments and objects, and simplified use of natural gestures in collaborative meetings.
  • Fusion of vision and audio for multichannel semantic interpretation and enhanced tools such as virtual conversational agents and automatic meeting summarization.
  • Full integration of Internet of Things (IoT) devices into XR experiences to optimize interaction with running systems and processes.
  • Optimal extension possibilities and broad adoption by delivering the core system with open APIs and launching open calls to enable further technical extensions, more comprehensive use cases, and deeper evaluation and assessment.

Overall, we will invest a total of 4 million Euros in two open calls, which will be aimed at recruiting tech startups and SMEs to co-develop CORTEX2, engaging new use cases from different domains to demonstrate CORTEX2 replication through specific integration paths, and assessing and validating the social impact associated with XR technology adoption in internal and external use cases.

The first call will be published in October 2023 and will collect two types of applications: Co-development and Use-case. The second will be published in April 2024, targeting only Co-development projects.

The CORTEX2 consortium is formed by 10 organizations in 7 countries, which will work together for 36 months. The German Research Center for Artificial Intelligence (DFKI) leads the consortium.

More information on the Project Website: https://cortex2.eu

Partners

  1. DFKI – Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Germany
  2. LINAGORA – GSO, France
  3. ALE – Alcatel-Lucent Entreprise International, France
  4. ICOM – Intracom SA Telecom Solutions, Greece
  5. AUS – AUSTRALO Alpha Lab MTÜ, Estonia
  6. F6S – F6S Network Limited, Ireland
  7. KUL – Katholieke Universiteit Leuven, Belgium
  8. CEA – Commissariat à l’énergie atomique et aux énergies alternatives, France
  9. ACT – Actimage GmbH, Germany
  10. UJI – Universitat Jaume I De Castellon

Contact

Dr.-Ing. Alain Pagani

FLUENTLY

Fluently – the essence of human-robot interaction

Fluently leverages the latest advancements in AI-driven decision-making to achieve true social collaboration between humans and machines while matching extremely dynamic manufacturing contexts. The Fluently Smart Interface unit features:

  1. interpretation of speech content, speech tone and gestures, automatically translated into robot instructions, making industrial robots accessible to any skill profile;
  2. assessment of the operator’s state through a dedicated sensor infrastructure that complements persistent context awareness, enriching an AI-based behavioural framework in charge of triggering the generation of specific robot strategies;
  3. modelling of products and production changes in a way that they can be recognized, interpreted and matched by robots in cooperation with humans.

Robots equipped with Fluently will not only constantly accommodate humans’ physical and cognitive loads, but will also learn and build experience with their human teammates to establish a manufacturing practice built on quality and wellbeing.

FLUENTLY targets three large-scale industrial value chains playing an instrumental role in the present and future manufacturing industry in Europe: 1) lithium cell battery dismantling and recycling (fully manual); 2) inspection and repair of aerospace engines (partially automated); 3) laser-based multi-techs for complex metal component manufacturing, from joining and cutting to additive manufacturing and surface functionalization (fully automated in the equipment but strongly dependent upon human process assessment).

Partners

  • REPLY DEUTSCHLAND SE (Reply), Germany,
  • STMICROELECTRONICS SRL (STM), Italy,
  • BIT & BRAIN TECHNOLOGIES SL (BBR), Spain,
  • MORPHICA SOCIETA A RESPONSABILITA LIMITATA (MOR), Italy,
  • IRIS SRL (IRIS), Italy,
  • SYSTHMATA YPOLOGISTIKIS ORASHS IRIDA LABS AE (IRIDA), Greece,
  • GLEECHI AB (GLE), Sweden,
  • FORENINGEN ODENSE ROBOTICS (ODE), Denmark,
  • TRANSITION TECHNOLOGIES PSC SPOLKA AKCYJNA (TT), Poland,
  • MALTA ELECTROMOBILITY MANUFACTURING LIMITED (MEM), Malta,
  • POLITECNICO DI TORINO (POLITO), Italy,
  • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany,
  • TECHNISCHE UNIVERSITEIT EINDHOVEN (TUe), Netherlands,
  • SYDDANSK UNIVERSITET (SDU), Denmark,
  • COMPETENCE INDUSTRY MANUFACTURING 40 SCARL (CIM), Italy,
  • PRIMA ADDITIVE SRL (PA), Italy,
  • SCUOLA UNIVERSITARIA PROFESSIONALE DELLA SVIZZERA ITALIANA (SUPSI), Switzerland,
  • MCH-TRONICS SAGL (MCH),Switzerland,
  • FANUC SWITZERLAND GMBH (FANUC Europe), Switzerland,
  • UNIVERSITY OF BATH (UBAH), United Kingdom
  • WASEDA UNIVERSITY (WUT), Japan

Contact

Dipl.-Inf. Bernd Kiefer

Dr.-Ing. Alain Pagani

RACKET

Rare Class Learning and Unknown Events Detection for Flexible Production

The RACKET project addresses the problem of detecting rare and unknown faults by combining model-based and machine learning methods. The approach is based on the assumption that a physical or procedural model of a manufacturing plant is available, which is not fully specified and has uncertainties in structure, parameters and variables. Gaps and errors in this model are detected by machine learning and corrected, resulting in a more realistic process model (nominal model). This model can be used to simulate system behavior and estimate the future characteristics of a product.

Actual product defects can thus be attributed to anomalies in the output signal and to inconsistencies in the process variables, without the need for a known failure event or an accurate failure model. Errors span a wide range and can occur at both the product and the process level: geometric errors such as scratches, out-of-tolerance dimensional variables, or dynamic errors such as deviations between the estimated and actual product position on a conveyor belt, errors in process steps, or incorrect path assignment in the production flow.
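The residual-based detection idea can be sketched in a few lines: the nominal model predicts the process signal, a detection threshold is calibrated on residuals from fault-free operation, and larger deviations are flagged as potential rare or unknown faults. All function names and the mean-plus-k-sigma threshold below are illustrative assumptions, not RACKET's actual implementation.

```python
import statistics

def fit_threshold(nominal_residuals, k=3.0):
    """Calibrate a detection threshold from residuals observed during
    fault-free (nominal) operation: mean plus k standard deviations."""
    mu = statistics.mean(nominal_residuals)
    sigma = statistics.stdev(nominal_residuals)
    return mu + k * sigma

def detect_anomalies(measured, predicted, threshold):
    """Flag samples whose residual against the nominal model's
    prediction exceeds the calibrated threshold."""
    return [abs(m - p) > threshold for m, p in zip(measured, predicted)]
```

Because the threshold is learned from nominal behaviour rather than from examples of failures, the detector can flag events that were never seen during training, which is the premise of rare and unknown fault detection.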

Contact

Carsten Harms, M.Sc.

Dr.-Ing. Alain Pagani