dAIEDGE

A network of excellence for distributed, trustworthy, efficient and scalable AI at the Edge

The dAIEDGE Network of Excellence (NoE) seeks to strengthen and support the development of the dynamic European cutting-edge AI ecosystem under the umbrella of the European AI Lighthouse and to sustain the development of advanced AI.

dAIEDGE will foster a space for the exchange of ideas, concepts, and trends on next-generation cutting-edge AI, creating links between ecosystem actors to help the EC and the peripheral AI constituency identify strategies for future developments in Europe.

Partners

Aegis Rider, Bonseyes Community Association, Blekinge Institute of Technology, Commissariat à l’Energie Atomique et aux énergies alternatives, Centre d’excellence en technologies de l’information et de la communication, Centre Suisse d’Electronique et de Microtechnique, Deutsches Forschungszentrum für Künstliche Intelligenz, Deutsches Zentrum für Luft- und Raumfahrt e.V., ETH Zürich, Fraunhofer Gesellschaft, FundingBox Accelerator SP, Foundation for Research and Technology – Hellas, Haute école spécialisée de Suisse, HIPERT SRL, IMEC, Institut national de recherche en informatique et automatique, INSAIT – Institute for Computer Science, Artificial Intelligence and Technology, IoT Digital Innovation Hub, Katholieke Universiteit Leuven, NVISO SA, SAFRAN Electronics and Defense, SINTEF AS, Sorbonne Université, CNRS, ST Microelectronics, Synopsys International Limited, Thales, Ubotica Technologies Limited, University of Castilla-La Mancha, The University of Edinburgh, University of Glasgow, University of Modena and Reggio Emilia, University of Salamanca, Varjo Technologies, VERSES Global B.V., Vicomtech.

Contact

Dr.-Ing. Alain Pagani

HERON

Self-referenced Mobile Collaborative Robotics applied to collaborative and flexible production systems

The project will deliver a complete, novel vision-guided mobile robotic solution to automate screwdriving and other final assembly operations that are currently performed manually. The solution will include a robotic cell with integrated real-time process control to guarantee process quality, as well as a digital twin platform for accurate process simulation and trajectory optimization to minimize setup time and increase flexibility. A demonstrator will be built for system validation, performing quality control procedures and the screwing of automotive parts onto the chassis of a vehicle.

Partners

Aldakin S.L (Spain)

Simumatik A.B (Sweden)

Visometry GmbH (Germany)

Contact

Dr.-Ing. Alain Pagani

Eyes Of Things

The aim of the European Eyes of Things project is to build a generic vision device for the Internet of Things.

The device will include a miniature camera and a dedicated Vision Processing Unit in order to perform all necessary processing tasks directly on the device, without the need to transfer entire images to a remote server. The envisioned applications will enable smart systems to perceive their environment longer and more interactively.
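
To make the on-device idea concrete, the following is a minimal sketch of such an always-on loop: frames are analysed locally and only compact event metadata is transmitted, never the images themselves. All names (capture_frame, run_vpu_inference, send_event) are illustrative stand-ins, not part of the actual Eyes of Things platform.

```python
# Minimal sketch only: dummy stand-ins for the camera, the on-device detector
# and the uplink; none of these names come from the Eyes of Things platform.
import time
from dataclasses import dataclass

@dataclass
class DetectionEvent:
    timestamp: float
    label: str
    confidence: float

def capture_frame():
    """Stand-in for grabbing a frame from the miniature camera."""
    return b"raw-frame-bytes"              # dummy payload

def run_vpu_inference(frame):
    """Stand-in for a detector executed on the Vision Processing Unit."""
    return [("person", 0.92)]              # dummy detection result

def send_event(event: DetectionEvent):
    """Stand-in for a low-bandwidth uplink: only a few bytes of metadata
    leave the device; the image itself never does."""
    print(f"event sent: {event}")

def main_loop(iterations: int = 3, threshold: float = 0.8):
    for _ in range(iterations):
        frame = capture_frame()
        for label, confidence in run_vpu_inference(frame):
            if confidence >= threshold:
                send_event(DetectionEvent(time.time(), label, confidence))
        time.sleep(0.1)                    # duty cycling keeps power consumption low

if __name__ == "__main__":
    main_loop()
```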

The technology will be demonstrated with applications such as Augmented Reality, Wearable Computing and Ambient Assisted Living.

Vision, our richest sensor, allows us to infer vast amounts of data from reality. Arguably, to be “smart everywhere” we will need to have “eyes everywhere”. Coupled with advances in artificial vision, the possibilities are endless in terms of wearable applications, augmented reality, surveillance, ambient-assisted living, etc.

Currently, computer vision is rapidly moving beyond academic research and factory automation. At the same time, mass-market mobile devices owe much of their success to their impressive imaging capabilities, so the question arises whether such devices could be used as “eyes everywhere”. Vision is the most demanding sensor in terms of power consumption and required processing power and, in this respect, existing mass-consumer mobile devices have three problems:

  1. power consumption precludes their ‘always-on’ capability,
  2. they include sensors that would go unused in most vision-based applications, and
  3. since they have been designed for a specific purpose (i.e. as cell phones, PDAs and “readers”), people will not consistently use them for other purposes.

Our objective in this project is to build an optimized core vision platform that can work independently and also be embedded into all types of artefacts. The envisioned open hardware must be combined with carefully designed APIs that maximize inferred information per milliwatt and adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation, it will also make it possible to create novel applications and services that go beyond what current vision systems can do, which are either personal/mobile or ‘always-on’ but not both at the same time.
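
As a purely illustrative example of what such an “information per milliwatt” API could look like, the sketch below lets an application declare a task, a requested result quality and a power budget, and derives sensor settings from them. The names (VisionTask, Quality, configure_pipeline) and the numbers are assumptions made for this sketch, not the project's actual API.

```python
# Illustrative sketch of a quality/power trade-off API; names and values are
# assumptions, not the Eyes of Things API.
from dataclasses import dataclass
from enum import Enum

class Quality(Enum):
    LOW = 0       # coarse results, lowest power
    MEDIUM = 1
    HIGH = 2      # best results, highest power

@dataclass
class VisionTask:
    name: str               # e.g. "face_detection"
    quality: Quality        # requested result quality
    max_milliwatts: float   # power budget granted to this task

def configure_pipeline(task: VisionTask) -> dict:
    """Pick frame rate and resolution for the task (illustrative heuristic,
    not a measured power model)."""
    settings = {
        Quality.LOW:    {"fps": 1,  "resolution": (320, 240)},
        Quality.MEDIUM: {"fps": 5,  "resolution": (640, 480)},
        Quality.HIGH:   {"fps": 15, "resolution": (1280, 720)},
    }[task.quality]
    # Degrade gracefully when the requested quality would exceed the budget.
    if task.max_milliwatts < 50 and task.quality is not Quality.LOW:
        settings = {"fps": 1, "resolution": (320, 240)}
    return settings

# Example: an ambient-assisted-living application asking for always-on,
# low-power person detection.
print(configure_pipeline(VisionTask("person_detection", Quality.MEDIUM, 30.0)))
```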

Thus, the “Eyes of Things” project aims at developing a ground-breaking platform that combines: a) a need for more intelligence in future embedded systems, b) computer vision moving rapidly beyond academic research and factory automation and c) the phenomenal technological advances in mobile processing power.

Partners

The project Eyes of Things is a joint project of 7 European partners from research and industry and is funded by the Horizon 2020 programme.

Project partners are: Universidad de Castilla-La Mancha (UCLM, Spain), Awaiba Lda (Portugal), Camba.tv Ltd (EVERCAM, Ireland), Movidius Ltd (Ireland), Thales Communications and Security SAS (France), Fluxguide OG (Austria) and nViso SA (Switzerland).

Funding by: EU

  • Grant agreement no.: 643924
  • Funding programme: H2020
  • Begin: 01.01.2015
  • End: 30.06.2018

More information: Website of the project

Contact

Dr.-Ing. Alain Pagani

LARA

LBS & Augmented Reality Assistive System for Utilities Infrastructure Management through Galileo and EGNOS

LARA is a European project aiming at developing a new mobile device to help employees of utility companies in their work in the field. The device to be developed – called the LARA System – consists of a touchscreen tablet and a set of sensors that can geolocate the device using the European GALILEO system and EGNOS capabilities.

The LARA system is produced collaboratively by different players (SMEs, large companies, universities and research institutes), each contributing its own expertise.

The LARA system is a mobile device for utility field workers. In practice, the device will guide field workers of underground utilities, letting them ‘see’ what lies underground, like an “x-ray image” of the buried infrastructure. The system uses Augmented Reality interfaces to render the complex 3D models of the underground utility infrastructure, such as water, gas and electricity networks, in a way that is easily understandable and useful during field work.

The 3D information is acquired from existing 3D GIS geodatabases. To this end, the hand-held device integrates several technologies, such as positioning sensors (GNSS), Augmented Reality (AR), GIS and geodatabases.
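
The sketch below illustrates this data flow under simplifying assumptions: a GNSS fix is used to fetch nearby pipe geometry from a geodatabase, and the returned 3D points are projected into the camera image to form the AR overlay. The function names, the hard-coded calibration and the dummy pipe coordinates are illustrative only and do not reflect the actual LARA implementation.

```python
# Illustrative sketch: dummy geodatabase query plus a pinhole-camera projection
# of the returned pipe points into image coordinates (the AR overlay).
import numpy as np

def query_geodatabase(lat: float, lon: float, radius_m: float):
    """Stand-in for a 3D GIS query. Returns pipe polylines as arrays of
    east/north/up coordinates in metres, relative to the GNSS fix."""
    return [np.array([[-2.0,  8.0, -1.5],     # a pipe buried 1.5 m below ground,
                      [ 2.0, 12.0, -1.5]])]   # running a few metres ahead

def project_points(points_enu, K, R, t):
    """Project 3D points to pixel coordinates with a pinhole camera model;
    R and t describe the camera pose obtained from the device sensors."""
    cam = R @ points_enu.T + t.reshape(3, 1)   # ENU -> camera frame
    px = K @ cam
    return (px[:2] / px[2]).T                  # perspective division

K = np.array([[800.0,   0.0, 320.0],           # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[1.0, 0.0,  0.0],                # camera looking north:
              [0.0, 0.0, -1.0],                # east -> image right, up -> -y,
              [0.0, 1.0,  0.0]])               # north -> optical axis
t = np.zeros(3)

for pipe in query_geodatabase(40.301, 21.786, radius_m=25.0):
    overlay = project_points(pipe, K, R, t)
    print(overlay)   # pixel coordinates to draw on top of the live camera image
```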

Typical scenario

The end user is a technician working in a public or private company that operates an underground network in the utilities sector (electricity, gas, water or sewage). His role in the company is to plan interventions on the network, such as repairs and inspections, and to execute the planned operations on site with his team.

The typical scenario of the usage of the LARA system is divided into several steps:

  1. Preparation. The end user prepares and plans an intervention while sitting in his office. He can view the pipe installation using a web application that shows the approximate position of the pipes over a map of the area. He can download this information onto his LARA device.
  2. On-field coarse localization. On the day of the intervention on site, the user drives to the general area of intervention (typical size of the area: a city block). The user is then guided by the LARA system to find the position of the operation as follows: the user starts the application and selects the 2D view; a map is then shown on the interface with the user's current location marked on it.
  3. Surveying (precise localization). When the user is close enough to the exact position of intervention, he can switch to the 3D/AR mode, where the application shows the real images (from the camera) and displays the pipes as an overlay (Augmented Reality). The pipe information is structured in layers, so the user can choose between different information levels (water pipes, electrical wires…).
  4. Field work (excavation, repair…). The user can precisely define the corners of the excavation to be done on the street. He marks them with paint. The workers start to excavate. They have information about which other utilities they will find on the way.
  5. Onsite updates. If the user discovers that some information about the location of the pipes is wrong, he can suggest an update by providing the actual position.
  6. Backoffice updates. Back in his office, the user connects the LARA system to a server, where the updates are pushed to a queue for verification before they are integrated into the network's databases, as sketched below.
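
The following is a minimal sketch of the update flow in steps 5 and 6, using illustrative names and fields rather than the actual LARA data model or server protocol: corrections recorded in the field end up in a review queue instead of being written directly to the utility network's database.

```python
# Illustrative sketch of the field-correction workflow; names and fields are
# assumptions, not the LARA data model.
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class PipeCorrection:
    pipe_id: str
    surveyed_position: tuple        # (lat, lon, depth_m) measured on site
    reported_by: str
    note: str = ""
    status: str = "pending_review"  # a back-office operator must approve it

@dataclass
class UpdateQueue:
    pending: List[PipeCorrection] = field(default_factory=list)

    def record(self, correction: PipeCorrection):
        """Called on the device when the user suggests an update (step 5)."""
        self.pending.append(correction)

    def push_to_server(self) -> str:
        """Called back at the office (step 6); returns the JSON payload that
        would be sent to the verification queue."""
        return json.dumps([asdict(c) for c in self.pending], indent=2)

queue = UpdateQueue()
queue.record(PipeCorrection("water-0042", (40.301, 21.786, 1.2), "field_tech_7",
                            "pipe found 0.8 m east of mapped position"))
print(queue.push_to_server())
```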

Partners

The project LARA is a joint project of 9 partners from research and industry and is funded by the Horizon 2020 programme. Project partners are: GeoImaging Ltd (Cyprus), Aristotle University of Thessaloniki (Greece), Ingenieria y Soluciones Informaticas del Sur S.L. (Spain), SignalGenerix Ltd (Cyprus), Municipality of Kozani (DEYAK, Greece), Birmingham City Council (UK), Hewlett Packard Espanola S.L. (Spain), University Malaysia Sarawak (Malaysia)

Funding by: EU

  • Grant agreement no.: 641460
  • Funding programme: H2020
  • Begin: 01.02.2015
  • End: 30.06.2017

Contact

Dr.-Ing. Alain Pagani