CAPTURE

CAPTURE – 3D-scene reconstruction with high resolution and high dynamic range spherical images

The reconstruction of 3D scenes from camera images is an essential technology for many applications, such as 3D digital cities, digital cultural heritage, games, tele-cooperation, tactical training or forensics. The objective of the CAPTURE project is to develop a novel approach to 3D scene acquisition, together with the corresponding theory and practical methods.

Instead of processing a large number of standard low-resolution perspective video images, we use only a few full spherical high-resolution and high-dynamic-range (HDR) images as input data. Currently available spherical high-resolution cameras can record fine texture details and capture the complete scene from a single point in space. Additionally, such cameras provide HDR images yielding consistent color and photometric information. We propose to exploit this new technology, focusing on dense, high-quality 3D reconstruction of both indoor and outdoor environments.
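
As an illustration of the underlying camera model, here is a minimal sketch that maps a pixel of a full spherical (equirectangular) image to its viewing ray; the axis conventions are an assumption, since they vary between implementations.

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing ray.

    The full spherical image covers 360 degrees of longitude (u axis)
    and 180 degrees of latitude (v axis).
    """
    lon = (u / width) * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi   # latitude in [-pi/2, pi/2]
    return np.array([
        np.cos(lat) * np.sin(lon),  # x: right
        np.sin(lat),                # y: up
        np.cos(lat) * np.cos(lon),  # z: forward
    ])

# The image center looks along the forward axis.
print(pixel_to_ray(2048, 1024, 4096, 2048))  # ~ [0, 0, 1]
```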

The fundamental challenge of the project is to develop novel algorithms that take the properties of these images into account, and thus to push forward the current state of the art in 3D scene acquisition and viewing. In particular, we develop novel stable and light-invariant image feature detectors, as well as robust assignment methods for image matching and novel 3D reconstruction and viewing algorithms that exploit the properties of the images.

The multiple spherical view geometry provides a large amount of redundant information about the underlying environment. This, combined with the consistency of the color and photometric information from HDR images, allows us to develop new methods for robust high-precision image matching and 3D structure estimation, resulting in a high-fidelity textured model of the real scene.
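
For instance, once a feature has been matched across two spherical views, its 3D position can be estimated from the two bearing rays. Here is a minimal sketch using the classical midpoint method; this illustrates the general principle, not necessarily the estimation method developed in CAPTURE.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Estimate a 3D point from two bearing rays, given the camera
    centers c1, c2 and unit ray directions d1, d2."""
    # Solve for the ray parameters t1, t2 that minimize the distance
    # |(c1 + t1*d1) - (c2 + t2*d2)| between the two rays.
    b = c2 - c1
    d = np.dot(d1, d2)
    denom = 1.0 - d * d                 # zero iff the rays are parallel
    t1 = (np.dot(b, d1) - d * np.dot(b, d2)) / denom
    t2 = (d * np.dot(b, d1) - np.dot(b, d2)) / denom
    # Return the midpoint between the two closest points on the rays.
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two spherical cameras 1 m apart observe a point at (0, 0, 5).
c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
p = np.array([0.0, 0.0, 5.0])
d1 = p / np.linalg.norm(p)
d2 = (p - c2) / np.linalg.norm(p - c2)
print(triangulate_midpoint(c1, d1, c2, d2))  # ~ [0, 0, 5]
```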

The development of the CAPTURE project makes extensive use of our Computer Vision Development Framework ARGOS. On the software development side, it is necessary to work with large images and to merge information from multiple sources simultaneously. We therefore also pay special attention to the parallel processing of large amounts of data, as well as to clustering capabilities.
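
ARGOS itself is not public, so as a generic illustration of this kind of tile-parallel processing, here is a minimal sketch using Python’s standard library; the detector is a placeholder and all sizes are invented.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def detect_features(tile):
    """Placeholder for a feature detector run on one image tile;
    a real detector would return keypoints and descriptors."""
    return int((tile > tile.mean()).sum())

def split_into_tiles(image, tile_size=512):
    """Cut a large image into square tiles that can be processed
    independently, and therefore in parallel."""
    h, w = image.shape[:2]
    return [image[y:y + tile_size, x:x + tile_size]
            for y in range(0, h, tile_size)
            for x in range(0, w, tile_size)]

if __name__ == "__main__":
    image = np.random.rand(2048, 4096)   # stand-in for a large spherical image
    with ProcessPoolExecutor() as pool:  # one worker per CPU core by default
        results = list(pool.map(detect_features, split_into_tiles(image)))
    print(len(results), "tiles processed")
```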

The application of this project is the accurate reconstruction of large scenes, including industrial facilities, touristic and cultural heritage sites, as well as urban environments.

Contact

Dr.-Ing. Alain Pagani

Eyes Of Things

The aim of the European project Eyes of Things is to build a generic vision device for the Internet of Things.

The device will include a miniature camera and a dedicated Vision Processing Unit in order to perform all necessary processing tasks directly on the device, without the need to transfer entire images to a remote server. The envisioned applications will enable smart systems to perceive their environment longer and more interactively.

The technology will be demonstrated with applications such as Augmented Reality, Wearable Computing and Ambient Assisted Living.

Vision, our richest sensor, allows us to infer vast amounts of data from reality. Arguably, to be “smart everywhere” we will need to have “eyes everywhere”. Coupled with advances in artificial vision, the possibilities are endless in terms of wearable applications, augmented reality, surveillance, ambient-assisted living, etc.

Currently, computer vision is rapidly moving beyond academic research and factory automation. At the same time, mass-market mobile devices owe much of their success to their impressive imaging capabilities, so the question arises whether such devices could be used as “eyes everywhere”. Vision is the most demanding sensor in terms of power consumption and required processing power, and in this respect existing mass-consumer mobile devices have three problems:

  1. power consumption precludes their ‘always-on’ capability,
  2. they would have unused sensors for most vision-based applications and
  3. since they have been designed for a specific purpose (i.e. as cell phones, PDAs and “readers”), people will not consistently use them for other purposes.

Our objective in this project is to build an optimized core vision platform that can work independently and can also be embedded into all types of artefacts. The envisioned open hardware must be combined with carefully designed APIs that maximize the inferred information per milliwatt and adapt the quality of the inferred results to each particular application. This will not only mean more hours of continuous operation; it will also enable novel applications and services beyond what current vision systems can do, as these are either personal/mobile or ‘always-on’, but not both at the same time.
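
The project description does not fix this API, so the following is a purely hypothetical sketch of what an interface that trades result quality against a power budget could look like; the task names, thresholds and parameter values are all invented.

```python
from dataclasses import dataclass

@dataclass
class VisionRequest:
    """Hypothetical request descriptor: the application states what it
    needs and the power budget it can afford."""
    task: str            # e.g. "face_detection", "marker_tracking"
    max_power_mw: float  # power budget granted by the application

def plan_inference(req: VisionRequest) -> dict:
    """Pick processing parameters that fit the stated power budget;
    a real platform would calibrate these numbers per device."""
    if req.max_power_mw < 50:
        return {"resolution": (320, 240), "fps": 1}    # duty-cycled, always-on
    if req.max_power_mw < 200:
        return {"resolution": (640, 480), "fps": 5}
    return {"resolution": (1280, 960), "fps": 15}      # short interactive bursts

print(plan_inference(VisionRequest("face_detection", max_power_mw=40.0)))
```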

Thus, the “Eyes of Things” project aims to develop a ground-breaking platform in response to: a) the need for more intelligence in future embedded systems, b) computer vision moving rapidly beyond academic research and factory automation, and c) the phenomenal technological advances in mobile processing power.

Partners

The Eyes of Things project is a joint project of 7 European partners from research and industry and is funded by the Horizon 2020 programme.

Project partners are: Universidad de Castilla-La Mancha (UCLM, Spain), Awaiba Lda (Portugal), Camba.tv Ltd (EVERCAM, Ireland), Movidius Ltd (Ireland), Thales Communications and Security SAS (France), Fluxguide OG (Austria) and nViso SA (Switzerland).

Funded by: EU

  • Grant agreement no.: 643924
  • Funding programme: H2020
  • Begin: 01.01.2015
  • End: 30.06.2018

More information: Website of the project

Contact

Dr.-Ing. Alain Pagani

LARA

LBS & Augmented Reality Assistive System for Utilities Infrastructure Management through Galileo and EGNOS

LARA is a European project aiming to develop a new mobile device that helps employees of utility companies in their work in the field. The device to be developed – called the LARA System – consists of a tactile tablet and a set of sensors that geolocalise the device using the European GALILEO system and EGNOS capabilities.

The LARA system is produced in a collaborative effort in which different players – SMEs, large companies, universities and research institutes – contribute their different areas of expertise.

The LARA system is a mobile device for utility field workers. In practice, this device will guide field workers dealing with underground utilities, letting them ‘see’ what lies underground, like an “x-ray image” of the underground infrastructure. The system uses Augmented Reality interfaces to render the complex 3D models of the underground utility infrastructure – water, gas, electricity, etc. – in a way that is easily understandable and useful during field work.

The 3D information is acquired from existing 3D GIS geodatabases. To this end, the hand-held device integrates different technologies, such as positioning and sensors (GNSS), Augmented Reality (AR), GIS and geodatabases.
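
As an illustration of the kind of computation involved, here is a minimal sketch that places a geo-referenced pipe vertex relative to the device’s GNSS fix and projects it into the camera image. It assumes a flat-earth approximation, a level device facing north and an invented pinhole calibration; the real system must additionally handle the device orientation and the GNSS accuracy.

```python
import numpy as np

def enu_offset(lat0, lon0, lat, lon, depth):
    """Approximate East-North-Up offset (meters) of a pipe vertex
    relative to the device position, using a local flat-earth model."""
    R = 6371000.0  # mean Earth radius in meters
    east = np.radians(lon - lon0) * R * np.cos(np.radians(lat0))
    north = np.radians(lat - lat0) * R
    return np.array([east, north, -depth])  # the pipe lies below ground

def project(point_enu, f=800.0, cx=640.0, cy=360.0):
    """Project an ENU point with a pinhole model, assuming the device
    is held level and faces north (camera: x right, y down, z forward)."""
    x, y, z = point_enu[0], -point_enu[2], point_enu[1]
    if z <= 0:
        return None  # behind the camera
    return (f * x / z + cx, f * y / z + cy)

# A pipe vertex ~11 m north of the user, 1.5 m underground,
# appears below the image center, as expected.
print(project(enu_offset(49.4401, 7.7491, 49.4402, 7.7491, 1.5)))
```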

Typical scenario

The end user is a technician working in a public or private company operating an underground network in the utilities sector (electricity, gas, water or sewage). His role in the company is to plan interventions on the network, such as repairs and inspections, and to execute the planned operations on site with his team.

The typical scenario of the usage of the LARA system is divided into several steps:

  1. Preparation. The end user prepares and plans an intervention while sitting in his office. He can view the pipe installation using a web application that shows the approximate position of the pipes over a map of the area. He can download this information onto his LARA Device.
  2. On-field coarse localization. On the day of the intervention, the user drives to the general area of intervention (typical size of the area: a city block). The user is then guided by the LARA system to the position of the operation as follows: the user starts the application and selects the 2D view; a map is then shown on the interface with the user’s location marked on it.
  3. Surveying (precise localization). When the user is close enough to the exact position of the intervention, he can switch to the 3D/AR mode, in which the application shows the real camera images with the pipes displayed as an overlay (Augmented Reality). The pipe information is structured in layers, so the user can choose between different information levels (water pipes, electrical wires…).
  4. Field work (excavation, repair…). The user can precisely define the corners of the excavation to be made in the street and marks them with paint. The workers start to excavate, knowing which other utilities they will encounter on the way.
  5. Onsite updates. If the user discovers that some information about the position of the pipes is wrong, he can suggest an update by providing the actual position (see the sketch after this list).
  6. Backoffice updates. Back in his office, the user connects the LARA system to a server, where the updates are pushed to a queue for verification before they are integrated into the network’s databases.
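
Steps 5 and 6 amount to a staged review workflow: corrections are suggested on site but only applied after verification. Here is a minimal sketch of such an update record and queue; the record layout is invented for illustration.

```python
import json
import queue
import time

review_queue = queue.Queue()  # stand-in for the back-office verification queue

def suggest_update(asset_id, lat, lon, depth_m, note):
    """Package an onsite correction; nothing is written to the network
    database until the back office has verified the suggestion."""
    record = {
        "asset_id": asset_id,
        "suggested_position": {"lat": lat, "lon": lon, "depth_m": depth_m},
        "note": note,
        "timestamp": time.time(),
        "status": "pending_verification",
    }
    review_queue.put(json.dumps(record))

suggest_update("water-pipe-0042", 49.4402, 7.7491, 1.6,
               "pipe found 0.4 m east of the recorded position")
print(review_queue.get())
```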

Partners

The LARA project is a joint project of 9 partners from research and industry and is funded by the Horizon 2020 programme. Project partners are: GeoImaging Ltd (Cyprus), Aristotle University of Thessaloniki (Greece), Ingenieria Y Soluciones Informaticas del Sur S.L. (Spain), SignalGenerix Ltd (Cyprus), Municipality of Kozani (DEYAK, Greece), Birmingham City Council (UK), Hewlett Packard Espanola S.L. (Spain), University Malaysia Sarawak (Malaysia).

Funded by: EU

  • Grant agreement no.: 641460
  • Funding programme: H2020
  • Begin: 01.02.2015
  • End: 30.06.2017

Contact

Dr.-Ing. Alain Pagani