AI-Observer

Artificial Intelligence for Earth Observation Twinning

Artificial Intelligence (AI) already has a major impact on many sectors, and its influence is predicted to expand rapidly in the coming years. One area with considerable untapped potential for AI is Earth Observation (EO), where it can be used to manage large datasets, find new insights in data, and generate new products and services. AI is one of the core areas still missing from the EO capabilities of the ERATOSTHENES Centre of Excellence (ECoE).

The AI-OBSERVER project aims to significantly strengthen and stimulate the scientific excellence and innovation capacity, as well as the research management and administrative skills, of the ECoE through several capacity-building activities on AI for EO applications in the Disaster Risk Reduction thematic area. These activities will upgrade and modernise the ECoE's existing Resilient Society department as well as its research management and administration departments, and will assist the ECoE in reaching its long-term objective of raised excellence in AI for EO on environmental hazards.

A close and strategic partnership between the ECoE from Cyprus (a Widening country) and two internationally leading research institutions, the German Research Centre for Artificial Intelligence (DFKI) from Germany and the University of Rome Tor Vergata (UNITOV) from Italy, will lead to an exploratory research project on the application of AI on EO for multi-hazard monitoring and assessment in Cyprus. Moreover, CELLOCK Ltd. (CLK), the project's industrial partner, will lead the commercialisation, exploitation, and product-development aspects of AI-OBSERVER and its exploratory project outputs. All outputs will be disseminated and communicated to stakeholders, the research community, and the public, helping the ECoE accomplish its exploitation goals by creating strong links with stakeholders from academia and industry in Cyprus and beyond, which the ECoE will capitalise on long after the end of the project.

Partners

  • ERATOSTHENES Centre of Excellence (ECoE), Cyprus (Coordinator)
  • German Research Centre for Artificial Intelligence (DFKI), Germany
  • University of Rome Tor Vergata (UNITOV), Italy
  • CELLOCK Ltd., Cyprus

Contact

Dr. Dipl.-Inf. Gerd Reis

VIDETE

Generation of prior knowledge with the help of learning systems for 4D analysis of complex scenes

Motivation

Artificial intelligence currently influences many areas, including machine vision. Applications in the fields of autonomous systems, medicine, and industry face fundamental challenges: 1) generating prior knowledge to solve severely under-determined problems, 2) verifying and explaining the answers calculated by the AI, and 3) deploying AI in scenarios with limited computing power.

Goals and Procedure

The goal of VIDETE is to generate prior knowledge using machine learning, thus making previously unsolvable tasks, such as the reconstruction of dynamic objects with just one camera, practically manageable. With suitable prior knowledge it becomes easier to analyze and interpret general scenes algorithmically, for example in the area of autonomous systems. Furthermore, methods will be developed to justify calculated results before they are used further; in the field of medicine this would be comparable to the opinion of a colleague, in contrast to the unexplained answers of current AI methods.

A key technique is the modularization of algorithms, which will especially increase the availability of AI. Modular components can be realized efficiently in hardware, so computations (e.g. the recognition of a gesture) can be performed close to the generating sensor. This, in turn, allows semantically enriched information to be communicated with low overhead, which means that AI can also be used on mobile devices with few resources available.

Innovations and Perspectives

Artificial intelligence finds its way into almost all areas of daily life and work. The results expected from the VIDETE project will be independent of the defined research scenarios and can contribute to progress in many application areas (private life, industry, medicine, autonomous systems, etc.).

Contact

Dr. Dipl.-Inf. Gerd Reis

VisIMon

Networked, Intelligent and Interactive System for Continuous, Perioperative Monitoring and Control of an Irrigation Device, as well as for Functional Monitoring of the Lower Urinary Tract

Continuous bladder irrigation is the standard after operations on the bladder, prostate, or kidneys to prevent complications caused by blood clots. The irrigation should be monitored constantly, but this is not feasible in everyday clinical practice. The motivation of VisIMon is therefore to enable automated monitoring, which improves patient care while relieving the strain on staff.

The aim of the VisIMon project is the development of a small module worn on the body that monitors the irrigation process with the aid of various sensors. The system should integrate seamlessly with the established standard process. Through the cooperation of interdisciplinary partners from industry and research, the necessary sensors are to be developed and combined into an effective monitoring system. Modern communication technology enables completely new concepts for how medical devices interact with a hospital. With more than 200,000 applications per year in Germany, the development is extremely attractive not only from a medical but also from an economic point of view.

Partners

  • Albert-Ludwigs-Universität Freiburg
  • Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
  • Lohmann & Birkner Health Care Consulting GmbH
  • Digital Biomedical Imaging Systems AG

Contact

Dr. Dipl.-Inf. Gerd Reis

LiSA

Light and solar management using active and model-predictively controlled components

The research project LiSA is a broad-based joint project in the area of facade, lighting, and control technology. The aim is to enable the most energy-efficient operation of office and administrative buildings possible while taking user satisfaction into account. Exemplary system integration promotes an understanding of the interaction of individual components and of the necessity of system solutions.

At the component level, technologies are being developed that enable the efficient use of daylight, provide energy-saving artificial lighting, and reduce the cooling load in summer by means of shading. At the sensor level, a cost-effective sensor is being developed that measures lighting conditions as well as the heat input into the room from solar radiation. A model-predictive control approach optimizes the operation of the components, which can be managed and controlled via wireless communication.
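The receding-horizon principle behind such a model-predictive controller can be sketched in a few lines. The room model, daylight forecast, discretized shade levels, and cost weights below are illustrative assumptions, not the project's actual controller: at each step, every candidate shade sequence over a short horizon is scored for comfort deviation, electric-light energy, and actuator movement, and only the first move of the best sequence is applied.

```python
# Minimal receding-horizon (model-predictive) control sketch for a
# daylight/shading system. Model, forecast, and weights are hypothetical.
from itertools import product

SHADE_LEVELS = [0.0, 0.5, 1.0]      # fraction of daylight blocked
TARGET_LUX = 500.0                  # desired illuminance at the desk
W_COMFORT, W_ENERGY, W_MOVE = 1.0, 0.2, 50.0

def artificial_light(daylight_lux, shade):
    """Electric light (lux) needed to top up to the target illuminance."""
    return max(0.0, TARGET_LUX - daylight_lux * (1.0 - shade))

def sequence_cost(forecast, shades, prev_shade):
    """Score one candidate shade sequence against the daylight forecast."""
    cost = 0.0
    for daylight, shade in zip(forecast, shades):
        art = artificial_light(daylight, shade)
        indoor = daylight * (1.0 - shade) + art
        cost += W_COMFORT * (indoor - TARGET_LUX) ** 2   # comfort deviation
        cost += W_ENERGY * art                            # electric energy
        cost += W_MOVE * abs(shade - prev_shade)          # actuator wear
        prev_shade = shade
    return cost

def mpc_step(forecast, prev_shade, horizon=3):
    """Optimize over the horizon, but apply only the first move."""
    best = min(product(SHADE_LEVELS, repeat=horizon),
               key=lambda seq: sequence_cost(forecast[:horizon], seq, prev_shade))
    return best[0]

daylight_forecast = [800.0, 900.0, 950.0]  # predicted incoming daylight (lux)
next_shade = mpc_step(daylight_forecast, prev_shade=0.0)
```

Re-optimizing at every step lets the controller absorb forecast errors, while the movement penalty keeps the blinds from oscillating with every passing cloud.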

With the implementation of the project in a Living Lab Smart Office Space, which is subject to detailed monitoring and in which people use the space according to its actual purpose, it is ensured that the developments are continuously validated empirically and that the results are perceived by users as added value. The people working in the Living Lab can interact with the technology and are thus an essential part of the investigations.

Partners

  • Technische Universität Kaiserslautern
  • DFKI GmbH
  • ebök Planung und Entwicklung GmbH
  • Dresden Elektronik Ingenieurtechnik GmbH
  • Agentilo GmbH
  • Herbert Waldmann GmbH & Co. KG

Contact

Dr. Dipl.-Inf. Gerd Reis

GreifbAR

Tangible Reality – dexterous interaction of users' hands and fingers with real tools in mixed-reality worlds

On 01.10.2021, the research project GreifbAR started under the leadership of DFKI (Augmented Reality research area). The goal of the GreifbAR project is to make mixed-reality (MR) worlds, including virtual reality (VR) and augmented reality (AR), tangible and graspable by allowing users to interact with real and virtual objects with their bare hands. Hand accuracy and dexterity are paramount for performing precise tasks in many fields, but the capture of hand-object interaction in current MR systems is woefully inadequate. Current systems rely on hand-held controllers or capture devices that are limited to hand gestures without contact with real objects. GreifbAR overcomes this limitation by introducing a sensing system that detects the full hand grip, including the hand surface and the object pose, when users interact with real objects or tools. This sensing system will be integrated into a mixed-reality training simulator that will be demonstrated in two relevant use cases: industrial assembly and surgical skills training. The usability and applicability, as well as the added value for training situations, will be thoroughly analysed through user studies.

Partners

  • Charité – Universitätsmedizin Berlin (University Medicine Berlin)
  • NMY (mixed-reality applications for industrial and communication customers)
  • University of Passau (Chair of Psychology with a focus on human-machine interaction)

Contact

Dr. Dipl.-Inf. Gerd Reis

Dr.-Ing. Nadia Robertini

Marmorbild

© S. Siegesmund

Marble has long been used as a preferred material for representative buildings and sculptures. Yet, due to its chemical composition and porosity, marble is prone to natural deterioration in outdoor environments, at a rate that has accelerated since the beginning of industrialization, mainly due to increasing pollution. A basic requirement for successful restoration and conservation is a regularly repeated assessment of the object's current condition and knowledge about prior restoration actions. Ideally, the assessment is non-destructive. This requirement is fulfilled both by the optical digitization of an object's shape and appearance and by the ultrasound examination used to acquire properties related to material quality.

The goal of the joint research project Marmorbild of the University of Kaiserslautern, the Fraunhofer Institute for Biomedical Engineering (IBMT), and the Georg-August-University Göttingen is the validation of modern ultrasound technologies and digital reconstruction methods for the non-destructive testing of facades, constructions, and sculptures built from marble. The proof of concept has been provided by prior research.

The planned portable assessment system holds a high potential for innovation. In the future, more objects can be examined cost-effectively in short time periods. Damage can be identified at an early stage allowing for a target-oriented investment of efforts and financial resources.

Dresdner Knabe

Funding

Funding by: BMBF

  • Funding programme: VIP+
  • Grant agreement no.: 03VP00293
  • Begin: 01.10.2016
  • End: 30.09.2019

Contact

Dr. Gerd Reis

Anna C-trus

ANNA – Artificial Neural Network Analyzer

A framework for the automatic inspection of TRUS images

The ANNA project (Artificial Neural Network Analyzer) aims at the design of a framework to analyze ultrasound images by means of signal processing combined with methods from the field of artificial intelligence (neural networks, self-organizing maps, etc.). Although not obvious, ultrasound images do contain information that cannot be recognized by the human visual system yet does provide information about the underlying tissue. Conversely, the human visual system recognizes virtual structures in ultrasound images that are not related at all to the underlying tissue. Especially interesting in this regard is the fact that a careful combination of several texture-descriptor-based filters is suited for analysis by artificial neural networks, so that suspicious regions can be marked reliably.

The specific aim of the framework is to automatically analyze conventional transrectal ultrasound (TRUS) images of the prostate in order to detect suspicious regions that are likely to relate to a primary cancer focus. These regions are marked for a subsequent biopsy procedure. The advantages of such an analysis are a significantly reduced number of biopsies compared to a random or systematic biopsy procedure to detect a primary cancer, and a significantly enhanced success rate in extracting primary cancer tissue with a single biopsy procedure. On the one hand, this results in a faster and more reliable diagnosis with significantly decreased intra-examiner variability; on the other hand, the patient's discomfort due to multiple biopsy sessions is dramatically reduced.
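The pipeline idea of combining texture-descriptor responses with a neural combiner can be illustrated with a toy sketch. The two descriptors (local mean and local standard deviation), the hand-set weights, and the binary output map below are illustrative assumptions, not ANNA's actual filters or trained network:

```python
# Sketch: per-pixel texture descriptors fed into a single neural unit that
# marks "suspicious" (here: highly textured) regions. Descriptors, weights,
# and the thresholding are hypothetical stand-ins for ANNA's components.
import math

def window(img, x, y, r=1):
    """Collect pixel values in a (2r+1) x (2r+1) neighbourhood, clipped at borders."""
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(max(0, y - r), min(h, y + r + 1))
            for i in range(max(0, x - r), min(w, x + r + 1))]

def descriptors(img, x, y):
    """Two simple texture descriptors: local mean and local standard deviation."""
    vals = window(img, x, y)
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return [mean, math.sqrt(var)]

def neuron(features, weights, bias):
    """Single sigmoid unit standing in for the neural-network combiner."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def mark_suspicious(img, weights=(0.0, 0.5), bias=-2.0, threshold=0.5):
    """Binary map: 1 where the combiner scores the local texture above threshold."""
    return [[1 if neuron(descriptors(img, x, y), weights, bias) > threshold else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

With these weights, smooth regions (zero local deviation) stay unmarked while strongly textured regions are flagged; in the real system the weights would be learned from segmented training data rather than set by hand.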

Contact

Dr. Gerd Reis

VIDP

Visual Impairment Digital Platform

At present, existing viewing aids and accessibility tools are rarely mobile, and they are usually heavy and expensive. They consist of electronics, mechanics, optics, and a minimum of software. Their images cannot be adjusted according to individual eye diseases or individual patient parameters.

Digital devices with micro displays, micro cameras, and software to manipulate the image in real time do not yet exist. Combining off-the-shelf components such as micro displays and micro cameras with embedded electronics and the latest processor techniques will create a universal, flexible digital viewing aid similar to goggles or a head-mounted display (HMD). Together with eye tracking, the user can control all functions of the device, and the image enhancements can be presented individually in the correct viewing position.

The planned digital platform will be mobile, small, and easily programmable to accommodate the patient's particular degree of impairment. It will be flexible for later adaptation if the patient's parameters change, and it can be upgraded with additional features such as plug-in modules.

The result is one device for (at least) the following diseases: Macular Degeneration – Morbus Stargardt – Morbus Best – Retinitis Pigmentosa – Diabetic Retinopathy – Usher Syndrome – Diabetes – Cataract – Glaucoma – Optic Neuropathy – Retinal Vascular Accident – Albinism – Corneal Dystrophy – Hemianopia.

The same device can also be used for therapies and the rehabilitation of patients with cortical blindness, as well as to help in certain cases of Parkinson's disease. Several additional features are possible. For example, the device can be used as a reading magnifier for aging people, either stationary or mobile. Combined with OCR techniques and simple audio output, the device could read short texts aloud, even when travelling or in poor light conditions. The integration of navigation techniques (GPS, maps) gives additional benefit, since this information can be presented according to individual needs – something that is impossible for current market products.

The combination of near-the-eye (NTE) see-through optics, highly integrated electronics, and the flexibility of software will reduce volume and price. The same device will be able to project text or images (e.g. maps) magnified onto walls or flat surfaces. Distance sensors can alert the user when their head approaches an object. The integrated audio output can announce a variety of situations or the status of features. For aging people, the device will enable them to keep their independence and mobility.

The device can be used hands-free as an HMD, as goggles, or as handheld binoculars hanging around the neck when not in use. Compared to direct-view LCD screens, the NTE optics offer the advantage of displaying good image quality even in bright ambient light. One of the key elements of the digital viewing aid is a transparent head-mounted display that does not block the user's sight. This is combined with video see-through and the ability to enhance camera images to compensate for the patient's viewing disabilities.

For example, such a camera can help the user see better at night, auto-focus for sharper images, deliver brilliant colours and, most importantly, magnify the scene by optical/digital zoom. The algorithms can adjust and enhance the image depending on the individual user's needs. Diopter adjustments and parallax corrections are implemented in the NTE optics. Additionally, an optician can add correction lenses and adapt the parameters of the software algorithm with any computer.

Partners

Funding by: BMBF

Contact

Dr. Gerd Reis

TuBUs-Pro

A tutor system for diagnosing ultrasound images of the prostate

The TuBUs-Pro project aims at tutoring the use of the ANNA-framework to analyze conventional rectal ultrasound images of the prostate. The system provides different training modes with varying difficulty levels, ranging from “mark a suspicious region in a particular ultrasound image” to “find all primary cancers in a complete case”.

The user is provided with aids (i.e. texture-analysis functions) that can be applied to an image and help in making the diagnosis. In this way, students become accustomed to pre-analyzed images and to mentally combining descriptor responses in order to judge a given situation. Although this alone would be sufficient for a tutoring system, TuBUs-Pro can additionally serve as a framework to develop, evaluate, and compare tissue descriptors, since it provides a ground truth, i.e. a very large database of manually segmented cases, that can be used to judge the performance of newly developed descriptors and descriptor combinations.
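Scoring a descriptor against such a ground truth reduces to comparing the regions it marks with the manually segmented reference. A minimal sketch (the region identifiers and counts are hypothetical, not the TuBUs-Pro data model):

```python
# Score a descriptor's marked regions against a ground-truth segmentation.
# Region identifiers are hypothetical; sets stand in for the case database.
def evaluate(marked, ground_truth):
    """marked / ground_truth: sets of region identifiers for one case."""
    tp = len(marked & ground_truth)   # correctly marked cancer regions
    fp = len(marked - ground_truth)   # false alarms
    fn = len(ground_truth - marked)   # missed cancer regions
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, precision

# Example exercise: five regions marked, three of them in the ground truth.
sens, prec = evaluate({"r1", "r2", "r3", "r4", "r5"},
                      {"r1", "r2", "r3", "r6"})
```

Aggregating these per-case scores over the whole database would give the performance figures used to compare descriptor combinations.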

Image 1: From an input image several intermediate images are calculated and presented to the user. These intermediates are combined to form the final output image.

Image 2: The TuBUs-Pro GUI showing the evaluation of an exercise in which three out of five regions were marked correctly and two incorrectly. Note that it is extremely difficult to visually detect primary cancer regions, i.e. to distinguish them from healthy tissue.

Contact

Dr. Gerd Reis