Micro-Dress

The main objective of Micro-Dress is to extend the limits of feasible customisation for men’s and ladies’ garments to include, for the first time, user-controllable wearable functionality and user-selectable degree of material eco-friendliness.

The challenges related to the production of such new features will be researched within a framework based on two distinct business and supply-chain models. On the one hand, Micro-Dress will introduce mechanisms to expand the existing mass-customisation model of a major international brand; on the other, it will extend an innovative mass-customisation model known as ‘micro-factories’, which targets innovative SMEs.

These two aspects can add functional as well as emotional value to the products of a consumer market currently dominated by one of two extremes: original fashion and brand, or low-cost imitation.

It may appear at first sight that the two main ideas (eco-friendliness and wearable functionality) are somehow contradictory, or at least not converging. However, we intend to prove that ecology and wearable functionality can co-exist. This becomes even more interesting in a user-centered business scenario, where the customer is directly involved in the design/configuration process, empowered by the freedom to configure both the technology-related added value (user-selectable sensors, actuators, physiology-monitoring devices) and the degree of eco-friendliness of his/her outfits (natural and healthy garments that preserve the environment and energy resources).

The scientific and technological objectives of the Micro-Dress project are:

  • To develop rapid manufacturing techniques for writing directly onto fabric and producing microelectronic components woven directly into the articles themselves.
  • To derive eco-efficiency and eco-logistics-related algorithms and web tools, allowing user-configurable eco-certification based on information about materials and processes along the supply chain (yarn to garment).
  • To develop a new biosensor-based screening test able to revolutionise the screening of garment components (fabrics, accessories, etc.) created to address specific consumer-health issues.
  • To develop an e-supply-chain management platform to model the sourcing of e-devices and the concept of configurable eco-certification along the two supply chains (the vertical brand chain and the supply network of micro-factories).
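
The user-configurable eco-certification mentioned above could, as a rough sketch, aggregate per-stage impact scores with customer-chosen weights. All names, criteria and the scoring scheme below are hypothetical illustrations, not the project's actual algorithm:

```python
def eco_score(stages, user_weights):
    """Hypothetical sketch of user-configurable eco-certification:
    each supply-chain stage (yarn, fabric, dyeing, garment, ...) reports
    per-criterion impact values in [0, 1]; the customer's weights decide
    how the criteria are aggregated into a single certification score."""
    total, wsum = 0.0, 0.0
    for stage in stages:
        for criterion, impact in stage.items():
            w = user_weights.get(criterion, 0.0)
            total += w * (1.0 - impact)   # lower impact -> higher score
            wsum += w
    # Normalise so the result is again in [0, 1].
    return total / wsum if wsum else 0.0
```

A customer who cares only about water use would simply set the other weights to zero.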

To support the Micro-Dress vision for the two selected business models, an e-supply chain management platform will be built on the principle of Software-as-a-Service in order to maximise its usability. The results will be demonstrated via two pilot schemes, one focusing on user-configurable eco-certification, the second on the customisable attachment of e-devices.

Partners

Funding by: EU

Contact

Prof. Dr. Didier Stricker

VIDP

Visual Impairment Digital Platform

At present, existing viewing aids and accessibility tools are rarely mobile and are usually heavy and expensive. They consist of electronics, mechanics, optics and a minimum of software, and their images cannot be adjusted to individual eye diseases or individual patient parameters.

Digital devices combining micro displays, micro cameras and software to manipulate the image in real time do not yet exist. Combining off-the-shelf components such as micro displays and micro cameras with embedded electronics and the latest processor techniques will create a universal, flexible digital viewing aid similar to goggles or a head-mounted display (HMD). Together with eye tracking, the user can control all functions of the device, and the image enhancements can be presented individually at the correct viewing position.

The planned digital platform will be mobile, small and easily programmable to accommodate the patient’s particular degree of impairment. It will remain flexible for later adaptation should the patient’s parameters change, and it can be upgraded with additional features such as plug-in modules.

The result is one device for (at least) the following diseases: Macular Degeneration – Stargardt Disease – Best Disease – Retinitis Pigmentosa – Diabetic Retinopathy – Usher Syndrome – Diabetes – Cataract – Glaucoma – Optic Neuropathy – Retinal Vascular Accident – Albinism – Corneal Dystrophy – Hemianopia.

The same device can also be used for therapy and rehabilitation of patients with cortical blindness, as well as to help in certain cases of Parkinson’s disease. Several additional features are possible. For example, the device can serve as a stationary or mobile reading magnifier for older people. Combined with OCR techniques and simple audio output, it could read short texts aloud, even when travelling or in poor light conditions. The integration of navigation techniques (GPS, maps) adds further benefit, since this information can be presented according to individual needs – something current market products cannot do.

The combination of near-the-eye (NTE) see-through optics, highly integrated electronics and the flexibility of software will reduce volume and price. The same device will be able to project text or images (e.g. maps), magnified, onto walls or flat surfaces. Distance sensors can alert the wearer when their head approaches an object, and the integrated audio can announce various situations or the status of individual features. For older people, the device will help preserve independence and mobility.

The device can be used hands-free as an HMD, as goggles, or as handheld binoculars hanging around the neck when not in use. Compared to direct-view LCD screens, the NTE optics offer the advantage of good image quality even in bright ambient light. One of the key elements of the digital viewing aid is a transparent head-mounted display which does not block the user’s sight. This is combined with video see-through and the ability, using a camera, to enhance images to compensate for the patient’s viewing disabilities.

For example, such a camera can help the user see better at night, autofocus for sharper images, deliver brilliant colours and, most importantly, magnify the scene via optical/digital zoom. The algorithms can adjust and enhance the image depending on the individual user’s needs. Diopter adjustments and parallax corrections are implemented in the NTE optics. Additionally, an optician can add correction lenses and adapt the parameters of the software algorithms with any computer.
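
As a minimal sketch of such individual image enhancement (the function, its parameters and the nearest-neighbour zoom are illustrative assumptions, not the device's actual algorithm), a digital zoom combined with a linear contrast/brightness stretch might look like this:

```python
def enhance_for_low_vision(image, zoom=2.0, contrast=1.5, brightness=20):
    """Illustrative enhancement for a grayscale frame given as a 2D list
    of 0-255 values: digital zoom (centre crop with nearest-neighbour
    resampling) followed by a linear contrast/brightness stretch."""
    h, w = len(image), len(image[0])
    # Size of the centre region that the zoom magnifies to full frame.
    ch, cw = max(1, int(h / zoom)), max(1, int(w / zoom))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    out = []
    for y in range(h):
        sy = y0 + y * ch // h          # source row after zoom
        row = []
        for x in range(w):
            sx = x0 + x * cw // w      # source column after zoom
            v = image[sy][sx] * contrast + brightness
            row.append(max(0, min(255, int(v))))  # clip to 8 bits
        out.append(row)
    return out
```

The `contrast` and `brightness` parameters stand in for the per-patient settings an optician would tune.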

Partners

Funding by: BMBF

Contact

Dr. Gerd Reis

STREET3D

Real time modeling and visualization of traffic-related air pollution

A new and innovative concept for calculating and visualising traffic-related air pollution is being implemented. It builds on the proven STREET concept, a simple but effective method for assessing traffic-related air pollution in urban areas, initially developed by the Technical Inspection Association TÜV Süd and validated in German cities such as Düsseldorf, Tübingen and Neuss, as well as by the AIRPARIF network in Paris.

The background for the project is the lack of micro-scale information for assessing air pollution, e.g. particulates, in urban areas. The pollution is therefore calculated in real time on a high-resolution three-dimensional grid, integrating the small- and large-scale structures that have a serious impact on local pollutant concentrations.
Innovative 3D visualisation techniques and interactive result presentation enhance the underlying data to support urban planners in making decisions and communicating them to the public.
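
Filling such a 3D concentration grid from emission sources could be sketched as follows; the inverse-square decay model and all names below are illustrative assumptions, not the actual STREET3D dispersion model:

```python
def pollution_grid(sources, nx, ny, nz, cell=10.0, decay=1.0):
    """Hypothetical sketch: fill a regular 3D grid with concentrations
    from point emission sources using a simple bounded inverse-square
    decay. 'sources' is a list of (x, y, z, emission_rate) tuples in
    the same length unit as 'cell'."""
    grid = [[[0.0 for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # Evaluate the concentration at each cell centre.
                cx, cy, cz = (i + 0.5) * cell, (j + 0.5) * cell, (k + 0.5) * cell
                for (sx, sy, sz, q) in sources:
                    d2 = (cx - sx) ** 2 + (cy - sy) ** 2 + (cz - sz) ** 2
                    grid[i][j][k] += q / (1.0 + decay * d2)  # bounded at source
    return grid
```

A real model would replace the decay term with street-canyon dispersion physics; the sketch only shows the grid structure that the 3D visualisation consumes.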

Partners

Funding by: BMWi (ZIM)

Contact

Prof. Dr. Didier Stricker

TuBUs-Pro

A tutor system for diagnosing ultrasound images of the prostate

The TuBUs-Pro project aims at tutoring the use of the ANNA framework to analyze conventional rectal ultrasound images of the prostate. The system provides different training modes with varying difficulty levels, ranging from “mark a suspicious region in a particular ultrasound image” to “find all primary cancers in a complete case”. The user is provided with aids (i.e. texture analysis functions) that can be applied to an image and help in making the diagnosis. This way, students get used to pre-analyzed images and to mentally combining descriptor responses in order to judge a given situation. Although this would be sufficient for a tutoring system, TuBUs-Pro can additionally serve as a framework to develop, evaluate, and compare tissue descriptors, since it provides a ground truth, i.e. a very large database of manually segmented cases that can be used to judge the performance of newly developed descriptors and descriptor combinations.
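
The combination of descriptor responses could be sketched, purely for illustration, as a weighted fusion of per-region texture scores; the function, weights and threshold below are hypothetical and not part of the ANNA framework:

```python
def combine_descriptors(responses, weights, threshold=0.5):
    """Hypothetical descriptor fusion: each texture descriptor yields a
    per-region response in [0, 1]; a weighted average above 'threshold'
    flags the region as suspicious. Returns (score, is_suspicious)."""
    assert len(responses) == len(weights) and sum(weights) > 0
    score = sum(r * w for r, w in zip(responses, weights)) / sum(weights)
    return score, score >= threshold
```

Evaluating such a fusion rule against the manually segmented ground-truth database is exactly the kind of descriptor comparison the framework supports.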

Image 1: From an input image several intermediate images are calculated and presented to the user. These intermediates are combined to form the final output image.

Image 2: The TuBUs-Pro GUI showing the evaluation of an exercise, where three out of five regions were marked correctly and two incorrectly. Note that it is extremely difficult to visually detect primary cancer regions i.e. to distinguish them from healthy tissue.

Contact

Dr. Gerd Reis

IVMT

Development of new Interaction- and Visualization metaphors for Multi-Touch devices

The project pursues two goals regarding multi-touch devices. On the one hand, research is conducted in human-computer interaction (HCI). The major goal is to develop human-centered interaction and visualization metaphors that are not constrained by the fact that most applications on multi-touch devices are just standard desktop applications with additional multi-touch support. These WIMP (window, icon, menu, pointing device) applications are designed to be controlled by a single point of interaction (i.e. a mouse pointer), and multi-touch is often only used to emulate a mouse. By abstracting away from desktop applications, new interaction methods are developed, together with new visualization metaphors, to allow intuitive multi-touch interaction and manipulation.

On the other hand, research regarding multi-touch hardware and touch tracking is carried out. Available vision-based multi-touch devices have a large form factor, which is inherent in the technique when projectors and normal cameras are used for projection and tracking. In this project, a different hardware setup is researched to allow a smaller form factor, enabling the multi-touch device to be used horizontally (e.g. as a table) as well as vertically (e.g. on a stand or wall).
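
A minimal sketch of vision-based touch tracking, thresholding a camera frame and extracting one centroid per bright blob. The flood-fill labelling and all names below are illustrative assumptions, not the project's tracker:

```python
def find_touches(frame, threshold=128):
    """Illustrative touch detection on a camera frame given as a 2D list
    of intensities: 4-connected component labelling of pixels above
    'threshold', returning one (x, y) centroid per bright blob."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this blob and collect its pixels.
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny_, nx_ = cy + dy, cx + dx
                        if (0 <= ny_ < h and 0 <= nx_ < w
                                and frame[ny_][nx_] >= threshold
                                and not seen[ny_][nx_]):
                            seen[ny_][nx_] = True
                            stack.append((ny_, nx_))
                touches.append((sum(p[1] for p in pixels) / len(pixels),
                                sum(p[0] for p in pixels) / len(pixels)))
    return touches
```

A real tracker would additionally filter blobs by size and match centroids across frames to follow individual fingers.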

Partners

Funding by: Stiftung RLP Innovation

Contact

Prof. Dr. Didier Stricker