
ServiceFactory

The aim of this research project is to create an open platform and associated digital infrastructure, in the form of interfaces and architectures, that allow market participants simple, secure and fair participation at different levels of value creation. The platform is designed for recording, analysing and aggregating data acquired from sensors in everyday use (Smart Objects), and for converting these data into digital services (Smart Services), both technically and with regard to the underlying business and business models. For an initial technical product area, sports shoes, the project also explores the possibility of extending these objects into cyber-physical systems that go beyond the existing possibilities of data collection (pure sensors). The goal is a broadly communicable demonstrator that can bring the possibilities of smart services and intelligent digitization to the general public. In addition, the technical prerequisites as well as suitable business models and processes are created in order to collect the data, under strict observance of the legal framework as well as consumer and data protection, for the optimization of product development, production and logistics chains. Here, too, an open structure will allow market participants to work with these data at different levels of added value.

During the project period, national standardization and work towards the necessary internationalization can be started in parallel; all preparatory work should be completed by the end of the project. Standardization is the foundation of trust needed to motivate companies to participate in the platform. On the basis of the defined standards, certification can take place. This is necessary to ensure secure data exchange (encryption) and the security of the networks (against cybercrime). It also includes clarifying questions about rights to data in the cloud.

Partners

ServiceFactory Partners

Funding by: BMWi

  • Grant agreement no.: 01MD16003F
  • Begin: 01.01.2016
  • End: 30.06.2018

Contact

Manthan Pancholi

SwarmTrack

The goal of the SwarmTrack project is the research and development of a novel method for accurate automatic tracking of objects moving in groups.

A common problem in object tracking is occlusion, where moving objects or static parts of the scene obstruct the view of the target objects. This may lead to tracking loss or wrong assignment of identity labels. Another important issue is confusion, where nearby objects have a similar or identical appearance. This may lead to errors in identity assignment, where labels are switched between objects.

SwarmTrack investigates possibilities for exploiting the group structure in the spatial and temporal domain to derive clues and constraints for correct identity assignment after occlusions and in the presence of similar objects. Such clues include the spatial arrangement of the tracked objects and its changes over time as well as motion continuity and coherency constraints. Furthermore, SwarmTrack investigates methods for the automatic creation of coherent groups as well as for updating them when objects leave the group or new objects join.
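As a rough illustration of this idea (a minimal sketch, not SwarmTrack's actual algorithm), the following Python snippet combines a motion-continuity term with a term comparing each object's offset from the group centroid, and solves the resulting identity assignment with the Hungarian method; all function and parameter names are illustrative.

```python
# Minimal sketch: combining motion continuity and group structure into an
# assignment cost, then solving identity assignment with the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_identities(tracks, detections, w_motion=1.0, w_group=0.5):
    """tracks: list of dicts with 'pos' and 'vel' (2D numpy arrays).
    detections: (M, 2) array of detected positions in the current frame.
    Returns a list of (track_index, detection_index) pairs."""
    preds = np.array([t["pos"] + t["vel"] for t in tracks])  # constant-velocity prediction

    # Motion-continuity cost: distance between predicted and detected positions.
    motion_cost = np.linalg.norm(preds[:, None, :] - detections[None, :, :], axis=2)

    # Group-structure cost: compare each track's offset from the group centroid
    # with each detection's offset from the detection centroid.
    track_off = preds - preds.mean(axis=0)
    det_off = detections - detections.mean(axis=0)
    group_cost = np.linalg.norm(track_off[:, None, :] - det_off[None, :, :], axis=2)

    cost = w_motion * motion_cost + w_group * group_cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```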

The resulting multi-target tracking approach has applications in traffic monitoring and analysis as well as other fields where objects move coherently in groups.

Partners

Funding by: Stiftung RLP Innovation

Contact

Stephan Krauß

Anna C-trus

ANNA – Artificial Neural Network Analyzer

A framework for the automatic inspection of TRUS images. The project ANNA (Artificial Neural Network Analyzer) aims at the design of a framework to analyze ultrasound images by means of signal processing combined with methods from the field of artificial intelligence (neural networks, self-organizing maps, etc.). Although it is not obvious, ultrasound images contain information that cannot be recognized by the human visual system and that provides information about the underlying tissue. On the other hand, the human visual system recognizes virtual structures in ultrasound images that are not related at all to the underlying tissue. Especially interesting in this regard is the fact that a careful combination of several texture-descriptor-based filters is suited for analysis by artificial neural networks and that suspicious regions can be marked reliably.

The specific aim of the framework is to automatically analyze transrectal ultrasound (TRUS) images of the prostate in order to detect suspicious regions that are likely to relate to a primary cancer focus. These regions are marked for a subsequent biopsy procedure. The advantages of such an analysis are a significantly reduced number of biopsies compared to a random or systematic biopsy procedure to detect a primary cancer, and a significantly enhanced success rate in extracting primary cancer tissue with a single biopsy procedure. On the one hand this results in a faster and more reliable diagnosis with significantly decreased intra-examiner variability; on the other hand the discomfort of the patient due to multiple biopsy sessions is dramatically reduced.
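The following Python sketch illustrates the general idea of combining texture-descriptor filters with a neural network on image patches. It is not the ANNA pipeline; it assumes scikit-image and scikit-learn, and all names and parameters are illustrative.

```python
# Minimal sketch in the spirit of ANNA: per-patch texture descriptors from a
# TRUS image fed into a small neural network that labels each patch as
# suspicious or not.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def texture_features(patch):
    """Gray-level co-occurrence texture descriptors for one 8-bit image patch."""
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def train_patch_classifier(patches, labels):
    """patches: list of 2D uint8 arrays; labels: 1 = suspicious, 0 = normal."""
    X = np.stack([texture_features(p) for p in patches])
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000)
    clf.fit(X, labels)
    return clf
```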

Contact

Dr. Gerd Reis

Eyes Of Things

The aim of the European project Eyes of Things is to build a generic vision device for the Internet of Things.

The device will include a miniature camera and a dedicated Vision Processing Unit in order to perform all necessary processing tasks directly on the device, without the need to transfer entire images to a distant server. The envisioned applications will enable smart systems to perceive their environment longer and more interactively.

The technology will be demonstrated with applications such as Augmented Reality, Wearable Computing and Ambient Assisted Living.

Vision, our richest sensor, allows inferring big data from reality. Arguably, to be “smart everywhere” we will need to have “eyes everywhere”. Coupled with advances in artificial vision, the possibilities are endless in terms of wearable applications, augmented reality, surveillance, ambient-assisted living, etc.

Currently, computer vision is rapidly moving beyond academic research and factory automation. On the other hand, mass-market mobile devices owe much of their success to their impressive imaging capabilities, so the question arises whether such devices could be used as “eyes everywhere”. Vision is the most demanding sensor in terms of power consumption and required processing power and, in this respect, existing mass-consumer mobile devices have three problems:

  1. power consumption precludes their ‘always-on’ capability,
  2. they would have unused sensors for most vision-based applications and
  3. since they have been designed for a definite purpose (i.e. as cell phones, PDAs and “readers”), people will not consistently use them for other purposes.

Our objective in this project is to build an optimized core vision platform that can work independently and also embedded into all types of artefacts. The envisioned open hardware must be combined with carefully designed APIs that maximize inferred information per milliwatt and adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation, it will also allow the creation of novel applications and services that go beyond what current vision systems can do, which are either personal/mobile or ‘always-on’, but not both at the same time.

Thus, the “Eyes of Things” project aims at developing a ground-breaking platform that combines: a) a need for more intelligence in future embedded systems, b) computer vision moving rapidly beyond academic research and factory automation and c) the phenomenal technological advances in mobile processing power.
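The following Python sketch illustrates only the design pattern described above, i.e. running vision inference on the device and transmitting a few bytes of metadata instead of full images. It is not the Eyes of Things API; it uses OpenCV's bundled Haar face detector as a stand-in for an arbitrary on-device vision task.

```python
# Illustrative sketch only: process frames locally, transmit compact results.
import json
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # stand-in for an always-on embedded camera

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        # Only a few bytes of metadata leave the device, never the image itself.
        event = {"faces": [[int(x), int(y), int(w), int(h)] for x, y, w, h in faces]}
        print(json.dumps(event))  # in a real system: publish via a network protocol
```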

Partners

The project Eyes of Things is a joint project with 7 European partners from research and industry and is financed by the Horizon 2020 programme.

Project partners are: Universidad de Castilla-La Mancha (UCLM, Spain), Awaiba Lda (Portugal), Camba.tv Ltd (EVERCAM, Ireland), Movidius Ltd (Ireland), Thales Communications and Security SAS (France), Fluxguide OG (Austria) and nViso SA (Switzerland).

Funding by: EU

  • Grant agreement no.: 643924
  • Funding programme: H2020
  • Begin: 01.01.2015
  • End: 30.06.2018

More information: Website of the project

Contact

Dr.-Ing. Alain Pagani

LARA

LBS & Augmented Reality Assistive System for Utilities Infrastructure Management through Galileo and EGNOS

LARA is a European project aiming at developing a new mobile device to help employees of utility companies in their work in the field. The device to be developed – called the LARA System – consists of a touchscreen tablet and a set of sensors that can geolocalise the device using the European GALILEO system and EGNOS capabilities.

The LARA system is developed in a collaborative effort in which different players – SMEs, large companies, universities and research institutes – contribute different expertise.

The LARA system is a mobile device for utility field workers. In practice, the device will guide field workers in underground utilities to ‘see’ what is happening underground, like an “x-ray image” of the underground infrastructure. The system uses Augmented Reality interfaces to render the complex 3D models of the underground utility infrastructure, such as water, gas and electricity networks, in a way that is easily understandable and useful during field work.

The 3D information is acquired from existing 3D GIS geodatabases. To this aim, the hand-held device integrates different technologies such as: positioning and sensors (GNSS), Augmented Reality (AR), GIS, geodatabases, etc.

Typical scenario

The end user is a technician working in a public or private company operating an underground network in the utilities sector (electricity, gas, water or sewage). His role in the company is to plan interventions on the network, such as repairs and controls, and to execute the planned operations on site with his team.

The typical scenario of the usage of the LARA system is divided into several steps:

  1. Preparation. The end user prepares and plans an intervention while sitting in his office. He can see the pipe installation by using a web application that approximately shows the pipes over a map of the area. He can download this information onto his LARA device.
  2. On-field coarse localization. On the day of the intervention, the user drives to the general area of intervention (typical size of the area: a city block). The user is then guided by the LARA system to find the position of the operation as follows: the user starts the application and selects the 2D view; a map is then shown on the interface with the user’s location marked on it.
  3. Surveying (precise localization). When the user is close enough to the exact position of intervention, he can switch to the 3D/AR mode, where the application shows the real images from the camera and displays the pipes as an overlay (Augmented Reality; a simplified projection sketch follows this list). The information about the pipes is structured in layers, so the user can choose between different information levels (water pipes, electrical wires…).
  4. Field work (excavation, repair…). The user can precisely define the corners of the excavation to be done on the street. He marks them with paint. The workers start to excavate. They have information about which other utilities they will find on the way.
  5. Onsite updates. If the user discovers that some information about the localization of the pipes is wrong, he can suggest an update by providing the actual position.
  6. Backoffice updates. Back in the office, the user connects the LARA system to a server, where the updates are pushed to a queue for verification before being integrated into the network’s databases.
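The simplified sketch below illustrates the projection step behind the AR overlay in steps 2 and 3: geo-referenced pipe vertices are converted into a local east-north-up (ENU) frame around the GNSS fix and projected into the camera image with a pinhole model. This is not the LARA system's code; the pose (R, t) and intrinsics K are assumed to come from the device's sensors and calibration.

```python
# Simplified AR projection sketch: geodetic pipe vertices -> local ENU -> pixels.
import numpy as np

EARTH_RADIUS = 6378137.0  # metres, WGS84 equatorial radius

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Small-area approximation: metres east, north, up from (lat0, lon0, alt0)."""
    d_lat, d_lon = np.radians(lat - lat0), np.radians(lon - lon0)
    east = d_lon * EARTH_RADIUS * np.cos(np.radians(lat0))
    north = d_lat * EARTH_RADIUS
    return np.array([east, north, alt - alt0])

def project_to_image(points_enu, R, t, K):
    """Project Nx3 ENU points into pixel coordinates given camera pose (R, t)
    and intrinsic matrix K (3x3)."""
    cam = (R @ points_enu.T + t.reshape(3, 1)).T   # ENU -> camera frame
    cam = cam[cam[:, 2] > 0]                       # keep points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                # perspective division
```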

Partners

The project LARA is a joint project with 9 partners from research and industry and is financed by the Horizon 2020 programme. Project partners are: GeoImaging Ltd (Cyprus), Aristotle University of Thessaloniki (Greece), Ingenieria y Soluciones Informaticas del Sur S.L. (Spain), SignalGenerix Ltd (Cyprus), Municipality of Kozani (DEYAK, Greece), Birmingham City Council (UK), Hewlett Packard Espanola S.L. (Spain), University Malaysia Sarawak (Malaysia).

Funding by: EU

  • Grant agreement no.: 641460
  • Funding programme: H2020
  • Begin: 01.02.2015
  • End: 30.06.2017

Contact

Dr.-Ing. Alain Pagani

On Eye

Digital video is becoming a dominant element of commercial websites and is definitely the driving force behind the current expansion of the web and the new “Generation Mobile”. Presenting a product with the help of a video dramatically improves the shopping experience of the online customer. Video is becoming an essential medium compared to classic product descriptions with text and pictures. Moreover, the emergence of new end devices such as large-screen smartphones and web pads (iPad, webTab) as well as novel content platforms (YouTube, WebTVs) has made video a standard medium for Internet communication.

The next development step will consist in extending videos with embedded, local interaction possibilities. Objects or persons in a video represent interactive “tags” that can be selected and thereby provide additional information. A promising commercial application of this technology is “embedded advertisement”: banners, product information or hyperlinks to an e-shop are attached to a given product in the video. The user watches a movie or TV show naturally, without any product hints or additional messages as currently blended into most video portals. However, he or she can move the mouse or remote control over the video viewer to make the tags marking the available products appear, then stop the video, click on the marked objects and get the desired information, or go directly to the online shop.

While the fundamental concepts of “interactive videos” are well known and all the required mechanisms are available in the major video players, their production is still very limited, since no reliable tool for automatic object tracking exists on the market. The goal of this project is therefore to develop an effective video editor for the easy post-production of interactive videos, relying on image processing, online learning methods and adapted user input. The algorithms should make it possible to track an object over long sequences in a reliable way in spite of appearance changes and partial occlusions. The user input will be designed to support the tracking process and thus enable effective and fast creation of interactive videos.
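As a hedged sketch of such a tracking component (not the project's editor), the snippet below follows a user-selected product region across frames with an off-the-shelf OpenCV tracker so that the resulting boxes can be stored as interactive tags; it assumes opencv-contrib-python, and the exact tracker factory name varies slightly between OpenCV versions.

```python
# Minimal sketch: track a user-drawn box through a video to produce tag metadata.
import cv2

def track_tag(video_path, initial_box):
    """initial_box: (x, y, w, h) drawn by the editor on the first frame."""
    capture = cv2.VideoCapture(video_path)
    ok, frame = capture.read()
    tracker = cv2.TrackerCSRT_create()   # reasonably robust to appearance change
    tracker.init(frame, initial_box)

    boxes = [initial_box]
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        boxes.append(tuple(int(v) for v in box) if found else None)  # None = loss/occlusion
    return boxes  # one box (or None) per frame, ready to export as tag metadata
```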

Partners

Funding by: BMBF

Contact

Prof. Dr. Didier Stricker

ActivityPlus

Development of a personalized activity monitoring system for everyday life

Being physically active is – apart from not smoking – the most powerful lifestyle choice individuals can make to improve their health. With recent progress in wearable sensing and computing it becomes reasonable for individuals to wear different sensors all day, and thus global activity monitoring is becoming established: the type, intensity and duration of performed physical activities can be detected. However, available systems still have many restrictions, allowing accurate monitoring only in very limited scenarios. The project ActivityPlus addresses these restrictions by developing a personalized activity monitoring system for everyday life.


One of the main challenges is to extend the number of activities to be recognized, and to deal with all other activities occurring in daily life. This would make it possible to use the developed system in everyday situations, instead of restricting it to scenarios where only the few traditionally recognized activities (e.g. walking, running, cycling) are allowed.

Therefore, activity recognition is regarded as a complex classification problem in ActivityPlus. New algorithms are developed to approach the challenges, focusing on the research of meta-level classifiers. The other main goal of the project is the personalization of activity monitoring. The basic idea here is that generally trained systems will always have a certain inaccuracy when used by a new individual. Therefore, personal information should be taken into account when developing and training activity monitoring methods.

In ActivityPlus personal data (such as age, weight or resting heart rate) is considered to create new features for classifiers, and individual movement patterns of various activities are integrated into the activity monitoring algorithms.
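A minimal sketch of this idea (not the ActivityPlus system, which researches meta-level classifiers) is shown below: personal data such as age, weight and resting heart rate are appended to window-level sensor features before training an off-the-shelf classifier; all names are illustrative.

```python
# Minimal sketch: personal data combined with sensor-window features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_window, hr_window, person):
    """acc_window: (N, 3) accelerometer samples; hr_window: (N,) heart rate;
    person: dict with 'age', 'weight', 'resting_hr'."""
    feats = [acc_window.mean(axis=0), acc_window.std(axis=0),
             [hr_window.mean() - person["resting_hr"]],   # HR above personal baseline
             [person["age"], person["weight"]]]
    return np.concatenate(feats)

def train(windows, labels):
    """windows: list of (acc_window, hr_window, person) tuples;
    labels: activity names, e.g. "walking", "cycling", "other"."""
    X = np.stack([window_features(a, h, p) for a, h, p in windows])
    model = RandomForestClassifier(n_estimators=200)
    model.fit(X, labels)
    return model
```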

Funding by: Stiftung RLP Innovation

Contact

Prof. Dr. Didier Stricker

AR-Handbook

Intelligent Augmented Reality Handbooks

Digital manuals that are displayed directly in the field of view of the user via a head-mounted display are one of the most frequently cited application examples for Augmented Reality (AR) scenarios. AR manuals can significantly simplify and accelerate maintenance, repair or installation work on complex systems.

Show Them How it Works – Worker support for future factories

Digital handbooks, presented as step-by-step instructions in a head-mounted display (HMD) directly in the user’s field of view, facilitate and accelerate the maintenance, repair, or installation of complex units. They explain each individual step precisely and clearly on site, can be called up at any time, reduce the safety risk to the employee, and contribute to perfect results.

DFKI’s Augmented Reality research department is working on simplifying the creation of these AR handbooks through the integration of AI technologies, with the aim of making them fit for actual operations. In the past, this so-called “authoring” was generally performed manually, with correspondingly high costs. The systems often required scripted descriptions of actions that had to be prepared manually; furthermore, expert knowledge of the tracking system in use and of how to install tracking aids was necessary.

At the Federal Ministry of Education and Research (BMBF) exhibit stand at CeBIT this year, DFKI introduces the new AR Handbook System that allows for automated documentation and support of simple work processes by means of a lightweight system.

An integrated camera recognizes each manual action performed and superimposes previously recorded video sequences in the HMD to effectively show the next work step. This does not require any special markers or other aids and – in contrast to many other methods – it also recognizes freehand gestures. The job sequences lend themselves to quick and easy recording and require only minimal post-processing. This technology significantly decreases the labor time required for the creation of Augmented Reality manuals and, because it is far less complex, it encourages widespread use.

The authoring tool independently breaks down the observed sequence into its separate, distinguishable actions and then combines the separate sections with a stochastic transition model. An action observed during operation can be assigned in time to the corresponding section, and pointers can then be displayed at the precise point in time for the subsequent section. This kind of learning (“teach-in”) is found in many areas of Artificial Intelligence and is an especially active research subject in the field of robotics. It is also known in the literature as “programming by demonstration.”
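The snippet below is an illustrative sketch of the idea behind such a stochastic transition model, not DFKI's implementation: workflow steps form the states of a Markov chain, and an observed action is assigned to the most plausible current step given the previous belief and a per-step observation likelihood.

```python
# Illustrative sketch: tracking the current workflow step with a Markov model.
import numpy as np

class WorkflowTracker:
    def __init__(self, transitions):
        """transitions: (S, S) row-stochastic matrix learned from demonstrations."""
        self.transitions = transitions
        self.belief = np.zeros(len(transitions))
        self.belief[0] = 1.0                       # start in the first step

    def update(self, observation_likelihood):
        """observation_likelihood: length-S vector saying how well the current
        video observation matches each step (e.g. from an appearance classifier)."""
        predicted = self.transitions.T @ self.belief   # propagate through transitions
        self.belief = predicted * observation_likelihood
        self.belief /= self.belief.sum()               # normalise the belief
        return int(np.argmax(self.belief))             # index of the most likely step
```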

Additionally, the method fully automatically creates semi-transparent overlays in which a “shadow image” of the pending action is displayed. Important details or supplemental pointers can be emphasized by adding graphic symbols like arrows or lines. The simplified authoring and teach-in method, which is performed by employees who are trained in the specific operation rather than by software experts, opens up additional fields of application, for example in quality management.

Technicians at an assembly work station can record “reference procedures” to ensure that all future assembly activities follow the same procedural pattern. A limited version of the AR Handbook is now available for Android smartphones and tablets. This means that in the future, even the private user can obtain support when assembling furniture or installing and operating household appliances.


Nils Petersen and Didier Stricker, ‘Learning Task Structure from Video Examples for Workflow Tracking and Authoring’, in Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2012


German version of this text (PDF, 512.03 KB): PB_AV_AR-Handbuch_20130218.pdf

CeBIT 2012

When will AR Manuals have their breakthrough?

In order to make AR manuals really fit for use, the DFKI research department Augmented Vision works on simplifying their creation by integrating AI technologies. So far, the so-called authoring process has happened mostly manually and therefore involves considerable time and effort. The systems often need manually written, script-like descriptions of the activities; moreover, expert knowledge about the tracking system used and the installation of the tracking aids is necessary.

Learning by watching…

At CeBIT 2012, the DFKI research department Augmented Vision presented an AR manual that shows the user the necessary steps for installing a RAM module in a notebook via a head-mounted camera. User-friendliness was the focus of the development, so the authoring process has been significantly simplified. The system learns the necessary steps from a single or repeated demonstration of the respective action (1). It needs neither special markers nor other aids and also recognizes freehand gestures, which distinguishes it from many other methods.
The authoring tool automatically decomposes a sequence viewed once into individual, distinguishable parts and subsequently recombines them by means of a stochastic transition model. An observed action can be precisely mapped to one of these parts, and notes concerning the following steps can be overlaid at the exact moment (3). This type of learning (“teach-in”) is a cutting-edge research topic in AI, especially in robotics, and is commonly referred to as “programming by demonstration” in the literature.

…watching and applying

The method also automatically generates overlays that fade in a semi-transparent “shadow image” of the action to be carried out. Important details or additional references can be highlighted directly in the recorded sequence by inserting graphical symbols like arrows or lines (2).
The simplified authoring and teaching method opens up new fields of application, for example in quality management, as it can be used by specialists who are actually trained in those fields instead of software specialists. Skilled employees could record “reference work cycles”, thus guaranteeing that subsequent repetitions are carried out in exactly the same way (3).

Vision: Usable by everyone

The research department Augmented Vision is already working on an Android smartphone version that would make the “AR manual” application available to consumers, too. They could thereby be supported, for example, in assembling furniture or in installing and operating household appliances.

Contact

Dr.-Ing. Nils Petersen
Alexander Lemken

OrcaM

OrcaM is a device used for 3D acquisition of objects.

Target groups

The possibility to digitize real objects is of major interest in various application domains, like for setting up a virtual museum, to establish virtual object repositories for digital worlds (e.g. games, Second Life), or for the conversion of handmade models into a digitally usable representation.

OrcaM supports all these and many more application domains by providing excellent models of real-world objects with an in-plane detail resolution in the sub-millimeter range. The resulting models are tuned for web applications, i.e. they have a very low polygon count (in the range of 20k triangles for a “standard” object). Detail information is encoded in corresponding normal and displacement maps.

Method

The objects are placed on a glass plate and digitized. The digitization process is based on the structured-light principle, i.e. dedicated light patterns are projected onto the object while the object is photographed from different angles using multiple cameras. Based on these images, the shape (geometry) of the object can be acquired contact-free. Currently, the images taken during the acquisition are 16-megapixel RGB images of consumer-grade quality. In the near future the cameras will be replaced by monochrome cameras to further enhance acquisition speed and resolution as well as to prevent demosaicing (de-Bayering) artifacts.

Additional images, taken without the projector’s light pattern but under diffuse and directional lighting, are used to reconstruct the color as well as the illumination behavior (appearance) of the object. Using OrcaM, objects with a diameter of up to 80 cm and a weight of up to 100 kg can be reconstructed.

Special Features

OrcaM is designed to reconstruct whole objects in a single pass. Therefore it is necessary to also acquire the objects from below. To achieve this, the object is placed on a rotatable and height-adjustable glass carrier. The fringe patterns as well as the LEDs illuminate the object through the glass plate, and the images are likewise taken through the carrier. The inevitable reflections on the glass and the distortions introduced by taking pictures through the plate are automatically compensated. Spurious points due to interreflections and/or sensor noise are automatically identified and removed. More information on this topic will be provided soon in “Robust Outlier Removal from Point Clouds Acquired with Structured Light”, J. Köhler et al. (accepted at Eurographics).

Another important feature of OrcaM is the reconstruction of very dark and light-absorbing materials, where outstanding results can be achieved. As an example, see the image “sports shoe” depicting various views of the reconstruction of a worn shoe. The shoe was acquired in a single pass from all directions; the sole was placed directly on the glass plate. Note that even at the material boundaries the reconstruction is nearly perfect. In the middle of the sole (near the bottom of the image) a transparent gel cushion could not be reconstructed, but note that the material underneath this cushion is reconstructed with very little noise.

Aim

The aim of OrcaM is the automated transition of real-world objects into high-quality digital representations, enabling their use on the Internet and in movies, images, computer games, and many more digital applications. In particular, it enables their recreation using modern 3D printer technologies, milling machines and the like. Hence, especially in the domain of cultural heritage (but also in very different domains) it becomes possible to provide the interested audience with 3D objects, either as an interactive visualization or as a real object. Note that OrcaM is currently only concerned with the digitization, not the recreation.

Naturally, questions with respect to resolution arise. The frequently asked question “what resolution does OrcaM provide?” is a little more involved to answer. Assume the current setting with 16M pixels per image. OrcaM supports objects with a diameter of up to 80 cm, and the cameras are set up such that the object fills almost the complete image. Hence a single pixel covers approximately 0.04 mm², or in other words 25 pixels are acquired per square millimeter. Of course, the cameras are adjusted for smaller objects so that these also cover the complete image. Hence, the stated resolution is the lowest resolution currently provided, whereas objects with half the diameter can be acquired with approximately four times the resolution.

Furthermore, at every pose of the camera head and the glass plate (the number of different head and carrier poses depends mainly on the object’s geometry), not a single camera but seven cameras acquire information, summing up to approximately 175 pixels per square millimeter. Since each point on the object is acquired from multiple poses, the number of samples per point is raised further, but some of this information is quasi-redundant and is therefore mainly used to minimize noise.
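For convenience, the resolution arithmetic above can be reproduced with a few lines of Python (figures exactly as stated in the text):

```python
# Worked version of the resolution arithmetic described above.
import math

pixels_per_image = 16e6          # 16-megapixel cameras
object_diameter_mm = 800.0       # 80 cm object filling the image

pixels_across = math.sqrt(pixels_per_image)          # ~4000 px across the image
mm_per_pixel = object_diameter_mm / pixels_across    # ~0.2 mm per pixel
pixels_per_mm2 = 1.0 / mm_per_pixel ** 2             # ~25 samples per mm^2
print(pixels_per_mm2, 7 * pixels_per_mm2)            # ~25 and ~175 with seven cameras
```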

For most state-of-the-art systems the above only holds with respect to the image plane, whereas the so-called depth resolution depends heavily on computational accuracy. Even worse, the depth resolution is position- and view-dependent. In order to become independent of the actual setting and to provide a good depth resolution, special satellite cameras were introduced, constraining the depth uncertainty to be comparably low to the xy-uncertainty. Although not completely valid in theory, in practice the depth resolution can be assumed to be as high as the xy-resolution.

Another important point to note with respect to resolution is that the system has two more or less independent resolution limits. The first applies to so-called “in-plane” geometry. Here, in-plane refers to structures whose main extent is parallel to the object’s local surface with very small extent normal to the local surface. This resolution is extremely high (sub-millimeter), as can be seen in the video of Lehmbruck’s “Female Torso”, where the tiny mold grooves are reliably reconstructed. Another example can be found in the image “paper towel”, where all individual imprints were successfully acquired. Note that the imprints are not captured in the texture image, which is purely white, but only in the normal map.

The second type of resolution is the so-called “out-of-plane” resolution. Here the main extent of a feature is normal to the object’s local surface. In this case it is far more involved to correctly validate surface-point candidates for very tiny structures instead of classifying them as noise. Although the basic acquisition resolution is the same as in the first case, such structures will be rejected during the reconstruction when they are less than, say, 2 mm wide. Current research aims at compensating for this effect.

Formats

Equally interesting as the resolution is the data format of the digital models. Several formats can be provided. Currently a triangle mesh, a texture map and a normal map are computed. The object itself is represented as a triangle mesh. This can be provided in standard formats like OBJ, PLY, STL, etc., which can easily be converted into virtually any format. In addition to the pure vertex data, a vertex color, the vertex normal and the vertex bi-normal can be provided; the latter, however, is not stored with every format. In general the meshes are reduced in resolution to about 20k triangles to enable their use in web applications. However, there is no need to do so, so the meshes can also be delivered in their original resolution of around 10 million triangles.
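As a hedged illustration of the decimation step (using the Open3D library, not OrcaM's own tooling), a ~20k-triangle web version could be derived from a full-resolution export roughly as follows; file names are hypothetical:

```python
# Sketch: derive a ~20k-triangle web version from a full-resolution mesh export.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("full_resolution_scan.ply")   # hypothetical export
mesh.compute_vertex_normals()
web_mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
o3d.io.write_triangle_mesh("web_version.obj", web_mesh)
```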

Currently, maps with 16M pixels are generated (4k POT), but higher resolutions can be generated on demand. Please note that the packing of the textures is up to 100% (in the mean case 10%) denser than with former state-of-the-art texture packing algorithms; see Nöll, “Efficient Packing …”, for detailed information. It is important to know that details are encoded in such maps in order to conserve that information for visualization purposes or for the recreation of geometry using displacement mapping. The textures are generated as uncompressed PNG RGB images and can be converted into most other formats without much effort. Normals are computed in tangent space as well as in object space.

Limitations

As with many other acquisition devices, it is currently not possible to acquire transparent materials. Highly reflective materials also pose significant challenges. Partial solutions can be used to remedy the situation, and research aims at a general solution to this challenge. First results acquiring car finish and brushed/polished metal are very promising. Another important limitation is that the system needs to photograph an individual point from multiple view positions. This poses significant challenges for highly convex parts of an object due to occlusion. We are working on a reconfigurable camera head that will be able to optimize camera positions with respect to the object geometry.

Results

Preliminary results of OrcaM reconstructions can be found here.

Contact

Johannes Köhler

Odysseus Studio

Odysseus Studio is a standalone 3D visualizer and editor. It is built on top of our internally developed Odysseus Framework as a showcase. Odysseus was built from scratch with the main goals of providing:

  • High Quality Rendering in Real Time
  • A Programmable Graphic Pipeline based on shader technology
  • Effect and material handling in Real Time
  • A shared scene representation (possibility for more than one view)

Studio is an ongoing project, currently in version 0.2. It is still under heavy development, but we feel it is already fit for a first public release.

Some features might be incomplete (like not supporting all types of meshes in the COLLADA standard) and some features might be missing (like being able to merge two scenes), but we think Studio already offers enough to be quite useful. Studio was developed to serve as our standalone solution to visualize and edit our 3D scenes, so you can use Studio to load your scene and make a fullscreen presentation (we also support several stereo configurations), or use Studio to edit and prepare your scenes for later presentation. You can edit your scene graph, objects, effects and materials. Studio handles lights and cameras that are already in your scene, but a way to add or remove them is not available in the GUI at this time.

Studio is free to download and use, but no support or warranty of use is given by the authors. We do accept bug reports and suggestions though! 🙂
Odysseus, the framework, is not made available at this time.

For more information or suggestions please contact the author at jose.henriques@dfki.de and keep checking here for new versions.

Downloads

  • Windows x64 (exe, requires Windows XP/Vista/7, 64-bit) – 9.79 MB
  • Mac OS X (dmg, requires Mac OS X 10.5+, 64-bit) – 26.91 MB
  • Linux Debian package (deb, 64-bit) – 5.81 MB

Also check the presentation/tutorial movie available on the side, and download the scene file (zip, 9.24 MB) used for testing Studio yourself! (This scene is a result of our project CAPTURE)


Rendering

Studio is built upon the Odysseus framework, which offers:
  • Modern programmable pipeline support using CGFX
  • Post-processing and scene effects
  • Real-time shadows
Drag and drop your *.cgfx effects onto the geometry you want to shade and tweak your newly instantiated material parameters in real time.


COLLADA Support

Studio supports reading and writing COLLADA. COLLADA is a COLLAborative Design Activity for establishing an open standard digital asset schema for interactive 3D applications. www.collada.org
Studio currently supports triangle and polygonal geometry meshes. Besides the scene graph information, lights, cameras, images and the effects/materials library are fully used by Studio for I/O.


Million Points

Studio can render huge point clouds (tested with up to 100 million points on an NVIDIA GTX 580 3GB).
Loading times can be made faster by saving/loading in our COLLADA binary format.


Spherical Map Viewer

Studio comes with a simple spherical map viewer.
In this mode you can look around and zoom in and out of your HDR panoramas.


Post Scene Effects

Odysseus provides multiple render target support and exposes it by complying with the CGFX SAS standard. From your effect shader code you can create and render to multiple render targets and build multi-pass techniques for effects such as post-processing glow, motion blur or depth of field.


Stereo Modes

Studio can render in stereo mode (side-by-side mode depicted). Due to the flexibility of the Odysseus rendering framework, many stereo modes can be created and used in Studio.