Clothes tracking (Catwalk sequence 2)
- “OnEye — Producing and broadcasting generalized interactive video”, Alain Pagani, Christian Bailer and Didier Stricker, Proceedings of the Networked and Electronic Media Summit (NEM Summit), 2013
- “A user supported tracking framework for interactive video production”, Christian Bailer, Alain Pagani and Didier Stricker, Proceedings of the European Conference on Visual Media Production (CVMP), 2013
A more technical presentation will be forthcoming.
Nowadays, ergonomic evaluation of manual workflows is mostly based on subjective assessment and is performed offline. This video demonstrates a system that provides objective measures for global ergonomic evaluation and even permits real-time assessment and feedback. The system continuously estimates the worker’s motions based on a body sensor network and derives global biomechanical scores using the ergonomic tool Rapid Upper Limb Assessment (RULA). Based on these scores, the worker receives visual and acoustic feedback in real time through a head-mounted display and can adjust his posture immediately to reduce the risk of musculoskeletal disorders. Moreover, the ergonomic scores are documented for offline analysis. The system can be used for planning, optimizing or training new workflows. It has been developed within the European project COGNITO (www.ict-cognito.org) in close cooperation between experts in signal processing, biomechanics and end-user requirements.
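The idea of deriving a biomechanical score from joint angles and triggering feedback can be sketched as follows. The upper-arm and trunk thresholds follow the published RULA worksheet, but the combination step is a simplification (RULA proper uses lookup tables), and all function names and the warning threshold are illustrative, not taken from the COGNITO system.

```python
# Minimal sketch of RULA-style scoring from joint angles (in degrees).
# Thresholds follow the RULA worksheet for upper arm and trunk; the
# combination step is a simplified sum, not the official score tables.

def upper_arm_score(flexion_deg):
    """RULA upper-arm score from shoulder flexion/extension angle."""
    a = abs(flexion_deg)
    if a <= 20:
        return 1            # 20 deg extension to 20 deg flexion
    if flexion_deg < -20 or a <= 45:
        return 2            # >20 deg extension, or 20-45 deg flexion
    if a <= 90:
        return 3            # 45-90 deg flexion
    return 4                # >90 deg flexion

def trunk_score(flexion_deg):
    """RULA trunk score from trunk flexion angle (standing)."""
    if flexion_deg <= 0:
        return 1
    if flexion_deg <= 20:
        return 2
    if flexion_deg <= 60:
        return 3
    return 4

def feedback(upper_arm_deg, trunk_deg, threshold=5):
    """Combine part scores (simplified) and flag high-risk postures
    so the head-mounted display can warn the worker in real time."""
    total = upper_arm_score(upper_arm_deg) + trunk_score(trunk_deg)
    return "warn" if total >= threshold else "ok"

print(feedback(30, 10))   # moderate posture
print(feedback(100, 70))  # strongly flexed arm and trunk
```

In the real system the scores would be recomputed for every frame delivered by the body sensor network and additionally logged for the offline analysis mentioned above.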
At the very beginning of the video the sculpture is depicted as a wireframe model to show the fairly low resolution of the basic geometry (generated from a point cloud of approximately 300 million points). This geometry comprises fewer than 20k triangles and is therefore suitable for most tasks. Nevertheless, a great deal of detail is preserved, as the corresponding diffuse rendering demonstrates (00:12).
To provide more detail we apply a high-resolution normal map (currently 16 megapixels). The corresponding result is shown starting at 00:20. Note that illumination effects apply, so the rendering appears somewhat darker. Starting at 00:30 the model is rendered using the diffuse texture colour only, which serves as the base colour; note that no illumination has been calculated here.
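Why the normal-mapped rendering looks darker than the raw diffuse texture can be seen in a minimal shading sketch: once per-pixel Lambert lighting is applied, every texel is attenuated by max(N·L, 0) ≤ 1, so only texels facing the light keep their full diffuse colour. The array shapes and the light direction below are illustrative, not taken from the actual renderer.

```python
import numpy as np

def shade(diffuse, normals, light_dir):
    """Per-pixel Lambert shading: colour * max(N.L, 0)."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, 1.0)
    return diffuse * ndotl[..., None]

# A 2x2 "texture" and a normal map tilting one texel away from the light.
diffuse = np.full((2, 2, 3), 0.8)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                  # all normals face +Z
normals[0, 0] = [0.707, 0.0, 0.707]    # one texel tilted 45 degrees

lit = shade(diffuse, normals, light_dir=[0.0, 0.0, 1.0])
print(lit[1, 1])  # facing the light: keeps the full diffuse colour 0.8
print(lit[0, 0])  # tilted texel: darkened to about 0.57
```

This is also why the pure base-colour pass at 00:30 appears uniformly brighter: no attenuation factor is applied there at all.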
The final reconstruction result can be seen at the end of the video (00:40), where the reconstructed anisotropic material parameters have been applied. For an OrcaM system demonstration see http://youtu.be/zHEi55oJJOA
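As a rough illustration of what "anisotropic material parameters" means here, the following sketch evaluates the Ward BRDF, a standard anisotropic reflectance model whose two roughness parameters ax and ay produce direction-dependent highlights. The actual OrcaM material model is not stated in the video, and all parameter values below are made up.

```python
import math

def ward_brdf(n_dot_l, n_dot_v, h_dot_n, h_dot_x, h_dot_y,
              rho_d=0.2, rho_s=0.3, ax=0.1, ay=0.4):
    """Ward anisotropic BRDF; reduces to the isotropic case for ax == ay."""
    diffuse = rho_d / math.pi
    if n_dot_l <= 0.0 or n_dot_v <= 0.0:
        return diffuse
    # anisotropic Gaussian lobe around the half vector h
    expo = -((h_dot_x / ax) ** 2 + (h_dot_y / ay) ** 2) / h_dot_n ** 2
    spec = rho_s * math.exp(expo) / (
        4.0 * math.pi * ax * ay * math.sqrt(n_dot_l * n_dot_v))
    return diffuse + spec

# Tilt the half vector slightly toward each tangent axis: the highlight
# is stronger along the rougher axis (ay > ax), i.e. it is elongated.
hn = 1.0 / math.hypot(0.1, 1.0)   # h.n for a 0.1 tilt
ht = 0.1 / math.hypot(0.1, 1.0)   # h.(tilted tangent axis)
along_x = ward_brdf(0.9, 0.9, hn, ht, 0.0)
along_y = ward_brdf(0.9, 0.9, hn, 0.0, ht)
print(along_y > along_x)  # True: the lobe is broader along y
```

Fitting such per-surface-point parameters is what allows the rendered vase to show stretched, direction-dependent reflections of the HDR environment rather than uniform round highlights.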
This video shows how the hardware is opened to insert an object to be reconstructed. Currently, objects are limited to a maximum diameter of 80 cm and a weight of approximately 100 kg.
After the sphere is closed again, the acquisition process is fully automatic, though tuneable to account for complicated object geometries. Please note that the acquisition process shown here has been extremely condensed and only sketches some of the steps necessary to acquire the information for a single camera position, i.e. horizontal and vertical fringe projection, directed illumination with light patches, rotation of the carrier, etc. After the acquisition, the reconstruction of the object is computed fully automatically. A rendered result of the vase can be found at the end of the video. Note first that the rendering uses a real-world high-resolution HDR environment, which is reflected in the vase and introduces a considerable amount of blue sky colour into the rendering. Secondly, note that the reconstructed vase is NOT symmetric, which is in perfect agreement with the original.
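The steps condensed in the video can be pictured as a simple control loop: fringe projection in two orientations, a set of directed-illumination shots, then rotation of the carrier to the next position. Every function name, parameter and count below is hypothetical; the real OrcaM control software is not public.

```python
# Hedged sketch of the per-position acquisition loop drafted in the
# video. `project` and `capture` stand in for hypothetical projector
# and camera interfaces; `rotate` for the carrier motor.

def acquire_position(project, capture, n_phase_shifts=4, n_light_patches=8):
    """Capture all images needed for one camera/carrier position."""
    images = []
    # structured light: horizontal and vertical fringe patterns, each
    # with several phase shifts for sub-pixel correspondences
    for orientation in ("horizontal", "vertical"):
        for k in range(n_phase_shifts):
            project(("fringe", orientation, k))
            images.append(capture())
    # directed illumination with individual light patches, used later
    # to recover the reflectance (material) parameters
    for patch in range(n_light_patches):
        project(("patch", patch))
        images.append(capture())
    return images

def scan(project, capture, rotate, n_positions=24):
    """Fully automatic scan over all carrier positions."""
    data = []
    for _ in range(n_positions):
        data.append(acquire_position(project, capture))
        rotate(360.0 / n_positions)
    return data

# Dry run with stub hardware: 2 positions, 2*4 fringe + 8 patch images each.
shots = []
data = scan(lambda pattern: None,
            lambda: shots.append(1) or len(shots),
            lambda deg: None,
            n_positions=2)
print(len(data), len(data[0]))
```

The point of the loop structure is the tuneability mentioned above: complicated geometries can simply be given more phase shifts, light patches or carrier positions without changing the pipeline.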
For OrcaM reconstruction sequences (“Female Torso”, Wilhelm Lehmbruck) see:
With Studio you can visualize your 3D scene as well as tweak your materials and effects.