Cognitive Workflow Capturing and Rendering with On-Body Sensor Networks (COGNITO)

Gabriele Bleser, Luis Almeida, Ardhendu Behera, Andrew Calway, Anthony Cohn, Dima Damen, Hugo Domingues, Andrew Gee, Dominic Gorecky, David Hogg, Michael Kraly, Gustavo Maçães, Frederic Marin, Walterio Mayol-Cuevas, Markus Miezal, Katharina Mura, Nils Petersen, Nicolas Vignais, Luis Paulo Santos, Gerrit Spaas, Didier Stricker
Tech Report

Abstract:
The major goal of COGNITO was to develop enabling technologies for intelligent user assistance systems, with a focus on industrial manual tasks. This comprises technology for capturing user activity in relation to industrial workspaces through on-body sensors, algorithms for linking the captured information to underlying workflow patterns, and adaptive user interfaces that exploit this higher-level information to provide tailored feedback and support through adaptive augmented reality (AR) techniques. The major contributions of COGNITO are in line with its execution objectives and can be summarised as follows:

- A new generation of precise wireless inertial measurement units (IMUs) and a head-mounted display (HMD) with an integrated visual-inertial unit and gaze-tracking apparatus.
- A novel method for dense 3D workspace reconstruction and robust camera tracking, including fast relocalisation under agile movements, based on colour and depth (RGBD) information.
- A novel learning-based method for detecting and recognising handled textureless objects, together with a complete multi-object detection and tracking framework built upon the previous components.
- A novel visual-inertial upper-body tracking method using egocentric vision measurements for increased robustness against magnetic disturbances and, built upon this, an innovative system for online global biomechanical analysis and feedback according to the RULA (rapid upper limb assessment) standard (see the first sketch below).
- A detailed musculoskeletal model of the hand and forearm and a database of muscle forces and articular loads for typical industrial tasks, enabling online biomechanical analysis.
- A novel method for hand detection and tracking based on a monocular RGB camera, which can be linked to the aforementioned hand model.
- Novel, domain-independent methods for workflow recovery and monitoring based on spatiotemporal pairwise relations deduced from scene features, objects and user motions; these methods handle workspace appearance changes and are robust against broken tracks and missed detections (see the second sketch below).
- Workflow authoring tools and user interfaces exploiting the low- and higher-level information for context-sensitive user feedback.

Besides these advances in key technologies, we have developed an integrated system that makes use of them and addresses the complete action-perception-feedback loop, linking data capture with workflow and ergonomic understanding and feedback. Moreover, we have developed and tested this system and its components on three increasingly complex test datasets based on typical industrial manual tasks and have thoroughly documented the results. Finally, we have also developed lightweight monocular workflow segmentation, authoring and monitoring tools, together with a demonstrator system based on them.
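To make the RULA-based feedback concrete, here is a minimal sketch of online posture scoring from tracked joint angles. It covers only two of the RULA posture sub-scores (upper arm and lower arm), using the angle thresholds of the published RULA worksheet, and sums them into a coarse flag; the full procedure combines many more sub-scores through lookup tables. All function and variable names are illustrative assumptions, not the COGNITO implementation.

```python
# Minimal RULA-style posture scoring from tracked joint angles (illustrative).

def upper_arm_score(shoulder_flexion_deg: float) -> int:
    """Score upper-arm posture; positive angles denote flexion,
    negative angles extension (RULA worksheet thresholds)."""
    if -20 <= shoulder_flexion_deg <= 20:
        return 1
    if shoulder_flexion_deg < -20 or shoulder_flexion_deg <= 45:
        return 2
    if shoulder_flexion_deg <= 90:
        return 3
    return 4

def lower_arm_score(elbow_flexion_deg: float) -> int:
    """Score lower-arm posture from the elbow flexion angle."""
    return 1 if 60 <= elbow_flexion_deg <= 100 else 2

def posture_feedback(shoulder_deg: float, elbow_deg: float) -> str:
    """Turn the two partial scores into a coarse online feedback level."""
    score = upper_arm_score(shoulder_deg) + lower_arm_score(elbow_deg)
    return "acceptable" if score <= 3 else "investigate posture"

# Example: shoulder flexed to 70 degrees, elbow at 30 degrees.
print(posture_feedback(70.0, 30.0))  # -> investigate posture
```

In an online setting, such scores would be recomputed per frame from the upper-body tracker's joint angles and surfaced to the user when they persist above a threshold.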
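Similarly, the workflow-recovery methods mentioned above build on qualitative spatiotemporal relations between pairs of tracked entities rather than on raw trajectories. The sketch below shows one way such a per-frame relational descriptor could be encoded; the distance discretisation, thresholds and all names are assumptions for illustration and do not reproduce the COGNITO algorithms.

```python
# Illustrative encoding of a frame as qualitative pairwise spatial relations.
from itertools import combinations

def spatial_relation(p, q, touch_px=20.0, near_px=80.0):
    """Discretise the distance between two 2D image positions."""
    d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    if d <= touch_px:
        return "touching"
    return "near" if d <= near_px else "far"

def frame_descriptor(tracks):
    """tracks maps entity name -> (x, y) image position. Entities with
    missed detections are simply absent, so their pairs drop out of the
    descriptor instead of corrupting it (robustness to broken tracks)."""
    return frozenset(
        (a, b, spatial_relation(tracks[a], tracks[b]))
        for a, b in combinations(sorted(tracks), 2)
    )

# Two consecutive frames of a hypothetical assembly step; the plate
# detection is missed in the second frame.
f1 = frame_descriptor({"hand": (100, 120), "screwdriver": (110, 125), "plate": (300, 80)})
f2 = frame_descriptor({"hand": (105, 122), "screwdriver": (112, 126)})
print(("hand", "screwdriver", "touching") in f1)  # -> True
print(("hand", "screwdriver", "touching") in f2)  # -> True, despite the missed plate
```

Sequences of such descriptors can then be matched against learned workflow models for segmentation and monitoring.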
Keywords:
Egocentric sensing, workflow recognition, workflow monitoring, pattern recognition, inertial body tracking, computer vision, augmented reality, cognitive systems