Intelligent Augmented Reality Handbooks
Digital manuals that are displayed directly in the user's field of view via a head-mounted display are among the most frequently cited application examples for Augmented Reality (AR). AR manuals can significantly simplify and accelerate maintenance, repair, or installation work on complex systems.
Show Them How It Works – Worker Support for Future Factories
Digital handbooks, presented as step-by-step instructions in a head-mounted display (HMD) directly in the user's field of view, facilitate and accelerate the maintenance, repair, or installation of complex units. They explain each individual step precisely and clearly on site, can be called up at any time, reduce the safety risk to the employee, and contribute to perfect results.
DFKI's Augmented Vision research department is working on simplifying the creation of these AR handbooks through the integration of AI technologies, with the aim of making them fit for actual operations. In the past, this so-called "authoring" was generally performed manually, at correspondingly high cost. The systems often required script-like descriptions of the actions that had to be prepared by hand; furthermore, expert knowledge of the tracking system in use and of how to install tracking aids was necessary.
At the Federal Ministry of Education and Research (BMBF) exhibition stand at CeBIT this year, DFKI introduces the new AR Handbook System, which allows simple work processes to be documented and supported automatically by means of a lightweight system.
An integrated camera recognizes each manual action as it is performed and superimposes previously recorded video sequences in the HMD to show the next work step. This requires no special markers or other aids and, in contrast to many other methods, it also recognizes freehand gestures. Job sequences can be recorded quickly and easily and require only minimal post-processing. This technology significantly decreases the labor time required to create Augmented Reality manuals and, because it is far less complex, encourages widespread use.
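How the marker-less recognition of hand actions is implemented is not described in the text, so the following is only a rough sketch of how such a detector could look: simple skin-color segmentation combined with frame differencing on the head-mounted camera feed, implemented with OpenCV. The color range, thresholds, and the overall approach are illustrative assumptions, not the method actually used in the system described above.

```python
# Illustrative sketch (not the actual DFKI pipeline): detect hand activity in the
# head-mounted camera feed without markers, using skin-colour segmentation plus
# frame differencing. All thresholds are placeholder values.
import cv2
import numpy as np

SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin-colour range
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def hand_activity(prev_gray, frame):
    """Return (activity_score, current_gray) for one camera frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)       # where a hand likely is
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray)              # what moved since the last frame
    moving_skin = cv2.bitwise_and(motion, motion, mask=skin)
    score = np.count_nonzero(moving_skin > 20) / moving_skin.size
    return score, gray

cap = cv2.VideoCapture(0)                              # 0 = assumed head-mounted camera
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    score, prev = hand_activity(prev, frame)
    if score > 0.02:                                   # placeholder activity threshold
        print(f"hand action in progress (score {score:.3f})")
cap.release()
```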
After viewing a recorded sequence once, the authoring tool automatically breaks it down into separate, distinguishable actions and then links these sections with a stochastic transition model. An action observed during live operation can be assigned in time to the corresponding section, so that pointers to the subsequent step can be displayed at precisely the right moment. This kind of learning ("teach-in") is found in many areas of Artificial Intelligence, is an especially active research subject in robotics, and is also known in the literature as "programming by demonstration."
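As a minimal sketch of what such a stochastic transition model could look like, the following assumes a simple left-to-right structure over the learned steps and externally supplied observation likelihoods (how well the current camera frame matches each learned step). The probabilities and the observation model are illustrative assumptions, not the published method.

```python
# Hedged sketch of a left-to-right transition model over work steps, in the
# spirit of the "teach-in" described above. Observation likelihoods are assumed
# to come from some frame-matching function and are passed in as plain numbers.
import numpy as np

def left_to_right_transitions(n_steps, p_stay=0.9):
    """Step i either stays at i or advances to i+1 (placeholder probabilities)."""
    A = np.zeros((n_steps, n_steps))
    for i in range(n_steps):
        if i + 1 < n_steps:
            A[i, i] = p_stay
            A[i, i + 1] = 1.0 - p_stay
        else:
            A[i, i] = 1.0                      # last step absorbs
    return A

def forward_update(belief, A, obs_likelihood):
    """One forward-filtering step: predict with A, then weight by the observation."""
    predicted = belief @ A
    posterior = predicted * obs_likelihood
    return posterior / posterior.sum()

# Usage: three learned steps; a frame that looks most like step 2 shifts the belief.
A = left_to_right_transitions(3)
belief = np.array([1.0, 0.0, 0.0])             # start in the first step
belief = forward_update(belief, A, np.array([0.1, 0.8, 0.1]))
current_step = int(np.argmax(belief))          # index of the step whose hints to display
print(current_step, belief.round(3))
```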
Additionally, the method automatically creates semi-transparent overlays in which a "shadow image" of the pending action is displayed. Important details or supplemental pointers can be emphasized by adding graphic symbols such as arrows or lines. The simplified authoring and teach-in method, which can be performed by employees trained in the specific operation rather than by software experts, opens up additional fields of application, for example in quality management.
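As a rough illustration of the semi-transparent "shadow image" overlays mentioned above, the sketch below blends a pre-recorded frame of the pending action over the live camera image and adds an arrow as a graphic pointer. The file names, coordinates, and blend weights are assumptions made only for the example.

```python
# Illustrative sketch only: blend a semi-transparent "shadow image" of the next
# action over the live camera frame and add an arrow annotation.
import cv2

live = cv2.imread("live_frame.png")        # current camera frame (hypothetical file)
shadow = cv2.imread("next_action.png")     # pre-recorded frame of the pending action

shadow = cv2.resize(shadow, (live.shape[1], live.shape[0]))
overlay = cv2.addWeighted(live, 0.7, shadow, 0.3, 0)   # 30% "shadow" opacity

# Emphasise an important detail with a graphic symbol (here, an arrow).
cv2.arrowedLine(overlay, (50, 50), (200, 180), (0, 0, 255), 3, tipLength=0.2)
cv2.imwrite("hmd_overlay.png", overlay)
```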
Technicians at an assembly workstation can record "reference procedures" to ensure that all future assembly activities follow the same procedural pattern. A limited version of the AR Handbook is now available for Android smartphones and tablets. This means that in the future, even private users can obtain support when assembling furniture or when installing and operating household appliances.
Nils Petersen and Didier Stricker, ‘Learning Task Structure from Video Examples for Workflow Tracking and Authoring’, in Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2012
The German text is available here: PB_AV_AR-Handbuch_20130218.pdf
CeBIT 2012
When will AR Manuals have their breakthrough?
To make AR manuals truly fit for practical use, the DFKI research department Augmented Vision is working on simplifying their creation by integrating AI technologies. So far, the so-called authoring process has mostly been performed manually and therefore involves considerable time and effort. The systems often need manually written, script-like descriptions of the activities; moreover, expert knowledge about the tracking system in use and the installation of tracking aids is necessary.
Learning by watching…
At CeBIT 2012, the DFKI research department Augmented Vision presented an AR manual that uses a head-mounted camera to show the user the steps needed to install a RAM module in a notebook. User-friendliness was the focus of the development, so the authoring process has been significantly simplified. The system learns the necessary steps from a single or repeated demonstration of the respective action (1). It needs no special markers or other aids and also recognizes freehand gestures, which distinguishes it from many other methods.
The authoring tool automatically decomposes a sequence that has been viewed only once into individual, distinguishable parts and then recombines them by means of a stochastic transition model. An observed action can be precisely mapped to one of these parts, and notes concerning the following steps can be overlaid at exactly the right moment (3). This type of learning ("teach-in") is a cutting-edge research topic in AI, especially in robotics, and is commonly referred to in the literature as "programming by demonstration."
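How the tool splits a sequence it has seen only once into distinguishable parts is not detailed in the text. A hedged sketch of one possible approach is to cut the recording at change points of a coarse per-frame descriptor; the descriptor (a colour histogram) and the threshold below are illustrative choices, not the published method.

```python
# Hedged sketch: split one recorded demonstration into parts by detecting change
# points where consecutive frame descriptors differ strongly.
import cv2
import numpy as np

def descriptor(frame):
    """Coarse HSV colour histogram as a cheap per-frame descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()

def segment_demo(video_path, threshold=0.5):
    """Return frame indices where a new, distinguishable part seems to begin."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev, index = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        desc = descriptor(frame)
        if prev is not None and np.linalg.norm(desc - prev) > threshold:
            boundaries.append(index)       # descriptor jumped: new part starts here
        prev, index = desc, index + 1
    cap.release()
    return boundaries

# Usage (hypothetical recording of one demonstration):
# print(segment_demo("teach_in_demo.mp4"))
```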
…watching and applying
The method also automatically generates corresponding overlays that fade in a semi-transparent "shadow image" of the action to be carried out. Important details or additional references can be highlighted directly in the recorded sequence by inserting graphical symbols such as arrows or lines (2).
The simplified authoring and teach-in method opens up new fields of application, for example in quality management, as it can be used by specialists who are trained in those fields rather than by software specialists. Skilled employees could record "reference work cycles", thus guaranteeing that subsequent repetitions are carried out in exactly the same way (3).
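As a sketch of how such "reference work cycles" could be exploited for quality management, the following hypothetical check compares the step labels recognized during a later cycle against the recorded reference and reports deviations. The step names and the comparison logic are assumptions for illustration, not part of the system described above.

```python
# Minimal sketch (assumed design): compare an observed work cycle against a
# recorded reference cycle and report deviations for quality management.
def check_against_reference(reference_steps, observed_steps):
    """Return a list of human-readable deviations from the reference cycle."""
    deviations = []
    for step in reference_steps:
        if step not in observed_steps:
            deviations.append(f"missing step: {step}")
    # Check that the steps which do occur appear in the reference order.
    order = [s for s in observed_steps if s in reference_steps]
    expected = [s for s in reference_steps if s in order]
    if order != expected:
        deviations.append(f"steps out of order: {order} (expected {expected})")
    return deviations

reference = ["remove cover", "release clip", "insert module", "close cover"]
observed = ["remove cover", "insert module", "release clip", "close cover"]
print(check_against_reference(reference, observed))   # reports the swapped steps
```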
Vision: Usable by everyone
The research department Augmented Vision is already working on an Android smartphone version that would make the "AR manual" application available to consumers, too. They could then be supported, for example, when assembling furniture or when installing and operating household appliances.