Intelligent Augmented Reality Handbooks
Show Them How it Works – Worker support for future factories
Digital handbooks, presented as step-by-step instructions in a head-mounted display (HMD) directly in the user's field of view, facilitate and accelerate the maintenance, repair, or installation of complex units. They explain each individual step precisely and clearly on site, can be called up at any time, reduce the safety risk to the employee, and contribute to consistently correct results.
When will AR Manuals have their breakthrough?
To make AR manuals truly fit for use, the DFKI research department Augmented Vision is working to simplify their creation by integrating AI technologies. So far, the so-called authoring process has mostly been carried out manually and therefore involved considerable time and effort. Such systems often need manually written, script-like descriptions of the activities; moreover, expert knowledge of the tracking system in use and of how to install tracking aids is required.
Learning by watching...
At CeBIT 2012, the DFKI research department Augmented Vision presented an AR manual that, via a head-mounted camera, shows the user the steps needed to install a RAM module in a notebook. The development focused on user-friendliness, so the authoring process has been significantly simplified: the system learns the necessary steps from a single or repeated demonstration of the respective action (1). It needs no special markers or other aids and also recognizes freehand gestures, which distinguishes it from many other methods.
The authoring tool automatically decomposes a once-observed action sequence into distinguishable parts and then recombines them by means of a stochastic transition model. An observed action can thus be precisely mapped to one of these parts, and hints about the following steps can be overlaid at exactly the right moment (3). This type of learning ("teach-in") is a cutting-edge research topic in AI, especially in robotics, where it is commonly referred to as "programming by demonstration" in the literature.
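The general idea can be illustrated with a minimal Python sketch. This is not DFKI's implementation: the step names, feature vectors, and transition probabilities below are invented for illustration. It shows how an observed action can be matched to a learned step, with a Markov-style transition model weighting the match and suggesting the most likely next step for the overlay.

```python
# Minimal sketch of step recognition with a stochastic transition model.
# All step names, feature values, and probabilities are illustrative,
# not taken from the article or the DFKI system.

STEPS = ["open_cover", "insert_ram", "close_cover"]

# TRANS[i][j] = probability that step j follows step i,
# as it could be estimated from repeated demonstrations.
TRANS = {
    "open_cover":  {"open_cover": 0.1, "insert_ram": 0.8, "close_cover": 0.1},
    "insert_ram":  {"open_cover": 0.1, "insert_ram": 0.2, "close_cover": 0.7},
    "close_cover": {"open_cover": 0.3, "insert_ram": 0.1, "close_cover": 0.6},
}

# Each step's reference feature vector, e.g. derived from hand-motion
# descriptors recorded during the demonstration.
TEMPLATES = {
    "open_cover":  (0.9, 0.1),
    "insert_ram":  (0.2, 0.8),
    "close_cover": (0.5, 0.5),
}

def likelihood(obs, template):
    """Simple similarity score: inverse of squared distance (plus epsilon)."""
    d2 = sum((o - t) ** 2 for o, t in zip(obs, template))
    return 1.0 / (1e-6 + d2)

def recognize(obs, prev_step=None):
    """Map an observed feature vector to the most probable step,
    weighting the match by the transition model when the previous
    step is known."""
    best, best_score = None, -1.0
    for step in STEPS:
        score = likelihood(obs, TEMPLATES[step])
        if prev_step is not None:
            score *= TRANS[prev_step][step]
        if score > best_score:
            best, best_score = step, score
    return best

def next_step_hint(step):
    """Suggest the most likely following step, to be overlaid as a hint."""
    return max(TRANS[step], key=TRANS[step].get)
```

With these illustrative values, `recognize((0.88, 0.12))` yields `"open_cover"`, and `next_step_hint("open_cover")` yields `"insert_ram"`, which is the moment at which the corresponding hint would be faded in.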
...watching and applying
The method also automatically generates the corresponding overlays, which fade in a semi-transparent "shadow image" of the action to be carried out. Important details or additional references can be highlighted directly in the recorded sequence by inserting graphical symbols such as arrows or lines (2).
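A semi-transparent shadow image is, at its core, an alpha blend of the recorded action onto the live camera frame. The sketch below shows the principle on tiny grayscale "images"; the pixel data and the alpha value are illustrative, and a real AR system would render the overlay into the HMD's camera view instead.

```python
# Minimal sketch of a semi-transparent "shadow image" overlay.
# Images are lists of rows of grayscale intensities in [0, 255];
# the data and alpha value are illustrative only.

def blend(frame, shadow, alpha=0.5):
    """Alpha-blend a shadow image onto a camera frame.

    alpha=0.5 makes the shadow half-transparent, so the recorded
    action shines through without hiding the live view.
    """
    return [
        [round((1 - alpha) * f + alpha * s) for f, s in zip(frow, srow)]
        for frow, srow in zip(frame, shadow)
    ]

frame = [[100, 100], [100, 100]]   # live camera image (illustrative)
shadow = [[255, 0], [0, 255]]      # recorded action to fade in

print(blend(frame, shadow))
```

Each output pixel lies halfway between the live frame and the shadow, which is exactly the "shadow image" effect the overlay produces.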
The simplified authoring and teaching method opens up new fields of application, for example in quality management, because it can be used by specialists trained in those fields rather than by software specialists. Skilled employees could record "reference work cycles", guaranteeing that subsequent repetitions are carried out in exactly the same way (3).
Vision: Usable by everyone
The research department Augmented Vision is already working on an Android smartphone version that would make the "AR manual" application available to consumers as well. It could support them, for example, in assembling furniture or in installing and operating household appliances.
Click here for the demonstration video
The underlying technology was developed within the project EMERGENT (www.software-cluster.org) of the Software Cluster, supported by the Federal Ministry of Education and Research (BMBF) under funding label "01|C10S01", and was partially funded by the EU project COGNITO "ICT-248290" (www.ict-cognito.org).