Multimodal Interaction Strategies in a Multi-Device Environment using Natural Speech
Christian Husodo Schulz, Daniel Sonntag, Markus Weber, Takumi Toyama
Proceedings of the Companion Publication of the 2013 International Conference on Intelligent User Interfaces Companion IUI Workshop on Interactive Machine Learning, located at IUI 2013, March 19-22, Santa Monica, CA, USA

Abstract:
In this paper, we present an intelligent user interface that combines a speech-based interface with several other input modalities. The integration of multiple devices into a working environment should provide greater flexibility in daily routines, for example those of medical experts. To this end, we introduce a medical cyber-physical system that demonstrates the use of a bidirectional connection between a speech-based interface and a head-mounted see-through display. We show examples of how multiple input modalities can be exploited to increase the usability of a speech-based interaction system.