Adaptive human-machine interfaces in cognitive production environments

F. Wallhoff, M. Ablaßmeier, A. Bannat, S. Buchta, A. Rauschert, G. Rigoll, M. Wiesbeck

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

26 Scopus citations

Abstract

This article presents an integrated framework for multi-modal adaptive cognitive technical systems that guide, assist and observe human workers in complex manual assembly environments. The demand for highly flexible production facilities clearly conflicts with the long training and preparation phases that human workers require. By delivering context-aware assembly instructions via retina displays, text-to-speech commands or acoustic signals, a non-specialized stand-in worker on a production task can be directed precisely to the next processing step without any prior knowledge. Using non-invasive gesture recognizers and object detectors, the human worker can be observed in order to track progress on the production line and initiate the subsequent step in the interaction loop. To test and evaluate the proposed human-machine interfaces and their capabilities, a virtual workplace together with a concrete use case is introduced.

Original language: English
Title of host publication: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007
Publisher: IEEE Computer Society
Pages: 2246-2249
Number of pages: 4
ISBN (Print): 1424410177, 9781424410170
DOIs
State: Published - 2007
Event: IEEE International Conference on Multimedia and Expo, ICME 2007 - Beijing, China
Duration: 2 Jul 2007 - 5 Jul 2007

Publication series

Name: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007

Conference

Conference: IEEE International Conference on Multimedia and Expo, ICME 2007
Country/Territory: China
City: Beijing
Period: 2/07/07 - 5/07/07
