Speech interaction in virtual reality

Johannes Muller, Christian Krapichler, Lam Son Nguyen, Karl Hans Englmeier, Manfred Lang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Scopus citations

Abstract

A system for the visualization of three-dimensional anatomical data, derived from magnetic resonance imaging (MRI) or computed tomography (CT), enables the physician to navigate through and interact with the patient's 3D scans in a virtual environment. This paper presents the multimodal human-machine interaction, focusing on speech input. For this task, a speech understanding front-end using a special kind of semantic decoder was successfully adopted. Navigation, as well as certain parameters and functions, can now be accessed directly by spoken commands. Using the implemented interaction modalities, the speed and efficiency of diagnosis could be considerably improved.
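The spoken-command access the abstract describes could, in principle, be realized as a mapping from recognized command phrases to navigation actions. A minimal, hypothetical sketch (the command names, actions, and state are illustrative and not taken from the paper):

```python
# Hypothetical sketch of speech-command dispatch for VR navigation.
# Command phrases and the navigation state are illustrative only;
# the paper's actual semantic decoder is more sophisticated.

def make_dispatcher():
    # Toy navigation state for a 3D scan viewer.
    state = {"zoom": 1.0, "slice": 0}

    def zoom_in():
        state["zoom"] *= 1.5

    def zoom_out():
        state["zoom"] /= 1.5

    def next_slice():
        state["slice"] += 1

    # Recognized phrase -> action.
    commands = {
        "zoom in": zoom_in,
        "zoom out": zoom_out,
        "next slice": next_slice,
    }

    def dispatch(utterance: str) -> bool:
        """Map a recognized utterance to an action; True if handled."""
        action = commands.get(utterance.strip().lower())
        if action is None:
            return False  # unknown phrase: fall back to other modalities
        action()
        return True

    return dispatch, state
```

In a multimodal setting, an unhandled utterance would simply be ignored by the speech front-end, leaving gesture or device input to cover the interaction.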

Original language: English
Title of host publication: Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 1998
Pages: 3757-3760
Number of pages: 4
DOIs
State: Published - 1998
Event: 1998 23rd IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 1998 - Seattle, WA, United States
Duration: 12 May 1998 - 15 May 1998

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 6
ISSN (Print): 1520-6149

Conference

Conference: 1998 23rd IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 1998
Country/Territory: United States
City: Seattle, WA
Period: 12/05/98 - 15/05/98
