SonifEye: Sonification of Visual Information Using Physical Modeling Sound Synthesis

Hessam Roodaki, Navid Navab, Abouzar Eslami, Christopher Stapleton, Nassir Navab

Research output: Contribution to journal › Article › peer-review


Abstract

Sonic interaction as a technique for conveying information has advantages over conventional visual augmented reality methods, especially when augmenting the visual field with extra information causes distraction. Sonification of knowledge extracted by applying computational methods to sensory data is a well-established concept. However, some aspects of sonic interaction design, such as aesthetics, the cognitive effort required to perceive information, and the avoidance of alarm fatigue, are not well studied in the literature. In this work, we present a sonification scheme based on physical modeling sound synthesis that targets focus-demanding tasks requiring extreme precision. The proposed mapping techniques are designed to require minimal training for users to adapt to and minimal mental effort to interpret the conveyed information. Two experiments were conducted to assess the feasibility of the proposed method and to compare it against visual augmented reality in high-precision tasks. The quantitative results suggest that sound patches generated by physical modeling achieve the desired goal of improving the user experience and overall task performance with minimal training.
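The abstract does not specify the exact parameter mappings, but the general idea of driving a physically modeled resonator from a precision-error signal can be sketched. The Python sketch below is a minimal illustration, not the authors' implementation: it assumes a normalized error trace sampled at a fixed control rate, models a single resonant mode as an exponentially decaying sinusoid (the simplest modal-synthesis building block), and uses arbitrary pitch, loudness, and threshold choices. The names modal_tap and sonify_error, the 0.05 silence threshold, and the 800-1400 Hz frequency range are all hypothetical.

    # Minimal sketch: map a scalar deviation-from-target signal to taps on
    # a physically modeled resonator. Larger deviations produce louder,
    # higher-pitched impacts; small deviations stay silent, which is one
    # plausible way to avoid alarm fatigue. All mappings are assumptions.
    import math
    import wave
    import struct

    SAMPLE_RATE = 44100

    def modal_tap(freq_hz, decay_s, amplitude, duration_s):
        """Impulse response of one resonant mode: an exponentially
        decaying sinusoid, the core element of modal synthesis."""
        n = int(duration_s * SAMPLE_RATE)
        return [
            amplitude * math.exp(-t / (decay_s * SAMPLE_RATE))
            * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)
        ]

    def sonify_error(error_trace, frame_s=0.05):
        """Map a normalized error trace (one value in [0, 1] per control
        frame) to audio: each frame above a threshold triggers a tap
        whose loudness and pitch grow with the error."""
        out = [0.0] * int(len(error_trace) * frame_s * SAMPLE_RATE)
        for i, err in enumerate(error_trace):
            if err < 0.05:  # small errors stay silent: no alarm fatigue
                continue
            start = int(i * frame_s * SAMPLE_RATE)
            tap = modal_tap(freq_hz=800 + 600 * err,  # higher pitch = worse
                            decay_s=0.08, amplitude=min(err, 1.0),
                            duration_s=frame_s * 4)
            for j, s in enumerate(tap):
                if start + j < len(out):
                    out[start + j] += s  # taps may overlap and sum
        return out

    if __name__ == "__main__":
        # Hypothetical error trace: drifting away from, then back toward,
        # the target position during a high-precision task.
        trace = [abs(math.sin(0.15 * k)) * 0.8 for k in range(100)]
        samples = sonify_error(trace)
        peak = max(1e-9, max(abs(s) for s in samples))
        with wave.open("sonifeye_sketch.wav", "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(SAMPLE_RATE)
            w.writeframes(b"".join(
                struct.pack("<h", int(32767 * s / peak)) for s in samples))

Triggering short decaying impacts rather than a continuous tone is one design choice consistent with the abstract's stated goals (low cognitive effort, no alarm fatigue); the paper itself should be consulted for the actual excitation models and mappings used.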

Original language: English
Article number: 8007327
Pages (from-to): 2366-2371
Number of pages: 6
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 23
Issue number: 11
DOIs
State: Published - Nov 2017

Keywords

  • Aural augmented reality
  • auditory feedback
  • sonic interaction
  • sonification
