Audio Enhancement for Computer Audition—An Iterative Training Paradigm Using Sample Importance

Manuel Milling, Shuo Liu, Andreas Triantafyllopoulos, Ilhan Aslan, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review

Abstract

Neural network models for audio tasks, such as automatic speech recognition (ASR) and acoustic scene classification (ASC), are susceptible to noise contamination in real-life applications. To improve audio quality, an enhancement module, which can be developed independently, is typically placed at the front-end of the target audio application. In this paper, we present an end-to-end learning solution to jointly optimise the models for audio enhancement (AE) and the subsequent applications. To guide the optimisation of the AE module towards a target application, and especially to overcome difficult samples, we make use of the sample-wise performance measure as an indication of sample importance. In experiments, we consider four representative applications to evaluate our training paradigm: ASR, speech command recognition (SCR), speech emotion recognition (SER), and ASC. These applications cover speech and non-speech tasks involving semantic and non-semantic features as well as transient and global information. The experimental results indicate that our proposed approach can considerably boost the noise robustness of the models, especially at low signal-to-noise ratios, for a wide range of computer audition tasks in everyday-life noisy environments.
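The abstract does not spell out how the sample-wise performance measure is turned into importance weights, so the following is only a minimal NumPy sketch of one plausible instantiation: per-sample downstream task losses are converted into normalised importance weights, which then reweight the per-sample enhancement loss in a joint objective. All function names, the softmax weighting, and the mixing parameter `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sample_importance_weights(task_losses, temperature=1.0):
    """Map per-sample downstream losses to normalised importance weights.

    A softmax over the losses (a hypothetical choice) gives harder
    samples, i.e., those with larger task loss, more weight.
    """
    scaled = np.asarray(task_losses, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    w = np.exp(scaled)
    return w / w.sum()              # weights sum to 1

def joint_loss(enhancement_losses, task_losses, alpha=0.5):
    """Combine an importance-weighted enhancement loss with the task loss.

    `alpha` (assumed) trades off the AE objective against the
    downstream application objective in the joint optimisation.
    """
    w = sample_importance_weights(task_losses)
    weighted_ae = float(np.sum(w * np.asarray(enhancement_losses, dtype=float)))
    return alpha * weighted_ae + (1.0 - alpha) * float(np.mean(task_losses))
```

Under this sketch, a batch where one sample has a much larger task loss would see that sample's enhancement loss dominate the AE term, steering the enhancement module towards the samples the downstream model currently handles worst.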

Original language: English
Pages (from-to): 895-911
Number of pages: 17
Journal: Journal of Computer Science and Technology
Volume: 39
Issue number: 4
DOIs
State: Published - Jul 2024

Keywords

  • audio enhancement
  • computer audition
  • joint optimisation
  • multi-task learning
  • voice suppression
