Feature enhancement by deep LSTM networks for ASR in reverberant multisource environments

Felix Weninger, Jürgen Geiger, Martin Wöllmer, Björn Schuller, Gerhard Rigoll

Research output: Contribution to journal › Article › peer-review


Abstract

This article investigates speech feature enhancement based on deep bidirectional recurrent neural networks. The Long Short-Term Memory (LSTM) architecture is used to exploit a self-learnt amount of temporal context when learning the correspondence between noisy, reverberant speech features and their undistorted counterparts. The resulting networks are applied to feature enhancement in the context of the 2013 2nd Computational Hearing in Multisource Environments (CHiME) Challenge track 2 task, which consists of the Wall Street Journal (WSJ-0) corpus distorted by highly non-stationary, convolutive noise. In extensive test runs, different feature front-ends, network training targets, and network topologies are evaluated in terms of frame-wise regression error and speech recognition performance. Furthermore, we consider gradually refined speech recognition back-ends, from baseline 'out-of-the-box' clean models to discriminatively trained multi-condition models adapted to the enhanced features. Overall, deep bidirectional LSTM networks processing log Mel filterbank outputs deliver the best results with clean models, reaching a 42% word error rate (WER) at signal-to-noise ratios ranging from -6 to 9 dB (multi-condition CHiME Challenge baseline: 55% WER). Discriminative training of the back-end using LSTM-enhanced features is shown to further decrease the WER to 22%. To our knowledge, this is the best result reported to date for the 2nd CHiME Challenge WSJ-0 task.
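To make the enhancement approach concrete, the following is a minimal sketch of the core idea: a deep bidirectional LSTM regresses undistorted log Mel features from noisy, reverberant ones with a frame-wise mean-squared-error objective. It assumes PyTorch (not the toolkit used by the authors), and the filterbank size, layer width, and depth are illustrative placeholders rather than the paper's exact topology.

    # Hypothetical sketch of BLSTM feature enhancement, not the authors' code:
    # a stacked bidirectional LSTM maps noisy log Mel frames to estimates of
    # the corresponding clean frames, trained with a frame-wise MSE loss.
    import torch
    import torch.nn as nn

    N_MELS = 26    # assumed log Mel filterbank size (front-ends vary in the paper)
    HIDDEN = 128   # illustrative layer width

    class BLSTMEnhancer(nn.Module):
        def __init__(self, n_feats=N_MELS, hidden=HIDDEN, layers=3):
            super().__init__()
            # deep bidirectional LSTM: uses both past and future temporal context
            self.blstm = nn.LSTM(n_feats, hidden, num_layers=layers,
                                 batch_first=True, bidirectional=True)
            # linear readout maps concatenated forward/backward hidden states
            # to an estimate of the undistorted feature frame
            self.out = nn.Linear(2 * hidden, n_feats)

        def forward(self, noisy):            # noisy: (batch, frames, n_feats)
            h, _ = self.blstm(noisy)
            return self.out(h)               # enhanced: (batch, frames, n_feats)

    # toy training step on random stand-in data
    model = BLSTMEnhancer()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    noisy = torch.randn(8, 200, N_MELS)      # batch of noisy log Mel sequences
    clean = torch.randn(8, 200, N_MELS)      # parallel undistorted targets
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    optimizer.step()

In the pipeline described by the abstract, the enhanced features would then replace the distorted ones at the input of the speech recognition back-end.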

Original language: English
Pages (from-to): 888-902
Number of pages: 15
Journal: Computer Speech and Language
Volume: 28
Issue number: 4
State: Published - July 2014

Keywords

  • Automatic speech recognition
  • Deep neural networks
  • Feature enhancement
  • Long Short-Term Memory
