Deep learning for environmentally robust speech recognition: An overview of recent developments

Zixing Zhang, Jürgen Geiger, Jouni Pohjalainen, Amr El Desoky Mousa, Wenyu Jin, Björn Schuller

Research output: Contribution to journal › Review article › peer-review

246 Scopus citations


Eliminating the negative effect of non-stationary environmental noise is a long-standing research topic for automatic speech recognition, yet it remains an important challenge. Data-driven supervised approaches, especially those based on deep neural networks, have recently emerged as potential alternatives to traditional unsupervised approaches; with sufficient training data, they can alleviate the shortcomings of unsupervised methods in various real-life acoustic environments. In this light, we review recently developed, representative deep learning approaches for tackling non-stationary additive and convolutional degradation of speech, with the aim of providing guidelines for those involved in the development of environmentally robust speech recognition systems. We separately discuss single- and multi-channel techniques developed for the front-end and back-end of speech recognition systems, as well as joint front-end and back-end training frameworks. We also discuss the pros and cons of these approaches and report their experimental results on benchmark databases. We expect this overview to facilitate the development of robust speech recognition systems for noisy acoustic environments.
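As a minimal sketch of the degradation model the abstract refers to, environmentally corrupted speech is commonly written as y[n] = (x * h)[n] + v[n], where h captures convolutional distortion (e.g. room reverberation) and v captures additive noise. The code below is only an illustration of that signal model, assuming toy signal shapes; it is not taken from the article itself.

```python
import numpy as np

def degrade(x, h, v):
    """Apply convolutional (reverberation-like) and additive (noise)
    degradation to a clean signal: y[n] = (x * h)[n] + v[n].
    Output is truncated to the length of the clean input."""
    reverberant = np.convolve(x, h)[: len(x)]  # convolutional distortion
    return reverberant + v[: len(x)]           # additive noise

# Toy example: a clean sinusoidal "speech" signal, a short impulse
# response, and white Gaussian noise (all hypothetical values).
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.05 * np.arange(200))   # clean signal
h = np.array([1.0, 0.5, 0.25])                  # toy room impulse response
v = 0.1 * rng.standard_normal(200)              # additive noise
y = degrade(x, h, v)
```

Front-end enhancement methods reviewed in the article aim to recover x (or robust features of it) from y; back-end methods instead make the recognizer itself tolerant of such degraded input.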

Original language: English
Article number: 49
Journal: ACM Transactions on Intelligent Systems and Technology
Issue number: 5
State: Published - Apr 2018
Externally published: Yes


  • Deep learning
  • Multi-channel speech recognition
  • Neural networks
  • Nonstationary noise
  • Robust speech recognition

