Towards Constructing HMM Structure for Speech Recognition with Deep Neural Fenonic Baseform Growing

Lujun Li, Tobias Watzel, Ludwig Kürzinger, Gerhard Rigoll

Research output: Contribution to journal › Article › peer-review



For decades, acoustic models in speech recognition systems have pivoted on Hidden Markov Models (HMMs), e.g., the Gaussian Mixture Model-HMM system, the Deep Neural Network-HMM system, etc., and have achieved remarkable results. However, the prevailing HMM topology is the three-state left-to-right structure, whose superiority has never been established with certainty. There are multiple studies on optimizing the HMM structure, but none of them addresses this problem by leveraging deep learning algorithms. For the first time, this paper proposes a new training method based on Deep Neural Fenonic Baseform Growing to optimize the HMM structure, which is concisely designed and computationally cheap. This data-driven method customizes the HMM structure for each phone precisely, without external assumptions concerning the number of states or transition patterns. Experimental results on both the TIMIT and TEDliumv2 corpora indicate that the proposed HMM structure improves both the monophone system and the triphone system substantially. Moreover, its adoption further improves state-of-the-art speech recognition systems with remarkably fewer parameters.
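To make the baseline concrete, here is a minimal sketch (not the paper's Deep Neural Fenonic Baseform Growing method) of the conventional three-state left-to-right HMM topology that the abstract questions, together with a standard forward-algorithm likelihood. Function names, the self-loop probability, and the toy emission values are illustrative assumptions.

```python
# Illustrative sketch of the baseline topology only; the paper's
# contribution is a data-driven method that replaces this fixed
# structure with a per-phone topology.

def left_to_right_transitions(n_states, self_loop=0.6):
    """Transition matrix for a strict left-to-right HMM.

    State i loops with probability `self_loop` and advances to state
    i+1 with the remainder; the final state only loops. Skips and
    backward transitions are disallowed, which is exactly the
    structural assumption the paper revisits.
    """
    A = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        if i == n_states - 1:
            A[i][i] = 1.0
        else:
            A[i][i] = self_loop
            A[i][i + 1] = 1.0 - self_loop
    return A


def forward(A, emission_probs):
    """Forward-algorithm likelihood of an observation sequence.

    `emission_probs[t][i]` is P(observation t | state i). Decoding
    starts in state 0, as in a left-to-right phone model.
    """
    n = len(A)
    alpha = [emission_probs[0][i] if i == 0 else 0.0 for i in range(n)]
    for t in range(1, len(emission_probs)):
        alpha = [
            emission_probs[t][j] * sum(alpha[i] * A[i][j] for i in range(n))
            for j in range(n)
        ]
    return sum(alpha)


A = left_to_right_transitions(3)
# Toy emissions: 4 frames, each uniform over the 3 states.
obs = [[1.0 / 3] * 3 for _ in range(4)]
likelihood = forward(A, obs)
```

A data-driven topology, by contrast, would let the number of states and the nonzero entries of `A` vary per phone rather than fixing them in advance.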

Original language: English
Article number: 9371697
Pages (from-to): 39098-39110
Number of pages: 13
Journal: IEEE Access
State: Published - 2021


  • Deep neural network
  • HMM topology
  • speech recognition
  • vector quantization


