TY - GEN
T1 - CTC-Segmentation of Large Corpora for German End-to-End Speech Recognition
AU - Kürzinger, Ludwig
AU - Winkelbauer, Dominik
AU - Li, Lujun
AU - Watzel, Tobias
AU - Rigoll, Gerhard
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Recent end-to-end Automatic Speech Recognition (ASR) systems have demonstrated the ability to outperform conventional hybrid DNN/HMM ASR. Aside from architectural improvements, these models have grown in depth, parameter count, and model capacity; however, they also require more training data to achieve comparable performance. In this work, we combine freely available corpora for German speech recognition, including as-yet-unlabeled speech data, into a large dataset of over 1700 h of speech. For data preparation, we propose a two-stage approach that uses an ASR model pre-trained with Connectionist Temporal Classification (CTC) to bootstrap more training data from unsegmented or unlabeled material. Utterances are then extracted from the label probabilities produced by the CTC-trained network to determine segment alignments. With this training data, we trained a hybrid CTC/attention Transformer model that achieves 12.8% WER on the Tuda-DE test set, surpassing the previous baseline of 14.4% set by a conventional hybrid DNN/HMM ASR system.
AB - Recent end-to-end Automatic Speech Recognition (ASR) systems have demonstrated the ability to outperform conventional hybrid DNN/HMM ASR. Aside from architectural improvements, these models have grown in depth, parameter count, and model capacity; however, they also require more training data to achieve comparable performance. In this work, we combine freely available corpora for German speech recognition, including as-yet-unlabeled speech data, into a large dataset of over 1700 h of speech. For data preparation, we propose a two-stage approach that uses an ASR model pre-trained with Connectionist Temporal Classification (CTC) to bootstrap more training data from unsegmented or unlabeled material. Utterances are then extracted from the label probabilities produced by the CTC-trained network to determine segment alignments. With this training data, we trained a hybrid CTC/attention Transformer model that achieves 12.8% WER on the Tuda-DE test set, surpassing the previous baseline of 14.4% set by a conventional hybrid DNN/HMM ASR system.
KW - CTC-segmentation
KW - End-to-end automatic speech recognition
KW - German speech dataset
KW - Hybrid CTC/Attention
UR - http://www.scopus.com/inward/record.url?scp=85092905593&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-60276-5_27
DO - 10.1007/978-3-030-60276-5_27
M3 - Conference contribution
AN - SCOPUS:85092905593
SN - 9783030602758
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 267
EP - 278
BT - Speech and Computer - 22nd International Conference, SPECOM 2020, Proceedings
A2 - Karpov, Alexey
A2 - Potapova, Rodmonga
PB - Springer Science and Business Media Deutschland GmbH
T2 - 22nd International Conference on Speech and Computer, SPECOM 2020
Y2 - 7 October 2020 through 9 October 2020
ER -