TY - GEN
T1 - A new unsupervised learning algorithm for multilayer perceptrons based on information theory principles
AU - Rigoll, Gerhard
PY - 1991
Y1 - 1991
N2 - The author describes a novel learning algorithm for multilayer perceptrons (MLPs). The trained MLP is used as the vector quantizer (VQ) in a hidden Markov model (HMM) based speech recognition system. This approach represents an unsupervised learning algorithm for multilayer perceptrons, i.e., the neurons of the output layer do not receive any specific target values during training; instead, the output is learned during training using principles of self-organization. Information theory principles are used as learning criteria for the MLP. When using VQ in an HMM-based speech recognition system, multiple features such as cepstral parameters, differential cepstral parameters, and energy can be used as joint input into the same VQ, thus avoiding the use of multiple codebooks. In this way, the principle of 'sensor fusion' can be transferred to the speech recognition area with the same intention, namely using neural networks to merge the outputs of different information sources in order to obtain an improved feature extractor for more robust pattern recognition.
AB - The author describes a novel learning algorithm for multilayer perceptrons (MLPs). The trained MLP is used as the vector quantizer (VQ) in a hidden Markov model (HMM) based speech recognition system. This approach represents an unsupervised learning algorithm for multilayer perceptrons, i.e., the neurons of the output layer do not receive any specific target values during training; instead, the output is learned during training using principles of self-organization. Information theory principles are used as learning criteria for the MLP. When using VQ in an HMM-based speech recognition system, multiple features such as cepstral parameters, differential cepstral parameters, and energy can be used as joint input into the same VQ, thus avoiding the use of multiple codebooks. In this way, the principle of 'sensor fusion' can be transferred to the speech recognition area with the same intention, namely using neural networks to merge the outputs of different information sources in order to obtain an improved feature extractor for more robust pattern recognition.
UR - http://www.scopus.com/inward/record.url?scp=0026308862&partnerID=8YFLogxK
U2 - 10.1109/ijcnn.1991.170683
DO - 10.1109/ijcnn.1991.170683
M3 - Conference contribution
AN - SCOPUS:0026308862
SN - 0780302273
SN - 9780780302273
T3 - 1991 IEEE International Joint Conference on Neural Networks, IJCNN '91
SP - 1764
EP - 1769
BT - 1991 IEEE International Joint Conference on Neural Networks, IJCNN '91
PB - IEEE
T2 - 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91
Y2 - 18 November 1991 through 21 November 1991
ER -