TY - JOUR
T1 - The INTERSPEECH 2018 computational paralinguistics challenge
T2 - 19th Annual Conference of the International Speech Communication Association, INTERSPEECH 2018
AU - Schuller, Björn W.
AU - Steidl, Stefan
AU - Batliner, Anton
AU - Marschik, Peter B.
AU - Baumeister, Harald
AU - Dong, Fengquan
AU - Hantke, Simone
AU - Pokorny, Florian B.
AU - Rathner, Eva Maria
AU - Bartl-Pokorny, Katrin D.
AU - Einspieler, Christa
AU - Zhang, Dajie
AU - Baird, Alice
AU - Amiriparian, Shahin
AU - Qian, Kun
AU - Ren, Zhao
AU - Schmitt, Maximilian
AU - Tzirakis, Panagiotis
AU - Zafeiriou, Stefanos
N1 - Publisher Copyright:
© 2018 International Speech Communication Association. All rights reserved.
PY - 2018
Y1 - 2018
N2 - The INTERSPEECH 2018 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Atypical Affect Sub-Challenge, four basic emotions annotated in the speech of handicapped subjects have to be classified; in the Self-Assessed Affect Sub-Challenge, valence scores given by the speakers themselves are used for a three-class classification problem; in the Crying Sub-Challenge, three types of infant vocalisations have to be told apart; and in the Heart Beats Sub-Challenge, three different types of heart beats have to be determined. We describe the Sub-Challenges, their conditions, and baseline feature extraction and classifiers, which include data-learnt (supervised) feature representations by end-to-end learning, the 'usual' ComParE and BoAW features, and deep unsupervised representation learning using the AUDEEP toolkit for the first time in the challenge series.
KW - Atypical Affect
KW - Challenge
KW - Computational Paralinguistics
KW - Crying
KW - Heart Beats
KW - Self-Assessed Affect
UR - http://www.scopus.com/inward/record.url?scp=85052817968&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2018-51
DO - 10.21437/Interspeech.2018-51
M3 - Conference article
AN - SCOPUS:85052817968
SN - 2308-457X
VL - 2018-September
SP - 122
EP - 126
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Y2 - 2 September 2018 through 6 September 2018
ER -