Testing Correctness, Fairness, and Robustness of Speech Emotion Recognition Models

Anna Derington, Hagen Wierstorf, Ali Ozkil, Florian Eyben, Felix Burkhardt, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review

Abstract

Machine learning models for speech emotion recognition (SER) can be trained for different tasks and are usually evaluated on a few available datasets per task. Tasks may include arousal, valence, dominance, emotional categories, or tone of voice. These models are mainly evaluated in terms of correlation or recall, and their predictions always contain some errors. The errors manifest themselves in model behaviour, which can differ considerably along different dimensions even when models achieve the same recall or correlation. This paper introduces a testing framework for investigating the behaviour of speech emotion recognition models by requiring different metrics to reach a certain threshold in order to pass a test. The test metrics can be grouped in terms of correctness, fairness, and robustness. The framework also provides a method for automatically specifying test thresholds for fairness tests, based on the datasets used, and recommendations on how to select the remaining test thresholds. We evaluated an xLSTM-based model and nine transformer-based acoustic foundation models against a convolutional baseline model, testing their performance on arousal, valence, dominance, and emotional category classification. The test results highlight that models with high correlation or recall may rely on shortcuts, such as text sentiment, and may differ in terms of fairness.
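As a rough illustration of the pass/fail idea described in the abstract, the sketch below shows how a metric can be required to reach a threshold for a test to pass. It is not the authors' framework; all names, thresholds, and the single correctness test shown are illustrative assumptions.

```python
# Minimal sketch of a threshold-based test suite for an SER model.
# All names, thresholds, and metrics are illustrative assumptions,
# not the framework described in the paper.
from dataclasses import dataclass
from typing import Callable, Dict, Sequence

import numpy as np


@dataclass
class Test:
    name: str       # e.g. "correctness/ccc_arousal"
    group: str      # "correctness", "fairness", or "robustness"
    metric: Callable[[np.ndarray, np.ndarray], float]
    threshold: float  # metric value required to pass


def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance correlation coefficient, a common SER correctness metric."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)


def run_tests(tests: Sequence[Test],
              y_true: np.ndarray,
              y_pred: np.ndarray) -> Dict[str, bool]:
    """A test passes if its metric reaches the required threshold."""
    return {t.name: t.metric(y_true, y_pred) >= t.threshold for t in tests}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.uniform(0, 1, 500)              # e.g. arousal labels
    y_pred = y_true + rng.normal(0, 0.15, 500)   # a model's noisy predictions
    suite = [Test("correctness/ccc_arousal", "correctness", ccc, 0.5)]
    print(run_tests(suite, y_true, y_pred))      # {'correctness/ccc_arousal': True}
```

Fairness and robustness tests would follow the same pattern, with metrics computed per speaker group or under input perturbations, and thresholds derived automatically from the datasets as the paper proposes.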

Original language: English
Journal: IEEE Transactions on Affective Computing
DOIs
State: Accepted/In press - 2025
