Refashioning Emotion Recognition Modeling: The Advent of Generalized Large Models

Zixing Zhang, Liyizhe Peng, Tao Pang, Jing Han, Huan Zhao, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review


Abstract

Since its inception, emotion recognition, or affective computing, has become an increasingly active research topic owing to its broad applications. The corresponding computational models have gradually migrated from shallow statistical models to deep neural networks, which significantly boost recognition performance, consistently achieve the best results on different benchmarks, and have thus been considered the first choice for emotion recognition. However, the debut of large language models (LLMs), such as ChatGPT and GPT-4, has astonished the world with emergent capabilities, including zero-/few-shot learning, in-context learning (ICL), and chain-of-thought reasoning, that were never observed in previous deep models. In this article, we comprehensively investigate how LLMs perform in emotion recognition across diverse aspects, including ICL, few-shot prompting, accuracy, generalization, and explanation. Moreover, we offer insights and pose further challenges, hoping to ignite broader discussion about enhancing emotion recognition in the new era of more advanced and generalized models.
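To make the few-shot ICL setting evaluated in the article concrete, the sketch below shows how an emotion-classification prompt can be assembled from labeled demonstrations and sent to a chat LLM. This is a minimal illustration, not the authors' actual protocol: the label set, the demonstration utterances, and the model name "gpt-4" are placeholders, and the OpenAI Python client is used only as one example of an LLM endpoint.

```python
# Illustrative sketch of few-shot in-context emotion recognition with an LLM.
# The label set, demonstrations, and model name are placeholders, not the
# prompts or configuration used in the article.
from openai import OpenAI

LABELS = ["anger", "happiness", "sadness", "neutral"]

# In-context demonstrations: (utterance, gold label) pairs shown to the model.
FEW_SHOT_EXAMPLES = [
    ("I can't believe you did that again!", "anger"),
    ("We finally got the grant, this is wonderful news.", "happiness"),
    ("I just feel empty since she left.", "sadness"),
]

def build_prompt(utterance: str) -> str:
    """Assemble a few-shot classification prompt from the demonstrations."""
    lines = [f"Classify the emotion of each utterance as one of: {', '.join(LABELS)}.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Utterance: {text}")
        lines.append(f"Emotion: {label}")
        lines.append("")
    # The unlabeled query comes last; the model completes the "Emotion:" slot.
    lines.append(f"Utterance: {utterance}")
    lines.append("Emotion:")
    return "\n".join(lines)

def classify(utterance: str, model: str = "gpt-4") -> str:
    """Send the prompt to a chat-completion endpoint and return the label text."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(utterance)}],
        temperature=0.0,  # deterministic decoding for classification
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(classify("Why does nobody ever listen to me?"))
```

In a zero-shot variant, FEW_SHOT_EXAMPLES would simply be empty, leaving only the instruction and the query utterance; comparing the two settings is one way the generalization of such models can be probed.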

Original language: English
Pages (from-to): 6690-6704
Number of pages: 15
Journal: IEEE Transactions on Computational Social Systems
Volume: 11
Issue number: 5
DOIs
State: Published - 2024
Externally published: Yes

Keywords

  • Emotion recognition
  • few-shot learning
  • in-context learning (ICL)
  • large language model (LLM)
