TY - JOUR
T1 - Refashioning Emotion Recognition Modeling
T2 - The Advent of Generalized Large Models
AU - Zhang, Zixing
AU - Peng, Liyizhe
AU - Pang, Tao
AU - Han, Jing
AU - Zhao, Huan
AU - Schuller, Björn W.
N1 - Publisher Copyright:
© 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
PY - 2024
Y1 - 2024
N2 - Since its inception, emotion recognition, or affective computing, has increasingly become an active research topic owing to its broad applications. The corresponding computational models have gradually migrated from statistically shallow models to neural-network-based deep models, which significantly boost the performance of emotion recognition, consistently achieve the best results on different benchmarks, and have thus been considered the first choice for emotion recognition. However, the debut of large language models (LLMs), such as ChatGPT and GPT-4, has astonished the world with their emergent capabilities of zero-/few-shot learning, in-context learning (ICL), and chain-of-thought reasoning, among others, which were never exhibited by previous deep models. In the present article, we comprehensively investigate how LLMs perform in emotion recognition across diverse aspects, including ICL, few-shot prompting, accuracy, generalization, and explanation. Moreover, we offer some insights and pose potential challenges, hoping to ignite broader discussions about enhancing emotion recognition in the new era of advanced and more generalized models.
AB - Since its inception, emotion recognition, or affective computing, has increasingly become an active research topic owing to its broad applications. The corresponding computational models have gradually migrated from statistically shallow models to neural-network-based deep models, which significantly boost the performance of emotion recognition, consistently achieve the best results on different benchmarks, and have thus been considered the first choice for emotion recognition. However, the debut of large language models (LLMs), such as ChatGPT and GPT-4, has astonished the world with their emergent capabilities of zero-/few-shot learning, in-context learning (ICL), and chain-of-thought reasoning, among others, which were never exhibited by previous deep models. In the present article, we comprehensively investigate how LLMs perform in emotion recognition across diverse aspects, including ICL, few-shot prompting, accuracy, generalization, and explanation. Moreover, we offer some insights and pose potential challenges, hoping to ignite broader discussions about enhancing emotion recognition in the new era of advanced and more generalized models.
KW - Emotion recognition
KW - few-shot learning
KW - in-context learning (ICL)
KW - large language model (LLM)
UR - http://www.scopus.com/inward/record.url?scp=85194839568&partnerID=8YFLogxK
U2 - 10.1109/TCSS.2024.3396345
DO - 10.1109/TCSS.2024.3396345
M3 - Article
AN - SCOPUS:85194839568
SN - 2329-924X
VL - 11
SP - 6690
EP - 6704
JO - IEEE Transactions on Computational Social Systems
JF - IEEE Transactions on Computational Social Systems
IS - 5
ER -