Can ChatGPT's Responses Boost Traditional Natural Language Processing?

Mostafa M. Amin, Erik Cambria, Björn W. Schuller

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

The employment of foundation models is steadily expanding, especially since the launch of ChatGPT and the release of other foundation models. These models have shown emergent capabilities for solving problems that they were never explicitly trained to solve. Previous work demonstrated these emergent capabilities on affective computing tasks: the performance was comparable to traditional natural language processing (NLP) techniques, but fell short of specialized trained models, such as a fine-tuned RoBERTa language model. In this work, we extend that study by exploring whether ChatGPT possesses novel knowledge that can enhance existing specialized models when the two are fused. We do so by investigating the utility of verbose responses from ChatGPT for solving a downstream task, and by studying the utility of fusing those responses with existing NLP methods. The study is conducted on three affective computing problems, namely sentiment analysis, suicide tendency detection, and big-five personality assessment. The results indicate that ChatGPT does indeed carry novel knowledge that can improve existing NLP techniques through fusion, whether early or late fusion.
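The abstract contrasts early fusion (combining feature representations before a single classifier) with late fusion (combining the predictions of separately trained models). A minimal sketch of the two schemes, with hypothetical feature vectors and probability outputs standing in for the paper's actual ChatGPT-derived and traditional NLP representations:

```python
# Hedged sketch of early vs. late fusion; the classifiers, features,
# and weights here are illustrative stand-ins, not the paper's models.

def early_fusion(feats_a, feats_b):
    """Early fusion: concatenate the two feature vectors so that a
    single downstream classifier sees one joint representation."""
    return feats_a + feats_b

def late_fusion(probs_a, probs_b, weight=0.5):
    """Late fusion: combine per-model class probabilities after each
    model has made its own prediction (here, a weighted average)."""
    return [weight * pa + (1 - weight) * pb
            for pa, pb in zip(probs_a, probs_b)]

# Toy usage: two models' probability distributions over
# {negative, positive} sentiment classes.
joint = early_fusion([0.1, 0.9, 0.3], [0.7, 0.2])   # 5-dim joint features
fused = late_fusion([0.2, 0.8], [0.6, 0.4])          # averaged probabilities
```

In this framing, early fusion lets one classifier learn interactions between the two feature sources, while late fusion keeps the models independent and only mixes their outputs.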

Original language: English
Pages (from-to): 5-11
Number of pages: 7
Journal: IEEE Intelligent Systems
Volume: 38
Issue number: 5
DOIs
State: Published - 1 Sep 2023
Externally published: Yes
