This Paper Had the Smartest Reviewers - Flattery Detection Utilising an Audio-Textual Transformer-Based Approach

Lukas Christ, Shahin Amiriparian, Friederike Hawighorst, Ann Kathrin Schill, Angelo Boutalikakis, Lorenz Graf-Vlachy, Andreas König, Björn W. Schuller

Research output: Contribution to journal › Conference article › peer-review

Abstract

Flattery is an important aspect of human communication that facilitates social bonding, shapes perceptions, and influences behaviour through strategic compliments and praise, leveraging the power of speech to build rapport effectively. Its automatic detection can thus enhance the naturalness of human-AI interactions. To meet this need, we present a novel audio-textual dataset comprising 20 hours of speech and train machine learning models for automatic flattery detection. In particular, we employ pretrained AST, Wav2Vec2, and Whisper models for the speech modality, and Whisper ASR models combined with a RoBERTa text classifier for the textual modality. Subsequently, we build a multimodal classifier by combining text and audio representations. Evaluation on unseen test data demonstrates promising results, with Unweighted Average Recall scores reaching 82.46% in audio-only experiments, 85.97% in text-only experiments, and 87.16% using a multimodal approach.
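The abstract reports results as Unweighted Average Recall (UAR), i.e. the mean of per-class recalls, which weights both classes equally even when one class (here, flattery) is much rarer. A minimal sketch of the metric, using toy labels that are illustrative only and not taken from the paper's data:

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted Average Recall: mean of per-class recalls."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = [
        np.mean(y_pred[y_true == c] == c)  # recall for class c
        for c in np.unique(y_true)
    ]
    return float(np.mean(recalls))

# Toy binary labels (1 = flattery, 0 = no flattery):
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0]
print(uar(y_true, y_pred))  # per-class recalls 5/6 and 1/2 -> ~0.667
```

This is equivalent to scikit-learn's `balanced_accuracy_score`; unlike plain accuracy, a classifier that always predicts the majority class scores only 0.5 here.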

Original language: English
Pages (from-to): 3530-3534
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sep 2024 - 5 Sep 2024

Keywords

  • Transformers
  • computational paralinguistics
  • flattery
  • human-AI interaction
  • speech classification
