MEC 2017: Multimodal Emotion Recognition Challenge

Ya Li, Jianhua Tao, Bjorn Schuller, Shiguang Shan, Dongmei Jiang, Jia Jia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

43 Scopus citations

Abstract

This paper introduces the baselines for the Multimodal Emotion Recognition Challenge (MEC) 2017, which is part of the first Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia) 2018. The aim of MEC 2017 is to improve the performance of emotion recognition in real-world conditions. The Chinese Natural Audio-Visual Emotion Database (CHEAVD) 2.0, an extension of the CHEAVD released in MEC 2016, serves as the challenge database. MEC 2017 comprises three sub-challenges, and 31 teams participate in all or part of them: 27 teams in the audio-only, 16 in the video-only and 17 in the multimodal emotion recognition sub-challenge. Baseline scores for the audio-only and video-only sub-challenges are generated with Support Vector Machines (SVMs), with audio features and video features considered separately. In the multimodal sub-challenge, both feature-level fusion and decision-level fusion are utilized. The baselines of the audio-only, video-only and multimodal sub-challenges are 39.2%, 21.7% and 35.7% in macro average precision, respectively.
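To make the two fusion strategies and the evaluation metric concrete, here is a minimal sketch in plain Python. It is illustrative only: the actual baselines train SVMs on CHEAVD 2.0 audio and video features, while this sketch assumes the classifiers already produced per-class scores, and all function names and data are hypothetical.

```python
def macro_average_precision(y_true, y_pred, labels):
    """Macro average precision: the unweighted mean of per-class precision.
    A class with no predicted samples contributes a precision of 0."""
    precisions = []
    for c in labels:
        predicted_c = [i for i, p in enumerate(y_pred) if p == c]
        if not predicted_c:
            precisions.append(0.0)
            continue
        correct = sum(1 for i in predicted_c if y_true[i] == c)
        precisions.append(correct / len(predicted_c))
    return sum(precisions) / len(labels)

def feature_level_fusion(audio_feats, video_feats):
    """Feature-level (early) fusion: concatenate each sample's audio and
    video feature vectors before training a single classifier."""
    return [a + v for a, v in zip(audio_feats, video_feats)]

def decision_level_fusion(audio_scores, video_scores, w=0.5):
    """Decision-level (late) fusion: combine the per-class scores of two
    modality-specific classifiers by a weighted average, then take the
    argmax over classes. Each element of *_scores maps class -> score."""
    fused = []
    for sa, sv in zip(audio_scores, video_scores):
        combined = {c: w * sa[c] + (1 - w) * sv[c] for c in sa}
        fused.append(max(combined, key=combined.get))
    return fused
```

For example, if the audio classifier scores a sample as {happy: 0.6, sad: 0.4} and the video classifier as {happy: 0.2, sad: 0.8}, an equal-weight decision-level fusion yields {happy: 0.4, sad: 0.6} and predicts "sad".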

Original language: English
Title of host publication: 2018 1st Asian Conference on Affective Computing and Intelligent Interaction, ACII Asia 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781538653111
DOIs
State: Published - 21 Sep 2018
Externally published: Yes
Event: 1st Asian Conference on Affective Computing and Intelligent Interaction, ACII Asia 2018 - Beijing, China
Duration: 20 May 2018 - 22 May 2018

Publication series

Name: 2018 1st Asian Conference on Affective Computing and Intelligent Interaction, ACII Asia 2018

Conference

Conference: 1st Asian Conference on Affective Computing and Intelligent Interaction, ACII Asia 2018
Country/Territory: China
City: Beijing
Period: 20/05/18 - 22/05/18

Keywords

  • Audio-visual corpus
  • Emotion recognition challenges
  • Fusion methods
  • Multimodal features
