MEC 2016: The multimodal emotion recognition challenge of CCPR 2016

Ya Li, Jianhua Tao, Björn Schuller, Shiguang Shan, Dongmei Jiang, Jia Jia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

36 Scopus citations

Abstract

Emotion recognition is a significant research field in pattern recognition and artificial intelligence. The Multimodal Emotion Recognition Challenge (MEC) is a part of the 2016 Chinese Conference on Pattern Recognition (CCPR). The goal of this competition is to compare multimedia processing and machine learning methods for multimodal emotion recognition. The challenge also aims to provide a common benchmark dataset, to bring together the audio and video emotion recognition communities, and to promote research in multimodal emotion recognition. The data used in this challenge come from the Chinese Natural Audio-Visual Emotion Database (CHEAVD), which is selected from Chinese movies and TV programs. The discrete emotion labels were annotated by four experienced assistants. Three sub-challenges are defined: audio, video, and multimodal emotion recognition. This paper introduces the baseline audio and visual features, together with the baseline recognition results obtained with Random Forests.
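The full feature sets and evaluation protocol are described in the paper itself; the sketch below only illustrates the kind of Random Forest baseline the abstract mentions, assuming audio or visual features have already been extracted per clip. All array shapes, the synthetic data, and the number of emotion classes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a Random Forest baseline for discrete emotion
# recognition on pre-extracted per-clip features (e.g. acoustic or
# visual descriptors). Synthetic data stands in for real features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Placeholder feature matrices: one fixed-length vector per clip.
# n_features is arbitrary here; real dimensionality depends on the
# feature extractor used.
n_train, n_test, n_features = 200, 50, 100
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))

# Illustrative discrete emotion labels (class count is an assumption).
n_classes = 8
y_train = rng.integers(0, n_classes, size=n_train)
y_test = rng.integers(0, n_classes, size=n_test)

# Train the Random Forest and score held-out clips.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("macro F1:", f1_score(y_test, y_pred, average="macro"))
```

With real challenge data, the synthetic arrays would be replaced by feature matrices extracted from the CHEAVD audio and video, and the same classifier could be applied per modality or to concatenated features for a simple multimodal fusion.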

Original language: English
Title of host publication: Pattern Recognition - 7th Chinese Conference, CCPR 2016, Proceedings
Editors: Tieniu Tan, Xilin Chen, Xuelong Li, Jian Yang, Hong Cheng, Jie Zhou
Publisher: Springer Verlag
Pages: 667-678
Number of pages: 12
ISBN (Print): 9789811030048
DOIs
State: Published - 2016
Externally published: Yes

Publication series

Name: Communications in Computer and Information Science
Volume: 663
ISSN (Print): 1865-0929

Keywords

  • Affective computing
  • Audio-visual corpus
  • Challenge
  • Emotion
  • Features
  • Multimodal fusion
