AVEC 2011 - The first international audio/visual emotion challenge

Björn Schuller, Michel Valstar, Florian Eyben, Gary McKeown, Roddy Cowie, Maja Pantic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

223 Scopus citations


The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) is the first competition event aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. This paper first describes the challenge participation conditions. Next follows the data used - the SEMAINE corpus - and its partitioning into train, development, and test partitions for the challenge with labelling in four dimensions, namely activity, expectation, power, and valence. Further, audio and video baseline features are introduced as well as baseline results that use these features for the three sub-challenges of audio, video, and audiovisual emotion recognition.

Original language: English
Title of host publication: Affective Computing and Intelligent Interaction - 4th International Conference, ACII 2011, Proceedings
Number of pages: 10
Edition: PART 2
State: Published - 2011
Event: 4th International Conference on Affective Computing and Intelligent Interaction, ACII 2011 - Memphis, TN, United States
Duration: 9 Oct 2011 - 12 Oct 2011

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 6975 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 4th International Conference on Affective Computing and Intelligent Interaction, ACII 2011
Country/Territory: United States
City: Memphis, TN


Keywords:

  • Audiovisual Emotion Recognition
  • Challenge
  • Facial Expression Analysis
  • Speech Emotion Recognition


