Recognition of 3D facial expression dynamics

Georgia Sandbach, Stefanos Zafeiriou, Maja Pantic, Daniel Rueckert

Research output: Contribution to journal › Article › peer-review

97 Scopus citations

Abstract

In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled as containing an onset, followed by an apex and an offset. Feature selection methods are applied in order to extract features for the onset and offset segments of the expression. These features are then used to train GentleBoost classifiers and to build a Hidden Markov Model that captures the full temporal dynamics of the expression. The proposed fully automatic system was evaluated on the BU-4DFE database for distinguishing between the six universal expressions: Happy, Sad, Angry, Disgust, Surprise and Fear. A comparison with a similar 2D system based on motion extracted from facial intensity images was also performed. The results suggest that using the 3D information does indeed improve recognition accuracy over the 2D data in a fully automatic setting.
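The abstract outlines a per-expression pipeline: per-frame motion features, boosted classifiers for the onset and offset segments, and an HMM over the classifier outputs to model the neutral-onset-apex-offset dynamics. The following is a minimal sketch of that structure, not the paper's implementation: the 3D motion-feature extraction and quad-tree decomposition are not shown (the input arrays are assumed to be already-extracted features), GentleBoost is approximated with scikit-learn's AdaBoostClassifier since scikit-learn provides no GentleBoost, and hmmlearn's GaussianHMM stands in for the temporal model. All function names are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from hmmlearn.hmm import GaussianHMM


def train_expression_model(sequences, segment_labels, n_states=4):
    """Train one expression model.

    sequences: list of (n_frames, n_features) motion-feature arrays for
        sequences of a single expression class (feature extraction not shown).
    segment_labels: per-frame labels, 0=neutral, 1=onset, 2=apex, 3=offset.
    """
    X = np.vstack(sequences)
    y = np.concatenate(segment_labels)

    # Boosted binary classifiers for the onset and offset segments
    # (AdaBoost used here as a stand-in for GentleBoost).
    onset_clf = AdaBoostClassifier(n_estimators=100).fit(X, (y == 1).astype(int))
    offset_clf = AdaBoostClassifier(n_estimators=100).fit(X, (y == 3).astype(int))

    # HMM over the per-frame classifier scores models the temporal
    # progression neutral -> onset -> apex -> offset.
    obs = [np.column_stack([onset_clf.decision_function(seq),
                            offset_clf.decision_function(seq)])
           for seq in sequences]
    lengths = [len(o) for o in obs]
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    hmm.fit(np.vstack(obs), lengths)
    return onset_clf, offset_clf, hmm


def score_sequence(model, seq):
    """Log-likelihood of a test sequence under one expression model.
    Classification picks the argmax over the six expression models."""
    onset_clf, offset_clf, hmm = model
    obs = np.column_stack([onset_clf.decision_function(seq),
                           offset_clf.decision_function(seq)])
    return hmm.score(obs)
```

At test time one such model per expression is scored against the sequence and the highest-likelihood class is reported, mirroring the six-class recognition described in the abstract.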

Original language: English
Pages (from-to): 762-773
Number of pages: 12
Journal: Image and Vision Computing
Volume: 30
Issue number: 10
DOIs
State: Published - Oct 2012
Externally published: Yes

Keywords

  • 2D/3D comparison
  • 3D facial geometries
  • Facial expression recognition
  • Motion-based features
  • Quad-tree decomposition
