Synthesising 3D Facial Motion from 'In-the-Wild' Speech

Panagiotis Tzirakis, Athanasios Papaioannou, Alexandros Lattas, Michail Tarasiou, Bjorn Schuller, Stefanos Zafeiriou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Scopus citations

Abstract

Synthesising 3D facial motion from speech is a crucial problem manifesting in a multitude of applications such as computer games and movies. Recently proposed methods tackle this problem in controlled conditions of speech. In this paper, we introduce the first methodology for 3D facial motion synthesis from speech captured in arbitrary recording conditions ('in-the-wild') and independent of the speaker. For our purposes, we captured 4D sequences of people uttering the 500 words of the Lip Reading in the Wild (LRW) corpus, a publicly available large-scale in-the-wild dataset, and built a set of 3D blendshapes appropriate for speech. We correlate the 3D shape parameters of the speech blendshapes to the LRW audio samples by means of a novel time-warping technique, named Deep Canonical Attentional Warping (DCAW), that can simultaneously learn hierarchical non-linear representations and a warping path in an end-to-end manner. We thoroughly evaluate the proposed methods and show that a deep learning model can synthesise 3D facial motion for different speakers and continuous speech signals in uncontrolled conditions.
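The page does not include the authors' code. The snippet below is a minimal, hypothetical PyTorch sketch of the kind of pipeline the abstract describes: two encoders learning non-linear representations of the audio and 3D blendshape-parameter streams, an attention-based soft warping that aligns the two streams, and a correlation-style objective trained end-to-end. All module names, feature dimensions, and the simplified correlation loss are assumptions for illustration, not the authors' DCAW implementation.

```python
# Hypothetical sketch of a DCAW-style alignment module (names, dimensions and the
# simplified loss are assumptions, not the authors' released code). Two encoders
# learn non-linear representations of the audio and 3D-shape streams; an attention
# map provides a soft warping of the audio stream onto the shape time steps; a
# correlation-style loss couples the two views so everything trains end-to-end.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StreamEncoder(nn.Module):
    """1D-convolutional encoder for a temporal stream (audio or shape parameters)."""

    def __init__(self, in_dim: int, hidden: int = 128, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, out_dim, kernel_size=5, padding=2),
        )

    def forward(self, x):                      # x: (batch, time, in_dim)
        h = self.net(x.transpose(1, 2))        # (batch, out_dim, time)
        return h.transpose(1, 2)               # (batch, time, out_dim)


class AttentionalWarper(nn.Module):
    """Soft, differentiable warping of the audio stream onto shape time steps."""

    def forward(self, shape_feat, audio_feat):
        # Similarity between every shape frame and every audio frame.
        scores = torch.bmm(shape_feat, audio_feat.transpose(1, 2))   # (B, Ts, Ta)
        attn = F.softmax(scores / shape_feat.size(-1) ** 0.5, dim=-1)
        # Each shape frame receives a convex combination of audio frames,
        # i.e. a soft alignment path instead of a hard DTW path.
        warped_audio = torch.bmm(attn, audio_feat)                   # (B, Ts, D)
        return warped_audio, attn


def correlation_loss(a, b, eps: float = 1e-6):
    """Negative mean per-dimension correlation between the two aligned views
    (a lightweight stand-in for a full canonical-correlation objective)."""
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    num = (a * b).sum(dim=1)
    den = a.norm(dim=1) * b.norm(dim=1) + eps
    return -(num / den).mean()


if __name__ == "__main__":
    audio_enc = StreamEncoder(in_dim=40)       # e.g. 40 mel-filterbank features
    shape_enc = StreamEncoder(in_dim=30)       # e.g. 30 blendshape parameters
    warper = AttentionalWarper()

    audio = torch.randn(2, 120, 40)            # 120 audio frames
    shapes = torch.randn(2, 75, 30)            # 75 mesh frames

    a_feat, s_feat = audio_enc(audio), shape_enc(shapes)
    warped, path = warper(s_feat, a_feat)
    loss = correlation_loss(warped, s_feat)
    loss.backward()                            # encoders and warping train jointly
    print(loss.item(), path.shape)
```

In this sketch the attention map plays the role of the warping path: because it is produced by a softmax rather than a discrete alignment, gradients flow through it, which is what allows the representations and the alignment to be learned jointly.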

Original language: English
Title of host publication: Proceedings - 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2020
Editors: Vitomir Struc, Francisco Gomez-Fernandez
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 265-272
Number of pages: 8
ISBN (Electronic): 9781728130798
DOIs
State: Published - Nov 2020
Externally published: Yes
Event: 15th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2020 - Buenos Aires, Argentina
Duration: 16 Nov 2020 - 20 Nov 2020

Publication series

Name: Proceedings - 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2020

Conference

Conference: 15th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2020
Country/Territory: Argentina
City: Buenos Aires
Period: 16/11/20 - 20/11/20
