HeadOn: Real-time reenactment of human portrait videos

Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, Matthias Niessner

Research output: Contribution to journal › Article › peer-review

85 Scopus citations

Abstract

We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose a robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos.
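The view- and pose-dependent texturing mentioned in the abstract can be illustrated with a generic blending scheme: texture samples captured from the target video are composited with weights that favor samples whose capture pose is angularly close to the novel pose. The sketch below is not the paper's actual implementation; the function name, the Gaussian falloff, and the `sigma` parameter are illustrative assumptions.

```python
import numpy as np

def blend_textures(textures, sample_dirs, query_dir, sigma=0.3):
    """Blend captured texture samples by angular proximity to the query pose.

    This is a hypothetical sketch of pose-dependent texturing, not the
    method from the paper.

    textures    : (N, H, W, 3) texture samples from the target video
    sample_dirs : (N, 3) unit viewing directions at which samples were captured
    query_dir   : (3,) unit direction of the novel head pose to synthesize
    sigma       : Gaussian falloff (assumed parameter)
    """
    query_dir = query_dir / np.linalg.norm(query_dir)
    # Angular distance between the query pose and each captured sample.
    cosines = np.clip(sample_dirs @ query_dir, -1.0, 1.0)
    angles = np.arccos(cosines)
    # Gaussian weights: samples captured under nearby poses dominate.
    weights = np.exp(-(angles / sigma) ** 2)
    weights /= weights.sum()
    # Per-pixel weighted blend over the sample axis.
    return np.tensordot(weights, textures, axes=1)
```

With a small `sigma`, the blend degenerates to nearest-pose lookup; a larger `sigma` trades sharpness for smoother transitions as the head turns.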

Original language: English
Article number: 164
Journal: ACM Transactions on Graphics
Volume: 37
Issue number: 4
DOIs
State: Published - 2018

Keywords

  • Face tracking
  • Real-time
  • Reenactment
  • Video-based rendering
