FaceVR: Real-time gaze-aware facial reenactment in virtual reality

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner

Research output: Contribution to journal › Article › peer-review

95 Scopus citations

Abstract

We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context focus on the animation of rigged 3D avatars (Li et al. 2015; Olszewski et al. 2016). Although they achieve good tracking performance, the results look cartoonish rather than realistic. In contrast to these model-based approaches, FaceVR enables VR teleconferencing with an image-based technique that produces nearly photo-realistic output. The key component of FaceVR is a robust algorithm that performs real-time facial motion capture of an actor wearing a head-mounted display (HMD), combined with a new data-driven approach to eye tracking from monocular videos. Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearance. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. In a live setup, we combine these newly introduced algorithmic components to enable gaze-aware self-reenactment.

Original language: English
Article number: 25
Journal: ACM Transactions on Graphics
Volume: 37
Issue number: 2
DOIs
State: Published - Jul 2018

Keywords

  • Eye tracking
  • Face tracking
  • Virtual reality
