Neural Capture and Synthesis Article 2018

FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality

Justus Thies
Neural Capture and Synthesis, Perceiving Systems
Max Planck Research Group Leader

We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context focus on animating rigged 3D avatars; while they achieve good tracking performance, the results look cartoonish rather than real. In contrast to these model-based approaches, FaceVR enables VR teleconferencing with an image-based technique that produces nearly photo-realistic output. The key components of FaceVR are a robust algorithm for real-time facial motion capture of an actor wearing a head-mounted display (HMD) and a new data-driven approach for eye tracking from monocular videos. By reenacting a prerecorded stereo video of the person without the HMD, FaceVR performs photo-realistic re-rendering in real time, allowing artificial modification of face and eye appearance. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. We apply these newly introduced algorithmic components in a live teleconferencing setup.
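At its core, facial reenactment of this kind transfers the expression parameters tracked on the source actor onto the target's identity in a parametric face model, after which the deformed geometry is re-rendered into the target video. The sketch below illustrates only that transfer step with a toy linear blendshape model; all names, dimensions, and values are hypothetical and not the authors' implementation:

```python
import numpy as np

# Toy linear blendshape face model: a face is the person's neutral mesh
# plus a weighted sum of expression displacement vectors.
N_VERTS = 5   # vertices in the toy mesh (real models use tens of thousands)
N_EXPR = 3    # number of expression blendshapes (toy value)

rng = np.random.default_rng(0)
neutral = rng.normal(size=(N_VERTS, 3))             # target's neutral geometry
expr_basis = rng.normal(size=(N_EXPR, N_VERTS, 3))  # expression displacement basis

def reenact(target_neutral, basis, source_expr_coeffs):
    """Apply the source actor's tracked expression coefficients to the
    target's identity: geometry = neutral + sum_k w_k * basis_k."""
    offsets = np.tensordot(source_expr_coeffs, basis, axes=1)  # (N_VERTS, 3)
    return target_neutral + offsets

# Expression coefficients tracked live from the source actor (toy values).
source_coeffs = np.array([0.8, 0.1, -0.3])
mesh = reenact(neutral, expr_basis, source_coeffs)
print(mesh.shape)  # (5, 3)
```

With zero coefficients the target's neutral face is reproduced exactly, which is why self-reenactment against a prerecorded video of the same person stays close to photo-realistic: only expression and gaze change, while identity and appearance come from the target footage.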

Author(s): Thies, J. and Zollhöfer, M. and Stamminger, M. and Theobalt, C. and Nießner, M.
Journal: ACM Transactions on Graphics (TOG)
Year: 2018
Bibtex Type: Article (article)
URL: https://justusthies.github.io/posts/facevr/

BibTeX

@article{thies2018facevr,
  title = {FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality},
  journal = {ACM Transactions on Graphics (TOG)},
  abstract = {We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context focus on animating rigged 3D avatars; while they achieve good tracking performance, the results look cartoonish rather than real. In contrast to these model-based approaches, FaceVR enables VR teleconferencing with an image-based technique that produces nearly photo-realistic output. The key components of FaceVR are a robust algorithm for real-time facial motion capture of an actor wearing a head-mounted display (HMD) and a new data-driven approach for eye tracking from monocular videos. By reenacting a prerecorded stereo video of the person without the HMD, FaceVR performs photo-realistic re-rendering in real time, allowing artificial modification of face and eye appearance. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. We apply these newly introduced algorithmic components in a live teleconferencing setup.},
  year = {2018},
  slug = {thies2018facevr},
  author = {Thies, J. and Zollh{\"o}fer, M. and Stamminger, M. and Theobalt, C. and Nie{\ss}ner, M.},
  url = {https://justusthies.github.io/posts/facevr/}
}