Neural Voice Puppetry: Audio-driven Facial Reenactment
Justus Thies, Max Planck Research Group Leader, Neural Capture and Synthesis, Perceiving Systems

We present Neural Voice Puppetry, a novel approach for audio-driven facial video synthesis. Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video of a target person that is in sync with the audio of the source input. This audio-driven facial reenactment is driven by a deep neural network that employs a latent 3D face model space. Through the underlying 3D representation, the model inherently learns temporal stability while we leverage neural rendering to generate photo-realistic output frames. Our approach generalizes across different people, allowing us to synthesize videos of a target actor with the voice of any unknown source actor or even synthetic voices that can be generated utilizing standard text-to-speech approaches. Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head. We demonstrate the capabilities of our method in a series of audio- and text-based puppetry examples. Our method is not only more general than existing works since we are generic to the input person, but we also show superior visual and lip sync quality compared to photo-realistic audio- and video-driven reenactment techniques.
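The abstract describes a two-stage pipeline: per-frame audio features are mapped into a latent 3D expression space, temporally stabilized, and then passed to a neural renderer. The following is a minimal, illustrative sketch of the first stage only, assuming DeepSpeech-style audio feature windows (here 16 time steps of 29-dimensional logits) and a blendshape-like expression space of assumed size 76; the layer sizes, module names, and the simple moving-average smoother are placeholders and not the paper's exact architecture, and the neural rendering stage is omitted entirely.

# audio_to_expression_sketch.py -- illustrative only, not the authors' implementation
import torch
import torch.nn as nn
import torch.nn.functional as F

class Audio2Expression(nn.Module):
    """Maps a short window of audio features to latent 3D expression coefficients."""
    def __init__(self, feat_dim=29, win_len=16, expr_dim=76):
        super().__init__()
        # 1D convolutions over the temporal axis of the audio feature window
        self.encoder = nn.Sequential(
            nn.Conv1d(feat_dim, 32, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.02),
            nn.Conv1d(32, 64, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.02),
        )
        self.head = nn.Linear(64 * (win_len // 4), expr_dim)

    def forward(self, audio_feats):          # (B, win_len, feat_dim)
        x = audio_feats.transpose(1, 2)      # (B, feat_dim, win_len)
        x = self.encoder(x).flatten(1)
        return self.head(x)                  # (B, expr_dim) expression coefficients

def temporal_smooth(expr_seq, kernel=5):
    """Moving-average filter over per-frame expressions, a stand-in for temporal stabilization."""
    pad = kernel // 2
    x = expr_seq.transpose(0, 1).unsqueeze(0)                  # (1, expr_dim, T)
    w = torch.ones(x.shape[1], 1, kernel) / kernel
    y = F.conv1d(F.pad(x, (pad, pad), mode="replicate"), w, groups=x.shape[1])
    return y.squeeze(0).transpose(0, 1)                        # (T, expr_dim)

if __name__ == "__main__":
    net = Audio2Expression()
    expr = net(torch.randn(8, 16, 29))                         # a batch of feature windows
    smoothed = temporal_smooth(net(torch.randn(100, 16, 29)))  # a 100-frame sequence
    print(expr.shape, smoothed.shape)                          # (8, 76) and (100, 76)

In the full method, the smoothed expression coefficients would drive a 3D face model whose rasterized output conditions a neural renderer that produces the final photo-realistic frames.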

Author(s): Thies, Justus and Elgharib, Mohamed and Tewari, Ayush and Theobalt, Christian and Nießner, Matthias
Book Title: Computer Vision – ECCV 2020
Year: 2020
Month: August
Publisher: Springer International Publishing
Bibtex Type: Conference Paper (inproceedings)
Address: Cham
Event Place: Glasgow, UK
URL: https://justusthies.github.io/posts/neural-voice-puppetry/
Electronic Archiving: grant_archive

BibTeX

@inproceedings{thies2020nvp,
  title = {Neural Voice Puppetry: Audio-driven Facial Reenactment},
  booktitle = {Computer Vision -- ECCV 2020},
  abstract = {We present Neural Voice Puppetry, a novel approach for audio-driven facial video synthesis. Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video of a target person that is in sync with the audio of the source input. This audio-driven facial reenactment is driven by a deep neural network that employs a latent 3D face model space. Through the underlying 3D representation, the model inherently learns temporal stability while we leverage neural rendering to generate photo-realistic output frames. Our approach generalizes across different people, allowing us to synthesize videos of a target actor with the voice of any unknown source actor or even synthetic voices that can be generated utilizing standard text-to-speech approaches. Neural Voice Puppetry has a variety of use-cases, including audio-driven video avatars, video dubbing, and text-driven video synthesis of a talking head. We demonstrate the capabilities of our method in a series of audio- and text-based puppetry examples. Our method is not only more general than existing works since we are generic to the input person, but we also show superior visual and lip sync quality compared to photo-realistic audio- and video-driven reenactment techniques.},
  publisher = {Springer International Publishing},
  address = {Cham},
  month = aug,
  year = {2020},
  slug = {thies2020nvp},
  author = {Thies, Justus and Elgharib, Mohamed and Tewari, Ayush and Theobalt, Christian and Nie{\ss}ner, Matthias},
  url = {https://justusthies.github.io/posts/neural-voice-puppetry/},
  month_numeric = {8}
}