FaceTalk: Audio-driven Motion Diffusion for Neural Parametric Head Models
Detailed facial motion generation from audio using neural parametric head models.
FaceTalk is a generative approach for synthesizing high-fidelity 3D motion sequences of talking human heads from input audio. To capture the expressive, detailed nature of human heads, including hair, ears, and finer-scale eye movements, we propose to couple the speech signal with the latent space of neural parametric head models.
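To make this coupling concrete, below is a minimal sketch of what an audio-conditioned denoiser over sequences of head-model expression latents could look like. Everything here is an illustrative assumption rather than the paper's exact architecture: the 200-D expression codes, the Wav2Vec2-style 768-D per-frame audio features, and the transformer backbone are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class AudioConditionedDenoiser(nn.Module):
    """Hypothetical diffusion denoiser: predicts the clean sequence of
    head-model expression latents from a noised one, conditioned on
    per-frame audio features and the diffusion timestep."""

    def __init__(self, latent_dim=200, audio_dim=768, d_model=512, n_layers=4):
        super().__init__()
        self.latent_proj = nn.Linear(latent_dim, d_model)  # noised latents -> model width
        self.audio_proj = nn.Linear(audio_dim, d_model)    # audio features -> model width
        self.time_embed = nn.Sequential(                   # scalar timestep -> embedding
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, latent_dim)          # back to latent space

    def forward(self, noisy_latents, audio_feats, t):
        # noisy_latents: (B, T, latent_dim); audio_feats: (B, T, audio_dim); t: (B,)
        h = self.latent_proj(noisy_latents) + self.audio_proj(audio_feats)
        # Broadcast the timestep embedding over all T frames of the sequence.
        h = h + self.time_embed(t.float().view(-1, 1, 1) / 1000.0)
        return self.out(self.backbone(h))

model = AudioConditionedDenoiser()
x_t = torch.randn(2, 30, 200)     # noised expression-latent sequence (30 frames)
audio = torch.randn(2, 30, 768)   # per-frame audio features, e.g. Wav2Vec2-style
t = torch.randint(0, 1000, (2,))  # diffusion timesteps
x0_pred = model(x_t, audio, t)    # predicted clean latents, shape (2, 30, 200)
```

The key design point this sketch illustrates is that denoising happens in the head model's latent expression space rather than directly on mesh vertices, so the detail captured by the neural parametric head model (hair, ears, eye motion) is preserved while the diffusion process only has to model a compact temporal signal.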