
Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles

Given a text description, our method generates realistic strand-based 3D human hairstyles. To do so, we train a diffusion model in a latent hairstyle space defined on a shared UV space. By leveraging 2D visual question-answering (VQA) systems, we automatically annotate synthetic hair models generated from a small set of artist-created hairstyles. We condition the diffusion model on these hairstyle descriptions via cross-attention. Because the output uses a 3D strand-based geometry representation, generated hairstyles can be directly incorporated into existing computer graphics pipelines for simulation and rendering.
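The text conditioning described above can be illustrated with a minimal cross-attention step, in which the hairstyle latent tokens (queries) attend to text-embedding tokens (keys and values). This is a hedged NumPy sketch with illustrative shapes and randomly initialized projection matrices, not the paper's actual architecture or trained weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent_tokens, text_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: latent (UV-space) tokens query
    text tokens. Shapes and projections are illustrative only."""
    Q = latent_tokens @ Wq                      # (N, d): queries from latents
    K = text_tokens @ Wk                        # (T, d): keys from text
    V = text_tokens @ Wv                        # (T, d): values from text
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # scaled dot-product scores
    attn = softmax(scores, axis=-1)             # (N, T): attention weights
    return attn @ V                             # (N, d): text-conditioned update

# Illustrative dimensions (hypothetical, not from the paper):
# N latent tokens, T text tokens, feature dim d.
rng = np.random.default_rng(0)
N, T, d = 16, 8, 32
latents = rng.standard_normal((N, d))
text_emb = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = cross_attention(latents, text_emb, Wq, Wk, Wv)
print(out.shape)
```

In a diffusion model, such a layer would typically be interleaved with self-attention and denoising blocks, so that each denoising step is steered by the text embedding.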

Members

  • V. Sklyarova, Perceiving Systems, Human-centric Vision & Learning: Doctoral Researcher
  • M. J. Black, Perceiving Systems: Director
  • J. Thies, Neural Capture and Synthesis, Perceiving Systems: Max Planck Research Group Leader

Publications

Sklyarova, V., Zakharov, E., Hilliges, O., Black, M. J., Thies, J. Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles. In IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), June 2024 (Published). ArXiv | Code | URL | BibTeX