Perceiving Systems Conference Paper 2026

NeuralFur: Animal Fur Reconstruction from Multi-view Images


Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are no datasets that could be leveraged to learn a fur prior for different animals. In this work, we present the first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision language model. Given calibrated multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a visual question answering (VQA) system to retrieve information about the realistic length and structure of the fur for each part of the body. We use this knowledge to construct the animal’s furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from the multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters applied to the input images, we additionally use the VQA system to guide the strands’ growth direction and their relation to the gravity vector, which we incorporate as a loss. With this new scheme of using a VQA model to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types.
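The abstract mentions a loss relating strand growth direction to the gravity vector. The paper does not specify its form; the following is a minimal hypothetical sketch (all names and the target-alignment formulation are assumptions, not the authors' implementation) that penalizes deviation of the mean cosine between strand segments and gravity from a target value, e.g. one suggested by a VQA answer about how much the fur droops.

```python
import numpy as np

def gravity_alignment_loss(strand_points,
                           gravity=np.array([0.0, -1.0, 0.0]),
                           target_alignment=0.5):
    """Hypothetical gravity-direction loss on fur strands.

    strand_points: (num_strands, points_per_strand, 3) polyline vertices,
        ordered root to tip.
    target_alignment: desired mean cosine between segment directions and
        the gravity direction.
    """
    # Unit direction vectors of each polyline segment along every strand.
    seg = strand_points[:, 1:, :] - strand_points[:, :-1, :]
    seg = seg / (np.linalg.norm(seg, axis=-1, keepdims=True) + 1e-8)
    # Cosine between each segment and the (normalized) gravity vector.
    cos = seg @ (gravity / np.linalg.norm(gravity))
    # Penalize deviation of the mean alignment from the target.
    return float((cos.mean() - target_alignment) ** 2)
```

For strands hanging straight down, the mean cosine is 1, so the loss vanishes only when `target_alignment` is 1; stiffer, upright fur would call for a lower target.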

Award: Best Paper Nominee
Author(s): Vanessa Skliarova and Berna Kabadayi and Anastasios Yiannakidis and Giorgio Becherini and Michael J. Black and Justus Thies
Book Title: Int. Conf. on 3D Vision (3DV)
Year: 2026
Month: March
Day: 20
BibTeX Type: Conference Paper (inproceedings)
State: Accepted
Award Paper: Best Paper Nominee

BibTeX

@inproceedings{NeuralFur26,
  title = {{NeuralFur}: Animal Fur Reconstruction from Multi-view Images},
  award_paper = {Best Paper Nominee},
  booktitle = {Int.~Conf.~on 3D Vision (3DV)},
  abstract = {Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are no datasets that could be leveraged to learn a fur prior for different animals. In this work, we present the first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision language model. Given calibrated multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a visual question answering (VQA) system to retrieve information about the realistic length and structure of the fur for each part of the body. We use this knowledge to construct the animal’s furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from the multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters applied to the input images, we additionally use the VQA system to guide the strands’ growth direction and their relation to the gravity vector, which we incorporate as a loss. With this new scheme of using a VQA model to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types.},
  month = mar,
  year = {2026},
  author = {Skliarova, Vanessa and Kabadayi, Berna and Yiannakidis, Anastasios and Becherini, Giorgio and Black, Michael J. and Thies, Justus},
  month_numeric = {3}
}