Perceiving Systems Award
21 March 2026

Best Paper Runner Up

Perceiving Systems, Human-centric Vision & Learning
  • Doctoral Researcher
Perceiving Systems
  • Guest Scientist
Perceiving Systems
  • Research Engineer
Perceiving Systems
  • Research Engineer
Perceiving Systems
  • Emeritus / Acting Director
Neural Capture and Synthesis, Perceiving Systems
  • Max Planck Research Group Leader

Publications

Perceiving Systems Conference Paper NeuralFur: Animal Fur Reconstruction from Multi-view Images Sklyarova, V., Kabadayi, B., Yiannakidis, A., Becherini, G., Black, M. J., Thies, J. In Int. Conf. on 3D Vision (3DV), March 2026 (Accepted)
Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are also no datasets that could be leveraged to learn a fur prior for different animals. In this work, we present the first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision language model. Given calibrated multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a visual question answering (VQA) system to retrieve information about the realistic length structure of the fur for each part of the body. We use this knowledge to construct the animal’s furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters that are applied to the input images, we additionally utilize the VQA to guide the strands' growth direction and their relation to the gravity vector, which we incorporate as a loss. With this new scheme of using a VQA model to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types.
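The abstract mentions orientation ambiguities from Gabor filters applied to the input images. Extracting a per-pixel dominant-orientation map with an oriented Gabor filter bank is a standard step in strand-based hair and fur capture; because the filters only distinguish orientations in [0, π), each estimate carries a 180° ambiguity, which is what the VQA-guided growth direction is said to resolve. The following is a minimal NumPy sketch of that filtering step, not the paper's implementation; the kernel size, wavelength, and orientation count are illustrative choices.

```python
import numpy as np

def gabor_kernel(theta, ksize=15, sigma=3.0, lambd=6.0):
    """Complex (quadrature-pair) Gabor kernel oriented at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * xr / lambd)  # quadrature phase: energy is shift-invariant
    return envelope * carrier

def orientation_map(img, n_orients=8, ksize=15):
    """Per-pixel dominant orientation in [0, pi) -- note the 180-degree ambiguity."""
    half = ksize // 2
    pad = np.pad(img, half, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(pad, (ksize, ksize))
    energies = []
    for i in range(n_orients):
        theta = np.pi * i / n_orients  # orientations only span [0, pi)
        k = gabor_kernel(theta, ksize=ksize)
        # correlate every window with the kernel; the magnitude is a phase-invariant energy
        energies.append(np.abs(np.tensordot(win, k, axes=([2, 3], [0, 1]))))
    best = np.stack(energies).argmax(axis=0)
    return np.pi * best / n_orients

# Synthetic stripe pattern whose intensity varies along x (period 6 px):
stripes = np.tile(np.cos(2.0 * np.pi * np.arange(32) / 6.0), (32, 1))
print(orientation_map(stripes)[16, 16])    # variation along x -> theta = 0
print(orientation_map(stripes.T)[16, 16])  # transposed stripes -> theta = pi/2
```

Using the complex (quadrature) kernel rather than a purely real cosine filter makes the response energy independent of where a pixel sits within a stripe, so the argmax over orientations is stable across the image.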
project arXiv code BibTeX