
AWOL: Analysis WithOut synthesis using Language

We leverage language to control existing 3D parametric shape models by learning a mapping between the latent space of a vision-language model and the parameter space of the 3D model, using only a small set of shape and text pairs. This enables the use of language to generate parameters for objects not seen during training.
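The core mechanism can be illustrated with a minimal sketch. Here a simple ridge-regularized linear map stands in for the learned network in the paper, and random vectors stand in for precomputed CLIP text embeddings and 3D-model parameters; all names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a small set of training pairs: CLIP-style text
# embeddings (512-d) and the matching 3D-model parameter vectors
# (e.g. 40 shape coefficients). Real use would embed text with CLIP
# and take parameters from the fitted 3D model.
n_pairs, emb_dim, param_dim = 30, 512, 40
Z = rng.normal(size=(n_pairs, emb_dim))                        # latent embeddings
true_W = rng.normal(size=(emb_dim, param_dim)) / np.sqrt(emb_dim)
P = Z @ true_W + 0.01 * rng.normal(size=(n_pairs, param_dim))  # shape parameters

# Fit a ridge-regularized linear map from embedding space
# to parameter space using only the small paired set.
lam = 1e-2
W = np.linalg.solve(Z.T @ Z + lam * np.eye(emb_dim), Z.T @ P)

def text_to_params(embedding: np.ndarray) -> np.ndarray:
    """Map a (possibly unseen) text embedding to 3D model parameters."""
    return embedding @ W

# An embedding for a description absent from training still yields a
# parameter vector, because the mapping is defined over the whole
# latent space; if the map is smooth, nearby texts give nearby shapes.
novel_embedding = rng.normal(size=emb_dim)
params = text_to_params(novel_embedding)
print(params.shape)  # (40,)
```

The same interface explains the zero-shot claim: generalization happens in the vision-language latent space, and the learned map only has to transport it into the 3D model's parameter space.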

Members

  • Guest Scientist, Perceiving Systems
  • Guest Scientist, Perceiving Systems
  • Emeritus / Acting Director, Perceiving Systems

Publications

Conference Paper: Zuffi, S., Black, M. J., "AWOL: Analysis WithOut synthesis using Language." In European Conference on Computer Vision (ECCV 2024), LNCS, Springer Cham, September 2024 (Published).
Many classical parametric 3D shape models exist, but creating novel shapes with such models requires expert knowledge of their parameters. For example, imagine creating a specific type of tree using procedural graphics or a new kind of animal from a statistical shape model. Our key idea is to leverage language to control such existing models to produce novel shapes. This involves learning a mapping between the latent space of a vision-language model and the parameter space of the 3D model, which we do using a small set of shape and text pairs. Our hypothesis is that mapping from language to parameters allows us to generate parameters for objects that were never seen during training. If the mapping between language and parameters is sufficiently smooth, then interpolation or generalization in language should translate appropriately into novel 3D shapes. We test our approach with two very different types of parametric shape models (quadrupeds and arboreal trees). We use a learned statistical shape model of quadrupeds and show that we can use text to generate new animals not present during training. In particular, we demonstrate state-of-the-art shape estimation of 3D dogs. This work also constitutes the first language-driven method for generating 3D trees. Finally, embedding images in the CLIP latent space enables us to generate animals and trees directly from images.