Generating Human Interaction Motions in Scenes with Text Control

TeSMo performs text-controlled, scene-aware motion generation based on denoising diffusion models, enabling avatars to navigate a 3D scene and interact with objects.


Publications

Conference Paper: Yi, H., Thies, J., Black, M. J., Peng, X. B., Rempe, D.: Generating Human Interaction Motions in Scenes with Text Control. In: European Conference on Computer Vision (ECCV 2024), pp. 246-263, LNCS, Springer Cham, September 2024 (Published)
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models. Previous text-to-motion methods focus on characters in isolation without considering scenes due to the limited availability of datasets that include motion, text descriptions, and interactive scenes. Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model, emphasizing goal-reaching constraints on large-scale motion-capture datasets. We then enhance this model with a scene-aware component, fine-tuned using data augmented with detailed scene information, including ground plane and object shapes. To facilitate training, we embed annotated navigation and interaction motions within scenes. The proposed method produces realistic and diverse human-object interactions, such as navigation and sitting, in different scenes with various object shapes, orientations, initial body positions, and poses. Extensive experiments demonstrate that our approach surpasses prior techniques in terms of the plausibility of human-scene interactions, as well as the realism and variety of the generated motions.
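For orientation, below is a minimal, hypothetical PyTorch sketch of the two-stage recipe the abstract outlines: pre-train a scene-agnostic, text-conditioned motion diffusion model, then fine-tune it with additional scene conditioning. All module names, feature dimensions, the fusion scheme, and the toy data are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code) of the two-stage training described above:
# (1) scene-agnostic text-to-motion diffusion pre-training, (2) scene-aware fine-tuning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToMotionDenoiser(nn.Module):
    """Predicts the clean motion x0 from a noisy motion, a timestep, and conditioning."""
    def __init__(self, motion_dim=66, cond_dim=512, hidden=512):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.backbone = nn.GRU(motion_dim + cond_dim + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, motion_dim)

    def forward(self, x_t, t, cond):
        # x_t: (B, T, motion_dim), t: (B,), cond: (B, cond_dim)
        B, T, _ = x_t.shape
        t_emb = self.time_embed(t.float().view(B, 1, 1) / 1000.0).expand(B, T, -1)
        c = cond.unsqueeze(1).expand(B, T, -1)
        h, _ = self.backbone(torch.cat([x_t, c, t_emb], dim=-1))
        return self.out(h)  # predicted x0

def diffusion_loss(model, x0, cond, num_steps=1000):
    """Standard DDPM-style x0-prediction loss with a linear beta schedule."""
    B = x0.shape[0]
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, num_steps, (B,))
    a = alphas_bar[t].view(B, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # forward (noising) process
    return F.mse_loss(model(x_t, t, cond), x0)

# Stage 1: scene-agnostic pre-training on text-conditioned motion capture
# (goal-reaching information folded into the conditioning vector).
model = TextToMotionDenoiser()
motion = torch.randn(8, 60, 66)       # toy batch: 8 clips, 60 frames, 66-D pose features
text_goal_cond = torch.randn(8, 512)  # toy text + goal embedding
diffusion_loss(model, motion, text_goal_cond).backward()

# Stage 2: fine-tune with an extra scene embedding (e.g. ground plane and object-shape
# features). Here the scene features are simply added to the conditioning to keep the
# sketch compact; the paper's actual scene-aware component differs.
scene_feat = torch.randn(8, 512)      # hypothetical scene-encoder output
scene_cond = text_goal_cond + scene_feat
diffusion_loss(model, motion, scene_cond).backward()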