
MOVER – Reconstructing 3D Scenes and People using Interaction

MOVER exploits human-scene interactions (HSIs) to improve the 3D reconstruction of a scene from a monocular RGB video. Our key idea is that, as a person moves through a scene and interacts with it, we accumulate HSIs across multiple input images and optimize the 3D scene and human movement to reconstruct a consistent, physically plausible, and functional 3D scene layout.

Members

  • Guest Scientist, Perceiving Systems
  • Guest Scientist, Perceiving Systems
  • Guest Scientist, Perceiving Systems
  • Guest Scientist, Perceiving Systems
  • Ph.D. Student, Perceiving Systems
  • Guest Scientist, Perceiving Systems
  • Max Planck Research Group Leader, Neural Capture and Synthesis, Perceiving Systems
  • Emeritus / Acting Director, Perceiving Systems

Publications

Conference Paper
Human-Aware Object Placement for Visual Environment Reconstruction
Yi, H., Huang, C. P., Tzionas, D., Kocabas, M., Hassan, M., Tang, S., Thies, J., Black, M. J.
In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), pages 3949-3960, IEEE, Piscataway, NJ, June 2022 (Published)
Humans are in constant contact with the world as they move through it and interact with it. This contact is a vital source of information for understanding 3D humans, 3D scenes, and the interactions between them. In fact, we demonstrate that these human-scene interactions (HSIs) can be leveraged to improve the 3D reconstruction of a scene from a monocular RGB video. Our key idea is that, as a person moves through a scene and interacts with it, we accumulate HSIs across multiple input images, and optimize the 3D scene to reconstruct a consistent, physically plausible, and functional 3D scene layout. Our optimization-based approach exploits three types of HSI constraints: (1) humans that move in a scene are occluded by, or occlude, objects, thus defining the depth ordering of the objects; (2) humans move through free space and do not interpenetrate objects; (3) when humans and objects are in contact, the contact surfaces occupy the same place in space. Using these constraints in an optimization formulation across all observations, we significantly improve the 3D scene layout reconstruction. Furthermore, we show that our scene reconstruction can be used to refine the initial 3D human pose and shape (HPS) estimation. We evaluate the 3D scene layout reconstruction and HPS estimation qualitatively and quantitatively using the PROX and PiGraphs datasets. The code and data are available for research purposes at https://mover.is.tue.mpg.de/.
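The three HSI constraints lend themselves to simple penalty terms that can be summed into a scene-layout objective. Below is a minimal sketch of such terms, not the authors' code: it assumes the human is represented by 3D points, objects by axis-aligned boxes or surface point samples, and all function names are hypothetical.

```python
import numpy as np

def interpenetration_loss(points, box_min, box_max):
    """Constraint (2): penalize human points that fall inside an object's
    axis-aligned 3D box, weighted by how deep they penetrate."""
    inside = np.all((points > box_min) & (points < box_max), axis=1)
    # Distance from each point to the nearest box face (meaningful only
    # for points that are inside the box).
    depth = np.minimum(points - box_min, box_max - points).min(axis=1)
    return np.sum(np.where(inside, depth, 0.0))

def contact_loss(contact_points, surface_points):
    """Constraint (3): pull human contact vertices (e.g. the pelvis when
    sitting) onto the nearest sampled point of the object surface."""
    diff = contact_points[:, None, :] - surface_points[None, :, :]
    nearest = np.linalg.norm(diff, axis=2).min(axis=1)
    return np.sum(nearest ** 2)

def depth_order_loss(z_human, z_object, human_occludes_object):
    """Constraint (1): the occluder must be closer to the camera, so
    penalize depth orderings that contradict the observed occlusion."""
    if human_occludes_object:
        return max(0.0, z_human - z_object) ** 2  # human should be nearer
    return max(0.0, z_object - z_human) ** 2      # object should be nearer

# In an optimization like MOVER's, terms of this kind would be accumulated
# over all frames and minimized over object poses/sizes (and later used to
# refine the human pose and shape).
```

Summing such terms over every frame is what lets evidence accumulate: a single image rarely constrains an object's depth or extent, but many frames of a person walking past, sitting on, and occluding it do.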