BEDLAM2.0: Synthetic humans and cameras in motion

Inferring 3D human motion from video remains a challenging problem with many applications. While traditional methods estimate the human in image coordinates, many applications require human motion to be estimated in world coordinates. This is particularly challenging when there is both human and camera motion. Progress on this topic has been limited by the lack of rich video data with ground-truth human and camera movement. We address this with BEDLAM2.0, a new dataset that goes beyond the popular BEDLAM dataset in important ways. In addition to introducing more diverse and realistic cameras and camera motions, BEDLAM2.0 increases the diversity and realism of body shapes, motions, clothing, hair, and 3D environments. Additionally, it adds shoes, which were missing in BEDLAM. BEDLAM has become a key resource for training today's 3D human pose and motion regressors, and we show that BEDLAM2.0 is significantly better, particularly for training methods that estimate humans in world coordinates. We compare state-of-the-art methods trained on BEDLAM and BEDLAM2.0, and find that BEDLAM2.0 significantly improves accuracy over BEDLAM. For research purposes, we provide the rendered videos, ground-truth body parameters, and camera motions. We also provide the 3D assets to which we have rights, along with links to those from third parties.
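Because the dataset pairs each rendered video with ground-truth body parameters and camera motion, a typical first step when using it is mapping the world-coordinate bodies into each moving camera's frame and then into the image. The sketch below illustrates that transform with NumPy. It is a minimal sketch under assumed conventions: the file name, array keys, and per-frame layout are hypothetical placeholders, since this page does not specify the released file format.

import numpy as np

# Hypothetical per-sequence ground-truth file; BEDLAM2.0's actual file
# layout and key names may differ from this sketch.
gt = np.load("seq_0001_ground_truth.npz")

joints_world = gt["joints_world"]  # (F, J, 3) body joints in world coordinates
R_cw = gt["cam_rotation"]          # (F, 3, 3) world-to-camera rotation per frame
t_cw = gt["cam_translation"]       # (F, 3)    world-to-camera translation per frame
K = gt["cam_intrinsics"]           # (3, 3)    pinhole camera intrinsics

# World -> camera coordinates: X_cam = R_cw @ X_world + t_cw, applied per frame.
joints_cam = np.einsum("fij,fkj->fki", R_cw, joints_world) + t_cw[:, None, :]

# Camera -> image coordinates: perspective projection with the intrinsics.
proj = np.einsum("ij,fkj->fki", K, joints_cam)
joints_2d = proj[..., :2] / proj[..., 2:3]  # (F, J, 2) pixel coordinates

With camera-frame joints and their projections in hand, both image-space and world-coordinate evaluation can be driven from the same ground truth, which is exactly the distinction the abstract draws between estimation in image and world coordinates.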

Author(s): Joachim Tesch and Giorgio Becherini and Prerana Achar and Anastasios Yiannakidis and Muhammed Kocabas and Priyanka Patel and Michael J. Black
Year: 2025
Month: December
BibTeX Type: Conference Paper (conference)
Event Name: Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track
Event Place: San Diego Convention Center
State: In press
URL: https://bedlam2.is.tuebingen.mpg.de/

BibTeX

@conference{BEDLAM2,
  title = {{BEDLAM2.0}: Synthetic humans and cameras in motion},
  booktitle = {Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  abstract = {Inferring 3D human motion from video remains a challenging problem with many applications. While traditional methods estimate the human in image coordinates, many applications require human motion to be estimated in world coordinates. This is particularly challenging when there is both human and camera motion. Progress on this topic has been limited by the lack of rich video data with ground-truth human and camera movement. We address this with BEDLAM2.0, a new dataset that goes beyond the popular BEDLAM dataset in important ways. In addition to introducing more diverse and realistic cameras and camera motions, BEDLAM2.0 increases the diversity and realism of body shapes, motions, clothing, hair, and 3D environments. Additionally, it adds shoes, which were missing in BEDLAM. BEDLAM has become a key resource for training today's 3D human pose and motion regressors, and we show that BEDLAM2.0 is significantly better, particularly for training methods that estimate humans in world coordinates. We compare state-of-the-art methods trained on BEDLAM and BEDLAM2.0, and find that BEDLAM2.0 significantly improves accuracy over BEDLAM. For research purposes, we provide the rendered videos, ground-truth body parameters, and camera motions. We also provide the 3D assets to which we have rights, along with links to those from third parties.},
  month = dec,
  year = {2025},
  author = {Tesch, Joachim and Becherini, Giorgio and Achar, Prerana and Yiannakidis, Anastasios and Kocabas, Muhammed and Patel, Priyanka and Black, Michael J.},
  url = {https://bedlam2.is.tuebingen.mpg.de/},
  month_numeric = {12}
}