
St4RTrack: Simultaneous 4D Reconstruction and Tracking in the World

Dynamic 3D reconstruction and point tracking in videos are typically treated as separate tasks, despite their deep connection. We propose St4RTrack, a feed-forward framework that simultaneously reconstructs and tracks dynamic video content in a world coordinate frame from RGB inputs. This is achieved by predicting two appropriately defined pointmaps for a pair of frames captured at different moments. Specifically, we predict both pointmaps at the same moment, in the same world, capturing both static and dynamic scene geometry while maintaining 3D correspondences. Chaining these predictions through the video sequence with respect to a reference frame naturally computes long-range correspondences, effectively combining 3D reconstruction with 3D tracking. Unlike prior methods that rely heavily on 4D ground-truth supervision, we employ a novel adaptation scheme based on a reprojection loss. We establish a new extensive benchmark for world-frame reconstruction and tracking, demonstrating the effectiveness and efficiency of our unified, data-driven framework.
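To make the pairing-and-chaining idea concrete, here is a minimal Python sketch. It assumes a hypothetical predict_pointmaps network with the interface implied by the abstract, and a plausible form of the reprojection-based adaptation signal; it illustrates the scheme, not the authors' implementation.

import numpy as np

# Hypothetical stand-in for the St4RTrack network. Given the reference frame
# and a frame captured at time t, it returns two H x W x 3 pointmaps, both
# expressed in the same world coordinate frame at the same moment t:
#   pts_ref: where the reference frame's pixels sit at time t (tracking)
#   pts_t:   where frame t's pixels sit at time t (reconstruction)
def predict_pointmaps(frame_ref, frame_t):
    H, W = frame_ref.shape[:2]
    pts_ref = np.zeros((H, W, 3))  # placeholder output for the sketch
    pts_t = np.zeros((H, W, 3))    # placeholder output for the sketch
    return pts_ref, pts_t

def chain_tracks(frames):
    # Pair every frame with a fixed reference frame (frames[0]). Because
    # pts_ref always lives on the reference frame's pixel grid, stacking it
    # over time yields a long-range 3D track for every reference pixel,
    # while pts_t accumulates per-frame world-frame scene geometry.
    ref = frames[0]
    tracks, geometry = [], []
    for frame_t in frames[1:]:
        pts_ref, pts_t = predict_pointmaps(ref, frame_t)
        tracks.append(pts_ref)    # same pixels, moving through world space
        geometry.append(pts_t)    # scene geometry observed at time t
    return np.stack(tracks), geometry  # tracks: (T-1, H, W, 3)

def reprojection_loss(pts_world, K, R, t, tracks_2d):
    # One plausible reprojection loss for the adaptation scheme: project
    # predicted world-frame points into the camera (world-to-camera rotation
    # R, translation t, intrinsics K) and penalize deviation from 2D
    # correspondences, e.g. from an off-the-shelf 2D tracker. The exact loss
    # used in the paper may differ; this form is an assumption.
    cam = pts_world @ R.T + t
    uv = cam @ K.T
    uv = uv[..., :2] / uv[..., 2:3]
    return np.mean(np.sum((uv - tracks_2d) ** 2, axis=-1))

Anchoring every pair to the same reference frame means correspondences are read out directly per frame rather than composed frame-to-frame, which is one way pairing against a fixed reference yields long-range tracks.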

Author(s): Haiwen Feng and Junyi Zhang and Qianqian Wang and Yufei Ye and Pengcheng Yu and Michael Black and Trevor Darrell and Angjoo Kanazawa
Book Title: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)
Year: 2025
Month: October
BibTeX Type: Conference Paper (inproceedings)
State: Published

BibTeX

@inproceedings{st4track:iccv:2025,
  title = {{St4RTrack:} Simultaneous {4D} Reconstruction and Tracking in the World},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  abstract = {Dynamic 3D reconstruction and point tracking in videos are typically treated as separate tasks, despite their deep connection. We propose St4RTrack, a feed-forward framework that simultaneously reconstructs and tracks dynamic video content in a world coordinate frame from RGB inputs. This is achieved by predicting two appropriately defined pointmaps for a pair of frames captured at different moments. Specifically, we predict both pointmaps at the same moment, in the same world, capturing both static and dynamic scene geometry while maintaining 3D correspondences. Chaining these predictions through the video sequence with respect to a reference frame naturally computes long-range correspondences, effectively combining 3D reconstruction with 3D tracking. Unlike prior methods that rely heavily on 4D ground-truth supervision, we employ a novel adaptation scheme based on a reprojection loss. We establish a new extensive benchmark for world-frame reconstruction and tracking, demonstrating the effectiveness and efficiency of our unified, data-driven framework.},
  month = oct,
  year = {2025},
  author = {Feng, Haiwen and Zhang, Junyi and Wang, Qianqian and Ye, Yufei and Yu, Pengcheng and Black, Michael and Darrell, Trevor and Kanazawa, Angjoo},
  month_numeric = {10}
}