Perceiving Systems Conference Paper 2026

Predicting 4D Hand Trajectory from Monocular Videos


We present HAPTIC, an approach that infers coherent 4D hand trajectories from monocular videos. Current video-based hand pose reconstruction methods primarily focus on improving frame-wise 3D pose using adjacent frames rather than studying consistent 4D hand trajectories in space. Despite the additional temporal cues, they generally underperform compared to image-based methods due to the scarcity of annotated video data. To address these issues, we repurpose a state-of-the-art image-based transformer to take in multiple frames and directly predict a coherent trajectory. We introduce two types of lightweight attention layers: cross-view self-attention to fuse temporal information, and global cross-attention to bring in larger spatial context. Our method infers 4D hand trajectories that closely match the ground truth while maintaining strong 2D reprojection alignment. We apply the method to both egocentric and allocentric videos. It significantly outperforms existing methods in global trajectory accuracy while being comparable to the state-of-the-art in single-image pose estimation.
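
The abstract describes the architecture only at a high level: per-frame features from an image-based transformer, fused by two lightweight attention layers. The snippet below is a minimal, hypothetical PyTorch sketch of what such layers could look like. It is not the released HAPTIC code; the module names, tensor layout (batch, time, tokens, channels), and residual placement are all assumptions made for illustration.

# Hypothetical sketch, not the authors' implementation. Assumes per-frame
# tokens of shape (B, T, N, D) from an image backbone, and full-frame
# context tokens of shape (B, T, M, D) for the global cross-attention.
import torch
import torch.nn as nn

class CrossViewSelfAttention(nn.Module):
    """Fuse temporal information: each spatial token attends to the
    corresponding tokens of the other frames in the clip."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, D) -> attend over the T frames for each spatial token
        B, T, N, D = x.shape
        h = x.permute(0, 2, 1, 3).reshape(B * N, T, D)
        out, _ = self.attn(self.norm(h), self.norm(h), self.norm(h))
        out = out.reshape(B, N, T, D).permute(0, 2, 1, 3)
        return x + out  # residual connection keeps the per-frame features

class GlobalCrossAttention(nn.Module):
    """Bring in larger spatial context: hand-crop tokens attend to
    features computed from the full frame."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, D) crop tokens; ctx: (B, T, M, D) full-frame tokens
        B, T, N, D = x.shape
        q = self.norm_q(x.reshape(B * T, N, D))
        kv = self.norm_kv(ctx.reshape(B * T, -1, D))
        out, _ = self.attn(q, kv, kv)  # cross-attention into the wider context
        return x + out.reshape(B, T, N, D)

Interleaving such layers with a frozen or fine-tuned per-frame encoder is one plausible way to "repurpose" a single-image transformer for clips, as the abstract suggests; the actual design details should be taken from the paper and code release.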

Author(s): Yufei Ye and Yao Feng and Omid Taheri and Haiwen Feng and Michael J. Black and Shubham Tulsiani
Links: project | arXiv | code
Book Title: Int. Conf. on 3D Vision (3DV)
Year: 2026
Month: March
Day: 20
BibTeX Type: Conference Paper (inproceedings)
State: Accepted

BibTeX

@inproceedings{HAPTIC:3DV:26,
  title = {Predicting {4D} Hand Trajectory from Monocular Videos},
  booktitle = {Int.~Conf.~on 3D Vision (3DV)},
  abstract = {We present HAPTIC, an approach that infers coherent 4D hand trajectories from monocular videos. Current video-based hand pose reconstruction methods primarily focus on improving frame-wise 3D pose using adjacent frames rather than studying consistent 4D hand trajectories in space. Despite the additional temporal cues, they generally underperform compared to image-based methods due to the scarcity of annotated video data. To address these issues, we repurpose a state-of-the-art image-based transformer to take in multiple frames and directly predict a coherent trajectory. We introduce two types of lightweight attention layers: cross-view self-attention to fuse temporal information, and global cross-attention to bring in larger spatial context. Our method infers 4D hand trajectories that closely match the ground truth while maintaining strong 2D reprojection alignment. We apply the method to both egocentric and allocentric videos. It significantly outperforms existing methods in global trajectory accuracy while being comparable to the state-of-the-art in single-image pose estimation.},
  month = mar,
  year = {2026},
  author = {Ye, Yufei and Feng, Yao and Taheri, Omid and Feng, Haiwen and Black, Michael J. and Tulsiani, Shubham},
  month_numeric = {3}
}