Perceiving Systems Conference Paper 2025

PRIMAL: Physically Reactive and Interactive Motor Model for Avatar Learning

We formulate the motor system of an interactive avatar as a generative motion model that can drive the body to move through 3D space in a perpetual, realistic, controllable, and responsive manner. Although human motion generation has been extensively studied, many existing methods lack the responsiveness and realism of real human movements. Inspired by recent advances in foundation models, we propose PRIMAL, which is learned with a two-stage paradigm. In the pretraining stage, the model learns body movements from a large number of sub-second motion segments, providing a generative foundation from which more complex motions are built. This training is fully unsupervised without annotations. Given a single-frame initial state during inference, the pretrained model not only generates unbounded, realistic, and controllable motion, but also enables the avatar to be responsive to induced impulses in real time. In the adaptation phase, we employ a novel ControlNet-like adaptor to fine-tune the base model efficiently, adapting it to new tasks such as few-shot personalized action generation and spatial target reaching. Evaluations show that our proposed method outperforms state-of-the-art baselines. We leverage the model to create a real-time character animation system in Unreal Engine that feels highly responsive and natural.
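The abstract mentions a ControlNet-like adaptor used to fine-tune the frozen base model efficiently. PRIMAL's actual architecture is not detailed on this page; the following is only a minimal illustrative sketch of the general ControlNet-style pattern, assuming a frozen base network plus a trainable conditioning branch whose output projection is zero-initialized, so that fine-tuning starts exactly at the pretrained model's behavior. All names (`base_model`, `ZeroInitAdaptor`, `adapted_model`) are hypothetical.

```python
def base_model(state):
    # Stand-in for the frozen pretrained motion prior (here: a fixed linear map).
    return [2.0 * s + 1.0 for s in state]

class ZeroInitAdaptor:
    """Trainable branch whose output scale starts at zero."""
    def __init__(self, dim):
        # Zero-initialized gate: the adaptor contributes nothing at init,
        # so the adapted model initially matches the base model exactly.
        self.scale = [0.0] * dim

    def __call__(self, state, condition):
        # Condition-dependent correction, gated by the (initially zero) scale.
        return [w * (s + c) for w, s, c in zip(self.scale, state, condition)]

def adapted_model(state, condition, adaptor):
    # Base prediction plus the adaptor's residual correction.
    base = base_model(state)
    delta = adaptor(state, condition)
    return [b + d for b, d in zip(base, delta)]

adaptor = ZeroInitAdaptor(dim=3)
state = [0.5, -1.0, 2.0]
cond = [1.0, 0.0, -1.0]

# At initialization, the adapted model reproduces the base model exactly;
# training then moves `scale` away from zero to inject the new condition.
assert adapted_model(state, cond, adaptor) == base_model(state)
```

The zero-initialization is the key design choice: fine-tuning cannot degrade the base model's behavior at step zero, which is what makes this family of adaptors sample-efficient for few-shot tasks like the personalized action generation the abstract describes.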

Author(s): Yan Zhang and Yao Feng and Alpár Cseke and Nitin Saini and Nathan Bajandas and Nicolas Heron and Michael J. Black
Book Title: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)
Year: 2025
Month: October
BibTeX Type: Conference Paper (inproceedings)
State: Published

BibTeX

@inproceedings{primal:iccv:2025,
  title = {{PRIMAL:} Physically Reactive and Interactive Motor Model for Avatar Learning},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  abstract = {We formulate the motor system of an interactive avatar as a generative motion model that can drive the body to move through 3D space in a perpetual, realistic, controllable, and responsive manner. Although human motion generation has been extensively studied, many existing methods lack the responsiveness and realism of real human movements. Inspired by recent advances in foundation models, we propose PRIMAL, which is learned with a two-stage paradigm. In the pretraining stage, the model learns body movements from a large number of sub-second motion segments, providing a generative foundation from which more complex motions are built. This training is fully unsupervised without annotations. Given a single-frame initial state during inference, the pretrained model not only generates unbounded, realistic, and controllable motion, but also enables the avatar to be responsive to induced impulses in real time. In the adaptation phase, we employ a novel ControlNet-like adaptor to fine-tune the base model efficiently, adapting it to new tasks such as few-shot personalized action generation and spatial target reaching. Evaluations show that our proposed method outperforms state-of-the-art baselines. We leverage the model to create a real-time character animation system in Unreal Engine that feels highly responsive and natural.},
  month = oct,
  year = {2025},
  author = {Zhang, Yan and Feng, Yao and Cseke, Alpár and Saini, Nitin and Bajandas, Nathan and Heron, Nicolas and Black, Michael J.},
  month_numeric = {10}
}