
MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images

2021

Conference Paper



In this paper, we aim to create generalizable and controllable neural signed distance fields (SDFs) that represent clothed humans from monocular depth observations. Recent advances in deep learning, especially neural implicit representations, have enabled human shape reconstruction and controllable avatar generation from different sensor inputs. However, to generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs. Furthermore, due to the difficulty of effectively modeling pose-dependent cloth deformations for diverse body shapes and cloth types, existing approaches resort to per-subject/cloth-type optimization from scratch, which is computationally expensive. In contrast, we propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images. We achieve this by using meta-learning to learn an initialization of a hypernetwork that predicts the parameters of neural SDFs. The hypernetwork is conditioned on human poses and represents a clothed neural avatar that deforms non-rigidly according to the input poses. Meanwhile, it is meta-learned to effectively incorporate priors of diverse body shapes and cloth types and thus can be much faster to fine-tune compared to models trained from scratch. We qualitatively and quantitatively show that our approach outperforms state-of-the-art approaches that require complete meshes as inputs, while our approach requires only depth frames as inputs and runs orders of magnitude faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very robust, being the first to generate avatars with realistic dynamic cloth deformations given as few as 8 monocular depth frames.
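The core mechanism the abstract describes, a hypernetwork that maps a pose vector to the weights of a neural SDF, with the hypernetwork's initialization meta-learned so that a few fine-tuning steps on a new subject suffice, can be sketched at toy scale. Everything below is a hedged illustration, not the paper's model: the dimensions, the linear hypernetwork, the finite-difference gradients, the single toy "subject" (a sphere standing in for depth supervision), and the Reptile-style first-order meta-update are all stand-ins for the actual architecture and meta-learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- purely illustrative, far smaller than the paper's networks.
POSE_DIM, PT_DIM, HIDDEN = 6, 3, 16
N_PARAMS = HIDDEN * PT_DIM + HIDDEN + HIDDEN + 1  # W1, b1, W2, b2

def unpack(theta):
    """Split a flat parameter vector into the tiny SDF MLP's weights."""
    i = HIDDEN * PT_DIM
    W1 = theta[:i].reshape(HIDDEN, PT_DIM)
    b1 = theta[i:i + HIDDEN]
    W2 = theta[i + HIDDEN:i + 2 * HIDDEN].reshape(1, HIDDEN)
    b2 = theta[i + 2 * HIDDEN:]
    return W1, b1, W2, b2

def sdf(theta, x):
    """Signed distance predicted at 3D point x under parameters theta."""
    W1, b1, W2, b2 = unpack(theta)
    return (W2 @ np.tanh(W1 @ x + b1) + b2).item()

def task_loss(H, pose, pts, d_true):
    """MSE between predicted SDF values and (toy) depth-derived targets."""
    theta = H @ pose  # hypernetwork: pose -> SDF parameters
    pred = np.array([sdf(theta, p) for p in pts])
    return float(np.mean((pred - d_true) ** 2))

def grad_H(H, pose, pts, d_true, eps=1e-4):
    """Finite-difference gradient of the loss w.r.t. H (fine at toy scale;
    a real implementation would use autodiff)."""
    g = np.zeros_like(H)
    for idx in np.ndindex(*H.shape):
        Hp, Hm = H.copy(), H.copy()
        Hp[idx] += eps
        Hm[idx] -= eps
        g[idx] = (task_loss(Hp, pose, pts, d_true) -
                  task_loss(Hm, pose, pts, d_true)) / (2 * eps)
    return g

def adapt(H, pose, pts, d_true, lr=0.02, steps=5):
    """Inner loop: fine-tune the hypernetwork on one subject/cloth type."""
    H = H.copy()
    for _ in range(steps):
        H -= lr * grad_H(H, pose, pts, d_true)
    return H

# One toy "subject": points near a sphere of radius 0.5, with the true
# signed distance d(x) = ||x|| - 0.5 standing in for depth supervision.
pts = rng.normal(size=(8, PT_DIM))
d_true = np.linalg.norm(pts, axis=1) - 0.5
pose = rng.normal(size=POSE_DIM)

H_meta = 0.1 * rng.normal(size=(N_PARAMS, POSE_DIM))

# Reptile-style outer loop: move the meta-initialization toward the
# weights adapted on each task (here just one task, for brevity).
for _ in range(3):
    H_task = adapt(H_meta, pose, pts, d_true)
    H_meta += 0.5 * (H_task - H_meta)

# Starting from the meta-learned initialization, a few steps of
# fine-tuning should already reduce the loss on the "new" observations.
loss_before = task_loss(H_meta, pose, pts, d_true)
loss_after = task_loss(adapt(H_meta, pose, pts, d_true, steps=3), pose, pts, d_true)
```

In the actual system the SDF network is a much larger MLP conditioned per frame, the supervision comes from real monocular depth observations fitted to a body model, and optimization uses standard autodiff rather than finite differences; the sketch only shows how a pose-conditioned hypernetwork and a meta-learned initialization fit together.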

Author(s): Shaofei Wang and Marko Mihajlovic and Qianli Ma and Andreas Geiger and Siyu Tang
Book Title: Advances in Neural Information Processing Systems 34
Volume: 4
Pages: 2810--2822
Year: 2021
Month: December
Editors: Ranzato, M. and Beygelzimer, A. and Dauphin, Y. and Liang, P. S. and Wortman Vaughan, J.
Publisher: Curran Associates, Inc.

Department(s): Autonomous Vision, Perceiving Systems
Research Project(s): Implicit Representations
Clothing Capture and Modeling
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

Event Name: 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
Event Place: Online

Address: Red Hook, NY
ISBN: 978-1-7138-4539-3
State: Published
URL: https://proceedings.neurips.cc/paper/2021/hash/1680829293f2a8541efa2647a0290f88-Abstract.html

Links: Project page
arXiv

BibTeX

@inproceedings{Wang2021NEURIPS,
  title = {MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images},
  author = {Wang, Shaofei and Mihajlovic, Marko and Ma, Qianli and Geiger, Andreas and Tang, Siyu},
  booktitle = {Advances in Neural Information Processing Systems 34},
  volume = {4},
  pages = {2810--2822},
  editor = {Ranzato, M. and Beygelzimer, A. and Dauphin, Y. and Liang, P. S. and Wortman Vaughan, J.},
  publisher = {Curran Associates, Inc.},
  address = {Red Hook, NY},
  isbn = {978-1-7138-4539-3},
  month = dec,
  year = {2021},
  url = {https://proceedings.neurips.cc/paper/2021/hash/1680829293f2a8541efa2647a0290f88-Abstract.html}
}