Perceiving Systems Talk Biography
11 January 2022 at 17:00 - 18:00 | Remote talk via Zoom

Animatable humans from monocular RGB(D) videos

ORGANIZERS
Perceiving Systems
  • Guest Scientist

We aim to reconstruct animatable humans from monocular RGB(D) videos. Learning a user-controllable representation that generalizes to novel poses remains a challenging problem. To tackle it, I will introduce two related methods. First, to reconstruct animatable photo-realistic humans, we integrate observations across frames and encode the appearance at each individual frame, taking the human pose and point clouds as input. In addition, we use a temporal transformer to integrate the features of points in unseen frames with those of tracked points in a handful of automatically selected key frames. Second, to reconstruct animatable 3D geometry, we represent the clothed human in a canonical space with learned implicit functions and use linear blend skinning (LBS) to animate the learned geometry.
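To make the last step concrete, linear blend skinning deforms each canonical-space vertex by a weighted sum of rigid per-joint transforms. The sketch below is a minimal, generic LBS implementation in NumPy, not the speaker's code; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def lbs(vertices, weights, transforms):
    """Linear blend skinning: pose canonical vertices by blending
    per-joint rigid transforms with per-vertex skinning weights.

    vertices:   (V, 3) canonical-space points
    weights:    (V, J) skinning weights, each row sums to 1
    transforms: (J, 4, 4) homogeneous joint transforms
    """
    num_verts = vertices.shape[0]
    # Lift to homogeneous coordinates: (V, 4)
    homo = np.concatenate([vertices, np.ones((num_verts, 1))], axis=1)
    # Blend the 4x4 joint transforms per vertex: (V, 4, 4)
    blended = np.einsum("vj,jab->vab", weights, transforms)
    # Apply each vertex's blended transform
    posed = np.einsum("vab,vb->va", blended, homo)
    return posed[:, :3]
```

With identity joint transforms the vertices stay in the canonical pose; translating or rotating a joint moves every vertex in proportion to its skinning weight for that joint.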

Speaker Biography

Tiantian Wang (University of California, Merced)

Ph.D. student

Tiantian Wang is a fourth-year Ph.D. student in the Vision and Learning Lab in the Department of Electrical Engineering and Computer Science, advised by Prof. Ming-Hsuan Yang. Her research interests include deep learning, machine learning, and their applications in computer vision and computer graphics.