Beyond Mocap

To understand human and animal movement, we want to capture it, model it, and then simulate it. Most methods for capturing human motion are restricted to laboratory environments and/or limited volumes. Most do not take into account the complex and rich environment in which humans usually operate. Nor do they capture the kinds of everyday motions that people typically perform. To enable the capture of natural human behavior, we have to move motion capture out of the laboratory and into the world.
To that end, we pursue several technologies. In the lab, we automate the mocap process and make it easy to extract detailed and realistic 3D humans from mocap data. Our work here focuses on extracting expressive SMPL-X bodies with detailed face and hand motion. We also capture objects and scenes to put human motion in context.
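As a concrete illustration of what such an SMPL-X body looks like in practice, the publicly available smplx Python package can instantiate the model and return its mesh and joints. This is a minimal usage sketch, not our capture pipeline; the model path and the zero shape/pose values are placeholders.

```python
import torch
import smplx

# Placeholder path to the downloaded SMPL-X model files; adjust to your setup.
model = smplx.create('models', model_type='smplx', gender='neutral')

# A forward pass with zero shape (betas) and zero pose returns the template body.
betas = torch.zeros(1, model.num_betas)
output = model(betas=betas, return_verts=True)

vertices = output.vertices.detach().numpy().squeeze()  # (10475, 3) mesh vertices
joints = output.joints.detach().numpy().squeeze()      # 3D joint locations
```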
To move outside the lab, we use inertial measurement units (IMUs) worn on the body. These provide information about pose and movement, but putting on a full suite of sensors is impractical. We have developed methods to estimate full-body motion from as few as six sensors worn on the legs, wrists, belt, and head. Our most recent methods use deep neural networks to estimate pose from IMU measurements in real time in unconstrained scenarios.
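To make the idea concrete, the sketch below shows the general shape of a learned sparse-IMU-to-pose regressor in PyTorch: per-frame IMU orientations and accelerations go in, SMPL pose parameters come out. The architecture, layer sizes, and input encoding are illustrative assumptions, not the published network.

```python
import torch
import torch.nn as nn

class SparseIMUPoseNet(nn.Module):
    """Illustrative bidirectional RNN mapping six IMU readings per frame to SMPL pose.
    Layer sizes and the exact parameterization are assumptions, not the published model."""
    def __init__(self, n_imus=6, hidden=256, n_joints=24):
        super().__init__()
        # Per frame and per IMU: a flattened 3x3 rotation matrix (9) and a 3D acceleration (3).
        in_dim = n_imus * (9 + 3)
        self.rnn = nn.LSTM(in_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        # Predict an axis-angle rotation for every SMPL joint.
        self.head = nn.Linear(2 * hidden, n_joints * 3)

    def forward(self, imu_seq):            # imu_seq: (batch, time, in_dim)
        feats, _ = self.rnn(imu_seq)
        return self.head(feats)            # (batch, time, 72) pose parameters
```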
IMUs, however, suffer from drift, so we combine IMU data with video from a single hand-held camera. In the video we can detect the 2D joints of the body and use them to eliminate drift. We associate the IMU data with the 2D detections and solve for the transformation between the sensors and the camera. With this technology, we created the popular 3DPW dataset, which contains video sequences with high-quality reference 3D poses. 3DPW is widely used to train and test video-based human pose and shape methods.
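The core of such a video-IMU combination can be thought of as an energy that trades off 2D reprojection error against agreement with the measured IMU orientations. The following is a minimal sketch of that idea; the function name, weights, and exact residuals are assumptions for illustration and differ from the published objective.

```python
import torch

def fitting_energy(joints_3d, cam_K, cam_R, cam_t,
                   joints_2d, conf, imu_R, model_R,
                   w2d=1.0, wimu=0.1):
    """Illustrative energy combining a 2D reprojection term with an IMU orientation term.
    joints_3d: (J, 3) model joints; joints_2d/conf: (J, 2)/(J,) detections and confidences;
    imu_R/model_R: (N, 3, 3) measured and model bone orientations. Weights are assumptions."""
    # Project model joints into the image.
    cam_pts = joints_3d @ cam_R.T + cam_t          # (J, 3) in camera coordinates
    proj = cam_pts @ cam_K.T
    proj = proj[:, :2] / proj[:, 2:3]              # perspective division
    e_2d = (conf[:, None] * (proj - joints_2d) ** 2).sum()

    # Orientation term: residual between measured IMU rotations and model bone orientations.
    rel = torch.matmul(imu_R.transpose(-1, -2), model_R)
    eye = torch.eye(3).expand_as(rel)
    e_imu = ((rel - eye) ** 2).sum()

    return w2d * e_2d + wimu * e_imu
```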
To go fully markerless, we have developed flying motion capture systems that work autonomously outdoors. Multiple micro-aerial vehicles coordinate their activities to detect and track a person while estimating their 3D location. All processing is done onboard. We then use the captured video offline to estimate the person's 3D pose and motion. The challenge here is to deal with noise in the camera calibration, since the locations of the flying cameras are only approximately known.
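One simple way to cope with such calibration noise is to refine the approximate camera poses jointly with the 3D joints against the 2D detections. The toy sketch below illustrates that idea under the simplifying assumption that only the camera translations drift; it is not the system's actual estimation pipeline.

```python
import torch

def refine_cameras_and_joints(joints_2d, K, R_init, t_init, X_init,
                              steps=200, lr=1e-2):
    """Toy sketch: jointly refine approximate camera translations and 3D joints by
    minimizing reprojection error across views. Shapes: joints_2d (V, J, 2),
    K (V, 3, 3), R_init (V, 3, 3), t_init (V, 3), X_init (J, 3).
    A real system would also refine rotations with a proper parameterization."""
    t = t_init.clone().requires_grad_(True)
    X = X_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([t, X], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cam = torch.einsum('vij,kj->vki', R_init, X) + t[:, None]  # (V, J, 3) camera coords
        proj = torch.einsum('vij,vkj->vki', K, cam)
        proj = proj[..., :2] / proj[..., 2:3]                      # perspective division
        loss = torch.nn.functional.huber_loss(proj, joints_2d)     # robust to outliers
        loss.backward()
        opt.step()
    return t.detach(), X.detach()
```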
Most recently, we have developed autonomous control algorithms for lighter-than-air vehicles. Such blimps have advantages over common multi-copters but are more complex to control. We are developing our blimp-based system to capture animal movement in natural conditions.
Our ongoing work looks at capturing much more about humans: their speech, gaze, interactions with objects, and more. The goal is always to track natural human behavior in settings that are as realistic as possible, while making the process as lightweight and unobtrusive as possible.