Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning 3D Human Dynamics from Video

Angjoo Kanazawa, Jason Y. Zhang, Panna Felsen, Jitendra Malik

2018-12-04 · CVPR 2019 · 3D Human Pose Estimation · 3D Human Dynamics · Human Dynamics
Paper · PDF · Code (official)

Abstract

From an image of a person in action, we can easily guess the 3D motion of the person in the immediate past and future. This is because we have a mental model of 3D human dynamics that we have acquired from observing visual sequences of humans in motion. We present a framework that can similarly learn a representation of 3D dynamics of humans from video via a simple but effective temporal encoding of image features. At test time, from video, the learned temporal representation gives rise to smooth 3D mesh predictions. From a single image, our model can recover the current 3D mesh as well as its 3D past and future motion. Our approach is designed so it can learn from videos with 2D pose annotations in a semi-supervised manner. Though annotated data is always limited, there are millions of videos uploaded daily on the Internet. In this work, we harvest this Internet-scale source of unlabeled data by training our model on unlabeled video with pseudo-ground truth 2D pose obtained from an off-the-shelf 2D pose detector. Our experiments show that adding more videos with pseudo-ground truth 2D pose monotonically improves 3D prediction performance. We evaluate our model, Human Mesh and Motion Recovery (HMMR), on the recent challenging dataset of 3D Poses in the Wild and obtain state-of-the-art performance on the 3D prediction task without any fine-tuning. The project website with video, code, and data can be found at https://akanazawa.github.io/human_dynamics/.
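The core idea in the abstract is a temporal encoding: per-frame image features are pooled over a window of neighboring frames, so predictions vary smoothly over time. As a rough, hedged illustration of that idea only (the authors use a learned 1D convolutional encoder; the fixed moving-average filter, the function name `temporal_encode`, and the window size below are all stand-ins, not the paper's implementation):

```python
import numpy as np

def temporal_encode(frame_feats, window=5):
    """Smooth per-frame features over a temporal window.

    frame_feats: (T, D) array of per-frame image features
                 (e.g. from a CNN backbone).
    Returns a (T, D) array of temporally pooled features.
    A uniform moving average stands in for HMMR's learned
    1D temporal convolution; this is a sketch, not the real model.
    """
    pad = window // 2
    # edge-pad in time so the output has the same length as the input
    padded = np.pad(frame_feats, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    # apply the same 1D filter independently to each feature channel
    return np.stack(
        [np.convolve(padded[:, d], kernel, mode="valid")
         for d in range(frame_feats.shape[1])],
        axis=1,
    )
```

In the paper, a regressor then maps each pooled feature to SMPL mesh parameters for the current frame and, from a single frame's feature, to past and future poses as well; that part is omitted here.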

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | 3DPW | Acceleration Error | 15.2 | HMMR (T=20)
3D Human Pose Estimation | 3DPW | MPJPE | 116.5 | HMMR (T=20)
3D Human Pose Estimation | 3DPW | PA-MPJPE | 72.6 | HMMR (T=20)
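The three metrics above have standard definitions: MPJPE is the mean Euclidean distance between predicted and ground-truth joints, PA-MPJPE applies a rigid Procrustes alignment (rotation, scale, translation) before measuring that distance, and acceleration error compares second finite differences of joint positions over time. A minimal NumPy sketch of these definitions (function names are illustrative, not from the paper's codebase):

```python
import numpy as np

def mpjpe(pred, gt):
    # mean per-joint position error: mean Euclidean distance over joints
    # pred, gt: (J, 3) arrays of joint positions
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    # Procrustes-aligned MPJPE: remove global rotation, scale, and
    # translation via orthogonal Procrustes before measuring error
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    P, G = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(P.T @ G)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    S = np.diag([1.0, 1.0, d])
    R = U @ S @ Vt                                # optimal rotation (P @ R ~ G)
    c = (s * np.diag(S)).sum() / (P ** 2).sum()   # optimal isotropic scale
    return mpjpe(c * P @ R + mu_g, gt)

def accel_error(pred_seq, gt_seq):
    # compare second finite differences of joint trajectories over time
    # pred_seq, gt_seq: (T, J, 3) arrays
    a_p = pred_seq[2:] - 2 * pred_seq[1:-1] + pred_seq[:-2]
    a_g = gt_seq[2:] - 2 * gt_seq[1:-1] + gt_seq[:-2]
    return np.linalg.norm(a_p - a_g, axis=-1).mean()
```

PA-MPJPE is always at most MPJPE, since alignment can only reduce the residual; acceleration error is what distinguishes temporally smooth predictions (HMMR's goal) from frame-by-frame estimators that jitter.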

Related Papers

- LLMs are Introvert (2025-07-08)
- Systematic Comparison of Projection Methods for Monocular 3D Human Pose Estimation on Fisheye Images (2025-06-24)
- ExtPose: Robust and Coherent Pose Estimation by Extending ViTs (2025-06-18)
- PoseGRAF: Geometric-Reinforced Adaptive Fusion for Monocular 3D Human Pose Estimation (2025-06-17)
- Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation (2025-06-03)
- DTRT: Enhancing Human Intent Estimation and Role Allocation for Physical Human-Robot Collaboration (2025-05-23)
- UPTor: Unified 3D Human Pose Dynamics and Trajectory Prediction for Human-Robot Interaction (2025-05-20)
- PoseBench3D: A Cross-Dataset Analysis Framework for 3D Human Pose Estimation (2025-05-16)