Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Local Recurrent Models for Human Mesh Recovery

Runze Li, Srikrishna Karanam, Ren Li, Terrence Chen, Bir Bhanu, Ziyan Wu

2021-07-27 · 3D Human Pose Estimation · 3D Human Shape Estimation · Human Mesh Recovery

Paper · PDF

Abstract

We consider the problem of estimating frame-level full human body meshes given a video of a person with natural motion dynamics. While much progress in this field has been in single image-based mesh estimation, there has been a recent uptick in efforts to infer mesh dynamics from video given its role in alleviating issues such as depth ambiguity and occlusions. However, a key limitation of existing work is the assumption that all the observed motion dynamics can be modeled using one dynamical/recurrent model. While this may work well in cases with relatively simplistic dynamics, inference with in-the-wild videos presents many challenges. In particular, it is typically the case that different body parts of a person undergo different dynamics in the video, e.g., legs may move in a way that may be dynamically different from hands (e.g., a person dancing). To address these issues, we present a new method for video mesh recovery that divides the human mesh into several local parts following the standard skeletal model. We then model the dynamics of each local part with separate recurrent models, with each model conditioned appropriately based on the known kinematic structure of the human body. This results in a structure-informed local recurrent learning architecture that can be trained in an end-to-end fashion with available annotations. We conduct a variety of experiments on standard video mesh recovery benchmark datasets such as Human3.6M, MPI-INF-3DHP, and 3DPW, demonstrating the efficacy of our design of modeling local dynamics as well as establishing state-of-the-art results based on standard evaluation metrics.
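The core idea above, separate recurrent models per body part, each conditioned on its kinematic parent, can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the part list, the toy kinematic tree, the feature sizes, and the plain tanh-RNN cell are all assumptions made for clarity.

```python
import numpy as np

# Toy kinematic tree: each part's index -> its parent index (-1 = root).
# (Part names and tree are illustrative; the paper follows the standard
# skeletal model, which has more joints than this sketch.)
PARTS = ["torso", "left_leg", "right_leg", "left_arm", "right_arm", "head"]
PARENT = [-1, 0, 0, 0, 0, 0]  # all limbs hang off the torso here

FEAT, HID = 16, 32  # per-part feature and hidden sizes (assumed)
rng = np.random.default_rng(0)

class PartRNNCell:
    """One recurrent model per body part; its input is the part's frame
    features concatenated with the parent part's hidden state."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(in_dim)
        self.W = rng.uniform(-s, s, (hid_dim, in_dim))
        self.U = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.b = np.zeros(hid_dim)

    def step(self, x, h):
        return np.tanh(self.W @ x + self.U @ h + self.b)

# The root receives zeros as its "parent state", so every cell
# uniformly sees FEAT + HID input dimensions.
cells = [PartRNNCell(FEAT + HID, HID) for _ in PARTS]

def forward(video_feats):
    """video_feats: (T, num_parts, FEAT) per-frame, per-part features.
    Returns the final hidden state of every part, shape (num_parts, HID)."""
    h = np.zeros((len(PARTS), HID))
    for t in range(video_feats.shape[0]):
        new_h = np.zeros_like(h)
        # Parents precede children in PARTS, so new_h[PARENT[p]] is ready.
        for p in range(len(PARTS)):
            parent_h = new_h[PARENT[p]] if PARENT[p] >= 0 else np.zeros(HID)
            x = np.concatenate([video_feats[t, p], parent_h])
            new_h[p] = cells[p].step(x, h[p])
        h = new_h
    return h

feats = rng.standard_normal((8, len(PARTS), FEAT))  # an 8-frame toy clip
hidden = forward(feats)
print(hidden.shape)  # (6, 32)
```

Conditioning each child cell on its parent's current-frame state is one simple way to inject the kinematic structure; the paper trains such a structure-informed architecture end-to-end with mesh annotations.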

Results

| Task                      | Dataset      | Metric             | Value | Model |
|---------------------------|--------------|--------------------|-------|-------|
| 3D Human Pose Estimation  | MPI-INF-3DHP | MPJPE              | 94.6  | LMR   |
| 3D Human Pose Estimation  | MPI-INF-3DHP | PA-MPJPE           | 62.4  | LMR   |
| 3D Human Pose Estimation  | 3DPW         | Acceleration Error | 15.6  | LMR   |
| 3D Human Pose Estimation  | 3DPW         | MPJPE              | 81.7  | LMR   |
| 3D Human Pose Estimation  | 3DPW         | MPVPE              | 93.6  | LMR   |
| 3D Human Pose Estimation  | 3DPW         | PA-MPJPE           | 51.2  | LMR   |
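The two main metrics in the table are standard in this literature: MPJPE (mean per-joint position error, in mm) and PA-MPJPE (the same error after a rigid Procrustes alignment of the prediction to the ground truth). A minimal sketch of both, with illustrative random data rather than any benchmark's actual joints:

```python
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: (J, 3) joint positions. Mean Euclidean distance per joint."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after similarity (Procrustes) alignment: the best-fitting
    scale, rotation, and translation mapping pred onto gt is removed."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        S = S.copy(); S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)

# Illustration: a prediction that is just a rotated + shifted copy of the
# ground truth has a large MPJPE, but PA-MPJPE is ~0 because the alignment
# removes the rigid transform.
gt = np.random.default_rng(1).standard_normal((17, 3))
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
pred = gt @ Rz.T + np.array([0.1, -0.2, 0.3])
print(mpjpe(pred, gt), pa_mpjpe(pred, gt))
```

PA-MPJPE isolates pose-shape accuracy from global orientation and position errors, which is why the table reports both; MPVPE applies the same per-vertex distance to the full recovered mesh rather than the joints.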

Related Papers

Systematic Comparison of Projection Methods for Monocular 3D Human Pose Estimation on Fisheye Images (2025-06-24)
ExtPose: Robust and Coherent Pose Estimation by Extending ViTs (2025-06-18)
PoseGRAF: Geometric-Reinforced Adaptive Fusion for Monocular 3D Human Pose Estimation (2025-06-17)
MetricHMR: Metric Human Mesh Recovery from Monocular Images (2025-06-11)
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation (2025-06-03)
UPTor: Unified 3D Human Pose Dynamics and Trajectory Prediction for Human-Robot Interaction (2025-05-20)
PoseBench3D: A Cross-Dataset Analysis Framework for 3D Human Pose Estimation (2025-05-16)
ADHMR: Aligning Diffusion-based Human Mesh Recovery via Direct Preference Optimization (2025-05-15)