Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Probabilistic Modeling for Human Mesh Recovery

Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, Kostas Daniilidis

2021-08-26 · ICCV 2021

Tasks: 3D Human Pose Estimation · Multi-Hypotheses 3D Human Pose Estimation · Human Mesh Recovery · 3D Human Reconstruction

Links: Paper · PDF · Code (official)

Abstract

This paper focuses on the problem of 3D human reconstruction from 2D evidence. Although this is an inherently ambiguous problem, the majority of recent works avoid the uncertainty modeling and typically regress a single estimate for a given input. In contrast to that, in this work, we propose to embrace the reconstruction ambiguity and we recast the problem as learning a mapping from the input to a distribution of plausible 3D poses. Our approach is based on the normalizing flows model and offers a series of advantages. For conventional applications, where a single 3D estimate is required, our formulation allows for efficient mode computation. Using the mode leads to performance that is comparable with the state of the art among deterministic unimodal regression models. Simultaneously, since we have access to the likelihood of each sample, we demonstrate that our model is useful in a series of downstream tasks, where we leverage the probabilistic nature of the prediction as a tool for more accurate estimation. These tasks include reconstruction from multiple uncalibrated views, as well as human model fitting, where our model acts as a powerful image-based prior for mesh recovery. Our results validate the importance of probabilistic modeling, and indicate state-of-the-art performance across a variety of settings. Code and models are available at: https://www.seas.upenn.edu/~nkolot/projects/prohmr.
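The abstract describes mapping an input image to a distribution over plausible 3D poses with a conditional normalizing flow, and notes that the mode can be computed efficiently for single-estimate applications. The following is a minimal, hypothetical sketch of that idea: a single conditional affine flow layer (a stand-in, not ProHMR's actual architecture), where sampling, exact log-likelihood via the change of variables, and closed-form mode computation (pushing the base mode z = 0 through the flow) are all available. The conditioning "networks" here are toy fixed linear maps.

```python
import numpy as np

class ConditionalAffineFlow:
    """Hypothetical sketch of a conditional distribution p(theta | c) built as
    a normalizing flow: theta = mu(c) + exp(s(c)) * z with z ~ N(0, I).
    A single affine layer, so the mode is the base mode z = 0 pushed forward."""

    def __init__(self, dim, rng=None):
        self.dim = dim
        rng = rng or np.random.default_rng(0)
        # Toy "conditioning networks": fixed random linear maps of the context c.
        self.W_mu = rng.normal(size=(dim, dim)) * 0.1
        self.W_s = rng.normal(size=(dim, dim)) * 0.01

    def _params(self, c):
        mu = self.W_mu @ c          # location as a function of the context
        log_sigma = self.W_s @ c    # log-scale as a function of the context
        return mu, log_sigma

    def sample(self, c, n, rng=None):
        """Draw n hypotheses theta = g(z; c) for one context c."""
        rng = rng or np.random.default_rng(1)
        mu, log_s = self._params(c)
        z = rng.normal(size=(n, self.dim))
        return mu + np.exp(log_s) * z

    def mode(self, c):
        """Closed-form mode: push the base density's mode z = 0 through g."""
        mu, _ = self._params(c)
        return mu

    def log_prob(self, theta, c):
        """Exact log-likelihood via the change-of-variables formula."""
        mu, log_s = self._params(c)
        z = (theta - mu) / np.exp(log_s)                  # inverse flow
        log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(-1)
        return log_base - log_s.sum()                     # log|det J| correction
```

Because every sample comes with an exact likelihood, downstream tasks (multi-view reconstruction, model fitting) can score and reweight hypotheses rather than committing to a single regression output.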

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| 3D Human Pose Estimation | 3DPW | PA-MPJPE | 55.1 | ProHMR + fitting |
| 3D Human Pose Estimation | 3DPW | PA-MPJPE | 59.9 | Biggs [3] |
| 3D Human Pose Estimation | 3DPW | PA-MPJPE | 65 | ProHMR |
| 3D Human Pose Estimation | AH36M | Best-Hypothesis PMPJPE (n = 25) | 60.1 | ProHMR |
| 3D Human Pose Estimation | AH36M | H36M PMPJPE (n = 1) | 41.2 | ProHMR |
| 3D Human Pose Estimation | AH36M | H36M PMPJPE (n = 25) | 36.8 | ProHMR |
| 3D Human Pose Estimation | AH36M | Most-Likely Hypothesis PMPJPE (n = 1) | 67.3 | ProHMR |
| 3D Human Pose Estimation | AH36M | Best-Hypothesis PMPJPE (n = 25) | 82.6 | ProHMR (2D Vis, by MHEntropy) |
| 3D Human Pose Estimation | AH36M | H36M PMPJPE (n = 25) | 64.3 | ProHMR (2D Vis, by MHEntropy) |
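The PA-MPJPE numbers in the table are Procrustes-aligned mean per-joint position errors, reported in millimetres: the predicted joints are first aligned to the ground truth with the optimal similarity transform (rotation, scale, translation), and the per-joint Euclidean distances are then averaged. A minimal sketch of the metric (my own implementation of the standard alignment, not code from the paper):

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE. pred, gt: (J, 3) arrays of joint positions.
    Removes the optimal similarity transform (rotation, scale, translation)
    from pred before averaging per-joint Euclidean distances."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g                    # center both point sets
    # Optimal rotation via SVD of the cross-covariance (Kabsch/Umeyama).
    U, S, Vt = np.linalg.svd(P.T @ G)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ D @ U.T
    scale = (S * np.diag(D)).sum() / (P ** 2).sum()  # optimal isotropic scale
    aligned = scale * P @ R.T + mu_g                 # map pred onto gt's frame
    return np.linalg.norm(aligned - gt, axis=1).mean()
```

For the multi-hypothesis metrics (n = 25), the usual protocol is to draw n samples from the model and report either the error of the best hypothesis (minimum over samples) or of the most likely one.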

Related Papers

- Systematic Comparison of Projection Methods for Monocular 3D Human Pose Estimation on Fisheye Images (2025-06-24)
- ExtPose: Robust and Coherent Pose Estimation by Extending ViTs (2025-06-18)
- PoseGRAF: Geometric-Reinforced Adaptive Fusion for Monocular 3D Human Pose Estimation (2025-06-17)
- PF-LHM: 3D Animatable Avatar Reconstruction from Pose-free Articulated Human Images (2025-06-16)
- SMPL Normal Map Is All You Need for Single-view Textured Human Reconstruction (2025-06-15)
- MetricHMR: Metric Human Mesh Recovery from Monocular Images (2025-06-11)
- Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation (2025-06-03)
- HumanRAM: Feed-forward Human Reconstruction and Animation Model using Transformers (2025-06-03)