Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image

Denis Tome, Chris Russell, Lourdes Agapito

2017-01-01 · CVPR 2017
Tasks: 3D Human Pose Estimation, Weakly-supervised 3D Human Pose Estimation, Monocular 3D Human Pose Estimation, Pose Estimation, 3D Pose Estimation
Links: Paper · PDF · Code

Abstract

We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient, and obtains state-of-the-art results on Human3.6M, outperforming previous approaches on both 2D and 3D errors.

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 88.39 | Projected-pose belief maps + 2D fusion layers
3D Human Pose Estimation | Human3.6M | Frames Needed | 1 | Projected-pose belief maps + 2D fusion layers
3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 88.4 | Tome et al.
3D Human Pose Estimation | Human3.6M | Number of Frames Per View | 1 | Tome et al.
3D Human Pose Estimation | Human3.6M | Number of Views | 1 | Tome et al.
Pose Estimation | Human3.6M | Average MPJPE (mm) | 88.39 | Projected-pose belief maps + 2D fusion layers
Pose Estimation | Human3.6M | Frames Needed | 1 | Projected-pose belief maps + 2D fusion layers
Pose Estimation | Human3.6M | Average MPJPE (mm) | 88.4 | Tome et al.
Pose Estimation | Human3.6M | Number of Frames Per View | 1 | Tome et al.
Pose Estimation | Human3.6M | Number of Views | 1 | Tome et al.
3D | Human3.6M | Average MPJPE (mm) | 88.39 | Projected-pose belief maps + 2D fusion layers
3D | Human3.6M | Frames Needed | 1 | Projected-pose belief maps + 2D fusion layers
3D | Human3.6M | Average MPJPE (mm) | 88.4 | Tome et al.
3D | Human3.6M | Number of Frames Per View | 1 | Tome et al.
3D | Human3.6M | Number of Views | 1 | Tome et al.
1 Image, 2*2 Stitchi | Human3.6M | Average MPJPE (mm) | 88.39 | Projected-pose belief maps + 2D fusion layers
1 Image, 2*2 Stitchi | Human3.6M | Frames Needed | 1 | Projected-pose belief maps + 2D fusion layers
1 Image, 2*2 Stitchi | Human3.6M | Average MPJPE (mm) | 88.4 | Tome et al.
1 Image, 2*2 Stitchi | Human3.6M | Number of Frames Per View | 1 | Tome et al.
1 Image, 2*2 Stitchi | Human3.6M | Number of Views | 1 | Tome et al.
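The tables above report Average MPJPE (mm), i.e. Mean Per-Joint Position Error: the Euclidean distance between each predicted and ground-truth 3D joint, averaged over all joints (and, in benchmark practice, over all frames). A minimal sketch of the standard computation — the function name and the 17-joint skeleton are illustrative, not taken from the paper's code:

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Per-Joint Position Error in mm.

    pred, gt: arrays of shape (num_joints, 3) holding 3D joint
    coordinates in millimetres. Returns the mean Euclidean distance
    between corresponding joints.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: a 17-joint skeleton where every predicted joint is
# displaced by the vector (30, 40, 0) mm, i.e. exactly 50 mm off.
pred = np.zeros((17, 3))
gt = np.tile([30.0, 40.0, 0.0], (17, 1))
print(mpjpe(pred, gt))  # 50.0
```

For a whole test set the same mean is simply taken over a (num_frames, num_joints, 3) array, which is what a Human3.6M score such as 88.39 mm summarizes.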

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)