Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation

Jianan Zhen, Qi Fang, Jiaming Sun, Wentao Liu, Wei Jiang, Hujun Bao, Xiaowei Zhou

2020-08-26 · ECCV 2020
Tasks: 3D Human Pose Estimation · Pose Estimation · 3D · Depth Estimation · 3D Multi-Person Pose Estimation (root-relative) · 3D Multi-Person Pose Estimation (absolute) · 3D Pose Estimation · 2D Pose Estimation · 3D Multi-Person Pose Estimation
Paper · PDF · Code

Abstract

Recovering multi-person 3D poses with absolute scales from a single RGB image is a challenging problem due to the inherent depth and scale ambiguity of a single view. Resolving this ambiguity requires aggregating various cues over the entire image, such as body sizes, scene layouts, and inter-person relationships. However, most previous methods adopt a top-down scheme that first performs 2D pose detection and then regresses the 3D pose and scale for each detected person individually, ignoring global contextual cues. In this paper, we propose a novel system that first regresses a set of 2.5D representations of body parts and then reconstructs the 3D absolute poses from these 2.5D representations with a depth-aware part association algorithm. Such a single-shot bottom-up scheme allows the system to better learn and reason about inter-person depth relationships, improving both 3D and 2D pose estimation. Experiments demonstrate that the proposed approach achieves state-of-the-art performance on the CMU Panoptic and MuPoTS-3D datasets and is applicable to in-the-wild videos.
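The pipeline described above ends by lifting 2.5D representations (2D keypoint locations plus per-joint depths) to absolute 3D poses. The following is a minimal sketch of the standard pinhole back-projection that such a lifting step relies on; the function name, signature, and intrinsics are illustrative, not the paper's actual API.

```python
import numpy as np

def backproject_25d(keypoints_2d, depths, fx, fy, cx, cy):
    """Lift 2.5D keypoints to 3D camera-space coordinates with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.

    keypoints_2d: (J, 2) array of pixel coordinates (u, v)
    depths:       (J,) array of absolute depths Z (e.g. in mm)
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point)
    Returns an (J, 3) array of (X, Y, Z) joint positions.
    """
    u, v = keypoints_2d[:, 0], keypoints_2d[:, 1]
    X = (u - cx) * depths / fx
    Y = (v - cy) * depths / fy
    return np.stack([X, Y, depths], axis=1)
```

A joint observed at the principal point maps to (0, 0, Z); depth and scale ambiguity enters because Z itself must be estimated, which is why the paper aggregates global image cues rather than treating each person in isolation.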

Results

Task | Dataset | Metric | Value | Model
3D Multi-Person Pose Estimation (root-relative) | MuPoTS-3D | 3DPCK | 73.5 | SMAP
3D Human Pose Estimation | Panoptic | Average MPJPE (mm) | 61.8 | SMAP
3D Human Pose Estimation | MuPoTS-3D | 3DPCK | 35.4 | SMAP
3D Human Pose Estimation | MuPoTS-3D | 3DPCK | 73.5 | SMAP
3D Multi-Person Pose Estimation (absolute) | MuPoTS-3D | 3DPCK | 35.4 | SMAP
Pose Estimation | Panoptic | Average MPJPE (mm) | 61.8 | SMAP
Pose Estimation | MuPoTS-3D | 3DPCK | 35.4 | SMAP
Pose Estimation | MuPoTS-3D | 3DPCK | 73.5 | SMAP
3D | Panoptic | Average MPJPE (mm) | 61.8 | SMAP
3D | MuPoTS-3D | 3DPCK | 35.4 | SMAP
3D | MuPoTS-3D | 3DPCK | 73.5 | SMAP
3D Multi-Person Pose Estimation | Panoptic | Average MPJPE (mm) | 61.8 | SMAP
3D Multi-Person Pose Estimation | MuPoTS-3D | 3DPCK | 35.4 | SMAP
3D Multi-Person Pose Estimation | MuPoTS-3D | 3DPCK | 73.5 | SMAP
1 Image, 2*2 Stitchi | Panoptic | Average MPJPE (mm) | 61.8 | SMAP
1 Image, 2*2 Stitchi | MuPoTS-3D | 3DPCK | 35.4 | SMAP
1 Image, 2*2 Stitchi | MuPoTS-3D | 3DPCK | 73.5 | SMAP
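The table reports two metrics: Average MPJPE (mm) and 3DPCK. A minimal sketch of how these metrics are conventionally computed follows; the 150 mm threshold is the common MuPoTS-3D convention, and the function names are illustrative rather than taken from the paper's evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance (in the
    input units, typically mm) between predicted and ground-truth joints.
    pred, gt: (..., J, 3) arrays of 3D joint positions."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def pck3d(pred, gt, threshold=150.0):
    """3DPCK: percentage of joints whose 3D distance to ground truth
    is below `threshold` (150 mm is the usual MuPoTS-3D setting)."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float((dists < threshold).mean() * 100.0)
```

Lower is better for MPJPE, higher for 3DPCK, which is why the same model reports 61.8 mm on Panoptic but percentage scores (73.5 root-relative, 35.4 absolute) on MuPoTS-3D.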

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)