
Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks

Yu Cheng, Bo Wang, Bo Yang, Robby T. Tan

2021-04-05 · CVPR 2021

Tasks: Human Detection · Pose Estimation · Multi-Person Pose Estimation · 3D Multi-Person Pose Estimation (root-relative) · 3D Multi-Person Pose Estimation (absolute) · 3D Multi-Person Pose Estimation

Paper · PDF · Code (official)

Abstract

In monocular-video 3D multi-person pose estimation, inter-person occlusion and close interactions can make human detection erroneous and human-joint grouping unreliable. Existing top-down methods rely on human detection and thus suffer from these problems. Existing bottom-up methods do not use human detection, but they process all persons at once at a single scale, making them sensitive to scale variation across persons. To address these challenges, we propose integrating the top-down and bottom-up approaches to exploit their complementary strengths. Our top-down network estimates the joints of all persons in an image patch rather than only one, making it robust to erroneous bounding boxes. Our bottom-up network incorporates human-detection-based normalized heatmaps, making it more robust to scale variation. The estimated 3D poses from the top-down and bottom-up networks are then fed into our integration network to produce the final 3D poses. Beyond this integration, and unlike existing pose discriminators that are designed for a single person and consequently cannot assess natural inter-person interactions, we propose a two-person pose discriminator that enforces natural two-person interactions. Lastly, we also apply a semi-supervised method to overcome the scarcity of 3D ground-truth data. Our quantitative and qualitative evaluations show the effectiveness of our method compared to the state-of-the-art baselines.
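The fusion described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the two branch functions are random stand-ins for the trained top-down and bottom-up networks, the joint count `J` is an assumption, and the learned integration network is replaced by a simple per-joint confidence-weighted average.

```python
import numpy as np

J = 15  # joints per person (an assumption; stand-in value for this sketch)

def top_down_branch(patches):
    # Stand-in for the top-down network: one (3D pose, per-joint
    # confidence) pair per detected patch. Real output would come
    # from a trained model.
    return [(np.random.rand(J, 3), np.random.rand(J)) for _ in patches]

def bottom_up_branch(image, n_persons):
    # Stand-in for the bottom-up network operating on the whole image,
    # returning one pose estimate per person.
    return [(np.random.rand(J, 3), np.random.rand(J)) for _ in range(n_persons)]

def integrate(td, bu):
    # Placeholder for the learned integration network: a per-joint
    # confidence-weighted average of the two branches' 3D poses.
    fused = []
    for (p_td, c_td), (p_bu, c_bu) in zip(td, bu):
        w = c_td / (c_td + c_bu + 1e-8)
        fused.append(w[:, None] * p_td + (1.0 - w)[:, None] * p_bu)
    return fused

image = np.zeros((256, 256, 3))
patches = [image[:128, :128], image[128:, 128:]]  # pretend detections
poses = integrate(top_down_branch(patches),
                  bottom_up_branch(image, len(patches)))
print(len(poses), poses[0].shape)  # 2 persons, each a (J, 3) pose
```

The point of the weighted average is only to show where the two streams meet; in the paper, that step is a trained network that can also correct one branch using the other.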

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Multi-Person Pose Estimation (root-relative) | MuPoTS-3D | 3DPCK | 89.6 | TDBU_Net |
| 3D Human Pose Estimation | MuPoTS-3D | 3DPCK | 48 | TDBU_Net |
| 3D Human Pose Estimation | MuPoTS-3D | 3DPCK | 89.6 | TDBU_Net |
| 3D Multi-Person Pose Estimation (absolute) | MuPoTS-3D | 3DPCK | 48 | TDBU_Net |
| Pose Estimation | MuPoTS-3D | 3DPCK | 48 | TDBU_Net |
| Pose Estimation | MuPoTS-3D | 3DPCK | 89.6 | TDBU_Net |
| 3D | MuPoTS-3D | 3DPCK | 48 | TDBU_Net |
| 3D | MuPoTS-3D | 3DPCK | 89.6 | TDBU_Net |
| 3D Multi-Person Pose Estimation | MuPoTS-3D | 3DPCK | 48 | TDBU_Net |
| 3D Multi-Person Pose Estimation | MuPoTS-3D | 3DPCK | 89.6 | TDBU_Net |
| 1 Image, 2*2 Stitchi | MuPoTS-3D | 3DPCK | 48 | TDBU_Net |
| 1 Image, 2*2 Stitchi | MuPoTS-3D | 3DPCK | 89.6 | TDBU_Net |
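The 3DPCK metric used throughout the table is the percentage of joints whose 3D position error falls below a distance threshold; 150 mm is the customary threshold for MuPoTS-3D. A minimal sketch, assuming predictions and ground truth are aligned arrays of per-person joint coordinates in millimetres:

```python
import numpy as np

def pck_3d(pred, gt, thresh_mm=150.0):
    # Per-joint Euclidean error in millimetres; a joint counts as
    # correct when its error is below the threshold (150 mm is the
    # customary MuPoTS-3D setting).
    err = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=-1)
    return 100.0 * np.mean(err < thresh_mm)

gt = np.zeros((2, 15, 3))        # toy data: 2 persons x 15 joints
pred = gt.copy()
pred[0, 0] = [200.0, 0.0, 0.0]   # one joint off by 200 mm -> incorrect
print(round(pck_3d(pred, gt), 1))  # 29/30 joints correct -> 96.7
```

Note this toy version assumes a one-to-one person matching is already given; the benchmark additionally matches predicted persons to ground-truth persons before scoring, and the root-relative variant subtracts each person's root joint first.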

Related Papers

- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
- DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
- From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
- AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
- SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
- SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)