Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Direct Multi-view Multi-person 3D Pose Estimation

Tao Wang, Jianfeng Zhang, Yujun Cai, Shuicheng Yan, Jiashi Feng

Published: 2021-11-07 (NeurIPS 2021)
Tasks: Pose Estimation · 3D Pose Estimation · 3D Multi-Person Pose Estimation
Links: Paper · PDF · Code (official)

Abstract

We present Multi-view Pose transformer (MvP) for estimating multi-person 3D poses from multi-view images. Instead of estimating 3D joint locations from costly volumetric representation or reconstructing the per-person 3D pose from multiple detected 2D poses as in previous methods, MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks. Specifically, MvP represents skeleton joints as learnable query embeddings and lets them progressively attend to and reason over the multi-view information from the input images to directly regress the actual 3D joint locations. To improve the accuracy of such a simple pipeline, MvP presents a hierarchical scheme to concisely represent query embeddings of multi-person skeleton joints and introduces an input-dependent query adaptation approach. Further, MvP designs a novel geometrically guided attention mechanism, called projective attention, to more precisely fuse the cross-view information for each joint. MvP also introduces a RayConv operation to integrate the view-dependent camera geometry into the feature representations for augmenting the projective attention. We show experimentally that our MvP model outperforms the state-of-the-art methods on several benchmarks while being much more efficient. Notably, it achieves 92.3% AP25 on the challenging Panoptic dataset, improving upon the previous best approach [36] by 9.8%. MvP is general and also extendable to recovering human mesh represented by the SMPL model, thus useful for modeling multi-person body shapes. Code and models are available at https://github.com/sail-sg/mvp.
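The core idea of projective attention can be illustrated with a minimal sketch: each 3D joint estimate is projected into every camera view, a feature is sampled near the projected pixel, and the per-view features are fused with attention weights. This is an illustrative simplification (nearest-neighbour sampling, a single query, numpy instead of the authors' PyTorch code); all function names here are hypothetical, not the official MvP API.

```python
# Hypothetical sketch of MvP-style projective attention: project a 3D joint
# into each view, sample a feature at the projected pixel, fuse across views
# with softmax attention. Not the authors' implementation.
import numpy as np

def project(P, x3d):
    """Project a 3D point with a 3x4 camera matrix P; return 2D pixel coords."""
    xh = P @ np.append(x3d, 1.0)          # homogeneous projection
    return xh[:2] / xh[2]

def projective_attention(query, x3d, cams, feat_maps):
    """Fuse per-view features sampled at the joint's 2D projections.

    query:     (d,) joint query embedding
    x3d:       (3,) current 3D joint estimate
    cams:      list of 3x4 projection matrices
    feat_maps: list of (H, W, d) per-view feature maps
    """
    sampled = []
    for P, fmap in zip(cams, feat_maps):
        u, v = project(P, x3d)
        H, W, _ = fmap.shape
        # nearest-neighbour sampling clamped to the image (bilinear in practice)
        i = int(np.clip(round(float(v)), 0, H - 1))
        j = int(np.clip(round(float(u)), 0, W - 1))
        sampled.append(fmap[i, j])
    sampled = np.stack(sampled)                        # (n_views, d)
    scores = sampled @ query / np.sqrt(len(query))     # per-view attention logits
    w = np.exp(scores - scores.max())
    w /= w.sum()                                       # softmax over views
    return w @ sampled                                 # (d,) fused feature

# Toy usage: two identity-like cameras, random feature maps.
rng = np.random.default_rng(0)
cams = [np.hstack([np.eye(3), np.zeros((3, 1))]) for _ in range(2)]
feats = [rng.standard_normal((8, 8, 4)) for _ in range(2)]
fused = projective_attention(rng.standard_normal(4),
                             np.array([2.0, 3.0, 1.0]), cams, feats)
print(fused.shape)  # (4,)
```

In the actual model this fusion happens inside transformer decoder layers, with the RayConv operation injecting camera-ray geometry into the feature maps before sampling.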

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Human Pose Estimation | Panoptic | Average MPJPE (mm) | 15.8 | MvP |
| 3D Human Pose Estimation | Shelf | PCP3D | 97.4 | MvP |
| 3D Human Pose Estimation | Campus | PCP3D | 96.6 | MvP |
| Pose Estimation | Panoptic | Average MPJPE (mm) | 15.8 | MvP |
| Pose Estimation | Shelf | PCP3D | 97.4 | MvP |
| Pose Estimation | Campus | PCP3D | 96.6 | MvP |
| 3D | Panoptic | Average MPJPE (mm) | 15.8 | MvP |
| 3D | Shelf | PCP3D | 97.4 | MvP |
| 3D | Campus | PCP3D | 96.6 | MvP |
| 3D Multi-Person Pose Estimation | Panoptic | Average MPJPE (mm) | 15.8 | MvP |
| 3D Multi-Person Pose Estimation | Shelf | PCP3D | 97.4 | MvP |
| 3D Multi-Person Pose Estimation | Campus | PCP3D | 96.6 | MvP |
| 1 Image, 2*2 Stitchi | Panoptic | Average MPJPE (mm) | 15.8 | MvP |
| 1 Image, 2*2 Stitchi | Shelf | PCP3D | 97.4 | MvP |
| 1 Image, 2*2 Stitchi | Campus | PCP3D | 96.6 | MvP |

Related Papers

- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
- DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
- From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
- AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
- SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
- SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)