Unsupervised Cross-Modal Alignment for Multi-Person 3D Pose Estimation

Jogendra Nath Kundu, Ambareesh Revanur, Govind Vitthal Waghmare, Rahul Mysore Venkatesh, R. Venkatesh Babu

Published: 2020-08-04 · ECCV 2020
Tasks: 3D Human Pose Estimation · Unsupervised 3D Multi-Person Pose Estimation · Cross-Modal Alignment · Unsupervised 3D Human Pose Estimation · Pose Estimation · 3D Pose Estimation · 2D Pose Estimation · 3D Multi-Person Pose Estimation
Paper · PDF · Code

Abstract

We present a deployment-friendly, fast, bottom-up framework for multi-person 3D human pose estimation. We adopt a novel neural representation of multi-person 3D pose that unifies the positions of person instances with their corresponding 3D pose representations. This is realized by learning a generative pose embedding which not only ensures plausible 3D pose predictions, but also eliminates the keypoint-grouping operation employed in prior bottom-up approaches. Further, we propose a practical deployment paradigm in which paired 2D or 3D pose annotations are unavailable. In the absence of any paired supervision, we leverage a frozen network, trained on the auxiliary task of multi-person 2D pose estimation, as a teacher model. We cast the learning as a cross-modal alignment problem and propose training objectives that realize a shared latent space between the two diverse modalities. We aim to enhance the model's ability to perform beyond the limits of the teacher network by enriching the latent-to-3D pose mapping with artificially synthesized multi-person 3D scene samples. Our approach not only generalizes to in-the-wild images, but also yields a superior speed-performance trade-off compared to prior top-down approaches, and achieves state-of-the-art multi-person 3D pose estimation performance among bottom-up approaches under consistent supervision levels.

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | MuPoTS-3D | 3DPCK | 78.4 | Unsupervised Cross-Modal Alignment
Pose Estimation | MuPoTS-3D | 3DPCK | 78.4 | Unsupervised Cross-Modal Alignment
3D Pose Estimation | MuPoTS-3D | 3DPCK | 78.4 | Unsupervised Cross-Modal Alignment
3D Multi-Person Pose Estimation | MuPoTS-3D | 3DPCK | 78.4 | Unsupervised Cross-Modal Alignment
1 Image, 2*2 Stitching | MuPoTS-3D | 3DPCK | 78.4 | Unsupervised Cross-Modal Alignment

Related Papers

Transformer-based Spatial Grounding: A Comprehensive Survey (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)