Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation

Jogendra Nath Kundu, Siddharth Seth, Pradyumna YM, Varun Jampani, Anirban Chakraborty, R. Venkatesh Babu

2022-03-29 · CVPR 2022

Tasks: 3D Human Pose Estimation · Weakly-Supervised 3D Human Pose Estimation · Monocular 3D Human Pose Estimation · Unsupervised 3D Human Pose Estimation · Pose Estimation · Unsupervised Domain Adaptation · Domain Adaptation

Paper · PDF

Abstract

The advances in monocular 3D human pose estimation are dominated by supervised techniques that require large-scale 2D/3D pose annotations. Such methods often behave erratically in the absence of any provision to discard unfamiliar out-of-distribution data. To this end, we cast 3D human pose learning as an unsupervised domain adaptation problem. We introduce MRP-Net, which comprises a common deep network backbone with two output heads subscribing to two diverse configurations: a) model-free joint localization and b) model-based parametric regression. Such a design allows us to derive suitable measures to quantify prediction uncertainty at both pose- and joint-level granularity. While supervising only on labeled synthetic samples, the adaptation process aims to minimize the uncertainty for the unlabeled target images while maximizing it for an extreme out-of-distribution dataset (backgrounds). Alongside synthetic-to-real 3D pose adaptation, the joint uncertainties allow expanding the adaptation to work on in-the-wild images, even in the presence of occlusion and truncation. We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
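The abstract's core adaptation signal can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: it treats the per-joint disagreement between the two heads (model-free joint localization vs. model-based parametric regression) as an uncertainty proxy, then forms an objective that minimizes that uncertainty on unlabeled target images while maximizing it on out-of-distribution background images. The function names and the exact disagreement formulation are assumptions for illustration.

```python
import numpy as np

def joint_uncertainty(joints_free: np.ndarray, joints_param: np.ndarray) -> np.ndarray:
    """Per-joint disagreement between the two prediction heads.

    joints_free, joints_param: (batch, joints, 3) 3D joint predictions from
    the model-free and model-based heads. Returns (batch, joints) distances,
    used here as a stand-in uncertainty measure (hypothetical formulation).
    """
    return np.linalg.norm(joints_free - joints_param, axis=-1)

def adaptation_loss(u_target: np.ndarray, u_ood: np.ndarray) -> float:
    """Uncertainty-driven adaptation objective (illustrative sketch).

    Minimizing this loss pushes uncertainty down on unlabeled target images
    and up on extreme out-of-distribution (background) images, mirroring the
    min/max behaviour described in the abstract.
    """
    return float(u_target.mean() - u_ood.mean())
```

In a real training loop these quantities would be computed on network outputs and backpropagated; here plain numpy is used only to make the shape of the objective concrete.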

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 59.4 | Uncertainty-Aware Adaptation |
| 3D Human Pose Estimation | Human3.6M | PA-MPJPE | 49.6 | Uncertainty-Aware Adaptation |
| 3D Reconstruction | Human3.6M | MPJPE | 103.2 | Uncertainty-Aware Adaptation |
| 3D Reconstruction | Human3.6M | PA-MPJPE | 88.9 | Uncertainty-Aware Adaptation |
| Pose Estimation | Human3.6M | Average MPJPE (mm) | 59.4 | Uncertainty-Aware Adaptation |
| Pose Estimation | Human3.6M | PA-MPJPE | 49.6 | Uncertainty-Aware Adaptation |
| 3D | Human3.6M | Average MPJPE (mm) | 59.4 | Uncertainty-Aware Adaptation |
| 3D | Human3.6M | PA-MPJPE | 49.6 | Uncertainty-Aware Adaptation |
| 3D | Human3.6M | MPJPE | 103.2 | Uncertainty-Aware Adaptation |
| 3D | Human3.6M | PA-MPJPE | 88.9 | Uncertainty-Aware Adaptation |
| 1 Image, 2*2 Stitchi | Human3.6M | Average MPJPE (mm) | 59.4 | Uncertainty-Aware Adaptation |
| 1 Image, 2*2 Stitchi | Human3.6M | PA-MPJPE | 49.6 | Uncertainty-Aware Adaptation |
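The table reports both MPJPE (mean per-joint position error, in mm) and PA-MPJPE (the same error after rigid Procrustes alignment of the prediction to the ground truth, which removes global scale, rotation, and translation). A minimal numpy sketch of these standard metrics, assuming `(joints, 3)` arrays in millimetres:

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint position error, in the input units (mm)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def pa_mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Procrustes-aligned MPJPE: optimally align pred to gt (similarity
    transform: scale, rotation, translation) before measuring error."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g          # center both skeletons
    U, s, Vt = np.linalg.svd(p.T @ g)      # SVD of the cross-covariance
    # Guard against reflections (orthogonal Procrustes with det(R) = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                     # optimal rotation
    scale = (s * np.diag(D)).sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

By construction, PA-MPJPE is always at most the MPJPE of an optimally re-aligned pose, which is why the aligned numbers in the table (49.6, 88.9) are lower than their unaligned counterparts (59.4, 103.2).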

Related Papers

- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
- DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
- From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
- AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
- SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)