Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DECA: Deep viewpoint-Equivariant human pose estimation using Capsule Autoencoders

Nicola Garau, Niccolò Bisagno, Piotr Bródka, Nicola Conci

2021-08-19 · ICCV 2021 · 3D Human Pose Estimation · Monocular 3D Human Pose Estimation · Pose Estimation

Paper · PDF · Code (official)

Abstract

Human Pose Estimation (HPE) aims at retrieving the 3D position of human joints from images or videos. We show that current 3D HPE methods suffer from a lack of viewpoint equivariance, namely they tend to fail or perform poorly when dealing with viewpoints unseen at training time. Deep learning methods often rely on scale-invariant, translation-invariant, or rotation-invariant operations, such as max-pooling. However, the adoption of such procedures does not necessarily improve viewpoint generalization, but rather leads to more data-dependent methods. To tackle this issue, we propose a novel capsule autoencoder network with fast Variational Bayes capsule routing, named DECA. By modeling each joint as a capsule entity, combined with the routing algorithm, our approach can preserve the joints' hierarchical and geometrical structure in the feature space, independently of the viewpoint. By achieving viewpoint equivariance, we drastically reduce the network's data dependency at training time, resulting in an improved ability to generalize to unseen viewpoints. In the experimental validation, we outperform other methods on depth images from both seen and unseen viewpoints, both top-view and front-view. In the RGB domain, the same network gives state-of-the-art results on the challenging viewpoint transfer task, also establishing a new framework for top-view HPE. The code can be found at https://github.com/mmlab-cv/DECA.
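The abstract describes routing between capsules so that joint capsules agree on a shared pose representation. The paper's fast Variational Bayes routing is not reproduced here; as a rough intuition, the classic agreement-based routing it builds on can be sketched as follows (a minimal illustration with hypothetical shapes, not DECA's actual algorithm):

```python
import numpy as np

def route_capsules(votes, n_iters=3):
    """Simplified agreement-based routing between two capsule layers.

    votes: (n_in, n_out, d) array -- each lower-level capsule's vote for
    each higher-level capsule's pose vector. This is a generic
    routing-by-agreement sketch, NOT the paper's Variational Bayes routing.
    """
    n_in, n_out, d = votes.shape
    logits = np.zeros((n_in, n_out))  # routing logits, start uniform
    for _ in range(n_iters):
        # coupling coefficients: softmax of logits over output capsules
        c = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        # weighted sum of votes -> candidate output pose vectors
        out = (c[..., None] * votes).sum(axis=0)          # (n_out, d)
        out /= np.linalg.norm(out, axis=1, keepdims=True) + 1e-8
        # strengthen routes whose votes agree with the consensus pose
        logits += np.einsum('iod,od->io', votes, out)
    return out

# toy example: 4 part capsules voting for 2 higher-level capsules
rng = np.random.default_rng(0)
poses = route_capsules(rng.normal(size=(4, 2, 8)))
print(poses.shape)  # (2, 8)
```

The agreement step is what lets the representation preserve part-whole geometric relationships: votes that are consistent with the consensus pose get routed more strongly on the next iteration.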

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Pose Estimation | ITOP top-view | Mean mAP | 86.92 | DECA-D3
Pose Estimation | ITOP front-view | Mean mAP | 88.75 | DECA-D3
3D | ITOP top-view | Mean mAP | 86.92 | DECA-D3
3D | ITOP front-view | Mean mAP | 88.75 | DECA-D3
1 Image, 2*2 Stitchi | ITOP top-view | Mean mAP | 86.92 | DECA-D3
1 Image, 2*2 Stitchi | ITOP front-view | Mean mAP | 88.75 | DECA-D3
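The Mean mAP values above follow the common ITOP convention: a joint counts as detected if its predicted 3D location falls within 10 cm of ground truth, and the detection rate is averaged over joints. A minimal sketch assuming that convention (check the paper for the exact evaluation protocol):

```python
import numpy as np

def mean_map_10cm(pred, gt, threshold=0.10):
    """Mean mAP under the assumed ITOP-style 10 cm criterion.

    pred, gt: (n_frames, n_joints, 3) arrays of 3D joint positions in
    metres. Returns the per-joint detection rate within `threshold`,
    averaged over joints, as a percentage.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)    # (n_frames, n_joints)
    per_joint = (dists < threshold).mean(axis=0)  # detection rate per joint
    return 100.0 * per_joint.mean()

# toy check: predictions 5 cm off along x are all within the threshold
gt = np.zeros((10, 15, 3))
pred = gt + np.array([0.05, 0.0, 0.0])
print(mean_map_10cm(pred, gt))  # 100.0
```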

Related Papers

- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
- DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
- From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
- AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
- SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
- SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)