Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation from a Single Depth Image

Fu Xiong, Boshen Zhang, Yang Xiao, Zhiguo Cao, Taidong Yu, Joey Tianyi Zhou, Junsong Yuan

2019-08-27 · ICCV 2019 · Pose Estimation · Depth Estimation · 3D Pose Estimation · Hand Pose Estimation
Paper · PDF · Code · Code (official)

Abstract

For the task of 3D hand and body pose estimation from a single depth image, a novel anchor-based approach termed Anchor-to-Joint regression network (A2J), with end-to-end learning ability, is proposed. Within A2J, anchor points able to capture global-local spatial context information are densely set on the depth image as local regressors for the joints. They contribute to predicting the joint positions in an ensemble way to enhance generalization ability. The proposed 3D articulated pose estimation paradigm differs from the state-of-the-art encoder-decoder based FCN, 3D CNN, and point-set based approaches. To discover the anchor points informative for a given joint, an anchor proposal procedure is also introduced in A2J. Meanwhile, a 2D CNN (i.e., ResNet-50) is used as the backbone network to drive A2J, without resorting to time-consuming 3D convolutional or deconvolutional layers. Experiments on three hand datasets and two body datasets verify A2J's superiority. Moreover, A2J runs at a high speed of around 100 FPS on a single NVIDIA 1080Ti GPU.
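The core idea of the abstract — dense anchors each voting for a joint location, combined with learned weights from the anchor proposal step — can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: in A2J the offsets and response weights come from a ResNet-50 backbone, and depth values are regressed as well; here they are toy inputs.

```python
import numpy as np

def a2j_joint_estimate(anchors, offsets, responses):
    """Ensemble a joint position from anchor votes (illustrative sketch).

    anchors:   (A, 2) array of anchor coordinates densely placed on the image
    offsets:   (A, 2) predicted anchor-to-joint offsets (assumed given here)
    responses: (A,) informativeness logits from the anchor proposal step
    """
    # Softmax over anchors: informative anchors get larger voting weights.
    weights = np.exp(responses - responses.max())
    weights /= weights.sum()
    # Each anchor's vote for the joint position is its location plus offset.
    votes = anchors + offsets
    # Weighted ensemble of all anchor votes.
    return (weights[:, None] * votes).sum(axis=0)

# Toy example: four anchors around the point (5, 5), all voting for it.
anchors = np.array([[4.0, 4.0], [4.0, 6.0], [6.0, 4.0], [6.0, 6.0]])
offsets = np.array([[5.0, 5.0]]) - anchors   # ideal offsets toward (5, 5)
responses = np.zeros(4)                      # uniform anchor weights
print(a2j_joint_estimate(anchors, offsets, responses))  # -> [5. 5.]
```

Because the estimate is a weighted average over many spatially distributed anchors, a few bad local regressors are smoothed out — the generalization benefit the abstract attributes to the ensemble.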

Results

Task | Dataset | Metric | Value | Model
Depth Estimation | NYU-Depth V2 | mAP | 8.61 | A2J
Hand | K2HPD | PDJ@5mm | 76.3 | A2J
Hand | ICVL Hands | Average 3D Error | 6.461 | A2J
Hand | ICVL Hands | FPS | 105.06 | A2J
Hand | NYU Hands | Average 3D Error | 8.61 | A2J
Hand | NYU Hands | FPS | 105.06 | A2J
Hand | HANDS 2017 | Average 3D Error | 8.57 | A2J
Pose Estimation | ITOP front-view | Mean mAP | 88 | A2J
Pose Estimation | K2HPD | FPS | 93.78 | A2J
Pose Estimation | K2HPD | PDJ@5mm | 76.3 | A2J
Pose Estimation | ICVL Hands | Average 3D Error | 6.461 | A2J
Pose Estimation | ICVL Hands | FPS | 105.06 | A2J
Pose Estimation | NYU Hands | Average 3D Error | 8.61 | A2J
Pose Estimation | NYU Hands | FPS | 105.06 | A2J
Pose Estimation | HANDS 2017 | Average 3D Error | 8.57 | A2J
Hand Pose Estimation | K2HPD | PDJ@5mm | 76.3 | A2J
Hand Pose Estimation | ICVL Hands | Average 3D Error | 6.461 | A2J
Hand Pose Estimation | ICVL Hands | FPS | 105.06 | A2J
Hand Pose Estimation | NYU Hands | Average 3D Error | 8.61 | A2J
Hand Pose Estimation | NYU Hands | FPS | 105.06 | A2J
Hand Pose Estimation | HANDS 2017 | Average 3D Error | 8.57 | A2J
3D | ITOP front-view | Mean mAP | 88 | A2J
3D | K2HPD | FPS | 93.78 | A2J
3D | K2HPD | PDJ@5mm | 76.3 | A2J
3D | ICVL Hands | Average 3D Error | 6.461 | A2J
3D | ICVL Hands | FPS | 105.06 | A2J
3D | NYU Hands | Average 3D Error | 8.61 | A2J
3D | NYU Hands | FPS | 105.06 | A2J
3D | HANDS 2017 | Average 3D Error | 8.57 | A2J
3D | NYU-Depth V2 | mAP | 8.61 | A2J
3D Pose Estimation | K2HPD | FPS | 93.78 | A2J
1 Image, 2*2 Stitchi | ITOP front-view | Mean mAP | 88 | A2J
1 Image, 2*2 Stitchi | K2HPD | FPS | 93.78 | A2J
1 Image, 2*2 Stitchi | K2HPD | PDJ@5mm | 76.3 | A2J
1 Image, 2*2 Stitchi | ICVL Hands | Average 3D Error | 6.461 | A2J
1 Image, 2*2 Stitchi | ICVL Hands | FPS | 105.06 | A2J
1 Image, 2*2 Stitchi | NYU Hands | Average 3D Error | 8.61 | A2J
1 Image, 2*2 Stitchi | NYU Hands | FPS | 105.06 | A2J
1 Image, 2*2 Stitchi | HANDS 2017 | Average 3D Error | 8.57 | A2J

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)