Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map

Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee

2017-11-20 · CVPR 2018
Tasks: 3D Human Pose Estimation · 3D Hand Pose Estimation · Pose Estimation · Hand Pose Estimation

Abstract

Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map share a common framework: they take a 2D depth map and directly regress the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. Although a depth map is intrinsically 3D data, many previous methods treat it as a 2D image, and the projection from 3D to 2D space can distort the shape of the actual object. This compels the network to perform perspective distortion-invariant estimation. The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly non-linear mapping, which causes difficulty in the learning procedure. To overcome these weaknesses, we first cast the 3D hand and human pose estimation problem from a single depth map as a voxel-to-voxel prediction that takes a 3D voxelized grid as input and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real time. Our system outperforms previous methods on almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available at https://github.com/mks0601/V2V-PoseNet_RELEASE.
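The voxel-to-voxel formulation described above has two plumbing steps around the 3D CNN: discretizing the depth points into a voxel occupancy grid, and converting the network's per-voxel likelihood volumes back into metric keypoint coordinates. A minimal NumPy sketch of those two steps is shown below; the grid resolution, cube size, and function names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def voxelize_depth(points, center, cube_size=250.0, grid=32):
    """Discretize 3D points (back-projected from a depth map) into a binary
    occupancy grid of shape (grid, grid, grid), cropped to a cube of side
    `cube_size` (e.g. mm) centered at `center`. Values are assumptions."""
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    # Map each 3D point into [0, grid) voxel indices relative to the crop cube.
    idx = ((points - (center - cube_size / 2.0)) / cube_size * grid).astype(int)
    keep = np.all((idx >= 0) & (idx < grid), axis=1)  # drop points outside cube
    vox[idx[keep, 0], idx[keep, 1], idx[keep, 2]] = 1.0
    return vox

def keypoints_from_likelihood(volumes, center, cube_size=250.0):
    """Recover metric 3D keypoint coordinates from per-voxel likelihood
    volumes of shape (num_joints, grid, grid, grid) via a per-joint argmax."""
    num_joints, grid = volumes.shape[0], volumes.shape[1]
    coords = []
    for j in range(num_joints):
        # Index of the most likely voxel for joint j.
        d, h, w = np.unravel_index(np.argmax(volumes[j]), volumes[j].shape)
        # Voxel index -> metric coordinates (voxel-center convention).
        coords.append((np.array([d, h, w]) + 0.5) / grid * cube_size
                      + center - cube_size / 2.0)
    return np.stack(coords)
```

The 3D CNN itself then maps the occupancy grid to one likelihood volume per joint; predicting per-voxel likelihoods rather than regressing coordinates directly is what makes the mapping closer to fully convolutional and easier to learn, per the abstract's argument.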

Results

Task                     | Dataset         | Metric                | Value | Model
Hand Pose Estimation     | MSRA Hands      | Average 3D Error (mm) | 7.49  | V2V-PoseNet
Hand Pose Estimation     | ICVL Hands      | Average 3D Error (mm) | 6.28  | V2V-PoseNet
Hand Pose Estimation     | NYU Hands       | Average 3D Error (mm) | 8.42  | V2V-PoseNet
Hand Pose Estimation     | HANDS 2017      | Average 3D Error (mm) | 9.95  | V2V-PoseNet
3D Human Pose Estimation | ITOP front-view | Mean mAP              | 88.74 | V2V-PoseNet
3D Human Pose Estimation | ITOP top-view   | Mean mAP              | 83.44 | V2V-PoseNet
