Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Camera-Space Hand Mesh Recovery via Semantic Aggregation and Adaptive 2D-1D Registration

Xingyu Chen, Yufeng Liu, Chongyang Ma, Jianlong Chang, Huayan Wang, Tian Chen, Xiaoyan Guo, Pengfei Wan, Wen Zheng

2021-03-04 · CVPR 2021 · 3D Hand Pose Estimation
Paper · PDF · Code (official)

Abstract

Recent years have witnessed significant progress in 3D hand mesh recovery. Nevertheless, because of the intrinsic 2D-to-3D ambiguity, recovering camera-space 3D information from a single RGB image remains challenging. To tackle this problem, we divide camera-space mesh recovery into two sub-tasks, i.e., root-relative mesh recovery and root recovery. First, joint landmarks and silhouette are extracted from a single input image to provide 2D cues for the 3D tasks. In the root-relative mesh recovery task, we exploit semantic relations among joints to generate a 3D mesh from the extracted 2D cues. Such generated 3D mesh coordinates are expressed relative to a root position, i.e., the wrist of the hand. In the root recovery task, the root position is registered to the camera space by aligning the generated 3D mesh back to 2D cues, thereby completing camera-space 3D mesh recovery. Our pipeline is novel in that (1) it explicitly makes use of known semantic relations among joints and (2) it exploits 1D projections of the silhouette and mesh to achieve robust registration. Extensive experiments on popular datasets such as FreiHAND, RHD, and Human3.6M demonstrate that our approach achieves state-of-the-art performance on both root-relative mesh recovery and root recovery. Our code is publicly available at https://github.com/SeanChenxy/HandMesh.
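The root-recovery step places the root-relative mesh into camera space by aligning it back to the 2D cues. The paper's own registration uses 1D projections of the silhouette and mesh; as a simpler illustrative baseline (not the authors' method), the root translation can be recovered in closed form by least squares from 2D joint landmarks and root-relative 3D joints under a pinhole camera model:

```python
import numpy as np

def recover_root_translation(joints_rel, joints_2d, fx, fy, cx, cy):
    """Estimate the camera-space root translation t = (tx, ty, tz) so that
    the root-relative 3D joints reproject onto their 2D landmarks.

    Pinhole model: u = fx*(X+tx)/(Z+tz) + cx, v = fy*(Y+ty)/(Z+tz) + cy.
    Each joint yields two equations that are linear in (tx, ty, tz), e.g.
        fx*tx + (cx-u)*tz = (u-cx)*Z - fx*X,
    so the translation is the solution of a small least-squares system.
    Illustrative baseline only, not the paper's 1D-projection registration.
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(joints_rel, joints_2d):
        A.append([fx, 0.0, cx - u]); b.append((u - cx) * Z - fx * X)
        A.append([0.0, fy, cy - v]); b.append((v - cy) * Z - fy * Y)
    t, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return t
```

With noise-free landmarks this recovers the translation exactly; in practice the 2D cues are noisy, which is one motivation for the more robust silhouette-based registration the paper proposes.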

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Hand Pose Estimation | FreiHAND | PA-F@15mm | 0.977 | CMR |
| 3D Hand Pose Estimation | FreiHAND | PA-F@5mm | 0.715 | CMR |
| 3D Hand Pose Estimation | FreiHAND | PA-MPJPE (mm) | 6.9 | CMR |
| 3D Hand Pose Estimation | FreiHAND | PA-MPVPE (mm) | 7.0 | CMR |
| 1 Image, 2*2 Stitchi | FreiHAND | PA-F@15mm | 0.977 | CMR |
| 1 Image, 2*2 Stitchi | FreiHAND | PA-F@5mm | 0.715 | CMR |
| 1 Image, 2*2 Stitchi | FreiHAND | PA-MPJPE (mm) | 6.9 | CMR |
| 1 Image, 2*2 Stitchi | FreiHAND | PA-MPVPE (mm) | 7.0 | CMR |
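For reference, the PA- prefix on the metrics above denotes Procrustes alignment: predictions are rigidly aligned (scale, rotation, translation) to the ground truth before the mean per-joint (MPJPE) or per-vertex (MPVPE) error is measured, so camera-space placement errors are factored out. A minimal sketch of PA-MPJPE, assuming `pred` and `gt` are (N, 3) joint arrays (standard Umeyama-style alignment, not taken from any particular benchmark's evaluation code):

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Mean per-joint position error after similarity Procrustes alignment.

    Finds the scale s, rotation R, and translation that best map `pred`
    onto `gt` in the least-squares sense, then averages the per-joint
    Euclidean distances. Both inputs are (N, 3) arrays.
    """
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    P, G = pred - mu_p, gt - mu_g
    # Orthogonal Procrustes via SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(P.T @ G)
    R = (U @ Vt).T  # rotation mapping P onto G
    if np.linalg.det(R) < 0:  # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (P ** 2).sum()
    aligned = scale * P @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()
```

PA-MPVPE is the same computation applied to mesh vertices instead of joints; for the FreiHAND leaderboard these errors are reported in millimetres.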

Related Papers

- ExtPose: Robust and Coherent Pose Estimation by Extending ViTs (2025-06-18)
- Monocular 3D Hand Pose Estimation with Implicit Camera Alignment (2025-06-10)
- OccRobNet: Occlusion Robust Network for Accurate 3D Interacting Hand-Object Pose Estimation (2025-03-27)
- Analyzing the Synthetic-to-Real Domain Gap in 3D Hand Pose Estimation (2025-03-25)
- SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training (2025-02-21)
- HaWoR: World-Space Hand Motion Reconstruction from Egocentric Videos (2025-01-06)
- BIGS: Bimanual Category-agnostic Interaction Reconstruction from Monocular Videos via 3D Gaussian Splatting (2025-01-01)
- MMHMR: Generative Masked Modeling for Hand Mesh Recovery (2024-12-18)