Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Double-chain Constraints for 3D Human Pose Estimation in Images and Videos

Hongbo Kang, Yong Wang, Mengyuan Liu, Doudou Wu, Peng Liu, Wenming Yang

Published: 2023-08-10
Tasks: 3D Human Pose Estimation · Monocular 3D Human Pose Estimation · Pose Estimation
Links: Paper · PDF · Code (official)

Abstract

Reconstructing 3D poses from 2D poses lacking depth information is particularly challenging due to the complexity and diversity of human motion. The key is to effectively model the spatial constraints between joints to leverage their inherent dependencies. Thus, we propose a novel model, called Double-chain Graph Convolutional Transformer (DC-GCT), to constrain the pose through a double-chain design consisting of local-to-global and global-to-local chains to obtain a complex representation more suitable for the current human pose. Specifically, we combine the advantages of GCN and Transformer, designing a Local Constraint Module (LCM) based on GCN, a Global Constraint Module (GCM) based on the self-attention mechanism, and a Feature Interaction Module (FIM). The proposed method fully captures the multi-level dependencies between human body joints to optimize the modeling capability of the model. Moreover, we propose a method to incorporate temporal information into the single-frame model by guiding the video sequence embedding through the joint embedding of the target frame, with a negligible increase in computational cost. Experimental results demonstrate that DC-GCT achieves state-of-the-art performance on two challenging datasets (Human3.6M and MPI-INF-3DHP). Notably, our model achieves state-of-the-art performance on all action categories in the Human3.6M dataset using detected 2D poses from CPN, and our code is available at: https://github.com/KHB1698/DC-GCT.
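As a rough illustration of the double-chain idea described in the abstract (this is not the authors' implementation; the function names, feature shapes, and the additive fusion standing in for the FIM are all assumptions), a GCN-style local aggregation over skeleton neighbours and a self-attention global aggregation over all joints can be sketched in plain NumPy:

```python
import numpy as np

def gcn_local(x, adj):
    """Local constraint (LCM-style): each joint aggregates features
    from its skeleton neighbours in one GCN-style propagation step."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ x) / deg  # mean over neighbours (incl. self-loop)

def self_attention_global(x):
    """Global constraint (GCM-style): every joint attends to every
    other joint via scaled dot-product self-attention."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Toy setup: 3 joints in a kinematic chain (0-1-2), 4-dim features.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=float)  # adjacency with self-loops
x = np.random.default_rng(0).normal(size=(3, 4))

local = gcn_local(x, adj)           # local branch output
glob = self_attention_global(x)     # global branch output
fused = local + glob                # naive stand-in for FIM fusion
print(fused.shape)                  # (3, 4)
```

The actual DC-GCT chains these constraints in both local-to-global and global-to-local order and learns the fusion; the sketch only shows the two aggregation primitives being combined.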

Results

Task                        Dataset        Metric               Value  Model
3D Human Pose Estimation    MPI-INF-3DHP   AUC                  55.9   DC-GCT
3D Human Pose Estimation    MPI-INF-3DHP   PCK                  87.5   DC-GCT
3D Human Pose Estimation    Human3.6M      Average MPJPE (mm)   46.1   DC-GCT (T=1)
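For reference, the MPJPE figure reported above is the mean Euclidean distance (in millimetres) between predicted and ground-truth 3D joint positions. A minimal NumPy version on hypothetical data:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance
    between predicted and ground-truth 3D joints, in input units."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy check: every joint offset by 10 mm along x -> MPJPE of 10 mm.
gt = np.zeros((17, 3))               # 17 joints, as in Human3.6M
pred = gt + np.array([10.0, 0.0, 0.0])
print(mpjpe(pred, gt))               # 10.0
```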

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)