Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video

Bruce X. B. Yu, Zhi Zhang, Yongxu Liu, Sheng-hua Zhong, Yan Liu, Chang Wen Chen

2023-07-12 · ICCV 2023 · 3D Human Pose Estimation · Pose Estimation
Paper · PDF · Code (official)

Abstract

3D human pose estimation has been researched for decades with promising results. 3D human pose lifting is one of the promising research directions for this task, where both estimated pose and ground truth pose data are used for training. Existing pose-lifting works mainly focus on improving the performance of estimated poses, but they usually underperform when tested on ground truth pose data. We observe that the performance of the estimated pose can easily be improved by preparing good-quality 2D poses, such as by fine-tuning the 2D pose or using advanced 2D pose detectors. Accordingly, we concentrate on improving 3D human pose lifting via ground truth data, for the future benefit of higher-quality estimated pose data. Toward this goal, we propose a simple yet effective model called the Global-local Adaptive Graph Convolutional Network (GLA-GCN). GLA-GCN globally models the spatiotemporal structure via a graph representation and backtraces local joint features for 3D human pose estimation via individually connected layers. To validate the model design, we conduct extensive experiments on three benchmark datasets: Human3.6M, HumanEva-I, and MPI-INF-3DHP. Experimental results show that GLA-GCN implemented with ground truth 2D poses significantly outperforms state-of-the-art methods (e.g., error reductions of up to around 3%, 17%, and 14% on Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively). GitHub: https://github.com/bruceyo/GLA-GCN.
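The abstract describes the core idea of globally modeling the skeleton's structure with graph convolutions over a joint adjacency. The sketch below is a hypothetical minimal illustration of one graph-convolution layer lifting 2D joints toward 3D coordinates, using numpy and a toy 5-joint chain; the actual GLA-GCN uses the full benchmark skeletons, temporal windows (T=27/81), learned adaptive adjacency, and the individually connected layers described above, none of which are reproduced here.

```python
import numpy as np

# Toy kinematic chain standing in for a real skeleton (assumption:
# the real model uses the 17-joint Human3.6M skeleton).
NUM_JOINTS = 5
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]

def normalized_adjacency(num_joints, edges):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A = np.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(X, A_hat, W):
    """One graph-convolution layer: ReLU(A_hat @ X @ W)."""
    return np.maximum(A_hat @ X @ W, 0.0)

rng = np.random.default_rng(0)
pose_2d = rng.standard_normal((NUM_JOINTS, 2))  # input 2D joint positions
W1 = rng.standard_normal((2, 16)) * 0.1         # lift to hidden features
W2 = rng.standard_normal((16, 3)) * 0.1         # project to 3D coordinates

A_hat = normalized_adjacency(NUM_JOINTS, EDGES)
hidden = gcn_layer(pose_2d, A_hat, W1)
pose_3d = hidden @ W2                           # (NUM_JOINTS, 3) estimate
print(pose_3d.shape)  # (5, 3)
```

In a trained lifter, W1 and W2 would be learned by regressing against ground truth 3D poses; here they are random and serve only to show the shape of the computation.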

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | HumanEva-I | Mean Reconstruction Error (mm) | 9.2 | GLA-GCN (T=27, GT)
3D Human Pose Estimation | MPI-INF-3DHP | AUC | 79.12 | GLA-GCN (T=81)
3D Human Pose Estimation | MPI-INF-3DHP | MPJPE (mm) | 27.76 | GLA-GCN (T=81)
3D Human Pose Estimation | MPI-INF-3DHP | PCK | 98.53 | GLA-GCN (T=81)
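The table reports the standard 3D pose metrics MPJPE, PCK, and AUC. A minimal sketch of how these metrics are typically computed from predicted and ground truth joint coordinates (in mm) is shown below; the threshold of 150 mm for PCK/AUC follows common MPI-INF-3DHP practice, and the exact evaluation protocol of the paper may differ.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance over joints (mm)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pck(pred, gt, threshold=150.0):
    """Percentage of Correct Keypoints: share of joints within threshold mm."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return (dists < threshold).mean() * 100.0

def auc(pred, gt, thresholds=np.linspace(0.0, 150.0, 31)):
    """Area under the PCK curve, averaged over a sweep of thresholds."""
    return np.mean([pck(pred, gt, t) for t in thresholds])

# Toy check: every predicted joint is offset by exactly 5 mm from ground truth.
gt = np.zeros((17, 3))
pred = gt + np.array([3.0, 4.0, 0.0])
print(mpjpe(pred, gt))       # 5.0
print(pck(pred, gt, 10.0))   # 100.0
```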

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)