Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation

Mingjie Wei, Xuemei Xie, Yutong Zhong, Guangming Shi

2025-06-03 · IEEE Transactions on Multimedia 2025 · 3D Human Pose Estimation · Pose Estimation · Graph Attention

Paper · PDF · Code (official)

Abstract

Action coordination in the human structure is indispensable for the spatial constraints that 2D joints provide when recovering 3D pose. Action coordination is usually represented as a long-range dependency among body parts, but modeling long-range dependencies poses two main challenges. First, joints should be constrained not only by other individual joints but also modulated by body parts as a whole. Second, existing methods learn dependencies between non-linked parts by making networks deeper, which introduces uncorrelated noise and increases model size. In this paper, we use a pyramid structure to better learn potential long-range dependencies: it captures correlations across joints and groups, complementing the context of human sub-structures, and models pyramid-structured long-range dependencies in an effective cross-scale way. Specifically, we propose a novel Pyramid Graph Attention (PGA) module to capture long-range cross-scale dependencies. It concatenates information from multiple scales into a compact sequence and then computes the correlations between scales in parallel. Combining PGA with graph convolution modules, we develop the Pyramid Graph Transformer (PGFormer), a lightweight multi-scale transformer architecture for 3D human pose estimation that encapsulates human sub-structures into self-attention via pooling. Extensive experiments show that our approach achieves lower error and a smaller model size than state-of-the-art methods on the Human3.6M and MPI-INF-3DHP datasets. The code is available at https://github.com/MingjieWe/PGFormer.
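The PGA idea described above can be sketched in a few lines: pool joint features into group-level tokens, concatenate both scales into one sequence, and run self-attention over it so each joint can attend to whole body parts. This is a minimal numpy illustration, not the official implementation; the joint-to-group assignment and projection sizes below are assumptions (the paper's actual pooling scheme and architecture may differ).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical grouping of the 17 Human3.6M joints into 5 body parts
# (torso/head, arms, legs) -- an assumed pooling, for illustration only.
GROUPS = [
    [0, 7, 8, 9, 10],   # torso + head
    [11, 12, 13],       # left arm
    [14, 15, 16],       # right arm
    [1, 2, 3],          # right leg
    [4, 5, 6],          # left leg
]

def pyramid_attention(joints, d=4, seed=0):
    """Cross-scale self-attention sketch: mean-pool joints into group
    tokens, concatenate joint- and group-level tokens into one compact
    sequence, and attend over it in a single pass."""
    rng = np.random.default_rng(seed)
    J, C = joints.shape
    # Coarser scale: one token per body part via mean pooling.
    groups = np.stack([joints[g].mean(axis=0) for g in GROUPS])
    seq = np.concatenate([joints, groups], axis=0)      # (J + G, C)
    Wq, Wk, Wv = (rng.standard_normal((C, d)) for _ in range(3))
    Q, K, V = seq @ Wq, seq @ Wk, seq @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))                # joints attend
    out = attn @ V                                      # across scales
    return out[:J]                                      # joint outputs

feats = np.random.default_rng(1).standard_normal((17, 8))
out = pyramid_attention(feats)
print(out.shape)  # (17, 4)
```

Because the two scales share one attention matrix, joint-to-joint and joint-to-group correlations are computed in parallel rather than through extra network depth.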

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 49.2 | DiffPyramid (CPN)
3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 49.5 | PGFormer (CPN)
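The metric in the table, MPJPE, is the mean Euclidean distance between predicted and ground-truth 3D joint positions, reported in millimetres on Human3.6M. A minimal sketch of the computation (the synthetic offset below is illustrative, not real data):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: Euclidean distance between each
    predicted and ground-truth 3D joint, averaged over joints and
    frames (millimetres on Human3.6M)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.zeros((2, 17, 3))                 # 2 frames, 17 joints, xyz
pred = gt + np.array([30.0, 0.0, 40.0])   # every joint off by 50 mm
print(mpjpe(pred, gt))  # 50.0
```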

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning · 2025-07-17
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark · 2025-07-17
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model · 2025-07-17
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation · 2025-07-17
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability · 2025-07-17
SpatialTrackerV2: 3D Point Tracking Made Easy · 2025-07-16
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation · 2025-07-16
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation · 2025-07-16