Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation

Xiangyu Xu, Lijuan Liu, Shuicheng Yan

2024-04-23 · 3D Human Pose Estimation · Pose Estimation
Paper · PDF · Code (official)

Abstract

Existing Transformers for monocular 3D human shape and pose estimation typically have a quadratic computation and memory complexity with respect to the feature length, which hinders the exploitation of fine-grained information in high-resolution features that is beneficial for accurate reconstruction. In this work, we propose an SMPL-based Transformer framework (SMPLer) to address this issue. SMPLer incorporates two key ingredients: a decoupled attention operation and an SMPL-based target representation, which allow effective utilization of high-resolution features in the Transformer. In addition, based on these two designs, we also introduce several novel modules including a multi-scale attention and a joint-aware attention to further boost the reconstruction performance. Extensive experiments demonstrate the effectiveness of SMPLer against existing 3D human shape and pose estimation methods both quantitatively and qualitatively. Notably, the proposed algorithm achieves an MPJPE of 45.2 mm on the Human3.6M dataset, improving upon Mesh Graphormer by more than 10% with fewer than one-third of the parameters. Code and pretrained models are available at https://github.com/xuxy09/SMPLer.
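The abstract's complexity argument can be made concrete. This is a hypothetical sketch, not the paper's actual SMPLer code: it only illustrates why attending from a small, fixed set of queries (e.g., one per SMPL parameter group) to N high-resolution feature tokens costs O(K·N), linear in feature length, whereas full self-attention over the same tokens costs O(N²). All names and sizes here are illustrative assumptions.

```python
# Hypothetical illustration (NOT the paper's implementation): cross-attention
# from K fixed queries to N feature tokens does O(K*N*d) work, so cost grows
# linearly with feature length N, unlike O(N^2) self-attention.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """queries: K x d, keys/values: N x d -> K x d output."""
    d = len(queries[0])
    out = []
    for q in queries:
        # One row of scores per query: K*N dot products in total.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(d)])
    return out

# Toy example: 3 target queries attend over 100 feature tokens.
K, N, d = 3, 100, 4
queries = [[0.1 * (i + j) for j in range(d)] for i in range(K)]
feats = [[math.sin(0.01 * n + j) for j in range(d)] for n in range(N)]
out = cross_attention(queries, feats, feats)
print(len(out), len(out[0]))  # 3 4
```

Doubling N here doubles the work; doubling it under full self-attention would quadruple it, which is the bottleneck the paper's decoupled design targets.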

Results

Task                      Dataset  Metric    Value  Model
3D Human Pose Estimation  3DPW     MPJPE     73.7   SMPLer-L
3D Human Pose Estimation  3DPW     MPVPE     82     SMPLer-L
3D Human Pose Estimation  3DPW     PA-MPJPE  43.4   SMPLer-L
Pose Estimation           3DPW     MPJPE     73.7   SMPLer-L
Pose Estimation           3DPW     MPVPE     82     SMPLer-L
Pose Estimation           3DPW     PA-MPJPE  43.4   SMPLer-L
3D                        3DPW     MPJPE     73.7   SMPLer-L
3D                        3DPW     MPVPE     82     SMPLer-L
3D                        3DPW     PA-MPJPE  43.4   SMPLer-L
1 Image, 2*2 Stitchi      3DPW     MPJPE     73.7   SMPLer-L
1 Image, 2*2 Stitchi      3DPW     MPVPE     82     SMPLer-L
1 Image, 2*2 Stitchi      3DPW     PA-MPJPE  43.4   SMPLer-L
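For readers unfamiliar with the metrics in the table above: MPJPE (mean per-joint position error, reported in mm) is the mean Euclidean distance between predicted and ground-truth 3D joints, conventionally after aligning the root joint; PA-MPJPE applies a Procrustes (rigid + scale) alignment first. A minimal sketch of root-aligned MPJPE, with made-up joint coordinates for illustration:

```python
# Minimal sketch of the MPJPE metric (mean per-joint position error, mm).
# The joint coordinates below are invented for illustration only.
import math

def mpjpe(pred, gt, root=0):
    """pred, gt: lists of (x, y, z) joints in mm; root-aligned MPJPE."""
    def align(joints, r):
        rx, ry, rz = joints[r]
        return [(x - rx, y - ry, z - rz) for x, y, z in joints]
    p, g = align(pred, root), align(gt, root)
    dists = [math.dist(a, b) for a, b in zip(p, g)]
    return sum(dists) / len(dists)

pred = [(0, 0, 0), (100, 0, 0), (0, 200, 0)]
gt   = [(0, 0, 0), (110, 0, 0), (0, 190, 0)]
print(round(mpjpe(pred, gt), 1))  # 6.7
```

MPVPE is the analogous mean error computed over all mesh vertices rather than joints.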

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)