Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


3D Human Pose Estimation using Spatio-Temporal Networks with Explicit Occlusion Training

Yu Cheng, Bo Yang, Bo Wang, Robby T. Tan

Published 2020-04-07 · AAAI Conference on Artificial Intelligence (AAAI 2020)
Tasks: 3D Human Pose Estimation · Monocular 3D Human Pose Estimation · Pose Estimation
Paper · PDF

Abstract

Estimating 3D poses from a monocular video is still a challenging task, despite the significant progress made in recent years. Generally, the performance of existing methods drops when the target person is too small or too large, or the motion is too fast or too slow relative to the scale and speed of the training data. Moreover, to our knowledge, many of these methods are not explicitly designed or trained for severe occlusion, which compromises their ability to handle occlusion. Addressing these problems, we introduce a spatio-temporal network for robust 3D human pose estimation. As humans in videos may appear at different scales and have various motion speeds, we apply multi-scale spatial features for 2D joint or keypoint prediction in each individual frame, and multi-stride temporal convolutional networks (TCNs) to estimate 3D joints or keypoints. Furthermore, we design a spatio-temporal discriminator based on body structures as well as limb motions to assess whether the predicted pose forms a valid pose and a valid movement. During training, we explicitly mask out some keypoints to simulate various occlusion cases, from minor to severe, so that our network learns to become robust to various degrees of occlusion. As 3D ground-truth data are limited, we further utilize 2D video data to inject a semi-supervised learning capability into our network. Experiments on public datasets validate the effectiveness of our method, and our ablation studies show the strengths of our network's individual submodules.
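The occlusion training described in the abstract masks out keypoints during training. A minimal sketch of such an augmentation, assuming 2D keypoints with per-joint confidences; the function name, array layout, and drop probability are illustrative, not the paper's actual implementation:

```python
import numpy as np

def mask_keypoints(keypoints, confidences, drop_prob=0.2, rng=None):
    """Simulate occlusion by randomly dropping 2D keypoints.

    keypoints:   (T, J, 2) array of 2D joint coordinates over T frames.
    confidences: (T, J) array of per-joint detection confidences.
    drop_prob:   per-joint masking probability (illustrative value).
    """
    if rng is None:
        rng = np.random.default_rng()
    masked_kp = keypoints.copy()
    masked_conf = confidences.copy()
    # Independent Bernoulli mask per joint, per frame.
    drop = rng.random(confidences.shape) < drop_prob
    masked_kp[drop] = 0.0    # occluded joints: coordinates zeroed
    masked_conf[drop] = 0.0  # and confidence zeroed, as if undetected
    return masked_kp, masked_conf
```

Varying `drop_prob` (or masking contiguous groups of joints) would cover the "minor to severe" occlusion range the abstract mentions.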

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | HumanEva-I | Mean Reconstruction Error (mm) | 13.5 | Spatio-Temporal Network (T=128)
3D Human Pose Estimation | MPI-INF-3DHP | PCK | 84.1 | Spatio-Temporal Network (T=128)
3D Human Pose Estimation | 3DPW | PA-MPJPE | 71.8 | Spatio-Temporal Network (T=128)
3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 40.1 | Spatio-Temporal Network (T=128)
3D Human Pose Estimation | Human3.6M | PA-MPJPE | 30.7 | Spatio-Temporal Network (T=128)
3D Human Pose Estimation | Human3.6M | Frames Needed | 128 | Spatio-Temporal Network (T=128)
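The Human3.6M rows report MPJPE and PA-MPJPE. As a sketch of the common definitions of these metrics (function names are illustrative, and the paper's exact evaluation protocol may differ): MPJPE is the mean Euclidean distance between predicted and ground-truth joints, and PA-MPJPE is the same error after a Procrustes (similarity-transform) alignment of the prediction onto the ground truth.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the input's units (mm here).

    pred, gt: (J, 3) arrays of 3D joint positions.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment (rotation, scale, translation)."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # Kabsch: optimal rotation from the SVD of the cross-covariance.
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # fix reflection so R is a proper rotation
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE discounts global rotation, scale, and translation errors, it is always at most the corresponding MPJPE, consistent with the 30.7 mm vs. 40.1 mm values in the table.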

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)