Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Estimating Egocentric 3D Human Pose in Global Space

Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Christian Theobalt

2021-04-27 · ICCV 2021
Tasks: 3D Human Pose Estimation · Egocentric Pose Estimation · Pose Estimation
Paper · PDF · Code (official)

Abstract

Egocentric 3D human pose estimation using a single fisheye camera has become popular recently as it allows capturing a wide range of daily activities in unconstrained environments, which is difficult for traditional outside-in motion capture with external cameras. However, existing methods have several limitations. A prominent problem is that the estimated poses lie in the local coordinate system of the fisheye camera, rather than in the world coordinate system, which is restrictive for many applications. Furthermore, these methods suffer from limited accuracy and temporal instability due to ambiguities caused by the monocular setup and the severe occlusion in a strongly distorted egocentric perspective. To tackle these limitations, we present a new method for egocentric global 3D body pose estimation using a single head-mounted fisheye camera. To achieve accurate and temporally stable global poses, a spatio-temporal optimization is performed over a sequence of frames by minimizing heatmap reprojection errors and enforcing local and global body motion priors learned from a mocap dataset. Experimental results show that our approach outperforms state-of-the-art methods both quantitatively and qualitatively.
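The optimization the abstract describes — minimizing heatmap reprojection error over a window of frames while regularizing with motion priors — can be sketched as follows. This is a minimal NumPy illustration, not the official implementation: all names are mine, and a simple temporal-smoothness term stands in for the learned local/global motion priors.

```python
import numpy as np

def reprojection_residual(pose3d, keypoints2d, project):
    """Per-frame data term: distance between projected 3D joints
    and the 2D keypoints (in the paper, heatmap maxima)."""
    return np.linalg.norm(project(pose3d) - keypoints2d, axis=-1).sum()

def smoothness_prior(poses):
    """Illustrative stand-in for the learned motion priors:
    penalize frame-to-frame joint displacement."""
    return np.sum((poses[1:] - poses[:-1]) ** 2)

def objective(poses, keypoints2d_seq, project, w_prior=0.1):
    """Spatio-temporal objective over a sequence of frames.

    poses           : (T, J, 3) candidate 3D joint positions
    keypoints2d_seq : (T, J, 2) detected 2D keypoints per frame
    project         : camera model mapping (J, 3) -> (J, 2)
    """
    data = sum(reprojection_residual(p, k, project)
               for p, k in zip(poses, keypoints2d_seq))
    return data + w_prior * smoothness_prior(poses)
```

In the paper this kind of objective is minimized over a sliding window; the sketch only shows the cost being evaluated, with the fisheye camera model abstracted behind `project`.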

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | GlobalEgoMocap Test Dataset | Average MPJPE (mm) | 82.06 | GlobalEgoMocap
3D Human Pose Estimation | GlobalEgoMocap Test Dataset | PA-MPJPE (mm) | 62.07 | GlobalEgoMocap
3D Human Pose Estimation | SceneEgo | Average MPJPE (mm) | 183 | GlobalEgoMocap
3D Human Pose Estimation | SceneEgo | PA-MPJPE (mm) | 106.2 | GlobalEgoMocap
Pose Estimation | GlobalEgoMocap Test Dataset | Average MPJPE (mm) | 82.06 | GlobalEgoMocap
Pose Estimation | GlobalEgoMocap Test Dataset | PA-MPJPE (mm) | 62.07 | GlobalEgoMocap
Pose Estimation | SceneEgo | Average MPJPE (mm) | 183 | GlobalEgoMocap
Pose Estimation | SceneEgo | PA-MPJPE (mm) | 106.2 | GlobalEgoMocap
3D | GlobalEgoMocap Test Dataset | Average MPJPE (mm) | 82.06 | GlobalEgoMocap
3D | GlobalEgoMocap Test Dataset | PA-MPJPE (mm) | 62.07 | GlobalEgoMocap
3D | SceneEgo | Average MPJPE (mm) | 183 | GlobalEgoMocap
3D | SceneEgo | PA-MPJPE (mm) | 106.2 | GlobalEgoMocap
1 Image, 2*2 Stitchi | GlobalEgoMocap Test Dataset | Average MPJPE (mm) | 82.06 | GlobalEgoMocap
1 Image, 2*2 Stitchi | GlobalEgoMocap Test Dataset | PA-MPJPE (mm) | 62.07 | GlobalEgoMocap
1 Image, 2*2 Stitchi | SceneEgo | Average MPJPE (mm) | 183 | GlobalEgoMocap
1 Image, 2*2 Stitchi | SceneEgo | PA-MPJPE (mm) | 106.2 | GlobalEgoMocap
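For reference, the two metrics in the table can be computed as below: MPJPE is the mean Euclidean distance between predicted and ground-truth joints, and PA-MPJPE first aligns the prediction to the ground truth with a similarity (Procrustes) transform. This is a standard, minimal NumPy sketch of the metric definitions, not code from the paper.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the units of the input (mm here).

    pred, gt : (J, 3) arrays of 3D joint positions.
    """
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment: the prediction is aligned to the
    ground truth by the best-fitting scale, rotation, and translation."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g               # center both point sets
    U, S, Vt = np.linalg.svd(P.T @ G)           # orthogonal Procrustes
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid a reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    s = S.sum() / (P ** 2).sum()                # optimal isotropic scale
    aligned = s * P @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE factors out global scale, rotation, and translation, it is always at most the (root- or unaligned) MPJPE, which matches the table: 62.07 vs. 82.06 mm on GlobalEgoMocap and 106.2 vs. 183 mm on SceneEgo.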

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)