Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Scene-aware Egocentric 3D Human Pose Estimation

Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Diogo Luvizon, Christian Theobalt

2022-12-20 · CVPR 2023
Tasks: 3D Human Pose Estimation, Egocentric Pose Estimation, Pose Estimation, Depth Estimation
Paper · PDF · Code (official)

Abstract

Egocentric 3D human pose estimation with a single head-mounted fisheye camera has recently attracted attention due to its numerous applications in virtual and augmented reality. Existing methods still struggle in challenging poses where the human body is highly occluded or is closely interacting with the scene. To address this issue, we propose a scene-aware egocentric pose estimation method that guides the prediction of the egocentric pose with scene constraints. To this end, we propose an egocentric depth estimation network to predict the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion of the human body with a depth-inpainting network. Next, we propose a scene-aware pose estimation network that projects the 2D image features and estimated depth map of the scene into a voxel space and regresses the 3D pose with a V2V network. The voxel-based feature representation provides the direct geometric connection between 2D image features and scene geometry, and further facilitates the V2V network to constrain the predicted pose based on the estimated scene geometry. To enable the training of the aforementioned networks, we also generated a synthetic dataset, called EgoGTA, and an in-the-wild dataset based on EgoPW, called EgoPW-Scene. The experimental results of our new evaluation sequences show that the predicted 3D egocentric poses are accurate and physically plausible in terms of human-scene interaction, demonstrating that our method outperforms the state-of-the-art methods both quantitatively and qualitatively.
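The core idea of projecting 2D image features into a voxel space using an estimated depth map can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a simple pinhole camera (the paper uses a fisheye projection model) and all function and parameter names are illustrative.

```python
import numpy as np

def lift_features_to_voxels(feat_2d, depth, K, grid_origin, voxel_size, grid_dims):
    """Scatter per-pixel 2D features into a 3D voxel grid using a depth map.

    Assumes a pinhole camera with intrinsics K; the paper's fisheye model
    would replace the unprojection step. Names are illustrative only.
    feat_2d: (H, W, C) image features; depth: (H, W) depth map.
    """
    H, W, C = feat_2d.shape
    vol = np.zeros((*grid_dims, C))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Unproject every pixel to a 3D point in camera coordinates
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Convert 3D points to voxel indices and scatter valid features
    idx = np.floor((pts - grid_origin) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_dims)), axis=1)
    feats = feat_2d.reshape(-1, C)
    for (i, j, k), f in zip(idx[valid], feats[valid]):
        vol[i, j, k] += f
    return vol
```

The resulting volume gives a 3D network (such as the V2V network mentioned above) a representation in which image evidence and scene geometry live in the same coordinate frame, which is what lets scene geometry constrain the regressed pose.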

Results

Task                     | Dataset                     | Metric             | Value | Model
3D Human Pose Estimation | GlobalEgoMocap Test Dataset | Average MPJPE (mm) | 76.5  | SceneEgo
3D Human Pose Estimation | GlobalEgoMocap Test Dataset | PA-MPJPE           | 61.92 | SceneEgo
3D Human Pose Estimation | SceneEgo                    | Average MPJPE (mm) | 118.5 | SceneEgo
3D Human Pose Estimation | SceneEgo                    | PA-MPJPE           | 92.75 | SceneEgo
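For reference, the two metrics reported above can be sketched in NumPy. Function names are illustrative; PA-MPJPE is computed here with a standard Procrustes (similarity) alignment, which is the usual definition but is an assumption about this leaderboard's exact protocol.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance between
    predicted and ground-truth joints (N, 3), in the input units."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: rigidly align the prediction to the
    ground truth (optimal scale, rotation, translation) before scoring."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # Kabsch/Umeyama: optimal rotation from the SVD of the cross-covariance
    U, S, Vt = np.linalg.svd(p.T @ g)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = (S * np.diag(D)).sum() / (p ** 2).sum()  # optimal scale
    aligned = s * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE removes global scale, rotation, and translation, it is always at most the plain MPJPE, consistent with the table (e.g. 61.92 vs. 76.5 on GlobalEgoMocap).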

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)