Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


EgoPoseFormer: A Simple Baseline for Stereo Egocentric 3D Human Pose Estimation

Chenhongyi Yang, Anastasia Tkach, Shreyas Hampali, Linguang Zhang, Elliot J. Crowley, Cem Keskin

2024-03-26 · 3D Human Pose Estimation · Egocentric Pose Estimation · Pose Estimation
Paper · PDF · Code (official)

Abstract

We present EgoPoseFormer, a simple yet effective transformer-based model for stereo egocentric human pose estimation. The main challenge in egocentric pose estimation is overcoming joint invisibility, which is caused by self-occlusion or a limited field of view (FOV) of head-mounted cameras. Our approach overcomes this challenge by incorporating a two-stage pose estimation paradigm: in the first stage, our model leverages the global information to estimate each joint's coarse location, then in the second stage, it employs a DETR style transformer to refine the coarse locations by exploiting fine-grained stereo visual features. In addition, we present a Deformable Stereo Attention operation to enable our transformer to effectively process multi-view features, which enables it to accurately localize each joint in the 3D world. We evaluate our method on the stereo UnrealEgo dataset and show it significantly outperforms previous approaches while being computationally efficient: it improves MPJPE by 27.4mm (45% improvement) with only 7.9% model parameters and 13.1% FLOPs compared to the state-of-the-art. Surprisingly, with proper training settings, we find that even our first-stage pose proposal network can achieve superior performance compared to previous arts. We also show that our method can be seamlessly extended to monocular settings, which achieves state-of-the-art performance on the SceneEgo dataset, improving MPJPE by 25.5mm (21% improvement) compared to the best existing method with only 60.7% model parameters and 36.4% FLOPs. Code is available at: https://github.com/ChenhongyiYang/egoposeformer .
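The abstract describes a Deformable Stereo Attention operation that lets the second-stage transformer sample fine-grained features from both views around each joint's reference point. As a rough illustration only, here is a minimal NumPy sketch of that sampling pattern; the shapes, names, and weighted-sum fusion are assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample a feature map feat (H, W, C) at continuous coords (x, y)."""
    H, W, _ = feat.shape
    x, y = np.clip(x, 0, W - 1), np.clip(y, 0, H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * feat[y0, x0] + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0] + wx * wy * feat[y1, x1])

def deformable_stereo_attention(feats, ref_pts, offsets, weights):
    """Toy deformable attention over multiple views.

    feats:   list of per-view feature maps, each (H, W, C)
    ref_pts: (J, V, 2) reference point (x, y) per joint per view
    offsets: (J, V, K, 2) learned sampling offsets around each reference point
    weights: (J, V, K) attention weights, assumed normalized over all V*K samples
    returns: (J, C) aggregated feature per joint
    """
    J, V, K, _ = offsets.shape
    out = np.zeros((J, feats[0].shape[-1]))
    for j in range(J):
        for v in range(V):
            for k in range(K):
                x, y = ref_pts[j, v] + offsets[j, v, k]
                out[j] += weights[j, v, k] * bilinear_sample(feats[v], x, y)
    return out
```

The key idea this sketch captures is that each joint query attends to only a handful of sampled locations per view (rather than the whole feature map), which is what keeps a deformable-attention design computationally cheap.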

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | SceneEgo | Average MPJPE (mm) | 93 | EgoPoseFormer
3D Human Pose Estimation | SceneEgo | PA-MPJPE | 74.3 | EgoPoseFormer
3D Human Pose Estimation | UnrealEgo | Average MPJPE (mm) | 33.4 | EgoPoseFormer
3D Human Pose Estimation | UnrealEgo | PA-MPJPE | 32.7 | EgoPoseFormer
Pose Estimation | SceneEgo | Average MPJPE (mm) | 93 | EgoPoseFormer
Pose Estimation | SceneEgo | PA-MPJPE | 74.3 | EgoPoseFormer
Pose Estimation | UnrealEgo | Average MPJPE (mm) | 33.4 | EgoPoseFormer
Pose Estimation | UnrealEgo | PA-MPJPE | 32.7 | EgoPoseFormer
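The two metrics in the table are standard: MPJPE is the mean Euclidean distance between predicted and ground-truth 3D joints, and PA-MPJPE is the same error after rigidly aligning the prediction to the ground truth with a similarity transform (Procrustes analysis). A minimal NumPy sketch of both, independent of the paper's code:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance between
    predicted and ground-truth joints (J, 3), in the same units (e.g. mm)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: MPJPE after optimally aligning the
    prediction to the ground truth with rotation, scale, and translation."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g          # center both point sets
    U, S, Vt = np.linalg.svd(p.T @ g)      # SVD of the cross-covariance
    A = U @ Vt                             # optimal rotation (orthogonal Procrustes)
    if np.linalg.det(A) < 0:               # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        A = U @ Vt
    scale = S.sum() / (p ** 2).sum()       # optimal isotropic scale
    aligned = scale * p @ A + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE discounts global rotation, scale, and translation, it is always at most the MPJPE for the same prediction, consistent with the table above.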

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)