Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


BioPose: Biomechanically-accurate 3D Pose Estimation from Monocular Videos

Farnoosh Koleini, Muhammad Usama Saleem, Pu Wang, Hongfei Xue, Ahmed Helmy, Abbey Fenwick

2025-01-14 · 3D Human Pose Estimation · Pose Estimation · Human Mesh Recovery · 3D Pose Estimation

Paper · PDF

Abstract

Recent advancements in 3D human pose estimation from single-camera images and videos have relied on parametric models, like SMPL. However, these models oversimplify anatomical structures, limiting their accuracy in capturing true joint locations and movements, which reduces their applicability in biomechanics, healthcare, and robotics. Biomechanically accurate pose estimation, on the other hand, typically requires costly marker-based motion capture systems and optimization techniques in specialized labs. To bridge this gap, we propose BioPose, a novel learning-based framework for predicting biomechanically accurate 3D human pose directly from monocular videos. BioPose includes three key components: a Multi-Query Human Mesh Recovery model (MQ-HMR), a Neural Inverse Kinematics (NeurIK) model, and a 2D-informed pose refinement technique. MQ-HMR leverages a multi-query deformable transformer to extract multi-scale fine-grained image features, enabling precise human mesh recovery. NeurIK treats the mesh vertices as virtual markers, applying a spatial-temporal network to regress biomechanically accurate 3D poses under anatomical constraints. To further improve 3D pose estimations, a 2D-informed refinement step optimizes the query tokens during inference by aligning the 3D structure with 2D pose observations. Experiments on benchmark datasets demonstrate that BioPose significantly outperforms state-of-the-art methods. Project website: https://m-usamasaleem.github.io/publication/BioPose/BioPose.html
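The abstract describes a three-stage pipeline: MQ-HMR recovers a mesh per frame, NeurIK regresses a biomechanically constrained 3D pose from the mesh vertices treated as virtual markers, and a 2D-informed step refines the result at inference time. The data flow can be sketched with stand-in functions; all dimensions, function bodies, and the pooling logic below are illustrative placeholders, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes only (not the paper's actual dimensions).
N_FRAMES, N_VERTS, N_JOINTS = 8, 431, 17

def mq_hmr(frames):
    """Stage 1 stand-in: multi-query human mesh recovery.
    Maps each video frame to mesh vertices used as virtual markers.
    (The real model is a multi-query deformable transformer.)"""
    return rng.standard_normal((len(frames), N_VERTS, 3))

def neur_ik(virtual_markers):
    """Stage 2 stand-in: neural inverse kinematics.
    A spatio-temporal network regresses anatomically constrained 3D
    joints from the virtual markers; here, a toy per-group average."""
    groups = np.array_split(np.arange(N_VERTS), N_JOINTS)
    return np.stack([virtual_markers[:, g].mean(axis=1) for g in groups],
                    axis=1)

def refine_2d(pose_3d, keypoints_2d, camera):
    """Stage 3 stand-in: 2D-informed refinement.
    The paper optimizes query tokens at inference so the projected 3D
    pose matches 2D keypoints; this placeholder returns pose unchanged."""
    return pose_3d

frames = [None] * N_FRAMES      # placeholder video frames
markers = mq_hmr(frames)        # (T, V, 3) virtual markers
pose = neur_ik(markers)         # (T, J, 3) 3D joint positions
pose = refine_2d(pose, None, None)
```

The point of the sketch is the shape contract between stages: per-frame vertices in, temporally consistent joint positions out, with refinement operating only at inference.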

Results

Task                     | Dataset | Metric                  | Value | Model
3D Human Pose Estimation | EMDB    | Average MPJPE (mm)      | 92.5  | BioPose
3D Human Pose Estimation | EMDB    | Average MPJPE-PA (mm)   | 52.1  | BioPose
3D Human Pose Estimation | EMDB    | Average MVE (mm)        | 98.9  | BioPose
3D Human Pose Estimation | 3DPW    | MPJPE                   | 69    | BioPose
3D Human Pose Estimation | 3DPW    | MPVPE                   | 79.8  | BioPose
3D Human Pose Estimation | 3DPW    | PA-MPJPE                | 39.5  | BioPose
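The MPJPE and PA-MPJPE figures above follow the standard definitions: mean Euclidean distance between predicted and ground-truth joints, with PA-MPJPE first removing scale, rotation, and translation via Procrustes alignment. A minimal NumPy sketch of both metrics (this is the textbook similarity-transform solution, not code from the paper):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance
    between predicted and ground-truth joints (same units as input)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: best-fit similarity transform
    (scale, rotation, translation) of pred onto gt, then MPJPE."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # fix an improper (reflected) solution
        U[:, -1] *= -1
        S[-1] *= -1
        R = U @ Vt
    scale = S.sum() / (X ** 2).sum()
    aligned = scale * X @ R + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE discards global scale and orientation, it is always at most the MPJPE for the same prediction, which matches the ordering of the 3DPW numbers above (39.5 vs 69).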

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)