Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation

Jiefeng Li, Siyuan Bian, Qi Liu, Jiasheng Tang, Fan Wang, Cewu Lu

2023-05-15 · CVPR 2023
Tasks: 3D Human Pose Estimation · Pose Estimation · 3D human pose and shape estimation
Links: Paper · PDF · Code (official)

Abstract

With the progress of 3D human pose and shape estimation, state-of-the-art methods can either be robust to occlusions or obtain pixel-aligned accuracy in non-occlusion cases. However, they cannot obtain robustness and mesh-image alignment at the same time. In this work, we present NIKI (Neural Inverse Kinematics with Invertible Neural Network), which models bi-directional errors to improve the robustness to occlusions and obtain pixel-aligned accuracy. NIKI can learn from both the forward and inverse processes with invertible networks. In the inverse process, the model separates the error from the plausible 3D pose manifold for a robust 3D human pose estimation. In the forward process, we enforce the zero-error boundary conditions to improve the sensitivity to reliable joint positions for better mesh-image alignment. Furthermore, NIKI emulates the analytical inverse kinematics algorithms with the twist-and-swing decomposition for better interpretability. Experiments on standard and occlusion-specific benchmarks demonstrate the effectiveness of NIKI, where we exhibit robust and well-aligned results simultaneously. Code is available at https://github.com/Jeff-sjtu/NIKI
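The key property the abstract relies on is that an invertible neural network computes an exact closed-form inverse, so the same parameters serve both the forward (pose-to-joints) and inverse (joints-to-pose) directions. A minimal sketch of this mechanism is a RealNVP-style affine coupling block; the random `W_s`/`W_t` matrices below are hypothetical stand-ins for learned subnetworks, and NIKI's actual architecture and error-conditioning are considerably more elaborate:

```python
import numpy as np

class AffineCoupling:
    """Minimal invertible affine coupling block (RealNVP-style sketch)."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.half = dim // 2
        # Toy "subnetworks": linear maps from the first half of the input
        # to a per-dimension scale and shift for the second half.
        self.W_s = rng.normal(scale=0.1, size=(self.half, dim - self.half))
        self.W_t = rng.normal(scale=0.1, size=(self.half, dim - self.half))

    def forward(self, x):
        # Split the input; transform the second half conditioned on the first.
        x1, x2 = x[: self.half], x[self.half :]
        s, t = np.tanh(x1 @ self.W_s), x1 @ self.W_t
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y):
        # Exact inversion: recompute s and t from the untouched first half
        # and undo the affine transform in closed form.
        y1, y2 = y[: self.half], y[self.half :]
        s, t = np.tanh(y1 @ self.W_s), y1 @ self.W_t
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])


layer = AffineCoupling(dim=6)
x = np.arange(6.0)
x_rec = layer.inverse(layer.forward(x))
print(np.allclose(x, x_rec))  # True: inversion is exact up to float error
```

Because inversion is exact by construction, training signals can be imposed in both directions, which is what lets NIKI enforce zero-error boundary conditions in the forward pass while separating error from the pose manifold in the inverse pass.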

Results

The same results are reported under four task leaderboards (3D Human Pose Estimation, Pose Estimation, 3D, and 1 Image, 2*2 Stitchi), with identical values:

Dataset  Metric    Value  Model
AGORA    B-MPJPE   67.3   NIKI (Twist-and-Swing)
AGORA    B-MVE     63.9   NIKI (Twist-and-Swing)
AGORA    B-NMJE    74     NIKI (Twist-and-Swing)
AGORA    B-NMVE    70.2   NIKI (Twist-and-Swing)
3DPW     MPJPE     71.3   NIKI (Twist-and-Swing)
3DPW     MPVPE     86.6   NIKI (Twist-and-Swing)
3DPW     PA-MPJPE  40.6   NIKI (Twist-and-Swing)
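The 3DPW numbers above use two standard pose metrics: MPJPE (mean per-joint position error in millimetres) and PA-MPJPE, which first rigidly aligns the prediction to the ground truth via orthogonal Procrustes analysis (scale, rotation, translation) so that only articulated pose error remains. A sketch of both, assuming `(J, 3)` joint arrays (function names here are illustrative, not from the NIKI codebase):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance over joints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: optimally align prediction to ground truth
    with similarity transform (scale, rotation, translation), then measure."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(p.T @ g)
    if np.linalg.det(U @ Vt) < 0:
        # Fix a reflection so the result is a proper rotation.
        U[:, -1] *= -1
        S[-1] *= -1
    R = U @ Vt
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R + mu_g
    return mpjpe(aligned, gt)
```

For a prediction that differs from the ground truth only by a rigid transform, `pa_mpjpe` is near zero while `mpjpe` can be large, which is why PA-MPJPE (40.6 here) is always reported alongside, and lower than, MPJPE (71.3).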

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)