PoseRN: A 2D pose refinement network for bias-free multi-view 3D human pose estimation

Akihiko Sayo, Diego Thomas, Hiroshi Kawasaki, Yuta Nakashima, Katsushi Ikeuchi

Abstract

We propose a new 2D pose refinement network that learns to predict the human bias in estimated 2D poses. Such biases arise from the gap between 2D joint locations as perceived and annotated by humans and those defined by motion capture (MoCap) systems. Because these biases are baked into publicly available 2D pose datasets, they cannot be removed by existing error-reduction approaches. Our pose refinement network efficiently removes the human bias from estimated 2D poses, enabling highly accurate multi-view 3D human pose estimation.
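The idea of learning an additive annotation bias and subtracting it from the detector output can be sketched as a small residual MLP. This is a minimal illustrative sketch only: the layer sizes, joint count, and the exact residual formulation are assumptions for demonstration, not the authors' PoseRN architecture, and the weights here are random rather than trained.

```python
import numpy as np

NUM_JOINTS = 17   # Human3.6M-style skeleton (assumption)
HIDDEN = 64       # hidden width chosen for illustration

rng = np.random.default_rng(0)
# Untrained placeholder weights; a real refinement network would learn
# these by regressing MoCap-projected joints from annotated 2D poses.
W1 = rng.normal(0.0, 0.01, (HIDDEN, NUM_JOINTS * 2))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.01, (NUM_JOINTS * 2, HIDDEN))
b2 = np.zeros(NUM_JOINTS * 2)

def refine_pose(pose_2d):
    """Return a bias-corrected pose: refined = estimated - predicted bias."""
    x = pose_2d.reshape(-1)                     # flatten (J, 2) -> (2J,)
    h = np.maximum(W1 @ x + b1, 0.0)            # hidden layer with ReLU
    predicted_bias = W2 @ h + b2                # per-joint (dx, dy) offset
    return pose_2d - predicted_bias.reshape(NUM_JOINTS, 2)

estimated = rng.uniform(0.0, 1000.0, (NUM_JOINTS, 2))  # fake detector output (pixels)
refined = refine_pose(estimated)
print(refined.shape)
```

The refined 2D poses from each camera view would then be fed to a standard multi-view triangulation step to recover 3D joint positions.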

Results

Task | Dataset | Metric | Value | Model
3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 38.4 | PoseRN
