# Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation

Matteo Fabbri, Fabio Lanzi, Simone Calderara, Stefano Alletto, Rita Cucchiara
In this paper we present a novel approach for bottom-up multi-person 3D human pose estimation from monocular RGB images. We propose to model joint locations with high-resolution volumetric heatmaps, devising a simple and effective compression method to drastically reduce the size of this representation. At the core of the proposed method lies our Volumetric Heatmap Autoencoder, a fully convolutional network tasked with compressing ground-truth heatmaps into a dense intermediate representation. A second model, the Code Predictor, is then trained to predict these codes, which can be decompressed at test time to recover the original representation. Our experimental evaluation shows that our method performs favorably compared to the state of the art on both multi-person and single-person 3D human pose estimation datasets and, thanks to our novel compression strategy, can process full-HD images at a constant 8 fps regardless of the number of subjects in the scene. Code and models are available at https://github.com/fabbrimatteo/LoCO .
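To make the representation concrete, the sketch below rasterizes 3D joint coordinates into a volumetric heatmap, i.e. one Gaussian blob per joint in a voxel grid. This is a minimal illustration of the general idea, not the paper's implementation: the grid resolution, the sigma value, and the `volumetric_heatmap` function name are all assumptions made for this example.

```python
import numpy as np

def volumetric_heatmap(joints, shape=(64, 64, 64), sigma=2.0):
    """Rasterize 3D joint coordinates into volumetric heatmaps.

    joints: (J, 3) array of (z, y, x) voxel coordinates, one row per joint.
    Returns a (J, D, H, W) array with one 3D Gaussian blob per joint.
    Grid size and sigma are illustrative choices, not the paper's values.
    """
    D, H, W = shape
    zz, yy, xx = np.meshgrid(
        np.arange(D), np.arange(H), np.arange(W), indexing="ij"
    )
    heatmaps = np.empty((len(joints),) + shape, dtype=np.float32)
    for j, (z, y, x) in enumerate(joints):
        # Squared distance of every voxel from the joint, then a Gaussian.
        d2 = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2
        heatmaps[j] = np.exp(-d2 / (2.0 * sigma ** 2))
    return heatmaps

hm = volumetric_heatmap(np.array([[10, 20, 30], [40, 40, 40]]))
# The argmax of each heatmap recovers the corresponding joint location.
peak = np.unravel_index(hm[0].argmax(), hm[0].shape)  # -> (10, 20, 30)
```

Even at this modest 64³ resolution, a single joint's heatmap occupies ~1 MB in float32, which illustrates why the paper compresses these volumes into a dense code rather than predicting them directly.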
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Human Pose Estimation | Panoptic | Average MPJPE (mm) | 69 | LoCO |
| 3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 51.1 | LoCO |
| 3D Human Pose Estimation | Human3.6M | PA-MPJPE (mm) | 43.4 | LoCO |