Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view

Hanbyel Cho, Yooshin Cho, Jaesung Ahn, Junmo Kim

2023-06-30 · CVPR 2023
Tasks: 3D Human Pose Estimation · Self-Supervised Learning · Human Mesh Recovery
Paper · PDF

Abstract

From an image of a person, we can easily infer the person's natural 3D pose and shape even when ambiguity exists. This is because we have a mental model that lets us imagine the person's appearance from different viewing directions and use the consistency between those views for inference. Existing human mesh recovery methods, however, consider only the direction from which the image was taken, due to their structural limitations. Hence, we propose "Implicit 3D Human Mesh Recovery (ImpHMR)", which can implicitly imagine a person in 3D space at the feature level via Neural Feature Fields. In ImpHMR, a feature field is generated by a CNN-based image encoder for a given image. A 2D feature map is then volume-rendered from the feature field for a given viewing direction, and the pose and shape parameters are regressed from that feature. To exploit consistency with the pose and shape from unseen views, when 3D labels are available the model predicts results, including the silhouette, from an arbitrary direction and constrains them to match the rotated ground truth. When only 2D labels are available, we perform self-supervised learning with the constraint that the pose and shape parameters inferred from different directions should be the same. Extensive evaluations show the efficacy of the proposed method.
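For the 2D-label case described above, the constraint is that pose and shape parameters regressed from different rendering directions of the same image must agree. The idea can be sketched as follows; the function name and the mean-deviation form of the penalty are illustrative assumptions, not the paper's exact loss formulation:

```python
import numpy as np

def view_consistency_loss(params_per_view):
    """Illustrative self-supervised consistency penalty (not the
    paper's exact loss): parameter vectors regressed from several
    viewing directions of one image should be identical, so we
    penalize squared deviation from their mean.

    params_per_view: (V, D) array holding one D-dimensional
    pose/shape parameter vector per rendered viewing direction V.
    """
    params = np.asarray(params_per_view, dtype=float)
    mean = params.mean(axis=0, keepdims=True)   # consensus prediction
    return float(((params - mean) ** 2).mean()) # 0 iff all views agree
```

The loss is zero exactly when every viewing direction yields the same parameters, which is the supervision signal available without 3D labels.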

Results

Task                        Dataset   Metric     Value   Model
3D Human Pose Estimation    3DPW      MPJPE      74.3    ImpHMR
3D Human Pose Estimation    3DPW      MPVPE      87.1    ImpHMR
3D Human Pose Estimation    3DPW      PA-MPJPE   45.4    ImpHMR
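The table reports MPJPE, MPVPE, and PA-MPJPE on 3DPW (conventionally in millimeters). As a reference, here is a minimal NumPy sketch of how MPJPE and PA-MPJPE are typically computed from (N, 3) joint arrays; MPVPE is the same distance averaged over mesh vertices instead of joints. This is a generic sketch of the standard metrics, not code from the paper:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance
    between predicted and ground-truth joints (same unit as input)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: align pred to gt with the optimal
    similarity transform (scale, rotation, translation via SVD),
    then compute MPJPE. Removes global orientation/scale error."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_p, gt - mu_g            # centered point sets
    U, S, Vt = np.linalg.svd(X.T @ Y)        # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflection
    S[-1] *= d
    U[:, -1] *= d
    R = U @ Vt                               # optimal rotation
    scale = S.sum() / (X ** 2).sum()         # optimal scale
    aligned = scale * X @ R + mu_g
    return mpjpe(aligned, gt)
```

PA-MPJPE is always at most MPJPE, since the alignment can only reduce the error; the gap between the two (74.3 vs. 45.4 here) reflects how much of the error is global pose rather than articulated pose.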

Related Papers

A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder (2025-07-14)
Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
World4Drive: End-to-End Autonomous Driving via Intention-aware Physical Latent World Model (2025-07-01)
ShapeEmbed: a self-supervised learning framework for 2D contour quantification (2025-07-01)
RetFiner: A Vision-Language Refinement Scheme for Retinal Foundation Models (2025-06-27)
Boosting Generative Adversarial Transferability with Self-supervised Vision Transformer Features (2025-06-26)
Hybrid Deep Learning and Signal Processing for Arabic Dialect Recognition in Low-Resource Settings (2025-06-26)