Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction

Zerong Zheng, Tao Yu, Yebin Liu, Qionghai Dai

Published: 2020-07-08 · Task: Lifelike 3D Human Generation
Links: Paper · PDF · Code (official)

Abstract

Modeling 3D humans accurately and robustly from a single image is very challenging, and the key to such an ill-posed problem is the 3D representation of the human models. To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function. In our PaMIR-based reconstruction framework, a novel deep neural network is proposed to regularize the free-form deep implicit function using the semantic features of the parametric model, which improves the generalization ability under the scenarios of challenging poses and various clothing topologies. Moreover, a novel depth-ambiguity-aware training loss is integrated to resolve depth ambiguities and enable successful surface detail reconstruction with an imperfect body reference. Finally, we propose a body reference optimization method to improve the parametric model estimation accuracy and to enhance the consistency between the parametric model and the implicit function. With the PaMIR representation, our framework can be easily extended to multi-image input scenarios without the need for multi-camera calibration and pose synchronization. Experimental results demonstrate that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
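The abstract's central idea — regularizing a free-form implicit function by conditioning it on features from a parametric body model — can be illustrated with a toy occupancy decoder. This is a minimal sketch, not the authors' implementation: the feature dimensions, network sizes, and the exact way body features are fused are assumptions made here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy(pixel_feat, body_feat, depth, w):
    """Toy PaMIR-style decoder: concatenate pixel-aligned image
    features, parametric-model (body) features, and the query
    point's depth, then run a 2-layer MLP with a sigmoid output.
    Conditioning on body_feat is what ties the free-form implicit
    function to the parametric model."""
    x = np.concatenate([pixel_feat, body_feat, [depth]])
    h = np.maximum(w["W1"] @ x + w["b1"], 0.0)   # ReLU hidden layer
    logit = w["W2"] @ h + w["b2"]
    return float(1.0 / (1.0 + np.exp(-logit)))   # occupancy in (0, 1)

# Hypothetical feature sizes (assumptions, not taken from the paper).
IMG_DIM, BODY_DIM, HID = 8, 4, 16
w = {
    "W1": rng.standard_normal((HID, IMG_DIM + BODY_DIM + 1)) * 0.1,
    "b1": np.zeros(HID),
    "W2": rng.standard_normal((1, HID)) * 0.1,
    "b2": np.zeros(1),
}

occ = occupancy(rng.standard_normal(IMG_DIM),
                rng.standard_normal(BODY_DIM),
                depth=0.3, w=w)
print(occ)  # a value in (0, 1); > 0.5 would mean "inside the surface"
```

In the actual method the body features come from a voxelized SMPL estimate and the image features from a CNN encoder; the sketch only shows the conditioning pattern, with the surface recovered by evaluating the decoder on a 3D grid and running marching cubes.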

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Reconstruction | CustomHumans | Chamfer Distance P-to-S | 2.181 | PaMIR |
| Reconstruction | CustomHumans | Chamfer Distance S-to-P | 2.507 | PaMIR |
| Reconstruction | CustomHumans | Normal Consistency | 0.813 | PaMIR |
| Reconstruction | CustomHumans | f-Score | 35.847 | PaMIR |
| Reconstruction | CAPE | Chamfer (cm) | 2.122 | PaMIR |
| Reconstruction | CAPE | NC | 0.088 | PaMIR |
| Reconstruction | CAPE | P2S (cm) | 1.495 | PaMIR |
| Reconstruction | 4D-DRESS | Chamfer (cm) | 2.52 | PaMIR_Inner |
| Reconstruction | 4D-DRESS | IoU | 0.706 | PaMIR_Inner |
| Reconstruction | 4D-DRESS | Normal Consistency | 0.805 | PaMIR_Inner |
| Reconstruction | 4D-DRESS | Chamfer (cm) | 2.608 | PaMIR_Outer |
| Reconstruction | 4D-DRESS | IoU | 0.715 | PaMIR_Outer |
| Reconstruction | 4D-DRESS | Normal Consistency | 0.777 | PaMIR_Outer |
| Lifelike 3D Human Generation | THuman2.0 Dataset | CLIP Similarity | 0.8861 | PaMIR |
| Lifelike 3D Human Generation | THuman2.0 Dataset | LPIPS | 0.1461 | PaMIR |
| Lifelike 3D Human Generation | THuman2.0 Dataset | PSNR | 16.6267 | PaMIR |
| Lifelike 3D Human Generation | THuman2.0 Dataset | SSIM | 0.8924 | PaMIR |

Related Papers

- Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models (2024-09-04)
- Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail (2024-03-18)
- SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction (2023-12-10)
- SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion (2023-11-27)
- PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization (2019-05-13)