Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ICON: Implicit Clothed humans Obtained from Normals

Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black

2021-12-16 · CVPR 2022
Tasks: 3D Human Pose Estimation · 3D Human Shape Estimation · Monocular 3D Human Pose Estimation · 3D Human Reconstruction
Links: Paper · PDF · Code (official) · Code

Abstract

Current methods for learning realistic and animatable 3D clothed avatars need either posed 3D scans or 2D images with carefully controlled user poses. In contrast, our goal is to learn an avatar from only 2D images of people in unconstrained poses. Given a set of images, our method estimates a detailed 3D surface from each image and then combines these into an animatable avatar. Implicit functions are well suited to the first task, as they can capture details like hair and clothes. Current methods, however, are not robust to varied human poses and often produce 3D surfaces with broken or disembodied limbs, missing details, or non-human shapes. The problem is that these methods use global feature encoders that are sensitive to global pose. To address this, we propose ICON ("Implicit Clothed humans Obtained from Normals"), which, instead, uses local features. ICON has two main modules, both of which exploit the SMPL(-X) body model. First, ICON infers detailed clothed-human normals (front/back) conditioned on the SMPL(-X) normals. Second, a visibility-aware implicit surface regressor produces an iso-surface of a human occupancy field. Importantly, at inference time, a feedback loop alternates between refining the SMPL(-X) mesh using the inferred clothed normals and then refining the normals. Given multiple reconstructed frames of a subject in varied poses, we use SCANimate to produce an animatable avatar from them. Evaluation on the AGORA and CAPE datasets shows that ICON outperforms the state of the art in reconstruction, even with heavily limited training data. Additionally, it is much more robust to out-of-distribution samples, e.g., in-the-wild poses/images and out-of-frame cropping. ICON takes a step towards robust 3D clothed human reconstruction from in-the-wild images. This enables creating avatars directly from video with personalized and natural pose-dependent cloth deformation.
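The abstract describes an inference-time feedback loop that alternates between refining the SMPL(-X) body estimate and re-predicting the clothed-human normals. A minimal sketch of that alternation is below, with placeholder functions standing in for ICON's learned components (the names and update rules here are illustrative assumptions, not the authors' API):

```python
import numpy as np

def predict_clothed_normals(image, smpl_normals):
    # Placeholder for ICON's normal network: predicts clothed-human normals
    # conditioned on the image and the current SMPL(-X) body normals.
    # (Illustrative blend, not the real learned model.)
    return 0.5 * smpl_normals + 0.5 * image

def refine_smpl(smpl_normals, clothed_normals, step=0.5):
    # Placeholder refinement step: nudge the body estimate toward the
    # predicted clothed normals, as the feedback loop in the paper does
    # with the SMPL(-X) mesh.
    return smpl_normals + step * (clothed_normals - smpl_normals)

def icon_feedback_loop(image, smpl_normals, n_iters=3):
    """Alternate normal prediction and SMPL(-X) refinement (n_iters >= 1)."""
    clothed = None
    for _ in range(n_iters):
        clothed = predict_clothed_normals(image, smpl_normals)
        smpl_normals = refine_smpl(smpl_normals, clothed)
    return smpl_normals, clothed

# Toy example: image-derived normals and an initial (zeroed) body estimate.
image = np.ones((4, 4, 3)) * 0.8
smpl = np.zeros((4, 4, 3))
refined, normals = icon_feedback_loop(image, smpl)
```

With these toy updates the body estimate moves monotonically toward the image-derived signal; in ICON itself the loop instead couples a learned normal network with SMPL(-X) mesh optimization, but the alternating structure is the same.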

Results

Task           | Dataset      | Metric                    | Value  | Model
---------------|--------------|---------------------------|--------|-----------
Reconstruction | CustomHumans | Chamfer Distance P-to-S   | 2.256  | ICON
Reconstruction | CustomHumans | Chamfer Distance S-to-P   | 2.795  | ICON
Reconstruction | CustomHumans | Normal Consistency        | 0.791  | ICON
Reconstruction | CustomHumans | f-Score                   | 30.437 | ICON
Reconstruction | CAPE         | Chamfer (cm)              | 1.142  | ICON
Reconstruction | CAPE         | NC                        | 0.066  | ICON
Reconstruction | CAPE         | P2S (cm)                  | 1.065  | ICON
Reconstruction | 4D-DRESS     | Chamfer (cm)              | 2.473  | ICON_Inner
Reconstruction | 4D-DRESS     | IoU                       | 0.752  | ICON_Inner
Reconstruction | 4D-DRESS     | Normal Consistency        | 0.798  | ICON_Inner
Reconstruction | 4D-DRESS     | Chamfer (cm)              | 2.832  | ICON_Outer
Reconstruction | 4D-DRESS     | IoU                       | 0.756  | ICON_Outer
Reconstruction | 4D-DRESS     | Normal Consistency        | 0.762  | ICON_Outer
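The P-to-S and S-to-P Chamfer metrics in the table are directional average nearest-neighbor distances between the predicted surface (P) and the ground-truth scan (S), here approximated on sampled point clouds. A small sketch using SciPy's k-d tree (the sampling density and unit handling are assumptions; benchmarks typically report these in cm):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_p2s_s2p(pred, scan):
    """Directional Chamfer distances between two point clouds.

    pred, scan: (N, 3) and (M, 3) arrays of surface samples.
    Returns (P-to-S, S-to-P): for each direction, the mean distance from
    every point in one cloud to its nearest neighbor in the other.
    """
    d_p2s = cKDTree(scan).query(pred)[0].mean()  # prediction -> scan
    d_s2p = cKDTree(pred).query(scan)[0].mean()  # scan -> prediction
    return d_p2s, d_s2p
```

Note the asymmetry: P-to-S penalizes spurious predicted geometry far from the scan, while S-to-P penalizes scan regions the prediction fails to cover, which is why the table reports both.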

Related Papers

Systematic Comparison of Projection Methods for Monocular 3D Human Pose Estimation on Fisheye Images (2025-06-24)
ExtPose: Robust and Coherent Pose Estimation by Extending ViTs (2025-06-18)
PoseGRAF: Geometric-Reinforced Adaptive Fusion for Monocular 3D Human Pose Estimation (2025-06-17)
PF-LHM: 3D Animatable Avatar Reconstruction from Pose-free Articulated Human Images (2025-06-16)
SMPL Normal Map Is All You Need for Single-view Textured Human Reconstruction (2025-06-15)
Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation (2025-06-03)
HumanRAM: Feed-forward Human Reconstruction and Animation Model using Transformers (2025-06-03)
UPTor: Unified 3D Human Pose Dynamics and Trajectory Prediction for Human-Robot Interaction (2025-05-20)