
SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes

Xu Chen, Yufeng Zheng, Michael J. Black, Otmar Hilliges, Andreas Geiger

2021-04-08 · ICCV 2021 · 3D Human Reconstruction
Paper · PDF · Code (official)

Abstract

Neural implicit surface representations have emerged as a promising paradigm to capture 3D shapes in a continuous and resolution-independent manner. However, adapting them to articulated shapes is non-trivial. Existing approaches learn a backward warp field that maps deformed to canonical points. However, this is problematic since the backward warp field is pose dependent and thus requires large amounts of data to learn. To address this, we introduce SNARF, which combines the advantages of linear blend skinning (LBS) for polygonal meshes with those of neural implicit surfaces by learning a forward deformation field without direct supervision. This deformation field is defined in canonical, pose-independent space, allowing for generalization to unseen poses. Learning the deformation field from posed meshes alone is challenging since the correspondences of deformed points are defined implicitly and may not be unique under changes of topology. We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding. We derive analytical gradients via implicit differentiation, enabling end-to-end training from 3D meshes with bone transformations. Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy. We demonstrate our method in challenging scenarios on (clothed) 3D humans in diverse and unseen poses.
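To make the mechanism described above concrete, here is a minimal PyTorch sketch of differentiable forward skinning: a learned skinning-weight field defines a forward LBS map from canonical to deformed space, a root-finding loop inverts that map for a query point, and the implicit function theorem attaches gradients to the otherwise non-differentiable root. This is an illustrative sketch, not the authors' implementation: `weight_net`, the toy bone transforms, and the use of plain Newton iteration (the paper uses Broyden's method with multiple bone-wise initializations to recover all roots) are all assumptions made for brevity.

```python
import torch

NUM_BONES = 2

# Hypothetical stand-in for the learned skinning-weight field w(x_c):
# maps a canonical point to softmax weights over bones.
weight_net = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.Softplus(),
    torch.nn.Linear(32, NUM_BONES),
)

def lbs_forward(x_c, bones):
    """Forward linear blend skinning: x_d = (sum_i w_i(x_c) * B_i) @ x_c."""
    w = torch.softmax(weight_net(x_c), dim=-1)      # (NUM_BONES,)
    T = (w[:, None, None] * bones).sum(dim=0)       # blended 4x4 transform
    x_h = torch.cat([x_c, x_c.new_ones(1)])         # homogeneous coordinates
    return (T @ x_h)[:3]

def find_canonical(x_d, bones, x_init, iters=20):
    """Invert forward skinning: solve lbs_forward(x_c) = x_d for x_c by
    Newton iteration from a single initialization (SNARF instead runs
    Broyden's method from several bone-wise initializations)."""
    x_c = x_init.clone().detach()
    for _ in range(iters):
        f = lambda x: lbs_forward(x, bones) - x_d
        J = torch.autograd.functional.jacobian(f, x_c)           # (3, 3)
        x_c = (x_c - torch.linalg.solve(J, f(x_c))).detach()
    return x_c, J

def differentiable_canonical(x_d, bones, x_init):
    """Attach gradients to the root via the implicit function theorem:
    d x_c / d theta = -J^{-1} d LBS / d theta, realized as a first-order
    correction around the detached root."""
    x_c, J = find_canonical(x_d.detach(), bones, x_init)
    residual = lbs_forward(x_c, bones) - x_d        # carries theta-gradients
    return x_c - torch.linalg.solve(J, residual)

# Toy usage: identity bone plus a slightly translated bone, one query point.
bones = torch.eye(4).repeat(NUM_BONES, 1, 1)
bones[1, :3, 3] = torch.tensor([0.1, 0.0, 0.0])
x_d = torch.tensor([0.3, 0.2, 0.1])
x_c = differentiable_canonical(x_d, bones, x_init=x_d)
x_c.sum().backward()  # gradients reach weight_net via implicit differentiation
```

Note that the correction term in `differentiable_canonical` is zero at the exact root, so it leaves the value of x_c unchanged while its derivative realizes the implicit-differentiation gradient described in the abstract, enabling end-to-end training through the root-finding step.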

Results

Task            Dataset   Metric              Value  Model
Reconstruction  4D-DRESS  Chamfer (cm)        1.158  SNARF_Inner
Reconstruction  4D-DRESS  IoU                 0.907  SNARF_Inner
Reconstruction  4D-DRESS  Normal Consistency  0.843  SNARF_Inner
Reconstruction  4D-DRESS  Chamfer (cm)        1.248  SNARF_Outer
Reconstruction  4D-DRESS  IoU                 0.93   SNARF_Outer
Reconstruction  4D-DRESS  Normal Consistency  0.827  SNARF_Outer

Related Papers

PF-LHM: 3D Animatable Avatar Reconstruction from Pose-free Articulated Human Images (2025-06-16)
SMPL Normal Map Is All You Need for Single-view Textured Human Reconstruction (2025-06-15)
HumanRAM: Feed-forward Human Reconstruction and Animation Model using Transformers (2025-06-03)
Link to the Past: Temporal Propagation for Fast 3D Human Reconstruction from Monocular Video (2025-05-12)
DeClotH: Decomposable 3D Cloth and Human Body Reconstruction from a Single Image (2025-03-25)
CHROME: Clothed Human Reconstruction with Occlusion-Resilience and Multiview-Consistency from a Single Image (2025-03-19)
LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds (2025-03-13)
MVD-HuGaS: Human Gaussians from a Single Image via 3D Human Multi-view Diffusion Prior (2025-03-11)