Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video

Boyi Jiang, Yang Hong, Hujun Bao, Juyong Zhang

2022-01-30 · CVPR 2022 · Neural Rendering · 3D Human Reconstruction

Paper · PDF · Code (official)

Abstract

We propose SelfRecon, a clothed human body reconstruction method that combines implicit and explicit representations to recover space-time coherent geometries from a monocular self-rotating human video. Explicit methods require a predefined template mesh for a given sequence, while the template is hard to acquire for a specific subject. Meanwhile, the fixed topology limits the reconstruction accuracy and clothing types. Implicit representation supports arbitrary topology and can represent high-fidelity geometry shapes due to its continuous nature. However, it is difficult to integrate multi-frame information to produce a consistent registration sequence for downstream applications. We propose to combine the advantages of both representations. We utilize differential mask loss of the explicit mesh to obtain the coherent overall shape, while the details on the implicit surface are refined with the differentiable neural rendering. Meanwhile, the explicit mesh is updated periodically to adjust its topology changes, and a consistency loss is designed to match both representations. Compared with existing methods, SelfRecon can produce high-fidelity surfaces for arbitrary clothed humans with self-supervised optimization. Extensive experimental results demonstrate its effectiveness on real captured monocular videos. The source code is available at https://github.com/jby1993/SelfReconCode.
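The abstract's core idea is a consistency loss that ties the explicit mesh to the implicit surface: vertices of the periodically re-extracted mesh should lie on the zero level set of the implicit function. Below is a minimal, self-contained sketch of that idea in NumPy. The names (`sphere_sdf`, `consistency_loss`) and the toy analytic SDF are illustrative assumptions, not SelfRecon's actual implementation, which uses a learned neural SDF and optimizes both representations jointly.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Toy analytic signed distance function standing in for the
    learned implicit surface (hypothetical stand-in; SelfRecon
    uses a neural SDF)."""
    return np.linalg.norm(points, axis=-1) - radius

def consistency_loss(mesh_vertices, sdf_fn):
    """Penalize explicit mesh vertices that drift off the implicit
    zero level set: mean |SDF(v)| over all vertices."""
    return np.abs(sdf_fn(mesh_vertices)).mean()

# Sample vertices exactly on a unit sphere: the loss is ~0,
# since every vertex sits on the implicit surface.
theta = np.linspace(0.0, np.pi, 16)
phi = np.linspace(0.0, 2.0 * np.pi, 16)
t, p = np.meshgrid(theta, phi)
verts = np.stack([np.sin(t) * np.cos(p),
                  np.sin(t) * np.sin(p),
                  np.cos(t)], axis=-1).reshape(-1, 3)
print(consistency_loss(verts, sphere_sdf))  # ~0 for on-surface vertices
```

In the paper's setting, gradients of this loss would flow into both the mesh vertex positions and the implicit network's parameters, keeping the two representations matched as the mesh topology is periodically updated.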

Results

Task            Dataset   Metric              Value  Model
Reconstruction  4D-DRESS  Chamfer (cm)        3.014  SelfRecon_Outer
Reconstruction  4D-DRESS  IoU                 0.787  SelfRecon_Outer
Reconstruction  4D-DRESS  Normal Consistency  0.725  SelfRecon_Outer
Reconstruction  4D-DRESS  Chamfer (cm)        3.18   SelfRecon_Inner
Reconstruction  4D-DRESS  IoU                 0.754  SelfRecon_Inner
Reconstruction  4D-DRESS  Normal Consistency  0.729  SelfRecon_Inner
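For readers unfamiliar with the table's metrics, here is a minimal NumPy sketch of how each is commonly defined: symmetric Chamfer distance between point sets, volumetric IoU between occupancy grids, and mean cosine similarity between matched surface normals. These are illustrative definitions under simplifying assumptions; the 4D-DRESS benchmark's exact implementation (sampling density, units, normal matching) may differ.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and
    b (M,3): mean nearest-neighbor distance in both directions.
    Brute-force O(N*M); fine for small point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def voxel_iou(occ_a, occ_b):
    """Volumetric IoU between two boolean occupancy grids."""
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return inter / union

def normal_consistency(n_a, n_b):
    """Mean cosine similarity between row-wise matched unit
    normals n_a, n_b of shape (N,3); 1.0 means identical."""
    return (n_a * n_b).sum(axis=-1).mean()

# Sanity check: a point cloud compared against itself.
pts = np.random.RandomState(0).rand(50, 3)
print(chamfer_distance(pts, pts))  # 0.0
```

Higher IoU and Normal Consistency are better; lower Chamfer distance is better, which is how the Outer and Inner rows above should be read.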

Related Papers

HiNeuS: High-fidelity Neural Surface Mitigating Low-texture and Reflective Ambiguity (2025-06-30)
R3eVision: A Survey on Robust Rendering, Restoration, and Enhancement for 3D Low-Level Vision (2025-06-19)
Audio-Visual Driven Compression for Low-Bitrate Talking Head Videos (2025-06-16)
PF-LHM: 3D Animatable Avatar Reconstruction from Pose-free Articulated Human Images (2025-06-16)
SMPL Normal Map Is All You Need for Single-view Textured Human Reconstruction (2025-06-15)
Gaussian Herding across Pens: An Optimal Transport Perspective on Global Gaussian Reduction for 3DGS (2025-06-11)
R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation (2025-06-09)
Unifying Appearance Codes and Bilateral Grids for Driving Scene Gaussian Splatting (2025-06-05)