Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Person-Specific Animatable Face Models from In-the-Wild Images via a Shared Base Model

Yuxiang Mao, Zhenfeng Fan, Zhijie Zhang, Zhiheng Zhang, Shihong Xia

2025-01-01 · CVPR 2025
Tasks: Face Alignment · Face Reconstruction · 3D Face Reconstruction
Paper · PDF · Code (official)

Abstract

Training a generic 3D face reconstruction model in a self-supervised manner using large-scale, in-the-wild 2D face image datasets enhances robustness to varying lighting conditions and occlusions while allowing the model to capture animatable wrinkle details across diverse facial expressions. However, a generic model often fails to adequately represent the unique characteristics of specific individuals. In this paper, we propose a method to train a generic base model and then transfer it to yield person-specific models by integrating lightweight adapters within the large-parameter ViT-MAE base model. These person-specific models excel at capturing individual facial shapes and detailed features while preserving the robustness and prior knowledge of detail variations from the base model. During training, we introduce a silhouette vertex re-projection loss to address boundary "landmark marching" issues on the 3D face caused by pose variations. Additionally, we employ an innovative teacher-student loss to leverage the inherent strengths of UNet in feature boundary localization for training our detail MAE. Quantitative and qualitative experiments demonstrate that our approach achieves state-of-the-art performance in face alignment, detail accuracy, and richness. The source code is available at https://github.com/danielmao2000/person-specific-animatable-face.
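The abstract's key mechanism, lightweight adapters inserted into a large frozen base model, can be illustrated with a minimal sketch. The following is not the authors' code; it shows the common bottleneck-adapter pattern (down-projection, nonlinearity, up-projection, residual) that such person-specific fine-tuning typically uses. All dimensions and names here are illustrative assumptions.

```python
import numpy as np

def init_adapter(d_model, d_bottleneck, rng):
    """Bottleneck adapter parameters. The up-projection is zero-initialized,
    so at the start of training the adapter is an identity mapping and the
    frozen base model's behavior is preserved."""
    return {
        "W_down": rng.standard_normal((d_model, d_bottleneck)) * 0.02,
        "W_up": np.zeros((d_bottleneck, d_model)),
    }

def adapter_forward(x, params):
    """x: (tokens, d_model) hidden states from a frozen transformer block."""
    h = np.maximum(x @ params["W_down"], 0.0)  # ReLU bottleneck
    return x + h @ params["W_up"]              # residual connection

rng = np.random.default_rng(0)
adapter = init_adapter(d_model=768, d_bottleneck=64, rng=rng)
x = rng.standard_normal((196, 768))  # e.g. one token per ViT patch
y = adapter_forward(x, adapter)
assert np.allclose(x, y)  # identity at initialization, by construction
```

Because only the small adapter matrices are trained per person, each person-specific model adds few parameters while the shared ViT-MAE base retains its robustness and detail priors.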

Related Papers

Towards Large-Scale Pose-Invariant Face Recognition Using Face Defrontalization (2025-06-04)
LAFR: Efficient Diffusion-based Blind Face Restoration via Latent Codebook Alignment Adapter (2025-05-29)
HonestFace: Towards Honest Face Restoration with One-Step Diffusion Model (2025-05-24)
TokBench: Evaluating Your Visual Tokenizer before Visual Generation (2025-05-23)
3D Face Reconstruction Error Decomposed: A Modular Benchmark for Fair and Fast Method Evaluation (2025-05-23)
Multimodal Emotion Coupling via Speech-to-Facial and Bodily Gestures in Dyadic Interaction (2025-05-08)
Pixel3DMM: Versatile Screen-Space Priors for Single-Image 3D Face Reconstruction (2025-05-01)
SocioVerse: A World Model for Social Simulation Powered by LLM Agents and A Pool of 10 Million Real-World Users (2025-04-14)