Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data

Hatef Otroshi Shahreza, Anjith George, Sébastien Marcel

2023-08-28 · Face Recognition · Synthetic Face Recognition · Lightweight Face Recognition · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

State-of-the-art face recognition networks are often computationally expensive and cannot be used for mobile applications, and training lightweight face recognition models requires large identity-labeled datasets. Meanwhile, there are privacy and ethical concerns with collecting and using large face recognition datasets. While generating synthetic datasets for training face recognition models is an alternative, it is challenging to generate synthetic data with sufficient intra-class variation, and a considerable gap remains between the performance of models trained on real data and on synthetic data. In this paper, we propose a new framework (named SynthDistill) to train lightweight face recognition models by distilling the knowledge of a pretrained teacher face recognition model using synthetic data. We use a pretrained face generator network to generate synthetic face images and use the synthesized images to train a lightweight student network. We use synthetic face images without identity labels, which mitigates the difficulty of generating intra-class variation in synthetic datasets. Instead, we propose a novel dynamic sampling strategy over the intermediate latent space of the face generator network that introduces new variations of challenging images while continuing to explore new face images in each training batch. Results on five different face recognition datasets demonstrate the superiority of our lightweight model over models trained on previous synthetic datasets, achieving a verification accuracy of 99.52% on the LFW dataset with a lightweight network. The results also show that our proposed framework significantly reduces the gap between training with real and synthetic data. The source code for replicating the experiments is publicly released.
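
As a concrete illustration of the procedure the abstract describes, the following is a minimal, hedged PyTorch sketch: a frozen teacher supervises a lightweight student on images produced by a pretrained generator with an intermediate latent space (e.g., a StyleGAN-style mapping network), and the hardest latents are perturbed and carried into the next batch. The toy modules, embedding size, batch split, and noise scale are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 512            # assumed embedding size
W_DIM = 512              # assumed intermediate-latent size
IMG = 3 * 112 * 112      # assumed (flattened) image size

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained face generator with an intermediate latent space."""
    def __init__(self):
        super().__init__()
        self.mapping = nn.Linear(W_DIM, W_DIM)   # z -> w (mapping network)
        self.synthesis = nn.Linear(W_DIM, IMG)   # w -> flat image

    def forward(self, w):
        return self.synthesis(w).view(-1, 3, 112, 112)

gen = ToyGenerator().eval()
teacher = nn.Sequential(nn.Flatten(), nn.Linear(IMG, EMB_DIM)).eval()  # frozen teacher
student = nn.Sequential(nn.Flatten(), nn.Linear(IMG, EMB_DIM))         # lightweight student
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
w = gen.mapping(torch.randn(32, W_DIM)).detach()  # initial latent batch

for step in range(100):
    with torch.no_grad():
        imgs = gen(w)
        t_emb = F.normalize(teacher(imgs), dim=1)

    s_emb = F.normalize(student(imgs), dim=1)
    per_sample = 1.0 - (s_emb * t_emb).sum(dim=1)  # cosine distance to teacher
    loss = per_sample.mean()                       # feature-matching distillation loss

    opt.zero_grad()
    loss.backward()
    opt.step()

    # Dynamic sampling: keep perturbed latents of the hardest images and mix in
    # freshly sampled ones (the 8/24 split and 0.05 noise scale are guesses).
    with torch.no_grad():
        hard = w[per_sample.topk(8).indices]
        fresh = gen.mapping(torch.randn(24, W_DIM))
        w = torch.cat([hard + 0.05 * torch.randn_like(hard), fresh])
```

The key design point is that no identity labels are needed anywhere: the student only regresses the teacher's embeddings, and hardness is measured by the per-image distillation loss itself.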

Results

Task             | Dataset  | Metric   | Value  | Model
Face Recognition | CPLFW    | Accuracy | 0.87   | SynthDistill
Face Recognition | LFW      | Accuracy | 0.9952 | SynthDistill
Face Recognition | CALFW    | Accuracy | 0.9457 | SynthDistill
Face Recognition | AgeDB-30 | Accuracy | 0.9493 | SynthDistill
Face Recognition | CFP-FP   | Accuracy | 0.9089 | SynthDistill
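
The Accuracy values above are 1:1 face-verification accuracies on standard pair protocols (the abstract's "verification accuracy of 99.52%" corresponds to the 0.9952 LFW row). As a rough sketch of how such a number is computed, the snippet below thresholds cosine similarity between embedding pairs and reports the best achievable accuracy; the actual LFW protocol selects thresholds via 10-fold cross-validation, which is omitted here, and all names and dimensions are placeholders.

```python
import numpy as np

def verification_accuracy(emb_a, emb_b, same_identity):
    """emb_a, emb_b: (N, D) L2-normalized embeddings; same_identity: (N,) bool."""
    sims = np.sum(emb_a * emb_b, axis=1)  # cosine similarity per pair
    # Sweep candidate thresholds and keep the best accuracy.
    best = 0.0
    for t in np.unique(sims):
        pred = sims >= t
        best = max(best, float(np.mean(pred == same_identity)))
    return best

# Toy usage with random embeddings (real use would embed LFW-style image pairs):
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 512)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(100, 512)); b /= np.linalg.norm(b, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=100).astype(bool)
print(verification_accuracy(a, b, labels))
```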

Related Papers

ProxyFusion: Face Feature Aggregation Through Sparse Experts (2025-09-24)
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Non-Adaptive Adversarial Face Generation (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)