
Facial Landmark Points Detection Using Knowledge Distillation-Based Neural Networks

Ali Pourramezan Fard, Mohammad H. Mahoor

2021-11-13 · Face Alignment · Facial Landmark Detection · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

Facial landmark detection is a vital step for numerous facial image analysis applications. Although some deep learning-based methods have achieved good performance on this task, they are often unsuitable for running on mobile devices. Such methods rely on networks with many parameters, which makes both training and inference time-consuming. Training lightweight neural networks such as MobileNets is often challenging, and the resulting models may have low accuracy. Inspired by knowledge distillation (KD), this paper presents a novel loss function for training a lightweight Student network (e.g., MobileNetV2) for facial landmark detection. We use two Teacher networks, a Tolerant-Teacher and a Tough-Teacher, in conjunction with the Student network. The Tolerant-Teacher is trained using Soft-landmarks created by active shape models, while the Tough-Teacher is trained using the ground-truth landmark points (aka Hard-landmarks). To utilize the facial landmark points predicted by the Teacher networks, we define an Assistive Loss (ALoss) for each Teacher network. Moreover, we define a loss function called KD-Loss that utilizes the facial landmark points predicted by the two pre-trained Teacher networks (EfficientNet-b3) to guide the lightweight Student network towards predicting the Hard-landmarks. Our experimental results on three challenging facial datasets show that the proposed architecture results in a better-trained Student network that can extract facial landmark points with high accuracy.
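To make the two-Teacher setup concrete, here is a minimal PyTorch sketch of the idea the abstract describes: a supervised term toward the Hard-landmarks plus one Assistive Loss per pre-trained Teacher. The exact form of ALoss and KD-Loss, and their weighting, are defined in the paper; the smooth-L1 terms and the weights `w_hard`, `w_tough`, `w_tolerant` below are illustrative assumptions, not the authors' formulation.

```python
# Hedged sketch of two-Teacher knowledge distillation for landmark regression.
import torch
import torch.nn.functional as F

def assistive_loss(student_pts, teacher_pts):
    """ALoss sketch: penalize the Student for deviating from a Teacher's
    predicted landmarks (assumed here to be a smooth-L1 distance)."""
    return F.smooth_l1_loss(student_pts, teacher_pts)

def kd_loss(student_pts, hard_pts, tough_pts, tolerant_pts,
            w_hard=1.0, w_tough=0.5, w_tolerant=0.5):
    """KD-Loss sketch: a ground-truth (Hard-landmark) term plus one
    Assistive Loss per Teacher. The weights are hypothetical."""
    main = F.smooth_l1_loss(student_pts, hard_pts)          # supervised term
    a_tough = assistive_loss(student_pts, tough_pts)        # Tough-Teacher guidance
    a_tolerant = assistive_loss(student_pts, tolerant_pts)  # Tolerant-Teacher guidance
    return w_hard * main + w_tough * a_tough + w_tolerant * a_tolerant

# Usage with random stand-ins for a batch of 68-point faces (x, y per point):
student = torch.randn(8, 68 * 2, requires_grad=True)  # Student (e.g., MobileNetV2) output
hard = torch.randn(8, 68 * 2)      # ground-truth Hard-landmarks
tough = torch.randn(8, 68 * 2)     # Tough-Teacher (EfficientNet-b3) predictions
tolerant = torch.randn(8, 68 * 2)  # Tolerant-Teacher predictions
loss = kd_loss(student, hard, tough, tolerant)
loss.backward()
```

Only the Student is updated during training; the two Teachers are pre-trained and serve purely as sources of guidance signals.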

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Facial Recognition and Modelling | 300W | NME_inter-ocular (%, Challenge) | 6.13 | MobileNetV2+KD-Loss
Facial Recognition and Modelling | 300W | NME_inter-ocular (%, Common) | 3.56 | MobileNetV2+KD-Loss
Facial Recognition and Modelling | 300W | NME_inter-ocular (%, Full) | 4.06 | MobileNetV2+KD-Loss
Face Reconstruction | 300W | NME_inter-ocular (%, Challenge) | 6.13 | MobileNetV2+KD-Loss
Face Reconstruction | 300W | NME_inter-ocular (%, Common) | 3.56 | MobileNetV2+KD-Loss
Face Reconstruction | 300W | NME_inter-ocular (%, Full) | 4.06 | MobileNetV2+KD-Loss
3D | 300W | NME_inter-ocular (%, Challenge) | 6.13 | MobileNetV2+KD-Loss
3D | 300W | NME_inter-ocular (%, Common) | 3.56 | MobileNetV2+KD-Loss
3D | 300W | NME_inter-ocular (%, Full) | 4.06 | MobileNetV2+KD-Loss
3D Face Modelling | 300W | NME_inter-ocular (%, Challenge) | 6.13 | MobileNetV2+KD-Loss
3D Face Modelling | 300W | NME_inter-ocular (%, Common) | 3.56 | MobileNetV2+KD-Loss
3D Face Modelling | 300W | NME_inter-ocular (%, Full) | 4.06 | MobileNetV2+KD-Loss
3D Face Reconstruction | 300W | NME_inter-ocular (%, Challenge) | 6.13 | MobileNetV2+KD-Loss
3D Face Reconstruction | 300W | NME_inter-ocular (%, Common) | 3.56 | MobileNetV2+KD-Loss
3D Face Reconstruction | 300W | NME_inter-ocular (%, Full) | 4.06 | MobileNetV2+KD-Loss
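
The metric in the table, NME_inter-ocular, is the mean per-point Euclidean error normalized by the inter-ocular distance and expressed as a percentage. A minimal NumPy sketch is below; the eye-corner indices 36 and 45 follow the common 0-based 68-point (300W) annotation, but verify them against the actual evaluation code before relying on this.

```python
# Hedged sketch of inter-ocular-normalized NME for 68-point landmarks.
import numpy as np

def nme_inter_ocular(pred, gt, left_eye=36, right_eye=45):
    """pred, gt: (N, 68, 2) arrays of predicted and ground-truth landmarks.
    Returns the NME in percent, averaged over the N images."""
    per_point = np.linalg.norm(pred - gt, axis=2)  # (N, 68) point-wise errors
    inter_ocular = np.linalg.norm(
        gt[:, left_eye] - gt[:, right_eye], axis=1)  # (N,) normalizers
    per_image = per_point.mean(axis=1) / inter_ocular
    return 100.0 * per_image.mean()
```

The "Common", "Challenge", and "Full" columns refer to the standard 300W test splits: the Common subset, the harder Challenge (iBUG) subset, and their union.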

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)
Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift (2025-07-11)
SFedKD: Sequential Federated Learning with Discrepancy-Aware Multi-Teacher Knowledge Distillation (2025-07-11)