Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks

Zhen-Hua Feng, Josef Kittler, Muhammad Awais, Patrik Huber, Xiao-Jun Wu

2017-11-17 · CVPR 2018 · Face Alignment · Data Augmentation

Abstract

We present a new loss function, namely Wing loss, for robust facial landmark localisation with Convolutional Neural Networks (CNNs). We first compare and analyse different loss functions including L2, L1 and smooth L1. The analysis of these loss functions suggests that, for the training of a CNN-based localisation model, more attention should be paid to small and medium range errors. To this end, we design a piece-wise loss function. The new loss amplifies the impact of errors from the interval (-w, w) by switching from L1 loss to a modified logarithm function. To address the problem of under-representation of samples with large out-of-plane head rotations in the training set, we propose a simple but effective boosting strategy, referred to as pose-based data balancing. In particular, we deal with the data imbalance problem by duplicating the minority training samples and perturbing them by injecting random image rotation, bounding box translation and other data augmentation approaches. Last, the proposed approach is extended to create a two-stage framework for robust facial landmark localisation. The experimental results obtained on AFLW and 300W demonstrate the merits of the Wing loss function, and prove the superiority of the proposed method over the state-of-the-art approaches.
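The piece-wise loss described above can be sketched in NumPy. This is a minimal illustration, not the authors' released code: it uses the standard formulation of the Wing loss, with the constant `C` chosen so the logarithmic and L1 branches join continuously at `|x| = w`. The defaults `w=10, epsilon=2` are the values reported in the paper; the function and argument names here are otherwise our own.

```python
import numpy as np

def wing_loss(pred, target, w=10.0, epsilon=2.0):
    """Wing loss: a scaled logarithm for small errors |x| < w,
    L1 for large errors. w=10, epsilon=2 follow the paper."""
    x = np.abs(pred - target)
    # C shifts the L1 branch so the two pieces meet at |x| = w.
    C = w - w * np.log(1.0 + w / epsilon)
    per_element = np.where(x < w,
                           w * np.log(1.0 + x / epsilon),
                           x - C)
    return per_element.mean()
```

The logarithmic branch has gradient `w / (epsilon + |x|)`, which is larger than the constant gradient of L1 for small `|x|` — this is the "amplified impact" of small and medium errors the abstract refers to.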

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Facial Recognition and Modelling | COFW | NME (inter-ocular) | 5.07 | Wing (Feng et al., 2018) |
| Facial Recognition and Modelling | AFLW-19 | AUC_box@0.07 (%, Full) | 53.5 | Wing |
| Facial Recognition and Modelling | AFLW-19 | NME_box (%, Full) | 3.56 | Wing |
| Facial Recognition and Modelling | AFLW-19 | NME_diag (%, Full) | 1.65 | Wing |
| Facial Recognition and Modelling | 300W | NME_inter-pupil (%, Challenge) | 7.18 | Wing |
| Facial Recognition and Modelling | 300W | NME_inter-pupil (%, Common) | 3.27 | Wing |
| Facial Recognition and Modelling | 300W | NME_inter-pupil (%, Full) | 4.04 | Wing |
| Facial Recognition and Modelling | WFLW | AUC@10 (inter-ocular) | 55.4 | Wing |
| Facial Recognition and Modelling | WFLW | FR@10 (inter-ocular) | 6 | Wing |
| Facial Recognition and Modelling | WFLW | NME (inter-ocular) | 5.11 | Wing |

The same ten results are repeated verbatim under the Face Reconstruction, 3D, 3D Face Modelling, and 3D Face Reconstruction task listings.

Related Papers

- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- Data Augmentation in Time Series Forecasting through Inverted Framework (2025-07-15)
- Iceberg: Enhancing HLS Modeling with Synthetic Data (2025-07-14)
- AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs) (2025-07-13)
- FreeAudio: Training-Free Timing Planning for Controllable Long-Form Text-to-Audio Generation (2025-07-11)
- DS@GT at CheckThat! 2025: Detecting Subjectivity via Transfer-Learning and Corrective Data Augmentation (2025-07-08)