Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning One Class Representations for Face Presentation Attack Detection using Multi-channel Convolutional Neural Networks

Anjith George, Sebastien Marcel

2020-07-22 · Face Recognition · Face Anti-Spoofing · One-class classifier · Face Presentation Attack Detection

Paper · PDF · Code

Abstract

Face recognition has evolved into a widely used biometric modality. However, its vulnerability to presentation attacks poses a significant security threat. Though presentation attack detection (PAD) methods try to address this issue, they often fail to generalize to unseen attacks. In this work, we propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN). A novel loss function is introduced, which forces the network to learn a compact embedding for the bonafide class while keeping it far from the representations of attacks. A one-class Gaussian Mixture Model is used on top of these embeddings for the PAD task. The proposed framework introduces a novel approach to learning a robust PAD system from bonafide samples and the available (known) attack classes. This is particularly important as collecting bonafide data and simpler attacks is much easier than collecting a wide variety of expensive attacks. The proposed system is evaluated on the publicly available WMCA multi-channel face PAD database, which contains a wide variety of 2D and 3D attacks. Further, we have performed experiments with the MLFP and SiW-M datasets using RGB channels only. Superior performance in unseen attack protocols shows the effectiveness of the proposed approach. Software, data, and protocols to reproduce the results are made available publicly.
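The pipeline the abstract describes (a compact bonafide embedding shaped by a one-class loss, scored by a GMM fit on bonafide samples only) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings are synthetic toy vectors rather than MCCNN outputs, the loss below is a generic one-class contrastive formulation with a hypothetical `margin` parameter, and the decision threshold is an arbitrary illustrative operating point.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins for learned embeddings: bonafide samples form a tight
# cluster around a center; attack samples lie far from it.
center = np.zeros(8)
bonafide = rng.normal(loc=0.0, scale=0.3, size=(200, 8))
attacks = rng.normal(loc=2.0, scale=1.0, size=(50, 8))

def one_class_contrastive_loss(emb, labels, center, margin=2.0):
    """Illustrative one-class loss (assumed form, not the paper's OCCL):
    pull bonafide samples (label 1) toward the center, push known
    attacks (label 0) at least `margin` away from it."""
    d = np.linalg.norm(emb - center, axis=1)
    pos = labels * d**2                                   # compactness term
    neg = (1 - labels) * np.maximum(0.0, margin - d)**2   # repulsion term
    return float(np.mean(pos + neg))

emb = np.vstack([bonafide, attacks])
labels = np.concatenate([np.ones(200), np.zeros(50)])
loss = one_class_contrastive_loss(emb, labels, center)

# One-class GMM fit on bonafide embeddings only; the log-likelihood of a
# probe embedding is its PAD score (low likelihood => likely an attack).
gmm = GaussianMixture(n_components=1, random_state=0).fit(bonafide)
scores = gmm.score_samples(emb)

# Hypothetical operating point: accept probes scoring above the 1st
# percentile of bonafide likelihoods.
threshold = np.percentile(gmm.score_samples(bonafide), 1)
pred_bonafide = scores >= threshold
```

Note the asymmetry that makes this a one-class approach: known attacks influence the embedding through the loss during training, but the GMM density model sees only bonafide data, so unseen attack types are flagged simply by falling in low-likelihood regions.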

Results

Task | Dataset | Metric | Value | Model
Depth Estimation | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
Facial Recognition and Modelling | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
Visual Odometry | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
Face Reconstruction | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
Spoof Detection | WMCA | ACER | 0.097 | MCCNN (BCE+OCCL)-GMM
3D | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
3D Face Modelling | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
3D Face Reconstruction | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM
Depth And Camera Motion | MLFP | HTER | 3.4 | MCCNN (BCE+OCCL)-GMM

Related Papers

- ProxyFusion: Face Feature Aggregation Through Sparse Experts (2025-09-24)
- Non-Adaptive Adversarial Face Generation (2025-07-16)
- InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
- Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
- Multi-Modal Face Anti-Spoofing via Cross-Modal Feature Transitions (2025-07-08)
- Face mask detection project report. (2025-07-02)
- On the Burstiness of Faces in Set (2025-06-25)
- Identifying Physically Realizable Triggers for Backdoored Face Recognition Networks (2025-06-24)