Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Searching Central Difference Convolutional Networks for Face Anti-Spoofing

Zitong Yu, Chenxu Zhao, Zezheng Wang, Yunxiao Qin, Zhuo Su, Xiaobai Li, Feng Zhou, Guoying Zhao

2020-03-09 · CVPR 2020 · Face Recognition · Face Anti-Spoofing · Neural Architecture Search

Abstract

Face anti-spoofing (FAS) plays a vital role in face recognition systems. Most state-of-the-art FAS methods 1) rely on stacked convolutions and expert-designed networks, which are weak at describing detailed fine-grained information and easily become ineffective when the environment varies (e.g., under different illumination), and 2) prefer long sequences as input to extract dynamic features, making them difficult to deploy in scenarios that need a quick response. Here we propose a novel frame-level FAS method based on Central Difference Convolution (CDC), which is able to capture intrinsic detailed patterns by aggregating both intensity and gradient information. A network built with CDC, called the Central Difference Convolutional Network (CDCN), provides more robust modeling capacity than its counterpart built with vanilla convolution. Furthermore, over a specifically designed CDC search space, Neural Architecture Search (NAS) is utilized to discover a more powerful network structure (CDCN++), which can be assembled with a Multiscale Attention Fusion Module (MAFM) to further boost performance. Comprehensive experiments on six benchmark datasets show that the proposed method 1) achieves superior performance on intra-dataset testing (notably 0.2% ACER in Protocol-1 of the OULU-NPU dataset), and 2) generalizes well on cross-dataset testing (notably 6.5% HTER from CASIA-MFSD to Replay-Attack). The code is available at https://github.com/ZitongYu/CDCN.
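The Central Difference Convolution described in the abstract combines a vanilla convolution term with a central-difference term, weighted by a hyperparameter theta (the paper uses 0.7 by default): y(p0) = θ·Σ w(pn)(x(p0+pn) − x(p0)) + (1−θ)·Σ w(pn)·x(p0+pn), which algebraically reduces to vanilla_conv(x) − θ·x(p0)·Σ w(pn). The sketch below is an illustrative single-channel NumPy reimplementation of that idea, not the authors' PyTorch code; the function names are our own.

```python
import numpy as np

def vanilla_conv2d(x, w):
    """Naive single-channel 'valid' cross-correlation (what deep-learning
    libraries call convolution)."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def central_difference_conv2d(x, w, theta=0.7):
    """CDC: theta * sum_pn w(pn)*(x(p0+pn) - x(p0))
            + (1 - theta) * sum_pn w(pn)*x(p0+pn),
    which simplifies to vanilla_conv(x) - theta * x_center * sum(w).
    theta=0 recovers vanilla convolution; theta=1 is pure central difference."""
    kh, kw = w.shape
    vanilla = vanilla_conv2d(x, w)
    # value of x at the center p0 of each receptive field
    center = x[kh // 2: kh // 2 + vanilla.shape[0],
               kw // 2: kw // 2 + vanilla.shape[1]]
    return vanilla - theta * center * w.sum()
```

As a sanity check on the formulation: with theta=1 a constant image produces an all-zero response (every central difference vanishes), while theta=0 reproduces the vanilla convolution exactly; this is what makes CDC sensitive to fine-grained local gradients that vanilla convolution averages away.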

Results

Task | Dataset | Metric | Value | Model
Depth Estimation | OULU-NPU | ACER | 6.9 | CDCN
Depth Estimation | SiW (Protocol 3) | ACER | 40.2 | CDCN++
Facial Recognition and Modelling | OULU-NPU | ACER | 6.9 | CDCN
Facial Recognition and Modelling | SiW (Protocol 3) | ACER | 40.2 | CDCN++
Visual Odometry | OULU-NPU | ACER | 6.9 | CDCN
Visual Odometry | SiW (Protocol 3) | ACER | 40.2 | CDCN++
Face Reconstruction | OULU-NPU | ACER | 6.9 | CDCN
Face Reconstruction | SiW (Protocol 3) | ACER | 40.2 | CDCN++
3D | OULU-NPU | ACER | 6.9 | CDCN
3D | SiW (Protocol 3) | ACER | 40.2 | CDCN++
3D Face Modelling | OULU-NPU | ACER | 6.9 | CDCN
3D Face Modelling | SiW (Protocol 3) | ACER | 40.2 | CDCN++
3D Face Reconstruction | OULU-NPU | ACER | 6.9 | CDCN
3D Face Reconstruction | SiW (Protocol 3) | ACER | 40.2 | CDCN++
Depth And Camera Motion | OULU-NPU | ACER | 6.9 | CDCN
Depth And Camera Motion | SiW (Protocol 3) | ACER | 40.2 | CDCN++

Related Papers

ProxyFusion: Face Feature Aggregation Through Sparse Experts (2025-09-24)
DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
Non-Adaptive Adversarial Face Generation (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
Multi-Modal Face Anti-Spoofing via Cross-Modal Feature Transitions (2025-07-08)
Face mask detection project report. (2025-07-02)
On the Burstiness of Faces in Set (2025-06-25)