Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Anomaly Detection via Reverse Distillation from One-Class Embedding

Hanqiu Deng, Xingyu Li

Published 2022-01-26 · CVPR 2022

Tasks: Anomaly Segmentation · Unsupervised Anomaly Detection · Anomaly Detection · Novelty Detection · Knowledge Distillation · Anomaly Classification

Links: Paper · PDF · Code (official)

Abstract

Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD). The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD. However, using similar or identical architectures to build the teacher and student models in previous studies hinders the diversity of anomalous representations. To tackle this problem, we propose a novel T-S model consisting of a teacher encoder and a student decoder, and introduce a simple yet effective "reverse distillation" paradigm accordingly. Instead of receiving raw images directly, the student network takes the teacher model's one-class embedding as input and aims to restore the teacher's multiscale representations. Inherently, knowledge distillation in this study proceeds from abstract, high-level representations to low-level features. In addition, we introduce a trainable one-class bottleneck embedding (OCBE) module in our T-S model. The resulting compact embedding effectively preserves essential information on normal patterns while discarding anomaly perturbations. Extensive experiments on AD and one-class novelty detection benchmarks show that our method surpasses state-of-the-art performance, demonstrating the effectiveness and generalizability of the proposed approach.
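The core test-time idea described in the abstract is that the student decoder reconstructs the teacher's multiscale feature maps well on normal regions but poorly on anomalous ones, so per-pixel teacher-student cosine distance, aggregated over scales, yields an anomaly map. A minimal numpy sketch of that scoring step (function name, feature shapes, and the nearest-neighbour upsampling are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def cosine_anomaly_map(teacher_feats, student_feats, out_size=64):
    """Anomaly map from teacher-student representation discrepancy:
    1 - cosine similarity per spatial location, averaged over scales.
    teacher_feats / student_feats: lists of (C, H, W) arrays, one per scale."""
    amap = np.zeros((out_size, out_size))
    for t, s in zip(teacher_feats, student_feats):
        # L2-normalize along the channel axis at every spatial location
        tn = t / (np.linalg.norm(t, axis=0, keepdims=True) + 1e-8)
        sn = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)
        dist = 1.0 - (tn * sn).sum(axis=0)  # (H, W); 0 where features agree
        # nearest-neighbour upsample each scale to a common resolution
        h, w = dist.shape
        ys = np.arange(out_size) * h // out_size
        xs = np.arange(out_size) * w // out_size
        amap += dist[np.ix_(ys, xs)]
    return amap / len(teacher_feats)

# Image-level detection score: e.g. the maximum of the (smoothed) map
# score = cosine_anomaly_map(t_feats, s_feats).max()
```

The same per-location cosine distance, summed over all positions and scales, also serves as the training objective that drives the student to restore the teacher's representations on normal data.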

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Anomaly Detection | Fashion-MNIST | ROC AUC | 95 | Reverse Distillation |
| Anomaly Detection | AeBAD-V | Detection AUROC | 71 | Reverse Distillation |
| Anomaly Detection | AeBAD-S | Detection AUROC | 81 | Reverse Distillation |
| Anomaly Detection | AeBAD-S | Segmentation AUPRO | 85.6 | Reverse Distillation |
| Anomaly Detection | One-class CIFAR-10 | AUROC | 86.5 | Reverse Distillation |
| Anomaly Detection | MVTec AD | Detection AUROC | 98.5 | Reverse Distillation |
| Anomaly Detection | MVTec AD | Segmentation AUPRO | 93.9 | Reverse Distillation |
| Anomaly Detection | MVTec AD | Segmentation AUROC | 97.8 | Reverse Distillation |
| Anomaly Detection | VisA | Segmentation AUPRO (until 30% FPR) | 70.9 | Reverse Distillation |
| Anomaly Detection | MVTec LOCO AD | Avg. Detection AUROC | 78.7 | RD4AD |
| Anomaly Detection | MVTec LOCO AD | Detection AUROC (only logical) | 69.4 | RD4AD |
| Anomaly Detection | MVTec LOCO AD | Detection AUROC (only structural) | 88 | RD4AD |
| Anomaly Detection | MVTec LOCO AD | Segmentation AU-sPRO (until 5% FPR) | 63.7 | RD4AD |
| Anomaly Detection | GoodsAD | AUPR | 68.2 | RD4AD |
| Anomaly Detection | GoodsAD | AUROC | 66.5 | RD4AD |
| 2D Classification | GoodsAD | AUPR | 68.2 | RD4AD |
| 2D Classification | GoodsAD | AUROC | 66.5 | RD4AD |
| Anomaly Classification | GoodsAD | AUPR | 68.2 | RD4AD |
| Anomaly Classification | GoodsAD | AUROC | 66.5 | RD4AD |
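The Detection AUROC figures above are image-level scores: AUROC equals the probability that a randomly chosen anomalous image receives a higher anomaly score than a randomly chosen normal one (the Mann-Whitney U statistic). A small self-contained numpy sketch of that computation (function name is illustrative; libraries such as scikit-learn provide an equivalent `roc_auc_score`):

```python
import numpy as np

def detection_auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.
    scores: per-image anomaly scores; labels: 1 = anomalous, 0 = normal."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    # average ranks over ties so tied scores contribute 0.5 each
    for v in np.unique(scores):
        mask = scores == v
        ranks[mask] = ranks[mask].mean()
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

Segmentation AUPRO differs: it averages per-region overlap rather than per-pixel ranking, which is why the two columns can diverge on the same dataset.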

Related Papers

- Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems (2025-07-21)
- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- 3DKeyAD: High-Resolution 3D Point Cloud Anomaly Detection via Keypoint-Guided Point Clustering (2025-07-17)
- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
- A Privacy-Preserving Framework for Advertising Personalization Incorporating Federated Learning and Differential Privacy (2025-07-16)
- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
- Bridge Feature Matching and Cross-Modal Alignment with Mutual-filtering for Zero-shot Anomaly Detection (2025-07-15)