Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unlocking the Potential of Reverse Distillation for Anomaly Detection

Xinyue Liu, Jianyuan Wang, Biao Leng, Shuo Zhang

2024-12-10 · Unsupervised Anomaly Detection · Anomaly Detection · Knowledge Distillation

Paper · PDF · Code (official)

Abstract

Knowledge Distillation (KD) is a promising approach for unsupervised Anomaly Detection (AD). However, the student network's over-generalization often diminishes the crucial representation differences between teacher and student in anomalous regions, leading to detection failures. To address this problem, the widely accepted Reverse Distillation (RD) paradigm designs an asymmetric teacher and student, using an encoder as the teacher and a decoder as the student. Yet, the design of RD does not ensure that the teacher encoder effectively distinguishes between normal and abnormal features, or that the student decoder generates anomaly-free features. Additionally, the absence of skip connections results in a loss of fine details during feature reconstruction. To address these issues, we propose RD with Expert, which introduces a novel Expert-Teacher-Student network for simultaneous distillation of both the teacher encoder and student decoder. The added expert network enhances the student's ability to generate normal features and optimizes the teacher's differentiation between normal and abnormal features, reducing missed detections. Additionally, Guided Information Injection is designed to filter and transfer features from teacher to student, improving detail reconstruction and minimizing false positives. Experiments on several benchmarks prove that our method outperforms existing unsupervised AD methods under the RD paradigm, fully unlocking RD's potential.
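The abstract hinges on scoring anomalies by the representation gap between teacher and student features. Below is a minimal numpy sketch of that generic KD/RD-style scoring idea — not the paper's implementation; the function names, the per-pixel cosine distance, and averaging across pyramid levels are illustrative assumptions.

```python
import numpy as np

def anomaly_map(teacher_feats, student_feats, eps=1e-8):
    """Per-location anomaly score as 1 - cosine similarity between
    paired teacher/student feature maps of shape (C, H, W).
    Hypothetical sketch of RD-style discrepancy scoring: where the
    student fails to reproduce the teacher's features, the score rises."""
    maps = []
    for t, s in zip(teacher_feats, student_feats):
        num = (t * s).sum(axis=0)                                    # (H, W) dot products
        denom = np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + eps
        maps.append(1.0 - num / denom)                               # 0 where features agree
    return np.mean(maps, axis=0)                                     # average over levels

def image_score(amap):
    # image-level detection score: the most anomalous pixel
    return float(amap.max())
```

With identical feature maps the anomaly map is near zero everywhere; channel-wise orthogonal features score 1.0, which is why over-generalization (the student imitating the teacher even on anomalies) collapses the signal.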

Results

| Task              | Dataset  | Metric                             | Value | Model |
|-------------------|----------|------------------------------------|-------|-------|
| Anomaly Detection | BTAD     | Detection AUROC                    | 93.9  | URD   |
| Anomaly Detection | BTAD     | Segmentation AP                    | 65.2  | URD   |
| Anomaly Detection | BTAD     | Segmentation AUPRO                 | 78.5  | URD   |
| Anomaly Detection | BTAD     | Segmentation AUROC                 | 98.1  | URD   |
| Anomaly Detection | MVTec AD | Detection AUROC                    | 99.2  | URD   |
| Anomaly Detection | MVTec AD | Segmentation AP                    | 72.4  | URD   |
| Anomaly Detection | MVTec AD | Segmentation AUPRO                 | 96.3  | URD   |
| Anomaly Detection | MVTec AD | Segmentation AUROC                 | 99.0  | URD   |
| Anomaly Detection | VisA     | Detection AUROC                    | 96.5  | URD   |
| Anomaly Detection | VisA     | Segmentation AUPRO (until 30% FPR) | 95.1  | URD   |
| Anomaly Detection | VisA     | Segmentation AUROC                 | 99.1  | URD   |
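The detection numbers above are AUROC over image-level anomaly scores. As a reminder of what that metric measures, here is a small self-contained sketch computing AUROC via the Mann-Whitney pairwise statistic (equivalent to the rank formulation); the function name and pairwise approach are illustrative, not tied to the paper's evaluation code.

```python
import numpy as np

def auroc(labels, scores):
    """AUROC = fraction of (anomalous, normal) pairs where the anomalous
    sample receives the higher score; ties count half. O(n^2) pairwise
    version, fine for illustration on small sample sets."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]           # anomalous vs. normal scores
    greater = (pos[:, None] > neg[None, :]).sum()        # correctly ranked pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A score of 99.2 on MVTec AD therefore means that a randomly chosen anomalous image outranks a randomly chosen normal one about 99.2% of the time.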

Related Papers

Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems (2025-07-21)
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
3DKeyAD: High-Resolution 3D Point Cloud Anomaly Detection via Keypoint-Guided Point Clustering (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
A Privacy-Preserving Framework for Advertising Personalization Incorporating Federated Learning and Differential Privacy (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Bridge Feature Matching and Cross-Modal Alignment with Mutual-filtering for Zero-shot Anomaly Detection (2025-07-15)