Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation

Tal Reiss, Niv Cohen, Liron Bergman, Yedid Hoshen

Published: 2020-10-12 · CVPR 2021
Tasks: Continual Learning · Multi-class Classification · Anomaly Segmentation · Anomaly Detection · Transfer Learning
Links: Paper · PDF · Code (official)

Abstract

Anomaly detection methods require high-quality features. In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pretrained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected but previously unreported result that combining pretrained features with simple anomaly detection and segmentation methods convincingly outperforms much more complex state-of-the-art methods. To obtain further performance gains in anomaly detection, we adapt pretrained features to the target distribution. Although transfer learning methods are well established in multi-class classification problems, the one-class classification (OCC) setting is not as well explored. It turns out that naive adaptation methods, which typically work well in supervised learning, often result in catastrophic collapse (feature deterioration) and reduce performance in OCC settings. A popular OCC method, DeepSVDD, advocates using specialized architectures, but this limits the adaptation performance gain. We propose two methods for combating collapse: (i) a variant of early stopping that dynamically learns the stopping iteration, and (ii) elastic regularization inspired by continual learning. Our method, PANDA, outperforms the state-of-the-art in the OCC, outlier exposure, and anomaly segmentation settings by large margins.
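The "simple anomaly detection method on pretrained features" baseline the abstract refers to can be sketched as kNN scoring: embed all normal training images with a frozen pretrained network, then score a test sample by its mean distance to its k nearest normal features. The sketch below uses synthetic NumPy arrays as stand-ins for extracted deep features (the actual backbone and k are choices from the paper's setup, not reproduced here); the function names are illustrative.

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=2):
    """Score each test sample by its mean Euclidean distance to its k
    nearest neighbors among the (normal-only) training features.
    Higher score = more anomalous."""
    # Pairwise distances: shape (n_test, n_train) via broadcasting
    dists = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Keep only the k smallest distances per test sample
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)

# Toy stand-in for pretrained features: normal samples cluster tightly
rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 0.1, size=(100, 8))
test_feats = np.vstack([
    rng.normal(0.0, 0.1, size=(5, 8)),  # normal-like test samples
    rng.normal(3.0, 0.1, size=(5, 8)),  # anomalous test samples
])
scores = knn_anomaly_scores(normal_feats, test_feats, k=2)
# The anomalous half receives markedly higher scores than the normal half
```

With real images one would replace the synthetic arrays with penultimate-layer activations of a frozen ImageNet-pretrained network; the scoring logic is unchanged.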

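The abstract's second anti-collapse method, elastic regularization inspired by continual learning, can be sketched as an EWC-style penalty: when fine-tuning the feature extractor with a DeepSVDD-style compactness objective, penalize each parameter's drift from its pretrained value, weighted by a per-parameter importance (e.g. a diagonal Fisher estimate). All names and the `lam` weight below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def compactness_loss(feats, center):
    """Mean squared distance of adapted features to a fixed center
    (the objective that, alone, can cause catastrophic collapse)."""
    return np.mean(np.sum((feats - center) ** 2, axis=1))

def elastic_penalty(theta, theta_pretrained, fisher, lam=1e-2):
    """EWC-inspired anchor: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2.
    Important parameters (large F_i) are held close to their pretrained
    values, which combats feature deterioration during adaptation."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_pretrained) ** 2)

def adaptation_loss(feats, center, theta, theta_pretrained, fisher, lam=1e-2):
    # Total objective: compact features, but elastically tied to the
    # pretrained weights
    return compactness_loss(feats, center) + elastic_penalty(
        theta, theta_pretrained, fisher, lam)
```

At the pretrained weights the penalty is zero, so the regularizer only activates as adaptation pulls parameters away from their initialization.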
Results

Task              | Dataset              | Metric  | Value | Model
------------------|----------------------|---------|-------|------------------------------------------
Anomaly Detection | Hyper-Kvasir Dataset | AUC     | 0.937 | PANDA
Anomaly Detection | Fashion-MNIST        | ROC AUC | 95.6  | PANDA
Anomaly Detection | Fashion-MNIST        | ROC AUC | 92.8  | Self-Supervised One-class SVM, RBF kernel
Anomaly Detection | Fashion-MNIST        | ROC AUC | 91.8  | PANDA-OE
Anomaly Detection | Fashion-MNIST        | ROC AUC | 84.8  | Self-Supervised DeepSVDD
Anomaly Detection | One-class CIFAR-100  | AUROC   | 97.3  | PANDA-OE
Anomaly Detection | One-class CIFAR-100  | AUROC   | 94.1  | PANDA
Anomaly Detection | One-class CIFAR-100  | AUROC   | 80.1  | Self-Supervised Multi-Head RotNet
Anomaly Detection | One-class CIFAR-100  | AUROC   | 67    | Self-Supervised DeepSVDD
Anomaly Detection | One-class CIFAR-100  | AUROC   | 62.6  | Self-Supervised One-class SVM, RBF kernel
Anomaly Detection | Cats-and-Dogs        | ROC AUC | 97.3  | PANDA
Anomaly Detection | Cats-and-Dogs        | ROC AUC | 94.5  | PANDA-OE
Anomaly Detection | Cats-and-Dogs        | ROC AUC | 51.7  | Self-Supervised One-class SVM, RBF kernel
Anomaly Detection | Cats-and-Dogs        | ROC AUC | 50.5  | Self-Supervised DeepSVDD
Anomaly Detection | One-class CIFAR-10   | AUROC   | 98.9  | PANDA-OE
Anomaly Detection | One-class CIFAR-10   | AUROC   | 96.2  | PANDA
Anomaly Detection | One-class CIFAR-10   | AUROC   | 64.8  | Self-Supervised DeepSVDD
Anomaly Detection | One-class CIFAR-10   | AUROC   | 64.7  | Self-Supervised One-class SVM, RBF kernel
Anomaly Detection | DIOR                 | ROC AUC | 95.9  | PANDA-OE
Anomaly Detection | DIOR                 | ROC AUC | 94.3  | PANDA
Anomaly Detection | DIOR                 | ROC AUC | 70.7  | Self-Supervised One-class SVM, RBF kernel
Anomaly Detection | DIOR                 | ROC AUC | 70    | Self-Supervised DeepSVDD
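The ROC AUC / AUROC values above have a simple probabilistic reading: the probability that a randomly chosen anomaly receives a higher anomaly score than a randomly chosen normal sample (ties count half). A minimal sketch of the metric, computed by direct pairwise comparison (assumes both classes are present; function name is illustrative):

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC as P(score of random anomaly > score of random normal),
    with ties counted as 0.5. labels: 1 = anomaly, 0 = normal."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]  # anomaly scores
    neg = scores[labels == 0]  # normal scores
    # Fraction of (anomaly, normal) pairs ranked correctly
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

For large evaluation sets a rank-based implementation (as in scikit-learn's `roc_auc_score`) avoids the quadratic pairwise comparison, but both give the same value.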

Related Papers

Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems (2025-07-21)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
3DKeyAD: High-Resolution 3D Point Cloud Anomaly Detection via Keypoint-Guided Point Clustering (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)
Information-Theoretic Generalization Bounds of Replay-based Continual Learning (2025-07-16)
PROL : Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)