Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SSD: A Unified Framework for Self-Supervised Outlier Detection

Vikash Sehwag, Mung Chiang, Prateek Mittal

2021-03-22 · ICLR 2021

Tasks: Representation Learning · Out-of-Distribution (OOD) Detection · Outlier Detection · Anomaly Detection

Links: Paper · PDF · Code (official)

Abstract

We ask the following question: what training information is required to design an effective outlier/out-of-distribution (OOD) detector, i.e., one that detects samples lying far from the training distribution? Since unlabeled data is easily accessible for many applications, the most compelling approach is to develop detectors based only on unlabeled in-distribution data. However, we observe that most existing detectors based on unlabeled data perform poorly, often no better than a random prediction. In contrast, existing state-of-the-art OOD detectors achieve impressive performance but require fine-grained data labels for supervised training. We propose SSD, an outlier detector based only on unlabeled in-distribution data. We use self-supervised representation learning followed by Mahalanobis-distance-based detection in the feature space. We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin. Additionally, SSD achieves performance on par with, and sometimes even better than, detectors based on supervised training. Finally, we expand our detection framework with two key extensions. First, we formulate few-shot OOD detection, in which the detector has access to only one to five samples from each class of the targeted OOD dataset. Second, we extend our framework to incorporate training data labels, if available. We find that our detection framework based on SSD displays enhanced performance with these extensions, and achieves state-of-the-art performance. Our code is publicly available at https://github.com/inspire-group/SSD.
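The detection recipe described in the abstract (a self-supervised encoder followed by Mahalanobis-distance scoring in feature space) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: random vectors stand in for features produced by a trained encoder, and the in-distribution statistics are fit with a single Gaussian.

```python
import numpy as np

def fit_gaussian(train_feats):
    """Estimate mean and inverse covariance of in-distribution features."""
    mu = train_feats.mean(axis=0)
    centered = train_feats - mu
    cov = centered.T @ centered / len(train_feats)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis_score(feats, mu, cov_inv):
    """Outlier score: squared Mahalanobis distance to the training distribution."""
    d = feats - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 16))            # stand-in for in-distribution features
inlier = rng.normal(size=(5, 16))             # same distribution as training data
outlier = rng.normal(loc=5.0, size=(5, 16))   # shifted, i.e., far from training dist.

mu, cov_inv = fit_gaussian(train)
print(mahalanobis_score(inlier, mu, cov_inv).mean() <
      mahalanobis_score(outlier, mu, cov_inv).mean())  # prints True
```

Samples far from the training distribution receive larger scores; thresholding this score yields the detector.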

Results

Task                          | Dataset                          | Metric  | Value | Model
Anomaly Detection             | Unlabeled CIFAR-10 vs LSUN (Fix) | ROC-AUC | 96.5  | SSD
Anomaly Detection             | Unlabeled CIFAR-10 vs CIFAR-100  | AUROC   | 89.6  | SSD
Anomaly Detection             | One-class CIFAR-10               | AUROC   | 90    | SSD
Out-of-Distribution Detection | ImageNet-1k vs Textures          | AUROC   | 85.4  | SSD
Out-of-Distribution Detection | ImageNet-1k vs Textures          | FPR95   | 57.2  | SSD
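The two metrics reported above can be computed directly from outlier scores (higher = more likely OOD). A self-contained sketch with toy scores; the helper names here are illustrative, not from the paper's codebase:

```python
import numpy as np

def auroc(in_scores, out_scores):
    """Probability that a random OOD sample scores higher than a random ID sample."""
    in_s = np.asarray(in_scores)[:, None]
    out_s = np.asarray(out_scores)[None, :]
    return (out_s > in_s).mean() + 0.5 * (out_s == in_s).mean()

def fpr_at_95_tpr(in_scores, out_scores):
    """Fraction of ID samples flagged when the threshold catches 95% of OOD samples."""
    thresh = np.quantile(out_scores, 0.05)  # ~95% of OOD scores exceed this
    return float(np.mean(np.asarray(in_scores) >= thresh))

in_s = [0.1, 0.2, 0.3, 0.4]    # toy in-distribution scores
out_s = [0.5, 0.6, 0.7, 0.8]   # toy OOD scores, perfectly separated
print(auroc(in_s, out_s))        # prints 1.0
print(fpr_at_95_tpr(in_s, out_s))  # prints 0.0
```

AUROC is threshold-free (1.0 = perfect separation, 0.5 = random), while FPR95 fixes an operating point: the false-positive rate when 95% of OOD samples are detected, so lower is better.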

Related Papers

Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems (2025-07-21)
Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
3DKeyAD: High-Resolution 3D Point Cloud Anomaly Detection via Keypoint-Guided Point Clustering (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)