Jia Guo, Shuai Lu, Weihang Zhang, Fang Chen, Hongen Liao, Huiqi Li
Recent studies have highlighted a practical setting of unsupervised anomaly detection (UAD) that builds a unified model for multi-class images. Despite various advancements addressing this challenging task, detection performance under the multi-class setting still lags far behind that of state-of-the-art class-separated models. Our research aims to bridge this substantial performance gap. In this paper, we introduce a minimalistic reconstruction-based anomaly detection framework, namely Dinomaly, which leverages pure Transformer architectures without relying on complex designs, additional modules, or specialized tricks. Given this powerful framework consisting of only Attentions and MLPs, we identify four simple components that are essential to multi-class anomaly detection: (1) Foundation Transformers that extract universal and discriminative features, (2) a Noisy Bottleneck where pre-existing Dropouts do all the noise-injection tricks, (3) Linear Attention that naturally cannot focus, and (4) Loose Reconstruction that does not force layer-to-layer and point-by-point reconstruction. Extensive experiments are conducted across popular anomaly detection benchmarks, including MVTec-AD, VisA, and Real-IAD. Our proposed Dinomaly achieves impressive image-level AUROC of 99.6%, 98.7%, and 89.3% on the three datasets respectively, which is not only superior to state-of-the-art multi-class UAD methods but also matches or exceeds the most advanced class-separated UAD records.
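To make the "Linear Attention" component concrete: unlike softmax attention, linear attention replaces the softmax with a positive kernel feature map applied to queries and keys, which both lowers complexity and yields diffuse (unfocused) attention maps. The following is a minimal numpy sketch under the common choice of feature map φ(x) = elu(x) + 1; the exact kernel and normalization used in Dinomaly may differ, so treat this as an illustration rather than the paper's implementation.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention: out_i = sum_j w_ij * v_j, where
    w_ij ∝ φ(q_i)·φ(k_j). Complexity O(N·d²) instead of O(N²·d).

    Q, K: (N, d) query/key matrices; V: (N, d_v) value matrix.
    φ(x) = elu(x) + 1 is an assumed feature map (always positive).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    # Aggregate keys against values once: (d, d_v)
    KV = Kp.T @ V
    # Per-query normalizer: φ(q_i) · Σ_j φ(k_j), shape (N, 1)
    Z = Qp @ Kp.sum(axis=0, keepdims=True).T
    return (Qp @ KV) / Z

# Usage: because the weights are positive and normalized, each output row
# is a convex combination of the value rows.
Q = np.random.randn(5, 4)
K = np.random.randn(5, 4)
V = np.ones((5, 3))
out = linear_attention(Q, K, V)  # every row equals 1.0 when V is all-ones
```

Note that because φ is strictly positive, every query attends to every key with nonzero weight, which is the sense in which linear attention "naturally cannot focus" on a narrow region the way softmax attention can.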
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Anomaly Detection | MPDD | Detection AUROC | 97.2 | Dinomaly |
| Anomaly Detection | MPDD | Segmentation AUROC | 99.1 | Dinomaly |
| Anomaly Detection | MVTec AD | Detection AUROC | 99.77 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Segmentation AP | 70.53 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Segmentation AUPRO | 95.09 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Segmentation AUROC | 98.54 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Detection AUROC | 99.6 | Dinomaly ViT-B (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Segmentation AP | 69.29 | Dinomaly ViT-B (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Segmentation AUPRO | 94.79 | Dinomaly ViT-B (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Segmentation AUROC | 98.35 | Dinomaly ViT-B (model-unified multi-class) |
| Anomaly Detection | VisA | Detection AUROC | 98.9 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | VisA | F1-Score | 96.1 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | VisA | Segmentation AUPRO | 94.8 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | VisA | Segmentation AUROC | 99.1 | Dinomaly ViT-L (model-unified multi-class) |
| Anomaly Detection | MVTec AD | Detection AUROC | 99.8 | Dinomaly-Large |
| Anomaly Detection | MVTec AD | Segmentation AUROC | 98.5 | Dinomaly-Large |
| Anomaly Detection | MVTec AD | Detection AUROC | 99.6 | Dinomaly-Base |
| Anomaly Detection | MVTec AD | Segmentation AUROC | 98.4 | Dinomaly-Base |