Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Domain Adaptation on ImageNet-C

Metric: mean Corruption Error (mCE); lower is better.
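mCE is not a raw error rate: following the ImageNet-C benchmark paper ("Benchmarking Neural Network Robustness to Common Corruptions..." in the table below), a model's top-1 error on each corruption is summed over the five severity levels and normalized by AlexNet's error on the same corruption, then averaged across corruption types. A minimal sketch of that computation (function and variable names are illustrative, not from any particular codebase):

```python
def mce(model_err, alexnet_err):
    """mean Corruption Error (mCE), per Hendrycks & Dietterich (2019).

    Both arguments map corruption name -> list of top-1 error rates
    at severity levels 1..5. Errors are normalized by AlexNet's, so
    mCE = 100 means "as fragile as AlexNet"; lower is better.
    """
    ratios = [
        sum(model_err[c]) / sum(alexnet_err[c])  # CE_c for corruption c
        for c in model_err
    ]
    return 100.0 * sum(ratios) / len(ratios)

# Toy example with two corruptions (ImageNet-C uses 15):
model = {"gaussian_noise": [0.2, 0.3, 0.4, 0.5, 0.6],
         "fog":            [0.1, 0.2, 0.3, 0.4, 0.5]}
alexnet = {"gaussian_noise": [0.4, 0.6, 0.8, 1.0, 1.0],
           "fog":            [0.5, 0.6, 0.7, 0.8, 0.9]}
print(round(mce(model, alexnet), 1))  # ≈ 47.7
```

Because of the AlexNet normalization, mCE values are comparable across corruptions of very different difficulty, which is why the leaderboard can rank heterogeneous methods on a single number.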


Results

| # | Model | mCE | Augmentations | Paper | Date | Code |
|---|-------|-----|---------------|-------|------|------|
| 1 | EfficientNet-L2+RPL | 22 | Yes | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 2 | EfficientNet-L2+ENT | 23 | Yes | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 3 | DINOv2 (ViT-g/14, frozen model, linear eval) | 28.2 | Yes | DINOv2: Learning Robust Visual Features without ... | 2023-04-14 | Code |
| 4 | CAFormer-B36 (IN21K, 384) | 30.8 | Yes | MetaFormer Baselines for Vision | 2022-10-24 | Code |
| 5 | MAE+DAT (ViT-H) | 31.4 | No | Enhance the Visual Representation via Discrete A... | 2022-09-16 | Code |
| 6 | DINOv2 (ViT-L/14, frozen model, linear eval) | 31.5 | Yes | DINOv2: Learning Robust Visual Features without ... | 2023-04-14 | Code |
| 7 | CAFormer-B36 (IN21K) | 31.8 | Yes | MetaFormer Baselines for Vision | 2022-10-24 | Code |
| 8 | MAE (ViT-H) | 33.8 | No | Masked Autoencoders Are Scalable Vision Learners | 2021-11-11 | Code |
| 9 | ResNeXt101 32x8d + DeepAug + Augmix + RPL | 34.8 | No | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 10 | ConvFormer-B36 (IN21K) | 35 | Yes | MetaFormer Baselines for Vision | 2022-10-24 | Code |
| 11 | ResNeXt101 32x8d + DeepAug + Augmix + ENT | 35.5 | No | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 12 | FAN-L-Hybrid (IN-22k) | 35.8 | Yes | Understanding The Robustness in Vision Transform... | 2022-04-26 | Code |
| 13 | Pyramid Adversarial Training Improves ViT (Im21k) | 36.8 | Yes | Pyramid Adversarial Training Improves ViT Perfor... | 2021-11-30 | Code |
| 14 | ResNeXt101+DeepAug+AugMix, BatchNorm Adaptation, full adaptation | 38 | No | Improving robustness against common corruptions ... | 2020-06-30 | Code |
| 15 | VOLO-D5+HAT | 38.4 | No | Improving Vision Transformers by Revisiting High... | 2022-04-03 | Code |
| 16 | DiscreteViT (Im21k) | 38.74 | Yes | Discrete Representations Strengthen Vision Trans... | 2021-11-20 | Code |
| 17 | ConvNeXt-XL (Im21k) (augmentation overlap with ImageNet-C) | 38.8 | Yes | A ConvNet for the 2020s | 2022-01-10 | Code |
| 18 | GPaCo (ViT-L) | 39 | No | Generalized Parametric Contrastive Learning | 2022-09-26 | Code |
| 19 | ResNeXt101+DeepAug+AugMix, BatchNorm Adaptation, 8 samples | 40.7 | No | Improving robustness against common corruptions ... | 2020-06-30 | Code |
| 20 | ResNeXt101 32x8d + IG-3.5B + ENT | 40.8 | Yes | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 21 | ResNeXt101 32x8d + IG-3.5B + RPL | 40.9 | Yes | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 22 | FAN-B-Hybrid (IN-22k) | 41 | Yes | Understanding The Robustness in Vision Transform... | 2022-04-26 | Code |
| 23 | Pyramid Adversarial Training Improves ViT | 41.42 | No | Pyramid Adversarial Training Improves ViT Perfor... | 2021-11-30 | Code |
| 24 | FAN-L-Hybrid+STL | 42.1 | No | Fully Attentional Networks with Self-emerging To... | 2024-01-08 | Code |
| 25 | QualNet (ResNeXt101) | 42.5 | No | - | - | Code |
| 26 | CAFormer-B36 | 42.6 | No | MetaFormer Baselines for Vision | 2022-10-24 | Code |
| 27 | DINOv2 (ViT-B/14, frozen model, linear eval) | 42.7 | Yes | DINOv2: Learning Robust Visual Features without ... | 2023-04-14 | Code |
| 28 | FAN-L-Hybrid | 43 | No | Understanding The Robustness in Vision Transform... | 2022-04-26 | Code |
| 29 | ResNeXt101 32x8d + RPL | 43.2 | No | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 30 | ResNeXt101 32x8d + ENT | 44.3 | No | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 31 | ResNet50+DeepAug+AugMix, BatchNorm Adaptation, full adaptation | 45.4 | No | Improving robustness against common corruptions ... | 2020-06-30 | Code |
| 32 | DrViT | 46.22 | No | Discrete Representations Strengthen Vision Trans... | 2021-11-20 | Code |
| 33 | DiscreteViT | 46.22 | No | Discrete Representations Strengthen Vision Trans... | 2021-11-20 | Code |
| 34 | ConvFormer-B36 | 46.3 | No | MetaFormer Baselines for Vision | 2022-10-24 | Code |
| 35 | RVT-B* | 46.8 | No | Towards Robust Vision Transformer | 2021-05-17 | Code |
| 36 | ResNet50+DeepAug+AugMix, BatchNorm Adaptation, 8 samples | 48.4 | No | Improving robustness against common corruptions ... | 2020-06-30 | Code |
| 37 | Sequencer2D-L | 48.9 | No | Sequencer: Deep LSTM for Image Classification | 2022-05-04 | Code |
| 38 | RVT-S* | 49.4 | No | Towards Robust Vision Transformer | 2021-05-17 | Code |
| 39 | ResNet-50 (PushPull-Conv) + PRIME | 49.95 | No | PushPull-Net: Inhibition-driven ResNet robust to... | 2024-08-07 | Code |
| 40 | ResNet50 + RPL | 50.5 | No | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 41 | QualNet (ResNet-50) | 50.6 | No | - | - | Code |
| 42 | PRIME + DeepAugment (ResNet-50) | 51.3 | No | PRIME: A few primitives can boost robustness to ... | 2021-12-27 | Code |
| 43 | ResNet50 + ENT | 51.6 | No | If your data distribution shifts, use self-learn... | 2021-04-27 | Code |
| 44 | GFNet-S | 53.8 | No | Global Filter Networks for Image Classification | 2021-07-01 | Code |
| 45 | DINOv2 (ViT-S/14, frozen model, linear eval) | 54.4 | Yes | DINOv2: Learning Robust Visual Features without ... | 2023-04-14 | Code |
| 46 | PRIME with JSD (ResNet-50) | 55.5 | No | PRIME: A few primitives can boost robustness to ... | 2021-12-27 | Code |
| 47 | RVT-Ti* | 57 | No | Towards Robust Vision Transformer | 2021-05-17 | Code |
| 48 | PRIME (ResNet-50) | 57.5 | No | PRIME: A few primitives can boost robustness to ... | 2021-12-27 | Code |
| 49 | APR-SP + DeepAugment (ResNet-50) | 57.5 | No | Amplitude-Phase Recombination: Rethinking Robust... | 2021-08-19 | Code |
| 50 | DeepAugment (ResNet-50) | 60.4 | No | The Many Faces of Robustness: A Critical Analysi... | 2020-06-29 | Code |
| 51 | ResNet50 (baseline), BatchNorm Adaptation, full adaptation | 62.2 | No | Improving robustness against common corruptions ... | 2020-06-30 | Code |
| 52 | ResNet50 (baseline), BatchNorm Adaptation, 8 samples | 65 | No | Improving robustness against common corruptions ... | 2020-06-30 | Code |
| 53 | APR-SP (ResNet-50) | 65 | No | Amplitude-Phase Recombination: Rethinking Robust... | 2021-08-19 | Code |
| 54 | AugMix (ResNet-50) | 65.3 | No | AugMix: A Simple Data Processing Method to Impro... | 2019-12-05 | Code |
| 55 | Stylized ImageNet (ResNet-50) | 69.3 | Yes | ImageNet-trained CNNs are biased towards texture... | 2018-11-29 | Code |
| 56 | Group-wise Inhibition (ResNet-50) | 69.6 | No | Group-wise Inhibition based Feature Regularizati... | 2021-03-03 | Code |
| 57 | ResNet-50 | 76.7 | No | Benchmarking Neural Network Robustness to Common... | 2019-03-28 | Code |