Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PRIME: A few primitives can boost robustness to common corruptions

Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

2021-12-27 · Image Classification · Data Augmentation · Domain Generalization
Paper · PDF · Code (official)

Abstract

Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to unforeseen corruptions of their data. To fix this vulnerability, prior works have built complex data augmentation strategies, combining multiple methods to enrich the training data. However, introducing intricate design choices or heuristics makes it hard to understand which elements of these methods are indeed crucial for improving robustness. In this work, we take a step back and follow a principled approach to achieve robustness to common corruptions. We propose PRIME, a general data augmentation scheme that relies on simple yet rich families of max-entropy image transformations. PRIME outperforms the prior art in terms of corruption robustness, while its simplicity and plug-and-play nature enable combination with other methods to further boost their robustness. We analyze PRIME to shed light on the importance of the mixing strategy on synthesizing corrupted images, and to reveal the robustness-accuracy trade-offs arising in the context of common corruptions. Finally, we show that the computational efficiency of our method allows it to be easily used in both on-line and off-line data augmentation schemes.
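The abstract describes PRIME as mixing simple, max-entropy image transformations into the training images. Below is a minimal, illustrative sketch of that mixing idea in NumPy. It is not the authors' implementation: the two primitives (`color_jitter`, `random_roll`) are simplified stand-ins for PRIME's actual spectral, spatial, and color transforms, and the function names and parameter ranges are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_jitter(img, rng):
    # Simplified stand-in for PRIME's color transform: a random
    # per-channel affine map, clipped back to [0, 1].
    a = rng.uniform(0.7, 1.3, size=(1, 1, img.shape[2]))
    b = rng.uniform(-0.1, 0.1, size=(1, 1, img.shape[2]))
    return np.clip(a * img + b, 0.0, 1.0)

def random_roll(img, rng):
    # Simplified stand-in for a spatial (diffeomorphism-like) transform:
    # a small random translation with wrap-around.
    dy, dx = rng.integers(-4, 5, size=2)
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

PRIMITIVES = [color_jitter, random_roll]

def prime_like_augment(img, rng, width=3):
    # Mix `width` independently transformed copies of the image with
    # Dirichlet weights, then blend the mixture with the clean image.
    # This convex-mixing step is the strategy the paper analyzes;
    # the weight distributions here are illustrative choices.
    weights = rng.dirichlet(np.ones(width))
    mixed = np.zeros_like(img)
    for w in weights:
        transform = PRIMITIVES[rng.integers(len(PRIMITIVES))]
        mixed += w * transform(img, rng)
    m = rng.uniform(0.3, 0.7)  # blend factor between clean and mixed image
    return np.clip(m * img + (1 - m) * mixed, 0.0, 1.0)

img = rng.random((32, 32, 3))   # toy HWC image with values in [0, 1]
aug = prime_like_augment(img, rng)
```

Because each primitive is cheap and the mixing is a single weighted sum, a scheme like this can run inside the data loader (on-line) or be used to precompute an augmented dataset (off-line), which is the flexibility the abstract points to.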

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | ImageNet-R | Top-1 Error Rate | 53.7 | PRIME with JSD (ResNet-50)
Domain Adaptation | ImageNet-R | Top-1 Error Rate | 57.1 | PRIME (ResNet-50)
Domain Adaptation | ImageNet-C | Top-1 Accuracy | 59.9 | PRIME + DeepAugment (ResNet-50)
Domain Adaptation | ImageNet-C | mean Corruption Error (mCE) | 51.3 | PRIME + DeepAugment (ResNet-50)
Domain Adaptation | ImageNet-C | Top-1 Accuracy | 56.4 | PRIME with JSD (ResNet-50)
Domain Adaptation | ImageNet-C | mean Corruption Error (mCE) | 55.5 | PRIME with JSD (ResNet-50)
Domain Adaptation | ImageNet-C | Top-1 Accuracy | 55.0 | PRIME (ResNet-50)
Domain Adaptation | ImageNet-C | mean Corruption Error (mCE) | 57.5 | PRIME (ResNet-50)
Domain Generalization | ImageNet-R | Top-1 Error Rate | 53.7 | PRIME with JSD (ResNet-50)
Domain Generalization | ImageNet-R | Top-1 Error Rate | 57.1 | PRIME (ResNet-50)
Domain Generalization | ImageNet-C | Top-1 Accuracy | 59.9 | PRIME + DeepAugment (ResNet-50)
Domain Generalization | ImageNet-C | mean Corruption Error (mCE) | 51.3 | PRIME + DeepAugment (ResNet-50)
Domain Generalization | ImageNet-C | Top-1 Accuracy | 56.4 | PRIME with JSD (ResNet-50)
Domain Generalization | ImageNet-C | mean Corruption Error (mCE) | 55.5 | PRIME with JSD (ResNet-50)
Domain Generalization | ImageNet-C | Top-1 Accuracy | 55.0 | PRIME (ResNet-50)
Domain Generalization | ImageNet-C | mean Corruption Error (mCE) | 57.5 | PRIME (ResNet-50)
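The mCE values above follow the standard ImageNet-C protocol of Hendrycks & Dietterich (2019): each corruption's error is summed over severities and normalized by AlexNet's error on the same corruption, then averaged. A minimal sketch of that computation, assuming error rates are supplied per corruption and severity (the function name and array layout are our own choices):

```python
import numpy as np

def mean_corruption_error(err, alexnet_err):
    """Compute mCE following the ImageNet-C protocol.

    err, alexnet_err: arrays of shape (n_corruptions, n_severities)
    holding top-1 error rates (in %) for the evaluated model and the
    AlexNet baseline on the same corruptions and severities.
    """
    err = np.asarray(err, dtype=float)
    alexnet_err = np.asarray(alexnet_err, dtype=float)
    # CE for one corruption: error summed over severities, normalized
    # by AlexNet's summed error; mCE averages CE over all corruptions.
    ce = err.sum(axis=1) / alexnet_err.sum(axis=1)
    return 100.0 * ce.mean()
```

Because of the per-corruption normalization, lower mCE is better, and an mCE of 100 means AlexNet-level robustness; this is why the table reports both raw Top-1 Accuracy and mCE for ImageNet-C.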

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations — 2025-07-18
Adversarial attacks to image classification systems using evolutionary algorithms — 2025-07-17
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy — 2025-07-17
Federated Learning for Commercial Image Sources — 2025-07-17
MUPAX: Multidimensional Problem Agnostic eXplainable AI — 2025-07-17
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management — 2025-07-17
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images — 2025-07-17
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization — 2025-07-17