Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Towards Robust Vision Transformer

Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, Hui Xue

2021-05-17 · CVPR 2022 · Image Classification · Domain Generalization
Paper · PDF · Code (official)

Abstract

Recent advances in Vision Transformers (ViT) and their improved variants have shown that self-attention-based networks surpass traditional Convolutional Neural Networks (CNNs) in most vision tasks. However, existing ViTs focus on standard accuracy and computational cost, leaving the intrinsic influence of architectural components on model robustness and generalization largely uninvestigated. In this work, we conduct a systematic evaluation of ViT components in terms of their impact on robustness to adversarial examples, common corruptions, and distribution shifts. We find that some components can be harmful to robustness. By using and combining robust components as the building blocks of ViTs, we propose Robust Vision Transformer (RVT), a new vision transformer with superior performance and strong robustness. We further propose two new plug-and-play techniques, position-aware attention scaling and patch-wise augmentation, to augment RVT, yielding a model we abbreviate as RVT*. Experimental results on ImageNet and six robustness benchmarks show the advanced robustness and generalization ability of RVT compared with previous ViTs and state-of-the-art CNNs. Furthermore, RVT-S* achieves the top-1 rank on multiple robustness leaderboards, including ImageNet-C and ImageNet-Sketch. The code will be available at \url{https://github.com/alibaba/easyrobust}.
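The abstract names position-aware attention scaling as one of the plug-and-play techniques but does not spell out its form here. As an illustration only, the sketch below shows one way a learnable per-position scale could modulate standard scaled dot-product attention; the exact formulation is given in the paper, and the elementwise `pos_scale` matrix here is an assumption for demonstration, not the authors' method.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_position_scaling(q, k, v, pos_scale):
    """Scaled dot-product attention where the fixed 1/sqrt(d) factor is
    supplemented by a learnable per-position-pair scale.
    NOTE: the elementwise pos_scale is an illustrative assumption, not the
    exact position-aware attention scaling defined in the RVT paper."""
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d)   # standard attention logits, shape (N, N)
    logits = logits * pos_scale       # position-aware rescaling (assumption)
    return softmax(logits) @ v        # attention-weighted values, shape (N, d)

rng = np.random.default_rng(0)
N, d = 4, 8
q, k, v = rng.normal(size=(3, N, d))
pos_scale = np.ones((N, N))           # all-ones scale recovers vanilla attention
out = attention_with_position_scaling(q, k, v, pos_scale)
```

With `pos_scale` initialized to ones the layer reduces to vanilla attention, which is the usual starting point for a plug-and-play modification: it can be dropped into a trained model without changing its behavior, then learned.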

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | ImageNet-R | Top-1 Error Rate | 51.3 | RVT-B*
Domain Adaptation | ImageNet-R | Top-1 Error Rate | 52.3 | RVT-S*
Domain Adaptation | ImageNet-R | Top-1 Error Rate | 56.1 | RVT-Ti*
Domain Adaptation | ImageNet-A | Top-1 accuracy % | 28.5 | RVT-B*
Domain Adaptation | ImageNet-A | Top-1 accuracy % | 25.7 | RVT-S*
Domain Adaptation | ImageNet-A | Top-1 accuracy % | 14.4 | RVT-Ti*
Domain Adaptation | ImageNet-C | mean Corruption Error (mCE) | 46.8 | RVT-B*
Domain Adaptation | ImageNet-C | mean Corruption Error (mCE) | 49.4 | RVT-S*
Domain Adaptation | ImageNet-C | mean Corruption Error (mCE) | 57 | RVT-Ti*
Image Classification | ImageNet | GFLOPs | 17.7 | RVT-B*
Image Classification | ImageNet | GFLOPs | 4.7 | RVT-S*
Image Classification | ImageNet | GFLOPs | 1.3 | RVT-Ti*
Domain Generalization | ImageNet-R | Top-1 Error Rate | 51.3 | RVT-B*
Domain Generalization | ImageNet-R | Top-1 Error Rate | 52.3 | RVT-S*
Domain Generalization | ImageNet-R | Top-1 Error Rate | 56.1 | RVT-Ti*
Domain Generalization | ImageNet-A | Top-1 accuracy % | 28.5 | RVT-B*
Domain Generalization | ImageNet-A | Top-1 accuracy % | 25.7 | RVT-S*
Domain Generalization | ImageNet-A | Top-1 accuracy % | 14.4 | RVT-Ti*
Domain Generalization | ImageNet-C | mean Corruption Error (mCE) | 46.8 | RVT-B*
Domain Generalization | ImageNet-C | mean Corruption Error (mCE) | 49.4 | RVT-S*
Domain Generalization | ImageNet-C | mean Corruption Error (mCE) | 57 | RVT-Ti*
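The ImageNet-C rows above report mean Corruption Error (mCE), where lower is better. Per the benchmark's standard definition (Hendrycks & Dietterich, 2019), each corruption's error is summed over its five severity levels, normalized by the same sum for an AlexNet baseline, and the resulting ratios are averaged over corruptions. A minimal sketch, with hypothetical error rates (not the paper's numbers):

```python
def mean_corruption_error(model_err, baseline_err):
    """mCE: for each corruption type, sum the model's top-1 error over the
    five severities, divide by the AlexNet baseline's sum for that corruption,
    then average the ratios over all corruptions and scale to percent."""
    ratios = [
        sum(model_err[c]) / sum(baseline_err[c])
        for c in model_err
    ]
    return 100.0 * sum(ratios) / len(ratios)

# Hypothetical per-severity error rates for two corruptions (illustration only)
model = {"gaussian_noise": [0.3, 0.4, 0.5, 0.6, 0.7],
         "fog":            [0.2, 0.3, 0.4, 0.5, 0.6]}
alexnet = {"gaussian_noise": [0.6, 0.7, 0.8, 0.9, 1.0],
           "fog":            [0.5, 0.6, 0.7, 0.8, 0.9]}
mce = mean_corruption_error(model, alexnet)
```

Because of the baseline normalization, an mCE below 100 means the model is more corruption-robust than AlexNet; the full ImageNet-C benchmark averages over 15 corruption types rather than the two shown here.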

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)