Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan
Visual recognition has been dominated by convolutional neural networks (CNNs) for years. Although the recently prevailing vision transformers (ViTs) have shown the great potential of self-attention-based models in ImageNet classification, their performance is still inferior to that of the latest SOTA CNNs if no extra data are provided. In this work, we try to close the performance gap and demonstrate that attention-based models are indeed able to outperform CNNs. We find that a major factor limiting the performance of ViTs for ImageNet classification is their low efficacy in encoding fine-level features into the token representations. To resolve this, we introduce a novel outlook attention and present a simple and general architecture, termed Vision Outlooker (VOLO). Unlike self-attention, which focuses on global dependency modeling at a coarse level, outlook attention efficiently encodes finer-level features and contexts into tokens, which is shown to be critically beneficial to recognition performance but largely ignored by self-attention. Experiments show that our VOLO achieves 87.1% top-1 accuracy on ImageNet-1K classification, making it the first model to exceed 87% accuracy on this competitive benchmark without using any extra training data. In addition, the pre-trained VOLO transfers well to downstream tasks such as semantic segmentation, achieving 84.3% mIoU on the Cityscapes validation set and 54.3% on the ADE20K validation set. Code is available at \url{https://github.com/sail-sg/volo}.
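To make the core idea concrete, below is a minimal, single-head PyTorch sketch of outlook attention as described in the paper: instead of computing query-key dot products, the attention weights over each token's K x K local neighborhood are generated directly by a linear layer and applied to the unfolded value features. This is a simplified illustration (one head, no stride, no multi-head reshaping), not the released implementation; all module and variable names here are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutlookAttention(nn.Module):
    """Simplified single-head outlook attention sketch.

    For each spatial location, a linear layer predicts K*K attention
    weights for every position in its K x K window; these weights are
    applied to the unfolded (im2col) value features and folded back,
    so fine local context is aggregated without query-key products.
    """

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        self.v = nn.Linear(dim, dim)                  # value projection
        self.attn = nn.Linear(dim, kernel_size ** 4)  # K*K weights per K*K window
        self.proj = nn.Linear(dim, dim)               # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) token map
        B, H, W, C = x.shape
        k = self.k
        v = self.v(x).permute(0, 3, 1, 2)             # (B, C, H, W)
        # Unfold values into one K x K window per spatial location.
        v = F.unfold(v, k, padding=k // 2)            # (B, C*k*k, H*W)
        v = v.reshape(B, C, k * k, H * W).permute(0, 3, 2, 1)  # (B, HW, k*k, C)
        # Generate attention weights directly from the center token.
        a = self.attn(x).reshape(B, H * W, k * k, k * k)
        a = a.softmax(dim=-1)
        out = a @ v                                   # (B, HW, k*k, C)
        # Fold the overlapping windows back onto the spatial grid.
        out = out.permute(0, 3, 2, 1).reshape(B, C * k * k, H * W)
        out = F.fold(out, (H, W), k, padding=k // 2)  # (B, C, H, W)
        return self.proj(out.permute(0, 2, 3, 1))     # (B, H, W, C)
```

In the full VOLO architecture, blocks built on this operation ("Outlookers") process the fine-grained, high-resolution token map in the early stages, while standard self-attention blocks handle global aggregation at a coarser resolution afterwards.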
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Domain Adaptation | VizWiz-Classification | Accuracy - All Images | 57.2 | VOLO-D5 |
| Domain Adaptation | VizWiz-Classification | Accuracy - Clean Images | 59.7 | VOLO-D5 |
| Domain Adaptation | VizWiz-Classification | Accuracy - Corrupted Images | 51.8 | VOLO-D5 |
| Semantic Segmentation | Graz-02 | Pixel Accuracy | 85 | VOLO-D5 |
| Semantic Segmentation | Cityscapes val | mIoU | 84.3 | VOLO-D4 (MS, ImageNet1k pretrain) |
| Semantic Segmentation | ADE20K | Validation mIoU | 54.3 | VOLO-D5 |
| Image Classification | ImageNet V2 | Top 1 Accuracy | 78 | VOLO-D5 |
| Image Classification | ImageNet V2 | Top 1 Accuracy | 77.8 | VOLO-D4 |
| Image Classification | VizWiz-Classification | Accuracy | 57.2 | VOLO-D5 |
| Image Classification | ImageNet | GFLOPs | 412 | VOLO-D5 |
| Image Classification | ImageNet | GFLOPs | 197 | VOLO-D4 |
| Image Classification | ImageNet | GFLOPs | 67.9 | VOLO-D3 |