Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Patch Slimming for Efficient Vision Transformers

Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, Dacheng Tao

2021-06-05 · CVPR 2022

Paper · PDF

Abstract

This paper studies the efficiency problem of vision transformers by excavating redundant computation in given networks. The transformer architecture has recently demonstrated its effectiveness in achieving excellent performance on a series of computer vision tasks. However, as with convolutional neural networks, the huge computational cost of vision transformers remains a severe issue. Considering that the attention mechanism aggregates different patches layer by layer, we present a novel patch slimming approach that discards useless patches in a top-down paradigm. We first identify the effective patches in the last layer and then use them to guide the patch selection of previous layers. For each layer, the impact of a patch on the final output feature is approximated, and patches with less impact are removed. Experimental results on benchmark datasets demonstrate that the proposed method can significantly reduce the computational cost of vision transformers without affecting their performance. For example, over 45% of the FLOPs of the ViT-Ti model can be reduced with only a 0.2% drop in top-1 accuracy on ImageNet.
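The top-down selection described above can be sketched in a few lines. The toy function below is a hypothetical illustration, not the paper's implementation: it uses summed attention weights as a stand-in for the paper's impact approximation, starts from the last layer, and lets each layer's kept patches score the patches of the layer before it. All names (`top_down_patch_slimming`, `attn_maps`, `keep_ratio`) are illustrative assumptions.

```python
import numpy as np

def top_down_patch_slimming(attn_maps, keep_ratio):
    """Toy sketch of top-down patch selection (illustrative, not the paper's code).

    attn_maps:  list of (num_patches, num_patches) attention matrices,
                one per layer, ordered from first layer to last.
    keep_ratio: fraction of patches to keep in each layer.

    Returns a list of kept-patch index arrays, one per layer.
    """
    num_layers = len(attn_maps)
    num_patches = attn_maps[0].shape[0]
    keep = max(1, int(round(keep_ratio * num_patches)))

    # Last layer first: keep the patches receiving the most total attention,
    # a crude proxy for their impact on the final output feature.
    last_scores = attn_maps[-1].sum(axis=0)
    kept = [np.argsort(last_scores)[::-1][:keep]]

    # Walk backwards: a patch in layer l matters to the extent that the
    # patches kept in layer l+1 attend to it.
    for l in range(num_layers - 2, -1, -1):
        scores = attn_maps[l + 1][kept[-1], :].sum(axis=0)
        kept.append(np.argsort(scores)[::-1][:keep])

    kept.reverse()  # reorder from first layer to last
    return kept
```

A real implementation would approximate each patch's impact on the final feature more carefully and prune dynamically per input; this sketch only captures the last-layer-first propagation order.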

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Classification | ImageNet-1K (with LV-ViT-S) | GFLOPs | 4.5 | DPS-LV-ViT-S |
| Image Classification | ImageNet-1K (with LV-ViT-S) | Top 1 Accuracy | 82.9 | DPS-LV-ViT-S |
| Image Classification | ImageNet-1K (with LV-ViT-S) | GFLOPs | 4.7 | PS-LV-ViT-S |
| Image Classification | ImageNet-1K (with LV-ViT-S) | Top 1 Accuracy | 82.4 | PS-LV-ViT-S |
| Image Classification | ImageNet-1K (with DeiT-S) | GFLOPs | 2.4 | DPS-ViT |
| Image Classification | ImageNet-1K (with DeiT-S) | Top 1 Accuracy | 79.5 | DPS-ViT |
| Image Classification | ImageNet-1K (with DeiT-S) | GFLOPs | 2.6 | PS-ViT |
| Image Classification | ImageNet-1K (with DeiT-S) | Top 1 Accuracy | 79.4 | PS-ViT |
| Image Classification | ImageNet-1K (with DeiT-T) | GFLOPs | 0.6 | DPS-ViT |
| Image Classification | ImageNet-1K (with DeiT-T) | Top 1 Accuracy | 72.1 | DPS-ViT |
| Image Classification | ImageNet-1K (with DeiT-T) | GFLOPs | 0.7 | PS-ViT |
| Image Classification | ImageNet-1K (with DeiT-T) | Top 1 Accuracy | 72.0 | PS-ViT |