Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations

Youwei Liang, Chongjian Ge, Zhan Tong, Yibing Song, Jue Wang, Pengtao Xie

2022-02-16
Paper · PDF · Code (official)

Abstract

Vision Transformers (ViTs) take all the image patches as tokens and construct multi-head self-attention (MHSA) among them. Fully leveraging these image tokens introduces redundant computation, since not all tokens are attentive in MHSA: tokens covering semantically meaningless or distracting image backgrounds, for example, do not contribute positively to ViT predictions. In this work, we propose to reorganize image tokens during the feed-forward process of ViT models, and we integrate this reorganization into ViT training. For each forward inference, we identify the attentive image tokens between the MHSA and FFN (i.e., feed-forward network) modules, guided by the corresponding class token attention. We then reorganize the image tokens by preserving the attentive ones and fusing the inattentive ones, which expedites subsequent MHSA and FFN computation. Our method, EViT, improves ViTs from two perspectives. First, for the same number of input image tokens, it reduces MHSA and FFN computation for more efficient inference; for instance, the inference speed of DeiT-S increases by 50% while its recognition accuracy drops by only 0.3% on ImageNet classification. Second, at the same computational cost, it enables ViTs to take more image tokens as input to improve recognition accuracy, where the image tokens come from higher-resolution images; for example, we improve the recognition accuracy of DeiT-S by 1% on ImageNet classification at the same computational cost as a vanilla DeiT-S. Meanwhile, our method introduces no additional parameters to ViTs. Experiments on standard benchmarks demonstrate the effectiveness of our method. The code is available at https://github.com/youweiliang/evit
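
The reorganization step described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration rather than the official implementation (see the linked repository for that): it assumes the class-token attention has already been averaged over heads into a (B, N) tensor, and it fuses the inattentive tokens by an attention-weighted average, following the paper's description of preserving attentive tokens and fusing inattentive ones. The function name `reorganize_tokens` and the exact fusion weighting are illustrative assumptions.

```python
import torch


def reorganize_tokens(x, cls_attn, keep_rate):
    """Keep the top-k image tokens ranked by class-token attention and
    fuse the remaining, inattentive tokens into a single extra token.

    x:         (B, N+1, C) layer activations; index 0 is the [CLS] token
    cls_attn:  (B, N) attention from [CLS] to each image token, averaged
               over heads (an assumption about how the caller prepares it)
    keep_rate: fraction of image tokens to keep, e.g. 0.7 for EViT (70%)
    """
    B, n_plus_1, C = x.shape
    N = n_plus_1 - 1
    k = max(1, int(N * keep_rate))

    cls_tok, img_tok = x[:, :1], x[:, 1:]            # split [CLS] from image tokens
    _, topk_idx = cls_attn.topk(k, dim=1)            # indices of attentive tokens
    keep = torch.gather(img_tok, 1, topk_idx.unsqueeze(-1).expand(-1, -1, C))

    # Fuse the inattentive tokens into one token, weighted by their attention.
    inattentive = torch.ones(B, N, dtype=torch.bool, device=x.device)
    inattentive.scatter_(1, topk_idx, False)         # mark kept tokens False
    rest_attn = cls_attn.masked_fill(~inattentive, 0.0)
    w = rest_attn / rest_attn.sum(dim=1, keepdim=True).clamp_min(1e-6)
    fused = (img_tok * w.unsqueeze(-1)).sum(dim=1, keepdim=True)

    return torch.cat([cls_tok, keep, fused], dim=1)  # (B, k+2, C)
```

In EViT this step sits between the MHSA and FFN modules of selected transformer layers, so all subsequent layers operate on k+2 tokens instead of N+1.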

Results

Task: Image Classification · Dataset: ImageNet-1K

Backbone  | Model      | GFLOPs | Top-1 Accuracy (%)
LV-ViT-S  | EViT (70%) | 4.7    | 83.0
LV-ViT-S  | EViT (50%) | 3.9    | 82.5
DeiT-S    | EViT (90%) | 4.0    | 79.8
DeiT-S    | EViT (80%) | 3.5    | 79.8
DeiT-S    | EViT (70%) | 3.0    | 79.5
DeiT-S    | EViT (60%) | 2.6    | 78.9
DeiT-S    | EViT (50%) | 2.3    | 78.5
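
The percentage in each model name is EViT's token keep rate: the fraction of image tokens preserved at each reorganization stage. As a rough, hedged illustration of how the keep rate shrinks the token count, the sketch below assumes DeiT-S's 196 image tokens at 224x224 input and three reorganization stages with one fused token appended per stage; both are assumptions about the default configuration, not values stated on this page.

```python
import math

# Hedged sketch: a 224x224 input with 16x16 patches gives 14*14 = 196 image
# tokens for DeiT-S. We assume three reorganization stages, each keeping
# `keep_rate` of the current image tokens and appending one fused token.
for keep_rate in (0.9, 0.8, 0.7, 0.6, 0.5):
    tokens = 196
    for _ in range(3):
        tokens = math.ceil(tokens * keep_rate) + 1
    print(f"EViT ({keep_rate:.0%}): roughly {tokens} image tokens after the last stage")
```

Fewer surviving tokens in the deeper layers is what drives the GFLOPs reduction in the table above, since MHSA cost scales quadratically and FFN cost linearly with the token count.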

Related Papers

Modeling Code: Is Text All You Need? (2025-07-15)
All Eyes, no IMU: Learning Flight Attitude from Vision Alone (2025-07-15)
Is Diversity All You Need for Scalable Robotic Manipulation? (2025-07-08)
DESIGN AND IMPLEMENTATION OF ONLINE CLEARANCE REPORT. (2025-07-07)
Is Reasoning All You Need? Probing Bias in the Age of Reasoning Language Models (2025-07-03)
Prompt2SegCXR: Prompt to Segment All Organs and Diseases in Chest X-rays (2025-07-01)
State and Memory is All You Need for Robust and Reliable AI Agents (2025-06-30)
EAMamba: Efficient All-Around Vision State Space Model for Image Restoration (2025-06-27)