Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Understanding Gaussian Attention Bias of Vision Transformers Using Effective Receptive Fields

Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Sang Woo Kim

2023-05-08 · Image Classification · Semantic Segmentation · Object Detection · Fine-Grained Image Classification

Paper · PDF · Code (official)

Abstract

Vision transformers (ViTs) that model an image as a sequence of partitioned patches have shown notable performance in diverse vision tasks. Because partitioning patches eliminates the image structure, ViTs utilize an explicit component called positional embedding to reflect the order of patches. However, we claim that the use of positional embedding does not simply guarantee the order-awareness of ViT. To support this claim, we analyze the actual behavior of ViTs using an effective receptive field. We demonstrate that during training, ViT acquires an understanding of patch order from the positional embedding, which is trained to form a specific pattern. Based on this observation, we propose explicitly adding a Gaussian attention bias that guides the positional embedding to have the corresponding pattern from the beginning of training. We evaluated the influence of Gaussian attention bias on the performance of ViTs in several image classification, object detection, and semantic segmentation experiments. The results showed that the proposed method not only facilitates ViTs to understand images but also boosts their performance on various datasets, including ImageNet, COCO 2017, and ADE20K.
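To make the idea concrete, the sketch below builds a Gaussian bias over a 2D patch grid and adds it to the attention logits before the softmax. This is a generic Gaussian kernel over patch coordinates, not the paper's exact parameterization; the grid size, `sigma`, and the way the bias interacts with the relative positional embedding (RPE) are assumptions for illustration.

```python
import numpy as np

def gaussian_attention_bias(grid_h, grid_w, sigma=2.0):
    """Build an (N, N) attention bias, N = grid_h * grid_w patches.

    Each entry decays with the squared 2D distance between patch
    positions, nudging attention toward spatially nearby patches.
    NOTE: a generic sketch; the paper's parameterization may differ.
    """
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (N, 2)
    # Pairwise squared Euclidean distances between all patch positions.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    # Negative distances: nearby patches get a larger (less negative) bias.
    return -d2 / (2.0 * sigma ** 2)

# Toy usage: a 14x14 patch grid, as for ViT-B/16 on a 224x224 input.
bias = gaussian_attention_bias(14, 14)  # shape (196, 196)
# In a transformer layer this would enter the logits pre-softmax:
#   attn = softmax(q @ k.T / sqrt(d) + bias)
```

The bias is symmetric and zero on the diagonal, so each patch attends most strongly to itself and decays smoothly with distance, which is the locality pattern the paper observes emerging in trained positional embeddings.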

Results

Task | Dataset | Metric | Value | Model
Semantic Segmentation | ADE20K val | mIoU | 46.41 | Swin-S (RPE w/ GAB)
Object Detection | COCO test-dev | box mAP | 48.23 | Swin-S (RPE w/ GAB)
Image Classification | Stanford Cars | Accuracy | 93.743 | ViT-B/16 (RPE w/ GAB)
Image Classification | Stanford Cars | Accuracy | 83.89 | ViT-M/16 (RPE w/ GAB)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)