Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving Visual Prompt Tuning for Self-supervised Vision Transformers

Seungryong Yoo, Eunji Kim, Dahuin Jung, Jungbeom Lee, Sungroh Yoon

2023-06-08 · Image Classification · Semantic Segmentation · Visual Prompt Tuning

Paper · PDF · Code (official)

Abstract

Visual Prompt Tuning (VPT) is an effective tuning method for adapting pretrained Vision Transformers (ViTs) to downstream tasks. It leverages extra learnable tokens, known as prompts, which steer the frozen pretrained ViTs. Although VPT has demonstrated its applicability with supervised vision transformers, it often underperforms with self-supervised ones. Through empirical observations, we deduce that the effectiveness of VPT hinges largely on the ViT blocks with which the prompt tokens interact. Specifically, VPT shows improved performance on image classification tasks for MAE and MoCo v3 when the prompt tokens are inserted into later blocks rather than the first block. These observations suggest that there exists an optimal location of blocks for the insertion of prompt tokens. Unfortunately, identifying the optimal blocks for prompts within each self-supervised ViT for diverse future scenarios is a costly process. To mitigate this problem, we propose a simple yet effective method that learns a gate for each ViT block to adjust its intervention into the prompt tokens. With our method, prompt tokens are selectively influenced by blocks that require steering for task adaptation. Our method outperforms VPT variants in FGVC and VTAB image classification and ADE20K semantic segmentation. The code is available at https://github.com/ryongithub/GatedPromptTuning.
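The core idea in the abstract — a learnable gate per ViT block that controls how much that block updates the prompt tokens — can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation (see their repository for the real code); the class name `GatedPromptBlock`, the scalar gate parameter, and the split into prompt vs. patch tokens are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class GatedPromptBlock(nn.Module):
    """Wraps one frozen ViT block with a learnable gate that scales how
    strongly the block's output updates the prompt tokens.

    A sketch of the gating idea described in the paper, not the
    official GatedPromptTuning code.
    """
    def __init__(self, block: nn.Module, num_prompts: int):
        super().__init__()
        self.block = block              # frozen pretrained ViT block
        self.num_prompts = num_prompts  # number of leading prompt tokens
        self.gate = nn.Parameter(torch.zeros(1))  # learnable gate logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_prompts + num_patches, dim), prompts first
        out = self.block(x)
        g = torch.sigmoid(self.gate)    # gate value in (0, 1)
        p = self.num_prompts
        # interpolate between the incoming prompts and the block's
        # update to them, so a gate near 0 leaves prompts untouched
        prompts = x[:, :p] + g * (out[:, :p] - x[:, :p])
        # patch tokens pass through the block unchanged by the gate
        return torch.cat([prompts, out[:, p:]], dim=1)
```

During tuning, only the gates, the prompt tokens, and the task head would be trained while the ViT blocks stay frozen; blocks whose steering is unhelpful for the task can then learn gates near zero.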

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Prompt Tuning | FGVC | Mean Accuracy | 83 | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K) |
| Visual Prompt Tuning | FGVC | Mean Accuracy | 73.39 | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k (Structured <8>) | Mean Accuracy | 49.1 | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k (Structured <8>) | Mean Accuracy | 36.8 | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k (Natural <7>) | Mean Accuracy | 74.84 | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k (Natural <7>) | Mean Accuracy | 47.61 | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k (Specialized <4>) | Mean Accuracy | 83.38 | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k (Specialized <4>) | Mean Accuracy | 76.86 | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K) |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)