Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small number of trainable parameters (less than 1% of model parameters) in the input space while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.
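The core idea of the shallow variant can be sketched in a few lines of PyTorch: learnable prompt tokens are prepended to the patch-token sequence, and only the prompts and the task head receive gradients. This is a minimal illustrative sketch, not the authors' released implementation; the class and argument names (`VPTShallow`, `encoder`, `head`) are assumptions for the example.

```python
import torch
import torch.nn as nn

class VPTShallow(nn.Module):
    """Sketch of VPT-Shallow: learnable prompt tokens are inserted into the
    input sequence of a frozen Transformer encoder. Only the prompts and the
    classification head are trainable (illustrative names, not official code)."""

    def __init__(self, encoder, embed_dim=768, num_prompts=10, num_classes=100):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # backbone stays frozen

        # The only new parameters: prompt tokens and a linear head.
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.uniform_(self.prompts, -0.5, 0.5)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):
        # tokens: (B, N, D) embeddings, with the [CLS] token at position 0.
        batch = tokens.shape[0]
        prompts = self.prompts.expand(batch, -1, -1)
        # Insert prompts after [CLS], before the patch tokens.
        x = torch.cat([tokens[:, :1], prompts, tokens[:, 1:]], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])  # classify from the [CLS] position
```

VPT-Deep follows the same recipe but introduces a fresh set of prompt tokens at the input of every Transformer layer rather than only at the first.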
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Classification | CIFAR-100-LT (ρ=50) | Error Rate | 15.2 | VPT |
| Image Classification | CIFAR-100-LT (ρ=10) | Error Rate | 10.4 | VPT |
| Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 19.0 | VPT |
| Prompt Engineering | ImageNet-21k | Accuracy | 24.8 | VPT |
| Visual Prompt Tuning | FGVC | Mean Accuracy | 83.12 | VPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | FGVC | Mean Accuracy | 79.26 | VPT-Shallow (ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | FGVC | Mean Accuracy | 72.02 | VPT-Deep (ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | FGVC | Mean Accuracy | 57.84 | VPT-Shallow (ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | Mean Accuracy | 42.38 | VPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | Mean Accuracy | 37.55 | VPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | Mean Accuracy | 27.5 | VPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | Mean Accuracy | 26.57 | VPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | Mean Accuracy | 70.27 | VPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | Mean Accuracy | 67.34 | VPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | Mean Accuracy | 39.96 | VPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | Mean Accuracy | 36.02 | VPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | Mean Accuracy | 83.04 | VPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | Mean Accuracy | 82.26 | VPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | Mean Accuracy | 69.65 | VPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | Mean Accuracy | 60.61 | VPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) |