Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LPT: Long-tailed Prompt Tuning for Image Classification

Bowen Dong, Pan Zhou, Shuicheng Yan, Wangmeng Zuo

2022-10-03 · Image Classification · Long-tail Learning · Classification
Paper · PDF · Code (official)

Abstract

For long-tailed classification, most works pretrain a large model on a large-scale dataset and then fine-tune the whole model to adapt to long-tailed data. Though promising, fine-tuning the whole pretrained model incurs high costs in computation and in deploying different models for different tasks, and weakens generalization by overfitting to certain features of the long-tailed data. To alleviate these issues, we propose an effective Long-tailed Prompt Tuning (LPT) method for long-tailed classification. LPT introduces several trainable prompts into a frozen pretrained model to adapt it to long-tailed data. For better effectiveness, we divide the prompts into two groups: 1) a shared prompt for the whole long-tailed dataset that learns general features and adapts the pretrained model to the target domain; and 2) group-specific prompts that gather group-specific features for samples with similar features and empower the pretrained model with discrimination ability. We then design a two-phase training paradigm to learn these prompts. In phase 1, we train the shared prompt via supervised prompt tuning to adapt the pretrained model to the desired long-tailed domain. In phase 2, we use the learned shared prompt as a query to select a small, best-matched set of prompts from the group-specific prompt set for a group of similar samples, mining the common features of these samples, and then optimize these prompts with a dual sampling strategy and an asymmetric GCL loss. By fine-tuning only a few prompts while keeping the pretrained model fixed, LPT reduces training and deployment cost to storing a few prompts, and retains the strong generalization ability of the pretrained model. Experiments show that on various long-tailed benchmarks, with only ~1.1% extra parameters, LPT achieves performance comparable to previous whole-model fine-tuning methods and is more robust to domain shift.
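The phase-2 selection step described above can be sketched in plain NumPy. This is a hedged illustration of the idea only, not the authors' implementation (which builds on a frozen Vision Transformer); the names `query_features` and `select_group_prompts`, the random stand-in backbone, and all dimensions are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # feature / prompt dimension (illustrative)
M = 8   # number of group-specific prompts in the bank
K = 2   # size of the best-matched prompt set per sample

# Stand-in for the frozen pretrained backbone: a fixed random projection.
W_frozen = rng.normal(size=(D, D))

# Phase 1 artifact: a single shared prompt (would be trained; here just initialized).
shared_prompt = rng.normal(size=(D,))

# Phase 2 bank: group-specific prompts, each paired with a learnable key.
prompt_keys = rng.normal(size=(M, D))
group_prompts = rng.normal(size=(M, D))

def query_features(x):
    """Frozen backbone conditioned on the shared prompt -> query vector."""
    return np.tanh(W_frozen @ (x + shared_prompt))

def select_group_prompts(x, k=K):
    """Pick the k best-matched group prompts by cosine similarity
    between the shared-prompt query and the prompt keys."""
    q = query_features(x)
    sims = (prompt_keys @ q) / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(q) + 1e-8)
    top = np.argsort(sims)[-k:]  # indices of the k most similar keys
    return top, group_prompts[top]

x = rng.normal(size=(D,))          # one input sample
idx, prompts = select_group_prompts(x)
print(idx.shape, prompts.shape)    # (2,) (2, 16)
```

In training, only the shared prompt, the prompt keys, and the group-specific prompts would receive gradients; the backbone weights stay frozen, which is what keeps the per-task storage down to a few prompt vectors.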

Results

Task | Dataset | Metric | Value | Model
Image Classification | CIFAR-100-LT (ρ=50) | Error Rate | 10 | LPT
Image Classification | CIFAR-100-LT (ρ=10) | Error Rate | 9 | LPT
Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 10.9 | LPT
Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 10.9 | LPT
Few-Shot Image Classification | CIFAR-100-LT (ρ=50) | Error Rate | 10 | LPT
Few-Shot Image Classification | CIFAR-100-LT (ρ=10) | Error Rate | 9 | LPT
Few-Shot Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 10.9 | LPT
Few-Shot Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 10.9 | LPT
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=50) | Error Rate | 10 | LPT
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=10) | Error Rate | 9 | LPT
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100) | Error Rate | 10.9 | LPT
Generalized Few-Shot Classification | CIFAR-10-LT (ρ=100) | Error Rate | 10.9 | LPT
Long-tail Learning | CIFAR-100-LT (ρ=50) | Error Rate | 10 | LPT
Long-tail Learning | CIFAR-100-LT (ρ=10) | Error Rate | 9 | LPT
Long-tail Learning | CIFAR-100-LT (ρ=100) | Error Rate | 10.9 | LPT
Long-tail Learning | CIFAR-10-LT (ρ=100) | Error Rate | 10.9 | LPT
Generalized Few-Shot Learning | CIFAR-100-LT (ρ=50) | Error Rate | 10 | LPT
Generalized Few-Shot Learning | CIFAR-100-LT (ρ=10) | Error Rate | 9 | LPT
Generalized Few-Shot Learning | CIFAR-100-LT (ρ=100) | Error Rate | 10.9 | LPT
Generalized Few-Shot Learning | CIFAR-10-LT (ρ=100) | Error Rate | 10.9 | LPT

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)