Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Prompt Learning via Meta-Regularization

Jinyoung Park, Juyeon Ko, Hyunwoo J. Kim

2024-04-01 · CVPR 2024 · Tags: Prompt Engineering, Domain Generalization, General Knowledge
Paper · PDF · Code (official)

Abstract

Pre-trained vision-language models have shown impressive success on various computer vision tasks with their zero-shot generalizability. Recently, prompt learning approaches have been explored to efficiently and effectively adapt the vision-language models to a variety of downstream tasks. However, most existing prompt learning methods suffer from task overfitting, since the general knowledge of the pre-trained vision-language models is forgotten while the prompts are fine-tuned on a small dataset from a specific target task. To address this issue, we propose Prompt Meta-Regularization (ProMetaR) to improve the generalizability of prompt learning for vision-language models. Specifically, ProMetaR meta-learns both the regularizer and the soft prompts to harness the task-specific knowledge from the downstream tasks and the task-agnostic general knowledge from the vision-language models. Further, ProMetaR augments the task to generate multiple virtual tasks to alleviate meta-overfitting. In addition, we provide an analysis of how ProMetaR improves the generalizability of prompt tuning from the perspective of gradient alignment. Our extensive experiments demonstrate that ProMetaR improves the generalizability of conventional prompt learning methods under base-to-base/base-to-new and domain generalization settings. The code of ProMetaR is available at https://github.com/mlvlab/ProMetaR.
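The bi-level structure the abstract describes (an inner step adapts the soft prompts on a task under a regularizer, and an outer step meta-updates both the prompts and the regularizer strength from held-out performance) can be sketched on a toy quadratic problem. Everything below — the losses, shapes, learning rates, and the anchor standing in for the frozen model's general knowledge — is a hypothetical illustration of the general meta-learning pattern, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=4)                     # toy "soft prompt" parameters
reg = 1.0                                  # meta-learned regularizer strength
support_t = np.ones(4)                     # support-task target (inner loop)
query_t = 0.5 * np.ones(4)                 # held-out query target (outer loop)
lr_in, lr_out = 0.1, 0.1

for _ in range(200):
    # Inner step: adapt the prompt on the support task, penalized by a
    # regularizer pulling it toward the anchor (here, the origin).
    # adapted = p - lr_in * d/dp[ mean((p - support_t)^2) + reg * mean(p^2) ]
    adapted = p - lr_in * (0.5 * (p - support_t) + reg * 0.5 * p)

    # Outer step: differentiate the query loss through the inner update
    # (chain rule by hand for this quadratic toy problem).
    d_adapted = 0.5 * (adapted - query_t)           # dL_query / d(adapted)
    grad_p = d_adapted * (1.0 - lr_in * 0.5 * (1.0 + reg))
    grad_reg = np.sum(d_adapted * (-lr_in * 0.5 * p))
    p = p - lr_out * grad_p
    reg = max(reg - lr_out * grad_reg, 0.0)         # keep strength nonnegative

query_loss = np.mean((adapted - query_t) ** 2)
```

The key design point is that the regularizer strength receives gradients through the inner update, so it is tuned to whatever weighting of task-specific and general knowledge helps held-out performance, rather than being fixed by hand.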

Results

Task                | Dataset                  | Metric        | Value | Model
--------------------|--------------------------|---------------|-------|--------
Prompt Engineering  | Stanford Cars            | Harmonic mean | 76.72 | ProMetaR
Prompt Engineering  | Oxford 102 Flower        | Harmonic mean | 86.7  | ProMetaR
Prompt Engineering  | EuroSAT                  | Harmonic mean | 85.3  | ProMetaR
Prompt Engineering  | Oxford-IIIT Pet Dataset  | Harmonic mean | 96.49 | ProMetaR
Prompt Engineering  | DTD                      | Harmonic mean | 72.31 | ProMetaR
Prompt Engineering  | UCF101                   | Harmonic mean | 83.25 | ProMetaR
Prompt Engineering  | Food-101                 | Harmonic mean | 91.34 | ProMetaR
Prompt Engineering  | Caltech-101              | Harmonic mean | 96.16 | ProMetaR
Prompt Engineering  | ImageNet                 | Harmonic mean | 74.09 | ProMetaR
Prompt Engineering  | FGVC-Aircraft            | Harmonic mean | 40.25 | ProMetaR
Prompt Engineering  | SUN397                   | Harmonic mean | 80.82 | ProMetaR
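The "Harmonic mean" metric in the results above is the standard base-to-new generalization score: the harmonic mean of accuracy on base (seen) classes and new (unseen) classes, which rewards methods that balance the two rather than excelling at only one. A minimal sketch, with made-up accuracies that are not taken from the paper:

```python
def harmonic_mean(base_acc: float, new_acc: float) -> float:
    """Harmonic mean of base-class and new-class accuracy (in %)."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)

# Hypothetical base/new accuracies for illustration only.
print(round(harmonic_mean(82.0, 75.0), 2))  # → 78.34
```

Because the harmonic mean is dominated by the smaller of the two values, a method that overfits to base classes and collapses on new classes scores poorly even if its base accuracy is high.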

Related Papers

- Leveraging Language Prior for Infrared Small Target Detection (2025-07-17)
- Emotional Support with LLM-based Empathetic Dialogue Generation (2025-07-17)
- Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
- InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
- PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
- Prompt Engineering in Segment Anything Model: Methodologies, Applications, and Emerging Challenges (2025-07-13)