Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PRE: Vision-Language Prompt Learning with Reparameterization Encoder

Thi Minh Anh Pham, An Duc Nguyen, Cephas Svosve, Vasileios Argyriou, Georgios Tzimiropoulos

2023-09-14 · Prompt Engineering · Few-Shot Image Classification
Paper · PDF · Code (official)

Abstract

Large pre-trained vision-language models such as CLIP have demonstrated great potential in zero-shot transferability to downstream tasks. However, to attain optimal performance, manual selection of prompts is necessary to improve alignment between the downstream image distribution and the textual class descriptions. This manual prompt engineering is a major obstacle to deploying such models in practice, since it requires domain expertise and is extremely time-consuming. To avoid non-trivial prompt engineering, the recent work Context Optimization (CoOp) introduced the concept of prompt learning to the vision domain using learnable textual tokens. While CoOp can achieve substantial improvements over manual prompts, its learned context generalizes poorly to wider unseen classes within the same dataset. In this work, we present Prompt Learning with Reparameterization Encoder (PRE) - a simple and efficient method that enhances the generalization ability of the learnable prompt to unseen classes while maintaining the capacity to learn Base classes. Instead of directly optimizing the prompts, PRE employs a prompt encoder to reparameterize the input prompt embeddings, enhancing the exploration of task-specific knowledge from few-shot samples. Experiments and extensive ablation studies on 8 benchmarks demonstrate that our approach is an efficient method for prompt learning. Specifically, PRE achieves a notable enhancement of 5.60% in average accuracy on New classes and 3% in Harmonic mean compared to CoOp in the 16-shot setting, all within a reasonable training time.
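The core idea described above, optimizing prompts through a reparameterization encoder rather than directly, can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the encoder architecture (a token-wise residual MLP), the dimensions, and the names `reparameterize`, `W1`, `W2` are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# M learnable context tokens, d-dimensional text embeddings.
M, d, hidden = 4, 512, 128

# Learnable context vectors, as in CoOp-style prompt learning.
context = rng.normal(scale=0.02, size=(M, d))

# Reparameterization encoder: a small token-wise MLP with a residual
# connection, so the encoder refines rather than replaces the raw prompt.
W1 = rng.normal(scale=0.02, size=(d, hidden))
W2 = rng.normal(scale=0.02, size=(hidden, d))

def reparameterize(p):
    """Map raw prompt embeddings to the embeddings fed to the text encoder."""
    return p + np.maximum(p @ W1, 0.0) @ W2  # residual + ReLU MLP

prompt = reparameterize(context)
print(prompt.shape)  # (4, 512)
```

During training, gradients from the few-shot classification loss would flow through the encoder weights as well as the context vectors; at inference the reparameterized prompt is prepended to each class name before passing it to CLIP's frozen text encoder.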

Results

Task | Dataset | Metric | Value | Model
Image Classification | Caltech101 | Harmonic mean | 95.7 | PRE
Few-Shot Image Classification | Caltech101 | Harmonic mean | 95.7 | PRE

Related Papers

- Leveraging Language Prior for Infrared Small Target Detection (2025-07-17)
- Emotional Support with LLM-based Empathetic Dialogue Generation (2025-07-17)
- Prompt Engineering in Segment Anything Model: Methodologies, Applications, and Emerging Challenges (2025-07-13)
- ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation (2025-07-12)
- AdaptaGen: Domain-Specific Image Generation through Hierarchical Semantic Optimization Framework (2025-07-08)
- Helping CLIP See Both the Forest and the Trees: A Decomposition and Description Approach (2025-07-04)
- State and Memory is All You Need for Robust and Reliable AI Agents (2025-06-30)
- Prompt Mechanisms in Medical Imaging: A Comprehensive Survey (2025-06-28)