
MaPLe: Multi-modal Prompt Learning

Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan

2022-10-06 · CVPR 2023 · Prompt Engineering
Paper · PDF · Code (official)

Abstract

Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
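
The coupling described in the abstract can be sketched in a few lines: the learnable prompts live in the language branch, and each stage's vision prompts are derived from them through a linear projection, so both representation spaces are adjusted jointly. The sketch below is illustrative only (class and method names are hypothetical, not the authors' code); the dimensions follow CLIP ViT-B/16 (512-d text, 768-d vision) and the paper's defaults of 2 context tokens and prompt depth 9. See the official repository for the actual implementation.

```python
import torch
import torch.nn as nn

class CoupledPrompts(nn.Module):
    """Minimal sketch of coupled multi-modal prompting (illustrative only).

    Learnable prompts live in the language branch; each stage's vision
    prompts are produced from them by a linear coupling projection, so the
    two branches are optimized jointly rather than drifting into independent
    uni-modal solutions.
    """

    def __init__(self, n_ctx=2, text_dim=512, vision_dim=768, depth=9):
        super().__init__()
        # One set of learnable language prompts per prompted transformer stage.
        self.text_prompts = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(n_ctx, text_dim)) for _ in range(depth)]
        )
        # Stage-wise coupling functions mapping text prompts to vision prompts.
        self.couplers = nn.ModuleList(
            [nn.Linear(text_dim, vision_dim) for _ in range(depth)]
        )

    def stage(self, i):
        t = self.text_prompts[i]   # prepended to the text tokens at stage i
        v = self.couplers[i](t)    # prepended to the image patches at stage i
        return t, v
```

Because the vision prompts are a deterministic function of the language prompts, a gradient step on the downstream loss moves both branches' inputs together, which is the "mutual synergy" the abstract refers to; CLIP's own encoder weights stay frozen throughout.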

Results

Task               | Dataset                 | Metric             | Value | Model
Prompt Engineering | ImageNet-R              | Top-1 accuracy (%) | 76.98 | MaPLe
Prompt Engineering | Stanford Cars           | Harmonic mean (%)  | 73.47 | MaPLe
Prompt Engineering | Oxford 102 Flower       | Harmonic mean (%)  | 82.56 | MaPLe
Prompt Engineering | EuroSAT                 | Harmonic mean (%)  | 82.35 | MaPLe
Prompt Engineering | Oxford-IIIT Pet Dataset | Harmonic mean (%)  | 96.58 | MaPLe
Prompt Engineering | ImageNet-S              | Top-1 accuracy (%) | 49.15 | MaPLe
Prompt Engineering | DTD                     | Harmonic mean (%)  | 68.16 | MaPLe
Prompt Engineering | UCF101                  | Harmonic mean (%)  | 80.82 | MaPLe
Prompt Engineering | Food-101                | Harmonic mean (%)  | 91.38 | MaPLe
Prompt Engineering | Caltech-101             | Harmonic mean (%)  | 96.02 | MaPLe
Prompt Engineering | ImageNet                | Harmonic mean (%)  | 73.47 | MaPLe
Prompt Engineering | FGVC-Aircraft           | Harmonic mean (%)  | 36.50 | MaPLe
Prompt Engineering | SUN397                  | Harmonic mean (%)  | 79.75 | MaPLe
Prompt Engineering | ImageNet-A              | Top-1 accuracy (%) | 50.90 | MaPLe
Prompt Engineering | ImageNet V2             | Top-1 accuracy (%) | 64.07 | MaPLe
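
The "Harmonic mean" rows follow the base-to-novel generalization protocol: accuracy is measured separately on base (seen) and novel (unseen) classes, and the harmonic mean of the two is reported. A quick sanity check in Python, using the per-class ImageNet accuracies as we read them from the MaPLe paper (base 76.66%, novel 70.54%):

```python
def harmonic_mean(base: float, novel: float) -> float:
    """Harmonic mean of base- and novel-class accuracy, as reported above."""
    return 2 * base * novel / (base + novel)

# MaPLe on ImageNet: ~76.66% base accuracy, ~70.54% novel accuracy.
print(f"{harmonic_mean(76.66, 70.54):.2f}")  # -> 73.47, matching the table
```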

Related Papers

Leveraging Language Prior for Infrared Small Target Detection (2025-07-17)
Emotional Support with LLM-based Empathetic Dialogue Generation (2025-07-17)
Prompt Engineering in Segment Anything Model: Methodologies, Applications, and Emerging Challenges (2025-07-13)
AdaptaGen: Domain-Specific Image Generation through Hierarchical Semantic Optimization Framework (2025-07-08)
Helping CLIP See Both the Forest and the Trees: A Decomposition and Description Approach (2025-07-04)
State and Memory is All You Need for Robust and Reliable AI Agents (2025-06-30)
Prompt Mechanisms in Medical Imaging: A Comprehensive Survey (2025-06-28)
Fine-Tuning and Prompt Engineering of LLMs, for the Creation of Multi-Agent AI for Addressing Sustainable Protein Production Challenges (2025-06-25)