Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

CoOp

Context Optimization

General · Introduced 2021 · 30 papers
Source Paper: Learning to Prompt for Vision-Language Models

Description

CoOp, or Context Optimization, is an automated prompt engineering method that avoids manual prompt tuning by modeling context words with continuous vectors that are end-to-end learned from data. The context could be shared among all classes or designed to be class-specific. During training, we simply minimize the prediction error using the cross-entropy loss with respect to the learnable context vectors, while keeping the pre-trained parameters fixed. The gradients can be back-propagated all the way through the text encoder, distilling the rich knowledge encoded in the parameters for learning task-relevant context.
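The mechanism above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the tiny `FrozenTextEncoder` is a hypothetical stand-in for CLIP's pre-trained text encoder, and the dimensions and class setup are toy values. It shows the core idea of unified (class-shared) context: only the continuous context vectors receive gradients, which are back-propagated through the frozen encoder, and the prediction error is minimized with cross-entropy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenTextEncoder(nn.Module):
    """Toy stand-in for the pre-trained text encoder (frozen in CoOp)."""
    def __init__(self, dim=32):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_embeddings):               # (n_cls, n_tok, dim)
        return self.proj(token_embeddings).mean(dim=1) # (n_cls, dim)

class CoOpPrompt(nn.Module):
    """Unified context: M learnable vectors shared across all classes (sketch)."""
    def __init__(self, n_ctx=4, dim=32, n_cls=5):
        super().__init__()
        # The only trainable parameters: M continuous context vectors.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Fixed class-name token embeddings (one token per class in this toy).
        self.register_buffer("cls_tokens", torch.randn(n_cls, 1, dim))
        self.encoder = FrozenTextEncoder(dim)
        for p in self.encoder.parameters():
            p.requires_grad_(False)                    # keep pre-trained weights fixed

    def forward(self):
        n_cls = self.cls_tokens.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)   # share context per class
        prompts = torch.cat([ctx, self.cls_tokens], dim=1)  # [v1 ... vM, CLASS]
        return self.encoder(prompts)                        # class text features

torch.manual_seed(0)
model = CoOpPrompt()
image_feats = torch.randn(8, 32)           # pretend features from a frozen image encoder
labels = torch.randint(0, 5, (8,))

opt = torch.optim.SGD([model.ctx], lr=0.1) # optimize only the context vectors
text_feats = model()
logits = image_feats @ text_feats.t()      # similarity logits, one per class
loss = F.cross_entropy(logits, labels)     # minimize the prediction error
loss.backward()                            # gradients flow through the frozen encoder
opt.step()                                 # ...but only ctx is updated
```

For class-specific context, one would instead allocate a separate `(n_ctx, dim)` parameter per class rather than expanding a shared tensor.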

Papers Using This Method

- FA: Forced Prompt Learning of Vision-Language Models for Out-of-Distribution Detection (2025-07-06)
- MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models (2025-05-15)
- MMRL: Multi-Modal Representation Learning for Vision-Language Models (2025-03-11)
- Multi-Point Positional Insertion Tuning for Small Object Detection (2024-12-24)
- PLPP: Prompt Learning with Perplexity Is Self-Distillation for Vision-Language Models (2024-12-18)
- TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning (2024-12-11)
- TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration (2024-10-16)
- FLIER: Few-shot Language Image Models Embedded with Latent Representations (2024-10-10)
- Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models (2024-10-03)
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization (2024-07-11)
- Learning to Adapt Category Consistent Meta-Feature of CLIP for Few-Shot Classification (2024-07-08)
- IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning (2024-06-19)
- AAPL: Adding Attributes to Prompt Learning for Vision-Language Models (2024-04-25)
- Weak Distribution Detectors Lead to Stronger Generalizability of Vision-Language Prompt Tuning (2024-03-31)
- Concept-Guided Prompt Learning for Generalization in Vision-Language Models (2024-01-15)
- Text-driven Prompt Generation for Vision-Language Models in Federated Learning (2023-10-09)
- SwapPrompt: Test-Time Prompt Adaptation for Vision-Language Models (2023-09-21)
- PRE: Vision-Language Prompt Learning with Reparameterization Encoder (2023-09-14)
- Language Models as Black-Box Optimizers for Vision-Language Models (2023-09-12)
- Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment (2023-09-08)