Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval

Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister

Published: 2023-02-06 · CVPR 2023
Tasks: Composed Image Retrieval (CoIR), Attribute, Retrieval, Zero-Shot Composed Image Retrieval (ZS-CIR), Zero-shot Image Retrieval, Image Retrieval
Paper · PDF · Code (official)

Abstract

In Composed Image Retrieval (CIR), a user combines a query image with text to describe their intended target. Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, text specification, and the target image. Labeling such triplets is expensive and hinders the broad applicability of CIR. In this work, we propose to study an important task, Zero-Shot Composed Image Retrieval (ZS-CIR), whose goal is to build a CIR model without requiring labeled triplets for training. To this end, we propose a novel method, called Pic2Word, that requires only weakly labeled image-caption pairs and unlabeled image datasets to train. Unlike existing supervised CIR models, our model trained on weakly labeled or unlabeled datasets shows strong generalization across diverse ZS-CIR tasks, e.g., attribute editing, object composition, and domain conversion. Our approach outperforms several supervised CIR methods on the common CIR benchmarks, CIRR and Fashion-IQ. Code will be made publicly available at https://github.com/google-research/composed_image_retrieval.
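The core idea in the abstract — mapping a query image to a pseudo word token that is composed with the text specification inside a frozen text encoder's embedding space — can be sketched as follows. This is a toy illustration with random stand-in encoders, not the paper's implementation: the real Pic2Word trains a small mapping network against frozen CLIP encoders, and all names and dimensions below (`EMB`, `image_encoder`, `pic2word`, `compose_query`) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 512  # assumed embedding width (CLIP-like)

def image_encoder(image):
    # Stand-in for a frozen image encoder; returns a toy embedding.
    return rng.standard_normal(EMB)

# Mapping network: image embedding -> pseudo word token embedding.
# A single layer here; the actual method uses a small trained MLP.
W = rng.standard_normal((EMB, EMB)) * 0.02
def pic2word(img_emb):
    return np.tanh(img_emb @ W)

def compose_query(pseudo_token, text_tokens):
    # Toy composition: append the pseudo token to the text-spec tokens
    # and mean-pool. The real method inserts the token into a prompt
    # like "a photo of [*] that ..." and runs the full text encoder.
    return np.vstack([text_tokens, pseudo_token]).mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retrieval: score a toy gallery against the composed query.
img_emb = image_encoder("query.jpg")
pseudo = pic2word(img_emb)
text_tokens = rng.standard_normal((4, EMB))   # toy tokens for the text spec
query = compose_query(pseudo, text_tokens)
gallery = rng.standard_normal((10, EMB))      # toy candidate embeddings
scores = [cosine(query, g) for g in gallery]
best = int(np.argmax(scores))
```

Because only the mapping network is trained (on image-caption pairs, via a contrastive objective), no labeled (query image, text, target image) triplets are needed — which is what makes the setup zero-shot for CIR.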

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Retrieval | Fashion IQ | (Recall@10+Recall@50)/2 | 34.2 | Pic2Word |
| Image Retrieval | ImageNet | Average Recall | 18.85 | Pic2Word |
| Image Retrieval | CIRCO | mAP@10 | 9.51 | Pic2Word |
| Image Retrieval | COCO (Common Objects in Context) | Actions Recall@5 | 24.8 | Pic2Word |
| Image Retrieval | ImageNet-R | (Recall@10+Recall@50)/2 | 16.65 | Pic2Word |
| Image Retrieval | CIRR | R@5 | 51.7 | Pic2Word |
| Composed Image Retrieval (CoIR) | Fashion IQ | (Recall@10+Recall@50)/2 | 34.2 | Pic2Word |
| Composed Image Retrieval (CoIR) | ImageNet | Average Recall | 18.85 | Pic2Word |
| Composed Image Retrieval (CoIR) | CIRCO | mAP@10 | 9.51 | Pic2Word |
| Composed Image Retrieval (CoIR) | COCO (Common Objects in Context) | Actions Recall@5 | 24.8 | Pic2Word |
| Composed Image Retrieval (CoIR) | ImageNet-R | (Recall@10+Recall@50)/2 | 16.65 | Pic2Word |
| Composed Image Retrieval (CoIR) | CIRR | R@5 | 51.7 | Pic2Word |

Related Papers

- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)
- MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
- Non-Adaptive Adversarial Face Generation (2025-07-16)
- Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)