Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Context-I2W: Mapping Images to Context-dependent Words for Accurate Zero-Shot Composed Image Retrieval

Yuanmin Tang, Jing Yu, Keke Gai, Jiamin Zhuang, Gang Xiong, Yue Hu, Qi Wu

2023-09-28 · Attribute · Retrieval · Zero-Shot Composed Image Retrieval (ZS-CIR) · Zero-shot Image Retrieval · Image Retrieval

Paper · PDF · Code (official)

Abstract

Unlike the Composed Image Retrieval task, which requires expensive labels for training task-specific models, Zero-Shot Composed Image Retrieval (ZS-CIR) involves diverse tasks with a broad range of visual content manipulation intents that can relate to domain, scene, object, and attribute. The key challenge for ZS-CIR tasks is to learn a more accurate image representation that attends adaptively to the reference image for various manipulation descriptions. In this paper, we propose a novel context-dependent mapping network, named Context-I2W, for adaptively converting description-relevant image information into a pseudo-word token that is composed with the description for accurate ZS-CIR. Specifically, an Intent View Selector first dynamically learns a rotation rule to map the identical image to a task-specific manipulation view. Then a Visual Target Extractor further captures local information covering the main targets in ZS-CIR tasks under the guidance of multiple learnable queries. The two complementary modules work together to map an image to a context-dependent pseudo-word token without extra supervision. Our model shows strong generalization ability on four ZS-CIR tasks, including domain conversion, object composition, object manipulation, and attribute manipulation. It obtains consistent and significant performance boosts ranging from 1.88% to 3.60% over the best methods and achieves new state-of-the-art results on ZS-CIR. Our code is available at https://github.com/Pter61/context-i2w.
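The abstract's two-module pipeline can be sketched in a few lines: a description-conditioned view selection over the global image feature, followed by learnable queries cross-attending over local patch features to pool a single pseudo-word token. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, the linear "rotation" parameterization, and the mean-pooling step are assumptions; the real model operates on frozen CLIP ViT-L/14 features.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8          # embedding dim (hypothetical; the paper uses CLIP ViT-L/14 features)
N_PATCH = 16   # number of local image tokens (assumed)
N_QUERY = 4    # number of learnable queries (assumed)

def intent_view_selector(image_feat, text_feat, W):
    # Sketch of the "rotation rule": a text-conditioned linear map over the
    # image feature, re-normalized to unit length so the norm is preserved
    # as a rotation would preserve it.
    ctx = np.concatenate([image_feat, text_feat])
    view = W @ ctx
    return view / np.linalg.norm(view)

def visual_target_extractor(patch_feats, queries):
    # Learnable queries cross-attend over local patch features; the
    # attended outputs are pooled into a single pseudo-word token.
    scores = queries @ patch_feats.T / np.sqrt(patch_feats.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    extracted = attn @ patch_feats          # (N_QUERY, D)
    return extracted.mean(axis=0)           # pooled pseudo-word token

# Toy random tensors standing in for CLIP features and learned parameters.
image_feat = rng.standard_normal(D)
text_feat = rng.standard_normal(D)
patch_feats = rng.standard_normal((N_PATCH, D))
queries = rng.standard_normal((N_QUERY, D))
W = rng.standard_normal((D, 2 * D))

view = intent_view_selector(image_feat, text_feat, W)
pseudo_token = visual_target_extractor(patch_feats, queries)
print(view.shape, pseudo_token.shape)  # (8,) (8,)
```

In the full method, the resulting pseudo-word token would be spliced into a text prompt (e.g. "a photo of [*] that ...") and both sides encoded for retrieval, with no task-specific supervision.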

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Retrieval | GeneCIS | A-R@1 | 12.7 | Context-I2W (CLIP L/14) |
| Image Retrieval | Fashion IQ | (Recall@10+Recall@50)/2 | 38.35 | Context-I2W (CLIP L/14) |
| Image Retrieval | ImageNet | Average Recall | 20.25 | Context-I2W |
| Image Retrieval | CIRCO | mAP@10 | 14.62 | Context-I2W |
| Image Retrieval | COCO (Common Objects in Context) | Actions Recall@5 | 28.5 | Context-I2W |
| Image Retrieval | ImageNet-R | (Recall@10+Recall@50)/2 | 20.25 | Context-I2W |
| Image Retrieval | CIRR | R@5 | 55.1 | Context-I2W (CLIP L/14) |
| Composed Image Retrieval (CoIR) | GeneCIS | A-R@1 | 12.7 | Context-I2W (CLIP L/14) |
| Composed Image Retrieval (CoIR) | Fashion IQ | (Recall@10+Recall@50)/2 | 38.35 | Context-I2W (CLIP L/14) |
| Composed Image Retrieval (CoIR) | ImageNet | Average Recall | 20.25 | Context-I2W |
| Composed Image Retrieval (CoIR) | CIRCO | mAP@10 | 14.62 | Context-I2W |
| Composed Image Retrieval (CoIR) | COCO (Common Objects in Context) | Actions Recall@5 | 28.5 | Context-I2W |
| Composed Image Retrieval (CoIR) | ImageNet-R | (Recall@10+Recall@50)/2 | 20.25 | Context-I2W |
| Composed Image Retrieval (CoIR) | CIRR | R@5 | 55.1 | Context-I2W (CLIP L/14) |
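Several rows above report the averaged recall metric (Recall@10 + Recall@50) / 2 used on Fashion IQ and ImageNet-R. A minimal sketch of how that score is computed over a set of queries (function names and the toy ranked lists are illustrative, not from the benchmark code):

```python
import numpy as np

def recall_at_k(ranked_ids, target_id, k):
    # 1 if the ground-truth target appears in the top-k retrieved
    # candidates for this query, else 0.
    return int(target_id in ranked_ids[:k])

def fashioniq_style_score(rankings, targets):
    # (Recall@10 + Recall@50) / 2, averaged over queries, as a percentage.
    r10 = np.mean([recall_at_k(r, t, 10) for r, t in zip(rankings, targets)])
    r50 = np.mean([recall_at_k(r, t, 50) for r, t in zip(rankings, targets)])
    return 100.0 * (r10 + r50) / 2

# Toy example: two queries, each with 100 ranked candidate IDs.
rankings = [list(range(100)), list(range(100))]
targets = [5, 30]   # target 5 is in the top 10; target 30 only in the top 50
print(fashioniq_style_score(rankings, targets))  # 75.0
```

Here Recall@10 averages to 0.5 (one hit of two) and Recall@50 to 1.0, giving (50 + 100) / 2 = 75.0.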

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Non-Adaptive Adversarial Face Generation (2025-07-16)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)