Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Visual Semantic Reasoning for Image-Text Matching

Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, Yun Fu

Published 2019-09-06 · ICCV 2019
Tasks: Cross-Modal Retrieval · Image-Text Matching · Text Matching · Visual Reasoning · Retrieval · Image Retrieval
Links: Paper · PDF · Code (official)

Abstract

Image-text matching has been a hot research topic bridging the vision and language areas. It remains challenging because the current representation of an image usually lacks the global semantic concepts present in its corresponding text caption. To address this issue, we propose a simple and interpretable reasoning model to generate a visual representation that captures the key objects and semantic concepts of a scene. Specifically, we first build up connections between image regions and perform reasoning with Graph Convolutional Networks to generate features with semantic relationships. Then, we propose to use a gate and memory mechanism to perform global semantic reasoning on these relationship-enhanced features, select the discriminative information, and gradually generate the representation for the whole scene. Experiments validate that our method achieves a new state of the art for image-text matching on the MS-COCO and Flickr30K datasets. It outperforms the current best method by 6.8% relatively for image retrieval and 4.8% relatively for caption retrieval on MS-COCO (Recall@1 using the 1K test set). On Flickr30K, our model improves image retrieval by 12.6% relatively and caption retrieval by 5.8% relatively (Recall@1). Our code is available at https://github.com/KunpengLi1994/VSRN.
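The region-reasoning step described in the abstract (building connections between image regions, then propagating information with a graph convolution) can be sketched as follows. This is a minimal NumPy illustration of a generic graph-convolution update over detected-region features, not the authors' implementation; the random projection matrices and toy dimensions are placeholder assumptions.

```python
import numpy as np

def gcn_layer(regions, W_edge, W_gcn):
    """One graph-convolution step over image-region features:
    build pairwise connections between regions, then propagate
    features along those learned edges.

    regions: (n, d) array of region features.
    W_edge, W_gcn: (d, d) projections (random here, for illustration).
    """
    # Affinity between every pair of regions (fully connected region graph).
    affinity = regions @ W_edge @ regions.T             # (n, n)
    # Row-wise softmax turns affinities into normalized edge weights.
    e = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    adj = e / e.sum(axis=1, keepdims=True)
    # Propagate neighbour information; residual keeps the original feature.
    return regions + (adj @ regions) @ W_gcn            # (n, d)

rng = np.random.default_rng(0)
n, d = 36, 8                        # e.g. 36 detected regions, toy dim 8
regions = rng.standard_normal((n, d))
out = gcn_layer(regions,
                rng.standard_normal((d, d)) * 0.1,
                rng.standard_normal((d, d)) * 0.1)
print(out.shape)   # (36, 8): relationship-enhanced region features
```

In the paper's pipeline, features like `out` would then be fed sequentially into the gate-and-memory (recurrent) module to distill a single global scene representation.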

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Retrieval | Flickr30K 1K test | R@1 | 54.7 | VSRN |
| Image Retrieval | Flickr30K 1K test | R@5 | 81.8 | VSRN |
| Image Retrieval | Flickr30K 1K test | R@10 | 88.2 | VSRN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 53 | VSRN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 81.1 | VSRN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 89.4 | VSRN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 40.5 | VSRN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 70.6 | VSRN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 81.1 | VSRN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 53 | VSRN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 81.1 | VSRN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 89.4 | VSRN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@1 | 40.5 | VSRN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@5 | 70.6 | VSRN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@10 | 81.1 | VSRN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 53 | VSRN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 81.1 | VSRN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 89.4 | VSRN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 40.5 | VSRN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@5 | 70.6 | VSRN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 81.1 | VSRN |
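The Recall@K metric used throughout these results measures the fraction of queries whose ground-truth match appears in the top K retrieved items. A minimal sketch, assuming each query's ground truth shares its row index in the similarity matrix:

```python
import numpy as np

def recall_at_k(sim, k):
    """Recall@K for retrieval: fraction of queries whose ground-truth
    item (assumed at the query's own index) ranks in the top k.
    sim: (n_queries, n_items) similarity matrix."""
    ranks = np.argsort(-sim, axis=1)          # best match first
    topk = ranks[:, :k]                        # top-k item indices per query
    hits = (topk == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return hits.mean()

# Toy check: an identity similarity matrix ranks every ground truth first.
sim = np.eye(5)
print(recall_at_k(sim, 1))   # 1.0
```

Real evaluations on MS-COCO and Flickr30K complicate this slightly because each image has five captions, so "hit" means any of the ground-truth captions appears in the top K.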

Related Papers

- LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)
- Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
- Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)