Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Semantic Concepts and Order for Image and Sentence Matching

Yan Huang, Qi Wu, Liang Wang

2017-12-06 · CVPR 2018 · Cross-Modal Retrieval

Paper · PDF

Abstract

Image and sentence matching has made great progress recently, but it remains challenging due to the large visual-semantic discrepancy, which mainly arises because the pixel-level image representation usually lacks the high-level semantic information present in its matched sentence. In this work, we propose a semantic-enhanced image and sentence matching model that improves the image representation by learning semantic concepts and then organizing them in a correct semantic order. Given an image, we first use a multi-regional multi-label CNN to predict its semantic concepts, including objects, properties, and actions. Then, considering that different orders of semantic concepts lead to different semantic meanings, we use a context-gated sentence generation scheme for semantic order learning. It simultaneously uses the image's global context, which contains concept relations, as reference and the ground-truth semantic order in the matched sentence as supervision. After obtaining the improved image representation, we learn the sentence representation with a conventional LSTM, and then jointly perform image and sentence matching and sentence generation for model learning. Extensive experiments demonstrate the effectiveness of our learned semantic concepts and order, achieving state-of-the-art results on two public benchmark datasets.
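The "context-gated" idea above can be illustrated as a per-dimension sigmoid gate that mixes a concept representation with a global-context representation. The sketch below is a minimal NumPy illustration under assumptions: the function name `gated_fusion` and the exact gate parameterization (a single linear layer over the concatenated inputs) are illustrative choices, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(concept_feat, context_feat, W, b):
    """Mix a semantic-concept vector with a global-context vector.

    A sigmoid gate decides, per dimension, how much of each source to
    keep, so the output is an elementwise convex combination of the two.
    (Illustrative sketch; the gate's parameterization is an assumption.)
    """
    gate = sigmoid(np.concatenate([concept_feat, context_feat]) @ W + b)
    return gate * concept_feat + (1.0 - gate) * context_feat

rng = np.random.default_rng(0)
d = 8
concept = rng.standard_normal(d)   # stand-in for predicted concept features
context = rng.standard_normal(d)   # stand-in for global image context
W = rng.standard_normal((2 * d, d)) * 0.1
b = np.zeros(d)

fused = gated_fusion(concept, context, W, b)
```

Because the gate lies in (0, 1), each fused dimension is guaranteed to stay between the corresponding concept and context values, which makes the mixing behavior easy to inspect.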

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Retrieval | Flickr30K 1K test | R@1 | 41.1 | SCO |
| Image Retrieval | Flickr30K 1K test | R@5 | 70.5 | SCO |
| Image Retrieval | Flickr30K 1K test | R@10 | 80.1 | SCO |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@1 | 55.5 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@5 | 82 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@10 | 89.3 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@1 | 41.1 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@5 | 70.5 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@10 | 80.1 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 42.8 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 72.3 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 83 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 33.1 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 62.9 | SCO (ResNet) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 75.5 | SCO (ResNet) |
| Cross-Modal Information Retrieval | Flickr30k | Image-to-text R@1 | 55.5 | SCO (ResNet) |
| Cross-Modal Information Retrieval | Flickr30k | Image-to-text R@5 | 82 | SCO (ResNet) |
| Cross-Modal Information Retrieval | Flickr30k | Image-to-text R@10 | 89.3 | SCO (ResNet) |
| Cross-Modal Information Retrieval | Flickr30k | Text-to-image R@1 | 41.1 | SCO (ResNet) |
| Cross-Modal Information Retrieval | Flickr30k | Text-to-image R@5 | 70.5 | SCO (ResNet) |
| Cross-Modal Information Retrieval | Flickr30k | Text-to-image R@10 | 80.1 | SCO (ResNet) |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 42.8 | SCO (ResNet) |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 72.3 | SCO (ResNet) |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 83 | SCO (ResNet) |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@1 | 33.1 | SCO (ResNet) |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@5 | 62.9 | SCO (ResNet) |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@10 | 75.5 | SCO (ResNet) |
| Cross-Modal Retrieval | Flickr30k | Image-to-text R@1 | 55.5 | SCO (ResNet) |
| Cross-Modal Retrieval | Flickr30k | Image-to-text R@5 | 82 | SCO (ResNet) |
| Cross-Modal Retrieval | Flickr30k | Image-to-text R@10 | 89.3 | SCO (ResNet) |
| Cross-Modal Retrieval | Flickr30k | Text-to-image R@1 | 41.1 | SCO (ResNet) |
| Cross-Modal Retrieval | Flickr30k | Text-to-image R@5 | 70.5 | SCO (ResNet) |
| Cross-Modal Retrieval | Flickr30k | Text-to-image R@10 | 80.1 | SCO (ResNet) |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 42.8 | SCO (ResNet) |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 72.3 | SCO (ResNet) |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 83 | SCO (ResNet) |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 33.1 | SCO (ResNet) |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@5 | 62.9 | SCO (ResNet) |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 75.5 | SCO (ResNet) |

Related Papers

- An analysis of vision-language models for fabric retrieval (2025-07-07)
- Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval (2025-06-28)
- Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval (2025-06-26)
- Multimodal Medical Image Binding via Shared Text Embeddings (2025-06-22)
- ContextRefine-CLIP for EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge 2025 (2025-06-12)
- FedNano: Toward Lightweight Federated Tuning for Pretrained Multimodal Large Language Models (2025-06-12)
- SA-Person: Text-Based Person Retrieval with Scene-aware Re-ranking (2025-05-30)
- EmotionRankCLAP: Bridging Natural Language Speaking Styles and Ordinal Speech Emotion via Rank-N-Contrast (2025-05-29)