Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Stacked Cross Attention for Image-Text Matching

Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, Xiaodong He

Published 21 March 2018 · ECCV 2018

Tasks: Cross-Modal Retrieval, Image-Text Matching, Text Matching, Text Retrieval, Sentence Retrieval, Text Similarity, Retrieval, Image Retrieval

Official code: https://github.com/kuanghuei/SCAN

Abstract

In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences makes it possible to capture the fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture a limited number of semantic alignments, which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments, using both image regions and words in a sentence as context, and infer image-text similarity. Our approach achieves state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from an image query, and 18.2% relatively in image retrieval with a text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.
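The core idea in the abstract — attend over image regions for each word, then score the sentence against its attended image vectors — can be sketched roughly as follows. This is a minimal, hypothetical simplification of the paper's text-to-image (t-i) formulation using NumPy; the feature shapes, the `lam` temperature, and the AVG pooling at the end are assumptions, not the authors' exact implementation.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """Normalize vectors along an axis to unit L2 length."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def stacked_cross_attention_score(words, regions, lam=9.0):
    """Sketch of a text-to-image stacked cross attention score.

    words:   (n, d) word features for one sentence
    regions: (k, d) region features for one image
    Returns a scalar image-sentence similarity in [-1, 1].
    """
    w = l2norm(words)
    r = l2norm(regions)
    # Cosine similarity between every word and every region.
    s = w @ r.T                            # (n, k)
    s = np.clip(s, 0, None)               # keep only positive evidence
    s = l2norm(s, axis=0)                 # normalize similarities over words
    # Softmax attention over regions for each word (lam sharpens it).
    a = np.exp(lam * s)
    a = a / a.sum(axis=1, keepdims=True)  # (n, k)
    # Attended image vector for each word, then word-level relevance.
    attended = a @ regions                # (n, d)
    rel = np.sum(l2norm(words) * l2norm(attended), axis=1)
    # AVG pooling over words (the paper also explores LogSumExp pooling).
    return rel.mean()
```

During retrieval, this score would be computed for every candidate image (or sentence) and the candidates ranked by it.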

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Retrieval | Flickr30K 1K test | R@1 | 44 | SCAN i-t |
| Image Retrieval | Flickr30K 1K test | R@10 | 82.6 | SCAN i-t |
| Image Retrieval | Flickr30K 1K test | R@5 | 74.2 | SCAN i-t |
| Image Retrieval | PhotoChat | R@1 | 10.4 | SCAN |
| Image Retrieval | PhotoChat | R@10 | 37.1 | SCAN |
| Image Retrieval | PhotoChat | R@5 | 27 | SCAN |
| Image Retrieval | PhotoChat | Sum(R@1,5,10) | 74.5 | SCAN |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@1 | 67.4 | SCAN |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@10 | 95.8 | SCAN |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@5 | 90.3 | SCAN |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@1 | 48.6 | SCAN |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@10 | 85.2 | SCAN |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@5 | 77.7 | SCAN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 50.4 | SCAN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 90 | SCAN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 82.2 | SCAN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 38.6 | SCAN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 80.4 | SCAN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 69.3 | SCAN |
| Cross-Modal Information Retrieval | Flickr30k | Image-to-text R@1 | 67.4 | SCAN |
| Cross-Modal Information Retrieval | Flickr30k | Image-to-text R@10 | 95.8 | SCAN |
| Cross-Modal Information Retrieval | Flickr30k | Image-to-text R@5 | 90.3 | SCAN |
| Cross-Modal Information Retrieval | Flickr30k | Text-to-image R@1 | 48.6 | SCAN |
| Cross-Modal Information Retrieval | Flickr30k | Text-to-image R@10 | 85.2 | SCAN |
| Cross-Modal Information Retrieval | Flickr30k | Text-to-image R@5 | 77.7 | SCAN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 50.4 | SCAN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 90 | SCAN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 82.2 | SCAN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@1 | 38.6 | SCAN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@10 | 80.4 | SCAN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@5 | 69.3 | SCAN |
| Cross-Modal Retrieval | Flickr30k | Image-to-text R@1 | 67.4 | SCAN |
| Cross-Modal Retrieval | Flickr30k | Image-to-text R@10 | 95.8 | SCAN |
| Cross-Modal Retrieval | Flickr30k | Image-to-text R@5 | 90.3 | SCAN |
| Cross-Modal Retrieval | Flickr30k | Text-to-image R@1 | 48.6 | SCAN |
| Cross-Modal Retrieval | Flickr30k | Text-to-image R@10 | 85.2 | SCAN |
| Cross-Modal Retrieval | Flickr30k | Text-to-image R@5 | 77.7 | SCAN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 50.4 | SCAN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 90 | SCAN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 82.2 | SCAN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 38.6 | SCAN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 80.4 | SCAN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@5 | 69.3 | SCAN |
