
ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval

Mengjun Cheng, Yipeng Sun, Longchao Wang, Xiongwei Zhu, Kun Yao, Jie Chen, Guoli Song, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang

2022-03-31 · CVPR 2022 · Tasks: Cross-Modal Retrieval, Contrastive Learning, Retrieval

Abstract

Visual appearance is often considered the most important cue for understanding images in cross-modal retrieval, yet the scene text appearing in images can also provide valuable information about the visual semantics. Most existing cross-modal retrieval approaches ignore scene text information, and naively adding it can degrade performance in scene-text-free scenarios. To address this issue, we propose a full transformer architecture that unifies these cross-modal retrieval scenarios in a single $\textbf{Vi}$sion and $\textbf{S}$cene $\textbf{T}$ext $\textbf{A}$ggregation framework (ViSTA). Specifically, ViSTA uses transformer blocks to directly encode image patches and fuse scene text embeddings, learning an aggregated visual representation for cross-modal retrieval. To tackle the modality-missing problem of scene text, we propose a novel fusion-token-based transformer aggregation approach that exchanges the necessary scene text information only through the fusion token and concentrates on the most important features in each modality. To further strengthen the visual modality, we develop dual contrastive learning losses that embed both image-text pairs and fusion-text pairs into a common cross-modal space. Compared with existing methods, ViSTA can aggregate relevant scene text semantics with visual appearance and thus improves results in both scene-text-free and scene-text-aware scenarios. Experimental results show that ViSTA outperforms other methods by at least $\bf{8.4}\%$ at Recall@1 on the scene-text-aware retrieval task. Compared with state-of-the-art scene-text-free retrieval methods, ViSTA achieves better accuracy on Flickr30K and MSCOCO while running at least three times faster at inference, which validates the effectiveness of the proposed framework.
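The two mechanisms the abstract describes, fusion-token aggregation and the dual contrastive objective, can be sketched in PyTorch. This is a minimal illustration under assumed shapes and hyperparameters (the `VistaSketch` and `info_nce` names, embedding width, depth, and temperature are all hypothetical), not the authors' implementation:

```python
import torch
import torch.nn.functional as F
from torch import nn

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE: row i of a and row i of b are a positive pair."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

class VistaSketch(nn.Module):
    """Toy fusion-token aggregation; module names and sizes are illustrative."""
    def __init__(self, dim=256, heads=4, depth=2):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.vision = nn.TransformerEncoder(layer(), depth)   # image-only branch
        self.fusion = nn.TransformerEncoder(layer(), depth)   # vision + scene text
        self.fusion_token = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, patch_emb, scene_text_emb):
        # patch_emb: (B, P, dim) image patch embeddings
        # scene_text_emb: (B, S, dim) OCR token embeddings (may be near-empty)
        img = self.vision(patch_emb).mean(dim=1)              # image embedding
        tok = self.fusion_token.expand(patch_emb.size(0), -1, -1)
        # Scene text exchanges information with vision only via the fusion token.
        fused = self.fusion(torch.cat([tok, patch_emb, scene_text_emb], dim=1))
        return img, fused[:, 0]                               # image, fusion embeddings

# Dual contrastive losses: image-text pairs and fusion-text pairs
# are pulled into the same cross-modal space.
model = VistaSketch()
patches = torch.randn(8, 49, 256)
ocr = torch.randn(8, 5, 256)
caption = torch.randn(8, 256)          # stand-in for a text-encoder output
img_emb, fus_emb = model(patches, ocr)
loss = info_nce(img_emb, caption) + info_nce(fus_emb, caption)
loss.backward()
```

Routing scene text through a single fusion token keeps the image branch usable on its own when no scene text is present, which is the modality-missing case the paper targets.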

Results

Identical figures are reported under three leaderboard tasks (Image Retrieval with Multi-Modal Query, Cross-Modal Information Retrieval, and Cross-Modal Retrieval); they are consolidated below.

Dataset   | Direction     | R@1  | R@5  | R@10 | Model
Flickr30k | Image-to-text | 89.5 | 98.4 | 99.6 | ViSTA
Flickr30k | Text-to-image | 75.8 | 94.2 | 96.9 | ViSTA
COCO 2014 | Image-to-text | 68.9 | 90.1 | 95.4 | ViSTA
COCO 2014 | Text-to-image | 52.6 | 79.6 | 87.6 | ViSTA
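All values above are Recall@K in percent: the fraction of queries whose ground-truth match appears among the K highest-scoring retrieved items. A minimal sketch of how such a score is computed from a query-gallery similarity matrix (the `recall_at_k` name and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def recall_at_k(sim, k):
    """Recall@K for a (Q, G) similarity matrix where query i's
    ground-truth gallery item sits at index i."""
    topk = np.argsort(-sim, axis=1)[:, :k]                # k best gallery indices
    hits = (topk == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return hits.mean() * 100                              # percent, as in the table

# Example with 1000 random, noisily paired query/gallery embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(1000, 64))
g = q + 0.5 * rng.normal(size=(1000, 64))
q /= np.linalg.norm(q, axis=1, keepdims=True)
g /= np.linalg.norm(g, axis=1, keepdims=True)
print(recall_at_k(q @ g.T, 1), recall_at_k(q @ g.T, 5))
```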

Related Papers

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)