Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval

Nicola Messina, Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Fabrizio Falchi, Giuseppe Amato, Rita Cucchiara

2022-07-29 · Cross-Modal Retrieval · Image-text matching · Text Matching · Retrieval
Paper · PDF · Code (official)

Abstract

Image-text matching is gaining a leading role among tasks involving the joint understanding of vision and language. In the literature, this task is often used as a pre-training objective to forge architectures able to jointly deal with images and texts. Nonetheless, it has a direct downstream application: cross-modal retrieval, which consists in finding images related to a given query text, or vice versa. Solving this task is of critical importance in cross-modal search engines. Many recent methods propose effective solutions to the image-text matching problem, mostly using recent large vision-language (VL) Transformer networks. However, these models are often computationally expensive, especially at inference time. This prevents their adoption in large-scale cross-modal retrieval scenarios, where results should be provided to the user almost instantaneously. In this paper, we propose to fill the gap between effectiveness and efficiency with an ALign And DIstill Network (ALADIN). ALADIN first produces highly effective scores by aligning images and texts at a fine-grained level. Then, it learns a shared embedding space, where an efficient kNN search can be performed, by distilling the relevance scores obtained from the fine-grained alignments. We obtain remarkable results on MS-COCO, showing that our method can compete with state-of-the-art VL Transformers while being almost 90 times faster. The code for reproducing our results is available at https://github.com/mesnico/ALADIN.
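The align-then-distill idea in the abstract can be sketched in a few lines: a teacher produces fine-grained alignment scores between image regions and text tokens, a student learns global embeddings whose dot products mimic the teacher's score distribution, and retrieval then reduces to a cheap kNN search in the student's space. The sketch below is an illustrative assumption, not ALADIN's actual implementation: the max-over-regions alignment, the KL-based listwise distillation loss, the `tau` temperature, and all function names are hypothetical choices made for clarity.

```python
import numpy as np

def fine_grained_score(img_regions, txt_tokens):
    """Teacher-style alignment score (assumed scheme, not ALADIN's exact one):
    for each text token, take its best-matching image region, then average."""
    sims = txt_tokens @ img_regions.T          # (n_tokens, n_regions)
    return sims.max(axis=1).mean()

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_scores, student_scores, tau=1.0):
    """Listwise distillation (illustrative): push the student's row-wise
    score distribution (one row per text, over all batch images) toward
    the teacher's fine-grained score distribution via KL divergence."""
    p = softmax(teacher_scores / tau)
    q = softmax(student_scores / tau)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean())

def knn_retrieve(query_emb, gallery_embs, k=5):
    """Efficient retrieval in the distilled shared space: a plain
    dot-product kNN over precomputed global embeddings."""
    scores = gallery_embs @ query_emb          # (n_gallery,)
    return np.argsort(-scores)[:k]
```

At inference time only `knn_retrieve` runs over the gallery, which is what makes the distilled space fast: the expensive token-region alignment is paid once, during training, as a teaching signal.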

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 64.9 | ALADIN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 94.5 | ALADIN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 88.6 | ALADIN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 51.3 | ALADIN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 87.5 | ALADIN |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 79.2 | ALADIN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 64.9 | ALADIN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 94.5 | ALADIN |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 88.6 | ALADIN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@1 | 51.3 | ALADIN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@10 | 87.5 | ALADIN |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@5 | 79.2 | ALADIN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 64.9 | ALADIN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 94.5 | ALADIN |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 88.6 | ALADIN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 51.3 | ALADIN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 87.5 | ALADIN |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@5 | 79.2 | ALADIN |

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)