Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval

Mustafa Shukor, Nicolas Thome, Matthieu Cord

Published: 2022-12-08
Tasks: Cross-Modal Retrieval, Retrieval, Food Recognition
Links: Paper, PDF, Code (official)

Abstract

Vision-Language Pretraining (VLP) and Foundation models have been the go-to recipe for achieving SoTA performance on general benchmarks. However, leveraging these powerful techniques for more complex vision-language tasks, such as cooking applications, with more structured input data, remains little investigated. In this work, we propose to leverage these techniques for structured-text based computational cuisine tasks. Our strategy, dubbed VLPCook, first transforms existing image-text pairs to image and structured-text pairs. This allows us to pretrain our VLPCook model using VLP objectives adapted to the structured data of the resulting datasets, and then finetune it on downstream computational cooking tasks. During finetuning, we also enrich the visual encoder, leveraging pretrained foundation models (e.g. CLIP) to provide local and global textual context. VLPCook outperforms the current SoTA by a significant margin (+3.3 Recall@1 absolute improvement) on the task of Cross-Modal Food Retrieval on the large Recipe1M dataset. We conduct further experiments on VLP to validate its importance, especially on the Recipe1M+ dataset. Finally, we validate the generalization of the approach to other tasks (i.e., Food Recognition) and domains with structured text, such as the Medical domain, on the ROCO dataset. The code is available here: https://github.com/mshukor/VLPCook
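The headline metric above, Recall@1, measures how often the top-ranked item in a cross-modal search is the correct match. As a minimal sketch (not the authors' implementation), the evaluation for both retrieval directions can be computed from L2-normalized image and text embedding matrices; the function name `recall_at_1` is illustrative:

```python
import numpy as np

def recall_at_1(image_embs: np.ndarray, text_embs: np.ndarray) -> tuple[float, float]:
    """Compute image-to-text and text-to-image Recall@1.

    Assumes row i of each (N, d) matrix is the embedding of the i-th
    image/recipe pair, L2-normalized so the dot product is the cosine
    similarity.
    """
    sim = image_embs @ text_embs.T  # (N, N) pairwise similarity matrix
    n = sim.shape[0]
    # Image-to-text: for each image (row), is the best-scoring text the true match?
    i2t = np.mean(sim.argmax(axis=1) == np.arange(n))
    # Text-to-image: for each text (column), is the best-scoring image the true match?
    t2i = np.mean(sim.argmax(axis=0) == np.arange(n))
    return float(i2t), float(t2i)
```

In practice Recipe1M evaluation averages these scores over sampled subsets of the test set (e.g. 1k or 10k candidates), so the absolute numbers depend on the candidate-pool size.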

Results

Task                                   | Dataset   | Metric            | Value | Model
Image Retrieval with Multi-Modal Query | Recipe1M  | Image-to-text R@1 | 74.9  | VLPCook (R1M+)
Image Retrieval with Multi-Modal Query | Recipe1M  | Text-to-image R@1 | 75.6  | VLPCook (R1M+)
Image Retrieval with Multi-Modal Query | Recipe1M  | Image-to-text R@1 | 73.6  | VLPCook
Image Retrieval with Multi-Modal Query | Recipe1M  | Text-to-image R@1 | 74.7  | VLPCook
Image Retrieval with Multi-Modal Query | Recipe1M+ | Image-to-text R@1 | 45.2  | VLPCook
Image Retrieval with Multi-Modal Query | Recipe1M+ | Text-to-image R@1 | 47.3  | VLPCook
Cross-Modal Information Retrieval      | Recipe1M  | Image-to-text R@1 | 74.9  | VLPCook (R1M+)
Cross-Modal Information Retrieval      | Recipe1M  | Text-to-image R@1 | 75.6  | VLPCook (R1M+)
Cross-Modal Information Retrieval      | Recipe1M  | Image-to-text R@1 | 73.6  | VLPCook
Cross-Modal Information Retrieval      | Recipe1M  | Text-to-image R@1 | 74.7  | VLPCook
Cross-Modal Information Retrieval      | Recipe1M+ | Image-to-text R@1 | 45.2  | VLPCook
Cross-Modal Information Retrieval      | Recipe1M+ | Text-to-image R@1 | 47.3  | VLPCook
Cross-Modal Retrieval                  | Recipe1M  | Image-to-text R@1 | 74.9  | VLPCook (R1M+)
Cross-Modal Retrieval                  | Recipe1M  | Text-to-image R@1 | 75.6  | VLPCook (R1M+)
Cross-Modal Retrieval                  | Recipe1M  | Image-to-text R@1 | 73.6  | VLPCook
Cross-Modal Retrieval                  | Recipe1M  | Text-to-image R@1 | 74.7  | VLPCook
Cross-Modal Retrieval                  | Recipe1M+ | Image-to-text R@1 | 45.2  | VLPCook
Cross-Modal Retrieval                  | Recipe1M+ | Text-to-image R@1 | 47.3  | VLPCook

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)