
BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency

Shuo Yang, Zhaopan Xu, Kai Wang, Yang You, Hongxun Yao, Tongliang Liu, Min Xu

Published: 2023-03-22 · CVPR 2023 · Tasks: Cross-modal retrieval with noisy correspondence, Image-text matching, Text Matching

Links: Paper · PDF · Code (official)

Abstract

As one of the most fundamental techniques in multimodal learning, cross-modal matching aims to project various sensory modalities into a shared feature space. To achieve this, massive numbers of correctly aligned data pairs are required for model training. However, unlike unimodal datasets, multimodal datasets are much harder to collect and annotate precisely. As an alternative, co-occurring data pairs (e.g., image-text pairs) collected from the Internet have been widely exploited in this area. Unfortunately, such cheaply collected datasets unavoidably contain many mismatched data pairs, which have been proven to be harmful to model performance. To address this, we propose a general framework called BiCro (Bidirectional Cross-modal similarity consistency), which can be easily integrated into existing cross-modal matching models to improve their robustness against noisy data. Specifically, BiCro aims to estimate soft labels for noisy data pairs that reflect their true degree of correspondence. The basic idea of BiCro is that, taking image-text matching as an example, similar images should have similar textual descriptions and vice versa. The consistency of these two similarities can then be recast as estimated soft labels to train the matching model. Experiments on three popular cross-modal matching datasets demonstrate that our method significantly improves the noise robustness of various matching models and surpasses the state of the art by a clear margin.
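The bidirectional consistency idea lends itself to a compact sketch. The snippet below is a minimal, hypothetical illustration of soft-label estimation via similarity consistency against a set of clean anchor pairs: the function name `bicro_soft_labels`, the use of cosine similarity, and the clamped similarity-ratio aggregation are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bicro_soft_labels(img_emb, txt_emb, anchor_img_emb, anchor_txt_emb, eps=1e-8):
    """Estimate a soft correspondence label for each (image, text) pair.

    Intuition from the abstract: similar images should have similar texts,
    and vice versa. For each possibly-noisy pair, we compare its similarity
    profile against K clean anchor images with its similarity profile against
    the matching K anchor texts; agreement in both directions yields a soft
    label near 1, disagreement a label near 0. (Assumed form, for illustration.)
    """
    # (N, K) cosine similarities of query images/texts to the K clean anchors.
    sim_i = F.normalize(img_emb, dim=-1) @ F.normalize(anchor_img_emb, dim=-1).T
    sim_t = F.normalize(txt_emb, dim=-1) @ F.normalize(anchor_txt_emb, dim=-1).T

    # Bidirectional similarity-ratio consistency; each direction is capped
    # at 1 so that perfect agreement scores exactly 1.
    ratio_it = sim_t / (sim_i + eps)
    ratio_ti = sim_i / (sim_t + eps)
    consistency = 0.5 * (ratio_it.clamp(max=1.0) + ratio_ti.clamp(max=1.0))

    # Average over anchors; clamp into [0, 1] to use directly as soft labels.
    return consistency.mean(dim=-1).clamp(0.0, 1.0)

# Example: 4 noisy pairs scored against 8 clean anchor pairs (128-d embeddings).
soft = bicro_soft_labels(torch.randn(4, 128), torch.randn(4, 128),
                         torch.randn(8, 128), torch.randn(8, 128))
print(soft.shape)  # torch.Size([4])
```

In practice, such soft labels would replace the hard 0/1 correspondence targets when training a matching model, down-weighting pairs whose bidirectional similarity profiles disagree.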

Results

The same BiCro* results are reported under three task leaderboards: Image Retrieval with Multi-Modal Query, Cross-Modal Information Retrieval, and Cross-Modal Retrieval. R-Sum is the sum of the six recall scores.

| Dataset | I→T R@1 | I→T R@5 | I→T R@10 | T→I R@1 | T→I R@5 | T→I R@10 | R-Sum | Model |
|---|---|---|---|---|---|---|---|---|
| COCO-Noisy | 78.8 | 96.1 | 98.6 | 63.7 | 90.3 | 95.7 | 523.2 | BiCro* |
| Flickr30K-Noisy | 78.1 | 94.4 | 97.5 | 60.4 | 84.4 | 89.9 | 504.7 | BiCro* |
| CC152K | 40.8 | 67.2 | 76.1 | 42.1 | 67.6 | 76.4 | 370.2 | BiCro* |
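As a quick sanity check on the table, R-Sum should equal the sum of the six recall values for each dataset; for example, on COCO-Noisy:

```python
# R-Sum = sum of the six recall scores (I->T and T->I R@1/R@5/R@10).
coco_noisy = [78.8, 96.1, 98.6, 63.7, 90.3, 95.7]  # BiCro* on COCO-Noisy
assert round(sum(coco_noisy), 1) == 523.2  # matches the reported R-Sum
```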

Related Papers

- Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models (2025-06-10)
- TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP (2025-05-24)
- Scaling Computer-Use Grounding via User Interface Decomposition and Synthesis (2025-05-19)
- Descriptive Image-Text Matching with Graded Contextual Similarity (2025-05-15)
- Compositional Image-Text Matching and Retrieval by Grounding Entities (2025-05-04)
- LGD: Leveraging Generative Descriptions for Zero-Shot Referring Image Segmentation (2025-04-20)
- Instruction-augmented Multimodal Alignment for Image-Text and Element Matching (2025-04-16)
- Dependency Structure Augmented Contextual Scoping Framework for Multimodal Aspect-Based Sentiment Analysis (2025-04-15)