
Vision-Language Pre-Training with Triple Contrastive Learning

Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang

2022-02-21 · CVPR 2022
Tasks: Cross-Modal Retrieval, Question Answering, Zero-Shot Cross-Modal Retrieval, Image-Text Retrieval, Representation Learning, Text Retrieval, Cross-Modal Alignment, Contrastive Learning, Retrieval, Visual Question Answering (VQA)
Paper · PDF · Code (official)

Abstract

Vision-language representation learning largely benefits from image-text alignment through contrastive losses (e.g., InfoNCE loss). The success of this alignment strategy is attributed to its capability in maximizing the mutual information (MI) between an image and its matched text. However, simply performing cross-modal alignment (CMA) ignores data potential within each modality, which may result in degraded representations. For instance, although CMA-based models are able to map image-text pairs close together in the embedding space, they fail to ensure that similar inputs from the same modality stay close by. This problem can get even worse when the pre-training data is noisy. In this paper, we propose triple contrastive learning (TCL) for vision-language pre-training by leveraging both cross-modal and intra-modal self-supervision. Besides CMA, TCL introduces an intra-modal contrastive objective to provide complementary benefits in representation learning. To take advantage of localized and structural information from image and text input, TCL further maximizes the average MI between local regions of image/text and their global summary. To the best of our knowledge, ours is the first work that takes into account local structure information for multi-modality representation learning. Experimental evaluations show that our approach is competitive and achieves the new state of the art on various common down-stream vision-language tasks such as image-text retrieval and visual question answering.
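To make the core idea concrete, the snippet below is a minimal sketch (not the official TCL implementation) of combining a cross-modal InfoNCE alignment term with an intra-modal contrastive term. The encoder outputs, temperature, and loss weights are assumptions, and the paper's full objective additionally includes the local-MI term described in the abstract; see the official code for the actual training recipe.

import torch
import torch.nn.functional as F

def info_nce(query, key, temperature=0.07):
    # InfoNCE over a batch: the i-th query should match the i-th key.
    query = F.normalize(query, dim=-1)
    key = F.normalize(key, dim=-1)
    logits = query @ key.t() / temperature                # (B, B) similarity matrix
    targets = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, targets)

def cma_plus_imc_loss(img_emb, txt_emb, img_emb_aug, txt_emb_aug,
                      w_cma=1.0, w_imc=1.0):
    # Cross-modal alignment (CMA): image->text and text->image directions.
    cma = 0.5 * (info_nce(img_emb, txt_emb) + info_nce(txt_emb, img_emb))
    # Intra-modal contrast (IMC): two augmented views of the same modality.
    imc = 0.5 * (info_nce(img_emb, img_emb_aug) + info_nce(txt_emb, txt_emb_aug))
    return w_cma * cma + w_imc * imc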

Results

Task | Dataset | Metric | Value | Model
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 75.6 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 96.7 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 92.8 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 59 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 89.9 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 71.4 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 95.4 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 90.8 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 53.5 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 87.1 | TCL
Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 79 | TCL
Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 75.6 | TCL
Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 96.7 | TCL
Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 92.8 | TCL
Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@1 | 59 | TCL
Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@10 | 89.9 | TCL
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 75.6 | TCL
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 96.7 | TCL
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 92.8 | TCL
Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 59 | TCL
Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 89.9 | TCL
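The R@K values above are standard retrieval recall. As a quick illustration (not the paper's evaluation code), the sketch below shows how Recall@K is typically computed from a similarity matrix, simplified to one ground-truth caption per image; COCO actually pairs each image with five captions.

import torch

def recall_at_k(sim, k):
    # sim: (N, N) image-to-text similarity matrix where text i is assumed
    # to be the ground-truth caption of image i.
    topk = sim.topk(k, dim=1).indices                 # top-k retrieved text indices per image
    targets = torch.arange(sim.size(0)).unsqueeze(1)  # (N, 1) ground-truth index per image
    hits = (topk == targets).any(dim=1).float()       # 1 if the match is in the top k
    return hits.mean().item()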

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Transformer-based Spatial Grounding: A Comprehensive Survey (2025-07-17)