Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training

Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Danyang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, ShiZhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, Ji-Rong Wen

2021-03-11 · Image Captioning · Contrastive Learning · Image-to-Text Retrieval · Image Retrieval

Abstract

Multi-modal pre-training models have been intensively explored to bridge vision and language in recent years. However, most of them explicitly model the cross-modal interaction between image-text pairs, assuming that a strong semantic correlation exists between the text and image modalities. Since this strong assumption is often invalid in real-world scenarios, we choose to implicitly model the cross-modal correlation for large-scale multi-modal pre-training, which is the focus of the Chinese project `WenLan' led by our team. Specifically, under the weak correlation assumption over image-text pairs, we propose a two-tower pre-training model called BriVL within the cross-modal contrastive learning framework. Unlike OpenAI CLIP, which adopts a simple contrastive learning method, we devise a more advanced algorithm by adapting the latest method, MoCo, to the cross-modal scenario. By building a large queue-based dictionary, our BriVL can incorporate more negative samples with limited GPU resources. We further construct a large Chinese multi-source image-text dataset called RUC-CAS-WenLan for pre-training our BriVL model. Extensive experiments demonstrate that the pre-trained BriVL model outperforms both UNITER and OpenAI CLIP on various downstream tasks.
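The queue-based contrastive objective described in the abstract (a MoCo-style dictionary that supplies extra negatives for the two-tower model) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, embedding dimensions, queue size, and temperature value are all assumptions for the sketch.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # project embeddings onto the unit sphere so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_modal_infonce(img_emb, txt_emb, queue, temperature=0.07):
    """Image-to-text InfoNCE loss with a MoCo-style negative queue.

    img_emb: (B, D) image-tower outputs; txt_emb: (B, D) paired text-tower
    outputs; queue: (K, D) text embeddings from earlier batches, reused as
    additional negatives (the point of the queue-based dictionary).
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    neg = l2_normalize(queue)
    # positive logit: similarity between each image and its paired text
    l_pos = np.sum(img * txt, axis=1, keepdims=True)      # (B, 1)
    # negative logits: similarity against every queued text embedding
    l_neg = img @ neg.T                                   # (B, K)
    logits = np.concatenate([l_pos, l_neg], axis=1) / temperature
    # the positive sits at index 0, so InfoNCE is cross-entropy toward class 0
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()

def update_queue(queue, new_txt):
    """FIFO update: enqueue the newest text embeddings, dequeue the oldest."""
    return np.concatenate([queue[len(new_txt):], new_txt], axis=0)
```

Because the queue holds embeddings computed in earlier steps, the number of negatives is decoupled from the batch size, which is how the model fits many negative samples into limited GPU memory.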

Results

Task                     Dataset          Metric     Value   Model
Image Captioning         AIC-ICC          BLEU        66.1   CMCL
Image Captioning         AIC-ICC          CIDEr      220.7   CMCL
Image Captioning         AIC-ICC          METEOR      41.1   CMCL
Image Captioning         AIC-ICC          ROUGE-L     71.9   CMCL
Image Retrieval          AIC-ICC          Recall@1    14.4   CMCL
Image Retrieval          AIC-ICC          Recall@5    39.1   CMCL
Image Retrieval          AIC-ICC          Recall@10   39.1   CMCL
Image Retrieval          RUC-CAS-WenLan   Recall@1    36.0   CMCL
Image Retrieval          RUC-CAS-WenLan   Recall@5    55.4   CMCL
Image Retrieval          RUC-CAS-WenLan   Recall@10   62.1   CMCL
Image-to-Text Retrieval  AIC-ICC          Recall@1    20.3   CMCL
Image-to-Text Retrieval  AIC-ICC          Recall@5    37.0   CMCL
Image-to-Text Retrieval  AIC-ICC          Recall@10   45.6   CMCL
Image-to-Text Retrieval  RUC-CAS-WenLan   Recall@1    36.1   CMCL
Image-to-Text Retrieval  RUC-CAS-WenLan   Recall@5    55.5   CMCL
Image-to-Text Retrieval  RUC-CAS-WenLan   Recall@10   62.2   CMCL

Related Papers

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)