
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation

Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi

Published: 2021-07-16 · NeurIPS 2021

Tasks (12): Cross-Modal Retrieval, Zero-Shot Cross-Modal Retrieval, Image-text Retrieval, Open Vocabulary Attribute Detection, Representation Learning, Image-text matching, Text Retrieval, Visual Reasoning, Image-to-Text Retrieval, Retrieval, Visual Question Answering (VQA), Grounded language learning

Abstract

Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.
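
To make the pre-training objective concrete, the following is a minimal PyTorch sketch of an image-text contrastive loss with momentum distillation, reconstructed from the abstract's description alone. The function names, feature shapes, and hyperparameter values (temperature, distillation weight `alpha`, EMA momentum `m`) are illustrative assumptions rather than the paper's settings, and the full method additionally draws negatives from queues of momentum features, which this sketch omits.

```python
# Hypothetical sketch of ALBEF-style image-text contrastive (ITC) learning
# with momentum distillation; shapes and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def itc_loss_with_momentum_distillation(
    img_feat, txt_feat,        # online-encoder features, (B, D), L2-normalized
    img_feat_m, txt_feat_m,    # momentum-encoder features, (B, D), L2-normalized
    temp=0.07, alpha=0.4,      # temperature and distillation weight (assumed)
):
    # Similarities from the online encoders (gradients flow through these).
    sim_i2t = img_feat @ txt_feat.t() / temp   # (B, B)
    sim_t2i = txt_feat @ img_feat.t() / temp

    with torch.no_grad():
        # Soft pseudo-targets produced by the momentum model, mixed with the
        # one-hot targets: this is the "momentum distillation" self-training.
        sim_i2t_m = img_feat_m @ txt_feat_m.t() / temp
        sim_t2i_m = txt_feat_m @ img_feat_m.t() / temp
        one_hot = torch.eye(img_feat.size(0), device=img_feat.device)
        tgt_i2t = alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * one_hot
        tgt_t2i = alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * one_hot

    # Cross-entropy against the softened targets, in both directions.
    loss_i2t = -(F.log_softmax(sim_i2t, dim=1) * tgt_i2t).sum(dim=1).mean()
    loss_t2i = -(F.log_softmax(sim_t2i, dim=1) * tgt_t2i).sum(dim=1).mean()
    return (loss_i2t + loss_t2i) / 2

def momentum_update(model, model_m, m=0.995):
    # EMA update of the momentum encoder from the online encoder.
    with torch.no_grad():
        for p, p_m in zip(model.parameters(), model_m.parameters()):
            p_m.data.mul_(m).add_(p.data, alpha=1 - m)
```

In training, one would call `momentum_update` on each encoder pair every step before computing the pseudo-targets, so the momentum model stays a slowly moving average of the online model and its targets remain stable under noisy web supervision.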

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 75.84 | ALBEF (14M) |
| Visual Question Answering (VQA) | VQA v2 test-std | overall | 76.04 | ALBEF (14M) |
| Visual Reasoning | NLVR2 Dev | Accuracy | 83.14 | ALBEF (14M) |
| Visual Reasoning | NLVR2 Test | Accuracy | 82.55 | ALBEF (14M) |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 77.6 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 94.3 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 97.2 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 60.7 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 84.3 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 90.5 | ALBEF |
| Image Retrieval with Multi-Modal Query | CommercialAdsDataset | ADD(S) AUC | 82.74 | ALBEF |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@1 | 90.5 | ALBEF |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@5 | 98.8 | ALBEF |
| Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@10 | 99.7 | ALBEF |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@1 | 76.8 | ALBEF |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@5 | 93.7 | ALBEF |
| Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@10 | 96.7 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 68.7 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 89.5 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 94.7 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@1 | 50.1 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@5 | 76.4 | ALBEF |
| Image Retrieval with Multi-Modal Query | COCO 2014 | Text-to-image R@10 | 84.5 | ALBEF |
| Object Detection | OVAD-Box benchmark | mean average precision | 21 | ALBEF |
| 2D Classification | OVAD-Box benchmark | mean average precision | 21 | ALBEF |
| 2D Object Detection | OVAD-Box benchmark | mean average precision | 21 | ALBEF |
| Open Vocabulary Object Detection | OVAD-Box benchmark | mean average precision | 21 | ALBEF |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 77.6 | ALBEF |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 94.3 | ALBEF |
| Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 97.2 | ALBEF |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@1 | 60.7 | ALBEF |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@5 | 84.3 | ALBEF |
| Cross-Modal Information Retrieval | COCO 2014 | Text-to-image R@10 | 90.5 | ALBEF |
| Cross-Modal Information Retrieval | CommercialAdsDataset | ADD(S) AUC | 82.74 | ALBEF |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 77.6 | ALBEF |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 94.3 | ALBEF |
| Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 97.2 | ALBEF |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 60.7 | ALBEF |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@5 | 84.3 | ALBEF |
| Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 90.5 | ALBEF |
| Cross-Modal Retrieval | CommercialAdsDataset | ADD(S) AUC | 82.74 | ALBEF |
| Image-to-Text Retrieval | Flickr30k | Recall@1 | 95.9 | ALBEF |
| Image-to-Text Retrieval | Flickr30k | Recall@5 | 99.8 | ALBEF |
| Image-to-Text Retrieval | Flickr30k | Recall@10 | 100 | ALBEF |
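
For readers checking these numbers, the R@K entries are standard recall-at-K retrieval metrics: the fraction of queries whose ground-truth match appears among the top K ranked candidates. Below is a simplified sketch of that computation; it assumes one ground-truth candidate per query on the matrix diagonal, whereas the usual COCO and Flickr30k protocols pair each image with five captions, so real evaluation code indexes ground truth slightly differently.

```python
# Simplified recall-at-K for cross-modal retrieval; assumes the correct
# candidate for query i sits at column i of the similarity matrix.
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    ranks = np.argsort(-sim, axis=1)  # candidate indices, best-first per query
    hits = (ranks[:, :k] == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return float(hits.mean() * 100)   # reported as a percentage

# e.g. recall_at_k(sim, 1), recall_at_k(sim, 5), recall_at_k(sim, 10)
```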

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)