Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Furu Wei
We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network. Specifically, we introduce the Mixture-of-Modality-Experts (MoME) Transformer, where each block contains a pool of modality-specific experts and a shared self-attention layer. Because of the modeling flexibility of MoME, pretrained VLMo can be fine-tuned as a fusion encoder for vision-language classification tasks, or used as a dual encoder for efficient image-text retrieval. Moreover, we propose a stagewise pre-training strategy, which effectively leverages large-scale image-only and text-only data in addition to image-text pairs. Experimental results show that VLMo achieves state-of-the-art results on various vision-language tasks, including VQA, NLVR2, and image-text retrieval. The code and pretrained models are available at https://aka.ms/vlmo.
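The abstract's core idea, a shared self-attention layer followed by a pool of modality-specific feed-forward experts, can be sketched in a few lines. This is a minimal, illustrative numpy version, not the paper's actual architecture: the single-head attention, weight initialization, expert names (`"vision"`, `"language"`, `"vl"`), and dimensions are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoMEBlock:
    """Toy Mixture-of-Modality-Experts block (illustrative sketch only).

    One shared self-attention layer is followed by one feed-forward
    'expert' per modality; the expert is selected by the modality tag
    of the input tokens, as described in the VLMo abstract.
    """
    def __init__(self, d_model, d_ff, modalities=("vision", "language", "vl"), seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d_model)
        # Shared attention projections (used for every modality).
        self.wq = rng.normal(0, s, (d_model, d_model))
        self.wk = rng.normal(0, s, (d_model, d_model))
        self.wv = rng.normal(0, s, (d_model, d_model))
        # One two-layer FFN expert per modality; weights are NOT shared.
        self.experts = {
            m: (rng.normal(0, s, (d_model, d_ff)), rng.normal(0, s, (d_ff, d_model)))
            for m in modalities
        }
        self.d_model = d_model

    def __call__(self, x, modality):
        # Shared single-head self-attention (no masking, for brevity).
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        h = x + softmax(q @ k.T / np.sqrt(self.d_model)) @ v
        # Route through the modality-specific expert (ReLU FFN), residual add.
        w1, w2 = self.experts[modality]
        return h + np.maximum(h @ w1, 0.0) @ w2

block = MoMEBlock(d_model=16, d_ff=32)
tokens = np.random.default_rng(1).normal(size=(5, 16))
out_vision = block(tokens, modality="vision")
out_language = block(tokens, modality="language")
print(out_vision.shape)  # (5, 16)
```

Because only the experts differ, the same block can process image tokens, text tokens, or concatenated image-text tokens, which is what lets the pretrained model serve as either a dual encoder or a fusion encoder.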
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 82.78 | VLMo |
| Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (number) | 67.26 | VLMo |
| Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (other) | 72.87 | VLMo |
| Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (overall) | 81.3 | VLMo |
| Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (yes/no) | 94.68 | VLMo |
| Visual Reasoning | NLVR2 Dev | Accuracy | 85.64 | VLMo |
| Visual Reasoning | NLVR2 Test | Accuracy | 86.86 | VLMo |
| Image Retrieval | PhotoChat | R@1 | 11.5 | VLMo |
| Image Retrieval | PhotoChat | R@5 | 30.0 | VLMo |
| Image Retrieval | PhotoChat | R@10 | 39.4 | VLMo |
| Image Retrieval | PhotoChat | Sum(R@1,5,10) | 83.2 | VLMo |
| Retrieval | Image-Chat | R@1 | 46.8 | VLMo |
| Retrieval | Image-Chat | R@5 | 67.5 | VLMo |
| Retrieval | Image-Chat | Sum(R@1,5) | 114.3 | VLMo |
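The R@k numbers above measure recall-at-k: the fraction of queries whose ground-truth match appears among the top-k retrieved items. A minimal sketch of the metric, assuming a similarity matrix where query `i` matches item `i` on the diagonal (the toy matrix below is made up for illustration, not VLMo data):

```python
import numpy as np

def recall_at_k(similarity, k):
    """Percentage of queries whose true match (item i for query i)
    is ranked within the top-k by similarity score."""
    top_k = np.argsort(-similarity, axis=1)[:, :k]  # indices, best first
    hits = sum(i in top_k[i] for i in range(similarity.shape[0]))
    return 100.0 * hits / similarity.shape[0]

# Toy 3-query example: rows are queries, columns are candidate items.
sim = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.3, 0.8],   # query 1's true match is only ranked 2nd
    [0.1, 0.2, 0.7],
])
r1 = recall_at_k(sim, 1)   # queries 0 and 2 hit at k=1 -> 66.67
r2 = recall_at_k(sim, 2)   # all three hit by k=2 -> 100.0
```

Sum(R@1,5,10) in the table is simply the sum of the individual recall values, a common aggregate score in retrieval benchmarks.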