Yuan YAO, Qianyu Chen, Ao Zhang, Wei Ji, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun
Vision-language pre-training (VLP) has shown impressive performance on a wide range of cross-modal tasks, where VLP models without reliance on object detectors are becoming the mainstream due to their superior computational efficiency and competitive performance. However, removing object detectors also deprives VLP models of the capability for explicit object modeling, which is essential to various position-sensitive vision-language (VL) tasks, such as referring expression comprehension and visual commonsense reasoning. To address this challenge, we introduce PEVL, which enhances the pre-training and prompt tuning of VLP models with explicit object position modeling. Specifically, PEVL reformulates discretized object positions and language in a unified language modeling framework, which facilitates explicit VL alignment during pre-training and also enables flexible prompt tuning for various downstream tasks. We show that PEVL enables state-of-the-art performance of detector-free VLP models on position-sensitive tasks such as referring expression comprehension and phrase grounding, and also improves performance on position-insensitive tasks with grounded inputs. We make the data and code for this paper publicly available at https://github.com/thunlp/PEVL.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | GQA | Accuracy | 77 | PEVL+ |
| Visual Reasoning | VCR (Q-AR) dev | Accuracy | 57.8 | PEVL |
| Visual Reasoning | VCR (Q-A) test | Accuracy | 76 | PEVL |
| Visual Reasoning | VCR (Q-AR) test | Accuracy | 58.6 | PEVL |
| Visual Reasoning | VCR (QA-R) dev | Accuracy | 76.4 | PEVL |
| Visual Reasoning | VCR (Q-A) dev | Accuracy | 75.1 | PEVL |
| Visual Reasoning | VCR (QA-R) test | Accuracy | 76.7 | PEVL |
| Visual Relationship Detection | Visual Genome | R@100 | 66.3 | PEVL |
| Visual Relationship Detection | Visual Genome | R@50 | 64.4 | PEVL |
| Visual Relationship Detection | Visual Genome | mR@100 | 23.5 | PEVL |
| Visual Relationship Detection | Visual Genome | mR@50 | 21.7 | PEVL |
| Phrase Grounding | Flickr30k Entities Dev | R@1 | 84.1 | PEVL |
| Phrase Grounding | Flickr30k Entities Test | R@1 | 84.4 | PEVL |