


mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections

Chenliang Li, Haiyang Xu, Junfeng Tian, Wei Wang, Ming Yan, Bin Bi, Jiabo Ye, Hehong Chen, Guohai Xu, Zheng Cao, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou, Luo Si

2022-05-24
Tasks: Question Answering, Visual Grounding, Image-Text Retrieval, Video-Text Retrieval, Text Retrieval, Cross-Modal Alignment, Image Captioning, Retrieval, Visual Question Answering (VQA), Visual Question Answering

Abstract

Large-scale pre-trained foundation models have become an emerging paradigm for building artificial intelligence (AI) systems that can be quickly adapted to a wide range of downstream tasks. This paper presents mPLUG, a new vision-language foundation model for both cross-modal understanding and generation. Most existing pre-trained models suffer from low computational efficiency and information asymmetry caused by the long visual sequence in cross-modal alignment. To address these problems, mPLUG introduces an effective and efficient vision-language architecture with novel cross-modal skip-connections: inter-layer shortcuts that let the visual representation bypass a certain number of fusion layers, avoiding time-consuming full self-attention over the long visual sequence at every layer. mPLUG is pre-trained end-to-end on large-scale image-text pairs with both discriminative and generative objectives. It achieves state-of-the-art results on a wide range of vision-language downstream tasks, such as image captioning, image-text retrieval, visual grounding, and visual question answering. mPLUG also demonstrates strong zero-shot transferability when transferred directly to multiple video-language tasks.
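
The skip-connection idea is concrete enough to sketch from the abstract alone. The following is a minimal, illustrative PyTorch sketch, not the released mPLUG implementation: the class names (AsymmetricCoAttention, ConnectedAttention, SkipConnectedFusionBlock), the pre-norm wiring, and the layer count s are assumptions based on the abstract's description of cheap text-side fusion layers punctuated by occasional full attention over the joint sequence.

```python
import torch
import torch.nn as nn


class AsymmetricCoAttention(nn.Module):
    """Cheap fusion layer: only the text stream is updated. Text attends to
    itself, then cross-attends to the visual tokens, which pass through
    unchanged, so no full self-attention is run over the long visual
    sequence in this layer."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm1, self.norm2, self.norm3 = (
            nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        )

    def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        t = self.norm1(text)
        text = text + self.self_attn(t, t, t)[0]
        t = self.norm2(text)
        text = text + self.cross_attn(t, vision, vision)[0]
        return text + self.ffn(self.norm3(text))


class ConnectedAttention(nn.Module):
    """Expensive fusion layer: full self-attention over the concatenated
    [vision; text] sequence. Re-injecting the original visual tokens here is
    the skip-connection: vision bypasses the cheap layers above."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, vision: torch.Tensor, text: torch.Tensor):
        x = torch.cat([vision, text], dim=1)  # joint sequence
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        x = x + self.ffn(self.norm2(x))
        n_vis = vision.size(1)
        return x[:, :n_vis], x[:, n_vis:]  # split back into two streams


class SkipConnectedFusionBlock(nn.Module):
    """One fusion block: s asymmetric co-attention layers followed by a
    single connected-attention layer, so full attention over the visual
    sequence is paid once per block instead of once per layer."""

    def __init__(self, dim: int, heads: int = 8, s: int = 3):
        super().__init__()
        self.co_attn = nn.ModuleList(
            AsymmetricCoAttention(dim, heads) for _ in range(s)
        )
        self.connected = ConnectedAttention(dim, heads)

    def forward(self, vision: torch.Tensor, text: torch.Tensor):
        for layer in self.co_attn:
            text = layer(text, vision)
        return self.connected(vision, text)


# Shape check: 197 ViT tokens (196 patches + CLS) vs. a short text sequence.
block = SkipConnectedFusionBlock(dim=768, s=3)
vision = torch.randn(2, 197, 768)
text = torch.randn(2, 32, 768)
vision, text = block(vision, text)
print(vision.shape, text.shape)  # [2, 197, 768], [2, 32, 768]
```

Under these assumptions, the quadratic cost of attending over the roughly 200-token visual sequence is paid once per block (in ConnectedAttention) rather than at every fusion layer, which is the efficiency argument the abstract makes.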

Results

Task                            | Dataset         | Metric   | Value | Model
Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 82.43 | mPLUG (Huge)
Visual Question Answering (VQA) | VQA v2 test-std | number   | 69.82 | mPLUG-Huge
Visual Question Answering (VQA) | VQA v2 test-std | other    | 77.02 | mPLUG-Huge
Visual Question Answering (VQA) | VQA v2 test-std | overall  | 83.62 | mPLUG-Huge
Visual Question Answering (VQA) | VQA v2 test-std | yes/no   | 94.83 | mPLUG-Huge
Image Captioning                | COCO Captions   | BLEU-4   | 46.5  | mPLUG
Image Captioning                | COCO Captions   | CIDEr    | 155.1 | mPLUG
Image Captioning                | COCO Captions   | METEOR   | 32    | mPLUG
Image Captioning                | COCO Captions   | SPICE    | 26    | mPLUG

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Transformer-based Spatial Grounding: A Comprehensive Survey (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)