
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video

Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, Guohai Xu, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou

Published 2023-02-01

Tasks: Visual Grounding · Video Retrieval · Image Classification · Action Classification · Zero-Shot Video Retrieval · Video Question Answering · Video Captioning · Visual Question Answering (VQA) · TGIF-Frame · Image Retrieval

Links: Paper · PDF · Code (official)

Abstract

Recent years have witnessed a big convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms that rely solely on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network, sharing common universal modules for modality collaboration and disentangling modality-specific modules to deal with modality entanglement. Different modules can be flexibly selected for different understanding and generation tasks across all modalities, including text, image, and video. Empirical study shows that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal tasks of image-text and video-text understanding and generation, and uni-modal tasks of text-only, image-only, and video-only understanding. Notably, mPLUG-2 achieves new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video captioning tasks with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. Code and models will be released at https://github.com/alibaba/AliceMind.
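
The core architectural idea in the abstract, a shared universal trunk reused across modalities combined with disentangled modality-specific modules composed per task, can be made concrete with a short sketch. This is a minimal illustration of the pattern only; the class names, dimensions, and layer choices below are hypothetical and are not the released AliceMind implementation.

```python
# Minimal sketch of a modularized multi-modal model: one shared
# "universal" block serves all modalities, while thin modality-specific
# encoders stay disentangled. All names and sizes are illustrative.
import torch
import torch.nn as nn

class UniversalLayer(nn.Module):
    """Shared module: the same weights serve text, image, and video."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(x, x, x)[0]  # self-attention + residual
        return x + self.ffn(x)         # feed-forward + residual

class ModalityEncoder(nn.Module):
    """Disentangled module: separate weights per modality."""
    def __init__(self, in_dim: int, dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

class ModularModel(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "text": ModalityEncoder(512, dim),
            "image": ModalityEncoder(1024, dim),
            "video": ModalityEncoder(1024, dim),
        })
        self.universal = UniversalLayer(dim)  # shared across modalities

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        return self.universal(self.encoders[modality](x))

model = ModularModel()
video_feats = torch.randn(2, 16, 1024)  # 2 clips, 16 frame tokens each
out = model(video_feats, "video")       # -> shape (2, 16, 768)
```

The point of the composition is that gradients from every modality update the shared block (collaboration), while modality-specific statistics stay in separate encoders (disentanglement); per-task module selection then amounts to choosing which encoders and heads to wire to the shared trunk.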

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Video Retrieval | MSR-VTT-1kA | text-to-video R@1 | 53.1 | mPLUG-2 |
| Video Retrieval | MSR-VTT-1kA | text-to-video R@5 | 77.6 | mPLUG-2 |
| Video Retrieval | MSR-VTT-1kA | text-to-video R@10 | 84.7 | mPLUG-2 |
| Video Retrieval | DiDeMo | text-to-video R@1 | 56.4 | mPLUG-2 |
| Video Retrieval | DiDeMo | text-to-video R@5 | 79.1 | mPLUG-2 |
| Video Retrieval | DiDeMo | text-to-video R@10 | 85.2 | mPLUG-2 |
| Video Retrieval | LSMDC | text-to-video R@1 | 34.4 | mPLUG-2 |
| Video Retrieval | LSMDC | text-to-video R@5 | 55.2 | mPLUG-2 |
| Video Retrieval | LSMDC | text-to-video R@10 | 65.1 | mPLUG-2 |
| Action Classification | Kinetics-400 | Top-1 Accuracy | 87.1 | mPLUG-2 |
| Action Classification | Kinetics-400 | Top-5 Accuracy | 97.7 | mPLUG-2 |
| Action Classification | Kinetics-600 | Top-1 Accuracy | 89.8 | mPLUG-2 |
| Action Classification | Kinetics-600 | Top-5 Accuracy | 98.3 | mPLUG-2 |
| Action Classification | Kinetics-700 | Top-1 Accuracy | 80.4 | mPLUG-2 |
| Action Classification | Kinetics-700 | Top-5 Accuracy | 94.9 | mPLUG-2 |
| Visual Question Answering (VQA) | MSRVTT-QA | Accuracy | 0.48 | mPLUG-2 |
| Visual Question Answering (VQA) | MSVD-QA | Accuracy | 0.581 | mPLUG-2 |
| Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 81.11 | mPLUG-2 |
| Video Question Answering | MSRVTT-QA | Accuracy | 48 | mPLUG-2 |
| Video Captioning | MSR-VTT | BLEU-4 | 57.8 | mPLUG-2 |
| Video Captioning | MSR-VTT | CIDEr | 80 | mPLUG-2 |
| Video Captioning | MSR-VTT | METEOR | 34.9 | mPLUG-2 |
| Video Captioning | MSR-VTT | ROUGE-L | 70.1 | mPLUG-2 |
| Video Captioning | MSVD | BLEU-4 | 70.5 | mPLUG-2 |
| Video Captioning | MSVD | CIDEr | 165.8 | mPLUG-2 |
| Video Captioning | MSVD | METEOR | 48.4 | mPLUG-2 |
| Video Captioning | MSVD | ROUGE-L | 85.3 | mPLUG-2 |
| Visual Grounding | RefCOCO+ val | Accuracy (%) | 90.33 | mPLUG-2 |
| Visual Grounding | RefCOCO+ testA | Accuracy (%) | 92.8 | mPLUG-2 |
| Visual Grounding | RefCOCO+ testB | Accuracy (%) | 86.05 | mPLUG-2 |
| Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@1 | 47.1 | mPLUG-2 |
| Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@5 | 69.7 | mPLUG-2 |
| Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@10 | 79 | mPLUG-2 |
| Zero-Shot Video Retrieval | DiDeMo | text-to-video R@1 | 45.7 | mPLUG-2 |
| Zero-Shot Video Retrieval | DiDeMo | text-to-video R@5 | 71.1 | mPLUG-2 |
| Zero-Shot Video Retrieval | DiDeMo | text-to-video R@10 | 79.2 | mPLUG-2 |
| Zero-Shot Video Retrieval | LSMDC | text-to-video R@1 | 24.1 | mPLUG-2 |
| Zero-Shot Video Retrieval | LSMDC | text-to-video R@5 | 43.8 | mPLUG-2 |
| Zero-Shot Video Retrieval | LSMDC | text-to-video R@10 | 52 | mPLUG-2 |
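
For reference, the text-to-video R@K numbers above follow the standard retrieval protocol: each text query ranks all candidate videos by similarity, and R@K is the percentage of queries whose ground-truth video lands in the top K. The sketch below assumes one ground-truth video per query (the diagonal of the similarity matrix) and is not the paper's evaluation code.

```python
# Standard recall@K for text-to-video retrieval, given a similarity
# matrix sim where sim[i, j] scores text query i against video j and
# the ground-truth video for query i is video i.
import numpy as np

def recall_at_k(sim: np.ndarray, ks=(1, 5, 10)) -> dict:
    order = np.argsort(-sim, axis=1)  # videos sorted by descending score
    # Position (0-based rank) of the ground-truth video for each query.
    gt_rank = np.argmax(order == np.arange(len(sim))[:, None], axis=1)
    return {f"R@{k}": 100.0 * float(np.mean(gt_rank < k)) for k in ks}

rng = np.random.default_rng(0)
sim = rng.standard_normal((1000, 1000))  # random scores for illustration
print(recall_at_k(sim))  # chance level: R@1 ~ 0.1, R@5 ~ 0.5, R@10 ~ 1.0
```

The Kinetics Top-1/Top-5 accuracies are the classification analogue: the share of clips whose true label is the model's single best prediction, or appears among its five highest-scoring labels. The captioning metrics (BLEU-4, METEOR, ROUGE-L, CIDEr) instead score n-gram overlap between the generated caption and human reference captions.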

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)