Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning

Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, Mohamed Elhoseiny

Published: 2023-10-14

Tasks: Question Answering · Visual Grounding · Image Classification · Referring Expression Generation · Referring Expression Comprehension · Natural Language Visual Grounding · Multi-Task Learning · Large Language Model · Language Modelling · Visual Question Answering

Abstract

Large language models have shown remarkable capabilities as a general interface for various language-related applications. Motivated by this, we aim to build a unified interface for completing many vision-language tasks, including image description, visual question answering, and visual grounding, among others. The challenge is to use a single model to perform diverse vision-language tasks effectively with simple multi-modal instructions. Towards this objective, we introduce MiniGPT-v2, a model that can serve as a unified interface for handling various vision-language tasks. We propose using a unique identifier for each task when training the model. These identifiers enable our model to distinguish each task instruction effortlessly and also improve the model's learning efficiency for each task. After three-stage training, the experimental results show that MiniGPT-v2 achieves strong performance on many visual question-answering and visual grounding benchmarks compared to other vision-language generalist models. Our model and code are available at https://minigpt-v2.github.io/
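The abstract's core mechanism is prepending a per-task identifier token to each multi-modal instruction, so one model can route between tasks like VQA and referring expression comprehension. The sketch below illustrates that idea in Python; the specific identifier strings and the `[INST] ... <Img><ImageHere></Img> ... [/INST]` template are assumptions for illustration, not the exact templates from the official repository.

```python
# Sketch of task-identifier prompting in the MiniGPT-v2 style.
# NOTE: identifier strings and the instruction template below are assumed
# for illustration; see https://minigpt-v2.github.io/ for the real ones.

TASK_IDENTIFIERS = {
    "vqa": "[vqa]",              # visual question answering
    "grounding": "[grounding]",  # grounded image description
    "refer": "[refer]",          # referring expression comprehension
    "identify": "[identify]",    # referring expression generation
}

def build_instruction(task: str, user_text: str) -> str:
    """Prepend the task identifier so a single model can disambiguate tasks."""
    if task not in TASK_IDENTIFIERS:
        raise ValueError(f"unknown task: {task}")
    # <Img><ImageHere></Img> marks where image tokens would be spliced in
    # (assumed placeholder format).
    return (
        f"[INST] <Img><ImageHere></Img> "
        f"{TASK_IDENTIFIERS[task]} {user_text} [/INST]"
    )

print(build_instruction("vqa", "What color is the car?"))
```

Because the identifier is a fixed token at a fixed position, the model learns to condition its output format on it (e.g. free-form text for `[vqa]`, bounding-box coordinates for `[refer]`), which is why the paper reports improved per-task learning efficiency.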

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Visual Question Answering (VQA) | BenchLMM | GPT-3.5 score | 30.1 | MiniGPTv2-7B |
| Natural Language Visual Grounding | ScreenSpot | Accuracy (%) | 5.7 | MiniGPT-v2 |
| Image Classification | ColonINST-v1 (Seen) | Accuracy | 91.49 | MiniGPT-v2 (w/ LoRA, w/o extra data) |
| Image Classification | ColonINST-v1 (Seen) | Accuracy | 90 | MiniGPT-v2 (w/ LoRA, w/ extra data) |
| Image Classification | ColonINST-v1 (Unseen) | Accuracy | 77.93 | MiniGPT-v2 (w/ LoRA, w/o extra data) |
| Image Classification | ColonINST-v1 (Unseen) | Accuracy | 76.82 | MiniGPT-v2 (w/ LoRA, w/ extra data) |
| Referring Expression Generation | ColonINST-v1 (Seen) | Accuracy | 94.69 | MiniGPT-v2 (w/ LoRA, w/o extra data) |
| Referring Expression Generation | ColonINST-v1 (Seen) | Accuracy | 87.65 | MiniGPT-v2 (w/ LoRA, w/ extra data) |
| Referring Expression Generation | ColonINST-v1 (Unseen) | Accuracy | 72.05 | MiniGPT-v2 (w/ LoRA, w/o extra data) |
| Referring Expression Generation | ColonINST-v1 (Unseen) | Accuracy | 70.23 | MiniGPT-v2 (w/ LoRA, w/ extra data) |

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)