Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, Yu Qiao
How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model besides the adapters. Secondly, we propose an early fusion strategy that feeds visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm over image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following, and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
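The "unlocking" idea above can be sketched as a simple parameter-selection rule: only norms, biases, scales, and the adapter prompts stay trainable, while the large projection matrices remain frozen. The following is a minimal illustrative sketch, not the authors' code; the parameter names and keyword list are assumptions loosely modeled on a LLaMA block.

```python
# Illustrative sketch (not the LLaMA-Adapter V2 implementation): select the
# small set of parameters that a norm/bias/scale-unlocking scheme would
# leave trainable, given a model's parameter names.
def select_unlocked(param_names):
    """Return the subset of parameter names left trainable."""
    unlocked_keywords = ("norm", "bias", "scale", "adapter")  # assumed keywords
    return [name for name in param_names
            if any(kw in name for kw in unlocked_keywords)]

# Hypothetical parameter names, loosely modeled on one LLaMA transformer block:
names = [
    "layers.0.attention.wq.weight",
    "layers.0.attention.wq.bias",
    "layers.0.attention_norm.weight",
    "layers.0.feed_forward.w1.weight",
    "layers.0.feed_forward.w1.scale",
    "layers.0.adapter_prompt",
]
print(select_unlocked(names))
```

Because only these small tensors (plus the adapters) are updated, the number of trainable parameters stays tiny relative to the frozen 7B backbone, which is how the 14M figure cited above is achievable.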
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | InfiMM-Eval | Abductive | 46.12 | LLaMA-Adapter V2 |
| Visual Question Answering (VQA) | InfiMM-Eval | Analogical | 22.08 | LLaMA-Adapter V2 |
| Visual Question Answering (VQA) | InfiMM-Eval | Deductive | 28.7 | LLaMA-Adapter V2 |
| Visual Question Answering (VQA) | InfiMM-Eval | Overall score | 30.46 | LLaMA-Adapter V2 |
| Video Question Answering | MSVD-QA | Accuracy | 54.9 | LLaMA-Adapter-7B |
| Video Question Answering | MSVD-QA | Confidence Score | 3.1 | LLaMA-Adapter-7B |
| Video Question Answering | MSRVTT-QA | Accuracy | 43.8 | LLaMA-Adapter-7B |
| Video Question Answering | MSRVTT-QA | Confidence Score | 2.7 | LLaMA-Adapter-7B |
| Video Question Answering | ActivityNet-QA | Accuracy | 34.2 | LLaMA-Adapter |
| Video Question Answering | ActivityNet-QA | Confidence Score | 2.7 | LLaMA-Adapter |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 2.15 | LLaMA-Adapter |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 2.30 | LLaMA-Adapter |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 2.03 | LLaMA-Adapter |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 2.32 | LLaMA-Adapter |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 1.98 | LLaMA-Adapter |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 2.16 | LLaMA-Adapter |
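The reported VideoInstruct mean is simply the average of the five per-metric GPT scores in the table; a quick check reproduces the 2.16 value:

```python
# Per-metric GPT scores for LLaMA-Adapter on VideoInstruct, taken from the
# table above.
scores = {
    "Correctness of Information": 2.03,
    "Detail Orientation": 2.32,
    "Contextual Understanding": 2.30,
    "Temporal Understanding": 1.98,
    "Consistency": 2.15,
}
mean = sum(scores.values()) / len(scores)
print(f"{mean:.2f}")  # matches the reported mean of 2.16
```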