Ruyang Liu, Haoran Tang, Haibo Liu, Yixiao Ge, Ying Shan, Chen Li, Jiankun Yang
The past year has witnessed significant advances in video-based large language models (video LLMs). However, developing a unified model for both short and long video understanding remains an open challenge. Most existing video LLMs cannot handle hour-long videos, while methods tailored to long videos tend to be ineffective for shorter videos and images. In this paper, we identify the key issue as the redundant content in videos. To address this, we propose a novel pooling strategy that simultaneously achieves token compression and instruction-aware visual feature aggregation. Our model is termed Prompt-guided Pooling LLaVA, or PPLLaVA for short. Specifically, PPLLaVA consists of three core components: CLIP-based visual-prompt alignment, which extracts visual information relevant to the user's instruction; prompt-guided pooling, which compresses the visual sequence to arbitrary scales via convolution-style pooling; and CLIP context extension, designed for the lengthy prompts common in visual dialogue. Moreover, our codebase also integrates the most advanced video Direct Preference Optimization (DPO) and visual interleave training. Extensive experiments validate the performance of our model. With superior throughput and a visual context of only 1024 tokens, PPLLaVA achieves better results on image benchmarks as a video LLM, while achieving state-of-the-art performance across various video benchmarks, excelling in tasks ranging from caption generation to multiple-choice questions, and handling video lengths from seconds to hours. Code is available at https://github.com/farewellthree/PPLLaVA.
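To make the core idea concrete, the two pooling-related components can be sketched as follows. This is a minimal illustration of the mechanism described in the abstract, not the authors' implementation: the relevance temperature, the non-overlapping pooling windows, and the function name are all assumptions for the sketch.

```python
import numpy as np

def prompt_guided_pool(visual, prompt, kt, kh, kw):
    """Sketch of prompt-guided pooling (hypothetical helper, not the paper's code).

    visual: (T, H, W, D) patch features from a CLIP-style vision encoder
    prompt: (D,) text embedding of the user's instruction
    kt, kh, kw: pooling window sizes; assumed to divide T, H, W exactly here.
    Returns (T//kt * H//kh * W//kw, D) compressed visual tokens.
    """
    T, H, W, D = visual.shape
    # 1) Visual-prompt alignment: cosine similarity between each visual token
    #    and the instruction embedding gives a per-token relevance weight.
    v = visual.reshape(-1, D)
    sim = (v @ prompt) / (np.linalg.norm(v, axis=1) * np.linalg.norm(prompt) + 1e-8)
    w = np.exp(sim / 0.1)                      # temperature 0.1 is an assumption
    w = (w / w.sum()).reshape(T, H, W, 1)
    # 2) Convolution-style pooling: a relevance-weighted average over each
    #    non-overlapping 3D window; the window size sets the compression ratio.
    t, h, wd = T // kt, H // kh, W // kw
    num = (visual * w).reshape(t, kt, h, kh, wd, kw, D).sum(axis=(1, 3, 5))
    den = w.reshape(t, kt, h, kh, wd, kw, 1).sum(axis=(1, 3, 5)) + 1e-8
    return (num / den).reshape(-1, D)

# Example: 8 frames of 16x16 patches compressed to 64 tokens.
tokens = prompt_guided_pool(np.random.randn(8, 16, 16, 32),
                            np.random.randn(32), kt=2, kh=4, kw=4)
```

Because the window sizes are free parameters, the same operator can compress a video to any target budget (e.g. the 1024-token visual context mentioned above) while weighting each window's content by its relevance to the instruction.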
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Question Answering | MSVD-QA | Accuracy | 77.1 | PPLLaVA-7B |
| Video Question Answering | MSVD-QA | Confidence Score | 4.0 | PPLLaVA-7B |
| Video Question Answering | MSRVTT-QA | Accuracy | 64.3 | PPLLaVA-7B |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.5 | PPLLaVA-7B |
| Video Question Answering | ActivityNet-QA | Accuracy | 60.7 | PPLLaVA-7B |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.6 | PPLLaVA-7B |
| Video Question Answering | MVBench | Avg. | 59.2 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 3.32 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 3.2 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 3.88 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 3.0 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 3.2 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 3.32 | PPLLaVA-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 3.85 | PPLLaVA-7B-dpo |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 3.56 | PPLLaVA-7B-dpo |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 4.21 | PPLLaVA-7B-dpo |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 3.21 | PPLLaVA-7B-dpo |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 3.81 | PPLLaVA-7B-dpo |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 3.73 | PPLLaVA-7B-dpo |