Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Khan
Building on the advances of language models, Large Multimodal Models (LMMs) have driven significant improvements in video understanding. While current video LMMs utilize advanced Large Language Models (LLMs), they rely on either image or video encoders to process visual inputs, each of which has its own limitations. Image encoders excel at capturing rich spatial details from frame sequences but lack explicit temporal context, which can be important in videos with intricate action sequences. On the other hand, video encoders provide temporal context but are often limited by computational constraints that lead to processing only sparse frames at lower resolutions, resulting in reduced contextual and spatial understanding. To this end, we introduce VideoGPT+, which combines the complementary benefits of the image encoder (for detailed spatial understanding) and the video encoder (for global temporal context modeling). The model processes videos by dividing them into smaller segments and applies an adaptive pooling strategy to the features extracted by both the image and video encoders. Our architecture shows improved performance across multiple video benchmarks, including VCGBench, MVBench, and zero-shot question answering. Further, we develop a 112K video-instruction set using a novel semi-automatic annotation pipeline, which improves the model's performance. Additionally, to comprehensively evaluate video LMMs, we present VCGBench-Diverse, covering 18 broad video categories such as lifestyle, sports, science, gaming, and surveillance videos. This benchmark, with 4,354 question-answer pairs, evaluates the generalization of existing LMMs on dense video captioning, spatial and temporal understanding, and complex reasoning, ensuring comprehensive assessment across diverse video types and dynamics. Code: https://github.com/mbzuai-oryx/VideoGPT-plus.
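The segment-wise dual-encoder design described above (per-frame image features for spatial detail, segment-level video features for temporal context, both compressed by adaptive pooling before reaching the LLM) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, token counts, and the simple concatenation-based fusion are assumptions for clarity.

```python
import numpy as np

def adaptive_avg_pool_1d(x, out_len):
    """Adaptive average pooling along the token axis (NumPy sketch).

    Splits the n input tokens into out_len roughly equal bins using the
    standard adaptive-pool boundaries floor(i*n/out) .. ceil((i+1)*n/out)
    and averages each bin, so any input length maps to out_len tokens.
    """
    n = x.shape[0]
    out = np.empty((out_len,) + x.shape[1:], dtype=x.dtype)
    for i in range(out_len):
        start = (i * n) // out_len
        end = -(-((i + 1) * n) // out_len)  # ceil division
        out[i] = x[start:end].mean(axis=0)
    return out

def fuse_segment_features(image_feats, video_feats, pooled_tokens=4):
    """Fuse one segment's image- and video-encoder features (illustrative).

    image_feats: (num_frames, tokens_img, dim) per-frame spatial features
                 from the image encoder.
    video_feats: (tokens_vid, dim) segment-level temporal features from
                 the video encoder.
    Returns a single token sequence: pooled per-frame image tokens
    followed by pooled video tokens, ready to project into the LLM.
    """
    pooled_frames = [adaptive_avg_pool_1d(f, pooled_tokens) for f in image_feats]
    img_tokens = np.concatenate(pooled_frames, axis=0)   # (num_frames * pooled_tokens, dim)
    vid_tokens = adaptive_avg_pool_1d(video_feats, pooled_tokens)
    return np.concatenate([img_tokens, vid_tokens], axis=0)
```

Each video segment is processed independently this way, so the token budget per segment stays fixed regardless of frame count or encoder grid size; the pooled sequences from all segments are then concatenated in temporal order before being passed to the language model.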
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | MSVD-QA | Accuracy | 72.4 | VideoGPT+ |
| Question Answering | MSVD-QA | Confidence Score | 3.6 | VideoGPT+ |
| Question Answering | TGIF-QA | Accuracy | 74.6 | VideoGPT+ |
| Question Answering | TGIF-QA | Confidence Score | 4.1 | VideoGPT+ |
| Question Answering | MSRVTT-QA | Accuracy | 60.6 | VideoGPT+ |
| Question Answering | MSRVTT-QA | Confidence Score | 3.6 | VideoGPT+ |
| Question Answering | ActivityNet-QA | Accuracy | 50.6 | VideoGPT+ |
| Question Answering | ActivityNet-QA | Confidence Score | 3.6 | VideoGPT+ |
| Visual Question Answering (VQA) | VideoInstruct | Consistency | 3.39 | VideoGPT+ |
| Visual Question Answering (VQA) | VideoInstruct | Contextual Understanding | 3.74 | VideoGPT+ |
| Visual Question Answering (VQA) | VideoInstruct | Correctness of Information | 3.27 | VideoGPT+ |
| Visual Question Answering (VQA) | VideoInstruct | Detail Orientation | 3.18 | VideoGPT+ |
| Visual Question Answering (VQA) | VideoInstruct | Temporal Understanding | 2.83 | VideoGPT+ |
| Visual Question Answering (VQA) | VideoInstruct | Mean | 3.28 | VideoGPT+ |
| Video Question Answering | TVBench | Average Accuracy | 41.7 | VideoGPT+ |
| Video Question Answering | MVBench | Avg. | 58.7 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Consistency | 2.59 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Contextual Understanding | 2.81 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Correctness of Information | 2.46 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Dense Captioning | 1.38 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Detail Orientation | 2.73 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Reasoning | 3.63 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Spatial Understanding | 2.8 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Temporal Understanding | 1.78 | VideoGPT+ |
| VCGBench-Diverse | VideoInstruct | Mean | 2.47 | VideoGPT+ |