Ruyang Liu, Chen Li, Haoran Tang, Yixiao Ge, Ying Shan, Ge Li
Large Language Models (LLMs) have showcased impressive capabilities in text comprehension and generation, prompting research efforts towards video LLMs to facilitate human-AI interaction at the video level. However, how to effectively encode and understand videos in video-based dialogue systems remains unsolved. In this paper, we investigate a straightforward yet unexplored question: can we feed all spatial-temporal tokens into the LLM, thereby delegating the task of video sequence modeling to the LLM? Surprisingly, this simple approach yields significant improvements in video understanding. Building on this, we propose ST-LLM, an effective video-LLM baseline with spatial-temporal sequence modeling inside the LLM. Furthermore, to address the overhead and stability issues introduced by uncompressed video tokens within LLMs, we develop a dynamic masking strategy with tailor-made training objectives. For particularly long videos, we also design a global-local input module to balance efficiency and effectiveness. Consequently, we harness the LLM for proficient spatial-temporal modeling while upholding efficiency and stability. Extensive experimental results attest to the effectiveness of our method. With a more concise model and training pipeline, ST-LLM establishes new state-of-the-art results on VideoChatGPT-Bench and MVBench. Code is available at https://github.com/TencentARC/ST-LLM.
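The abstract describes two ideas: flattening all spatial-temporal video tokens into one sequence for the LLM, and a dynamic masking strategy to curb the overhead of uncompressed video tokens. The paper's exact mechanism is not detailed here; the following is only a minimal sketch of random token masking over a flattened spatial-temporal sequence, where the function names, token representation, and mask ratio are all illustrative assumptions rather than the authors' implementation.

```python
import random

def flatten_video_tokens(frames):
    """Flatten per-frame spatial tokens into one spatial-temporal sequence.

    `frames` is a list of frames, each a list of spatial tokens; the
    result is a single token list that preserves temporal order.
    """
    return [tok for frame in frames for tok in frame]

def dynamic_mask(tokens, mask_ratio=0.5, rng=None):
    """Randomly keep a fraction of video tokens (illustrative only).

    Returns the kept tokens and their original indices, so a training
    objective could still refer to the masked-out positions.
    """
    rng = rng or random.Random(0)
    n_keep = max(1, int(len(tokens) * (1 - mask_ratio)))
    kept_idx = sorted(rng.sample(range(len(tokens)), n_keep))
    return [tokens[i] for i in kept_idx], kept_idx

# Example: 4 frames x 3 spatial tokens each -> 12 tokens, keep half.
frames = [[f"f{t}_p{p}" for p in range(3)] for t in range(4)]
tokens = flatten_video_tokens(frames)
kept, kept_idx = dynamic_mask(tokens, mask_ratio=0.5)
```

In this sketch, masking shortens the sequence the LLM must process at each training step while the kept indices retain the temporal ordering of the original tokens.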
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Question Answering | MSVD-QA | Accuracy | 74.6 | ST-LLM |
| Video Question Answering | MSVD-QA | Confidence Score | 3.9 | ST-LLM |
| Video Question Answering | MSRVTT-QA | Accuracy | 63.2 | ST-LLM |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.4 | ST-LLM |
| Video Question Answering | ActivityNet-QA | Accuracy | 50.9 | ST-LLM |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.3 | ST-LLM |
| Video Question Answering | MVBench | Average Accuracy | 54.9 | ST-LLM |
| Video Question Answering | TVBench | Average Accuracy | 35.7 | ST-LLM |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 3.23 | ST-LLM-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 3.05 | ST-LLM-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 3.74 | ST-LLM-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 2.93 | ST-LLM-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 2.81 | ST-LLM-7B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 3.15 | ST-LLM-7B |