Yanwei Li, Chengyao Wang, Jiaya Jia
In this work, we present LLaMA-VID, a novel method that tackles the token-generation burden in Vision Language Models (VLMs) for video and image understanding. While current VLMs are proficient at tasks such as image captioning and visual question answering, they face heavy computational costs when processing long videos because of the excessive number of visual tokens. LLaMA-VID addresses this issue by representing each frame with two distinct tokens: a context token and a content token. The context token encodes the overall image context conditioned on the user input, whereas the content token captures the visual cues within each frame. This dual-token strategy significantly reduces the overhead of long videos while preserving critical information. In general, LLaMA-VID enables existing frameworks to support hour-long videos and pushes their upper limit with an extra context token. Experiments show that it surpasses previous methods on most video- and image-based benchmarks. Code is available at https://github.com/dvlab-research/LLaMA-VID.
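The dual-token idea can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes the context token is a text-query-conditioned weighted sum of a frame's patch embeddings, and the content token is a simple mean pool of those embeddings; the function name and shapes are illustrative choices.

```python
import numpy as np

def frame_to_two_tokens(frame_feats: np.ndarray, text_query: np.ndarray) -> np.ndarray:
    """Compress one frame into a (context, content) token pair.

    frame_feats: (N, D) patch embeddings for one video frame.
    text_query:  (D,)   embedding of the user's text query.
    Returns a (2, D) array: [context_token, content_token].
    """
    d = frame_feats.shape[-1]
    # Context token: softmax over query/patch similarities, then a
    # weighted sum of patch features (query-conditioned aggregation).
    scores = frame_feats @ text_query / np.sqrt(d)   # (N,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context_token = weights @ frame_feats            # (D,)
    # Content token: mean-pooled summary of the frame itself.
    content_token = frame_feats.mean(axis=0)         # (D,)
    return np.stack([context_token, content_token])  # (2, D)
```

With two tokens per frame, an hour of video at 1 fps costs roughly 7,200 tokens instead of hundreds of thousands, which is what lets a standard LLM context window hold the whole video.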
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Question Answering | MSVD-QA | Accuracy | 70.0 | LLaMA-VID-13B (2 Token) |
| Video Question Answering | MSVD-QA | Confidence Score | 3.7 | LLaMA-VID-13B (2 Token) |
| Video Question Answering | MSVD-QA | Accuracy | 69.7 | LLaMA-VID-7B (2 Token) |
| Video Question Answering | MSVD-QA | Confidence Score | 3.7 | LLaMA-VID-7B (2 Token) |
| Video Question Answering | MSRVTT-QA | Accuracy | 58.9 | LLaMA-VID-13B (2 Token) |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.3 | LLaMA-VID-13B (2 Token) |
| Video Question Answering | MSRVTT-QA | Accuracy | 57.7 | LLaMA-VID-7B (2 Token) |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.2 | LLaMA-VID-7B (2 Token) |
| Video Question Answering | ActivityNet-QA | Accuracy | 47.5 | LLaMA-VID-13B (2 Token) |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.3 | LLaMA-VID-13B (2 Token) |
| Video Question Answering | ActivityNet-QA | Accuracy | 47.4 | LLaMA-VID-7B (2 Token) |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.3 | LLaMA-VID-7B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 2.63 | LLaMA-VID-13B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 3.60 | LLaMA-VID-13B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 3.07 | LLaMA-VID-13B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 3.05 | LLaMA-VID-13B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 2.58 | LLaMA-VID-13B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 2.99 | LLaMA-VID-13B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 2.51 | LLaMA-VID-7B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 3.53 | LLaMA-VID-7B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 2.96 | LLaMA-VID-7B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 3.00 | LLaMA-VID-7B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 2.46 | LLaMA-VID-7B (2 Token) |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 2.89 | LLaMA-VID-7B (2 Token) |
| Video Question Answering | OVBench | Average | 41.9 | LLaMA-VID-7B |