Jiawei Wang, Liping Yuan, Yuchen Zhang, Haomiao Sun
Generating fine-grained video descriptions is a fundamental challenge in video understanding. In this work, we introduce Tarsier, a family of large-scale video-language models designed to generate high-quality video descriptions. Tarsier employs CLIP-ViT to encode frames separately and then uses an LLM to model temporal relationships. Despite its simple architecture, we demonstrate that with a meticulously designed two-stage training procedure, the Tarsier models exhibit substantially stronger video description capabilities than any existing open-source model, showing a $+51.4\%$ advantage in human side-by-side evaluation over the strongest such model. They are also comparable to state-of-the-art proprietary models, with a $+12.3\%$ advantage over GPT-4V and a $-6.7\%$ disadvantage against Gemini 1.5 Pro. When upgraded to Tarsier2, built upon SigLIP and Qwen2-7B, the model improves further still, achieving a $+4.8\%$ advantage over GPT-4o. Beyond video description, Tarsier proves to be a versatile generalist model, achieving new state-of-the-art results across nine public benchmarks, including multi-choice VQA, open-ended VQA, and zero-shot video captioning. Our second contribution is a new benchmark, DREAM-1K (https://tarsier-vlm.github.io/), for evaluating video description models: a challenging dataset featuring videos from diverse sources and of varying complexity, paired with an automatic method specifically designed to assess the quality of fine-grained video descriptions. We make our models and evaluation benchmark publicly available at https://github.com/bytedance/tarsier.
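The frame-separate encoding scheme described above can be sketched as follows. This is a minimal illustration of the data flow only: the encoder stub, token counts, and embedding size are hypothetical placeholders, not the actual Tarsier configuration or weights.

```python
import random

# Illustrative placeholders -- NOT the real Tarsier/CLIP-ViT dimensions.
EMBED_DIM = 8         # stand-in for the vision encoder's hidden size
TOKENS_PER_FRAME = 4  # stand-in for the number of visual tokens per frame

def encode_frame(frame_id: int) -> list[list[float]]:
    """Stand-in for CLIP-ViT: maps one frame to a grid of visual tokens.
    Each frame is encoded independently -- no cross-frame attention."""
    rng = random.Random(frame_id)
    return [[rng.random() for _ in range(EMBED_DIM)]
            for _ in range(TOKENS_PER_FRAME)]

def build_llm_input(frame_ids: list[int]) -> list[list[float]]:
    """Concatenate the per-frame token sequences in temporal order;
    the LLM alone is then responsible for modeling time."""
    tokens: list[list[float]] = []
    for fid in frame_ids:
        tokens.extend(encode_frame(fid))
    return tokens

# Sample 8 frames from a video and flatten them into one token sequence.
tokens = build_llm_input(list(range(8)))
print(len(tokens), len(tokens[0]))  # 32 tokens (8 frames x 4 each), dim 8
```

The design choice this illustrates is that all temporal reasoning is deferred to the language model: the vision side never attends across frames, which keeps the architecture simple.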
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Question Answering | TVBench | Average Accuracy | 55.5 | Tarsier (34B) |
| Video Question Answering | TVBench | Average Accuracy | 46.9 | Tarsier (7B) |
| Video Question Answering | MVBench | Average Accuracy | 67.6 | Tarsier (34B) |
| Video Question Answering | NExT-QA | Accuracy | 79.2 | Tarsier (34B) |
| Video Question Answering | MSVD-QA | Accuracy | 80.3 | Tarsier (34B) |
| Video Question Answering | MSVD-QA | Confidence Score | 4.2 | Tarsier (34B) |
| Video Question Answering | TGIF-QA | Accuracy | 82.5 | Tarsier (34B) |
| Video Question Answering | TGIF-QA | Confidence Score | 4.4 | Tarsier (34B) |
| Video Question Answering | MSRVTT-QA | Accuracy | 66.4 | Tarsier (34B) |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.7 | Tarsier (34B) |
| Video Question Answering | EgoSchema (fullset) | Accuracy | 61.7 | Tarsier (34B) |
| Video Question Answering | EgoSchema (subset) | Accuracy | 68.6 | Tarsier (34B) |
| Video Question Answering | ActivityNet-QA | Accuracy | 61.6 | Tarsier (34B) |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.7 | Tarsier (34B) |