Peng Jin, Ryuichi Takanobu, Wancai Zhang, Xiaochun Cao, Li Yuan
Large language models have demonstrated impressive universal capabilities across a wide range of open-ended tasks and have extended their utility to encompass multimodal conversations. However, existing methods encounter challenges in effectively handling both image and video understanding, particularly with limited visual tokens. In this work, we introduce Chat-UniVi, a unified vision-language model capable of comprehending and engaging in conversations involving images and videos through a unified visual representation. Specifically, we employ a set of dynamic visual tokens to uniformly represent images and videos. This representation framework empowers the model to efficiently utilize a limited number of visual tokens to simultaneously capture the spatial details necessary for images and the comprehensive temporal relationships required for videos. Moreover, we leverage a multi-scale representation, enabling the model to perceive both high-level semantic concepts and low-level visual details. Notably, Chat-UniVi is trained on a mixed dataset containing both images and videos, allowing direct application to tasks involving both media without requiring any modifications. Extensive experimental results demonstrate that Chat-UniVi consistently outperforms even existing methods exclusively designed for either images or videos. Code is available at https://github.com/PKU-YuanGroup/Chat-UniVi.
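The core idea above — compressing many patch tokens into a small set of dynamic visual tokens, then stacking progressively coarser levels into a multi-scale representation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual method uses a parameter-free clustering scheme over patch features, whereas this sketch stands in a simple k-means-style merge; the function and variable names (`merge_tokens`, `patch_tokens`, `num_clusters`) are hypothetical.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, num_clusters: int, seed: int = 0) -> np.ndarray:
    """Merge N visual tokens (N, D) into num_clusters tokens by clustering
    similar tokens and averaging the features within each cluster.
    Illustrative stand-in for the paper's parameter-free token clustering."""
    rng = np.random.default_rng(seed)
    n, _ = tokens.shape
    # Initialize cluster centers from a random subset of tokens.
    centers = tokens[rng.choice(n, num_clusters, replace=False)].copy()
    for _ in range(10):  # a few Lloyd-style refinement iterations
        # Assign each token to its nearest center.
        dists = ((tokens[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Each merged token is the mean of its cluster members.
        for k in range(num_clusters):
            members = tokens[assign == k]
            if len(members):
                centers[k] = members.mean(0)
    return centers

# Multi-scale representation: concatenate progressively coarser token sets,
# so the LLM sees far fewer tokens than the raw patch grid.
patch_tokens = np.random.default_rng(1).normal(size=(64, 8))  # e.g. an 8x8 patch grid
level1 = merge_tokens(patch_tokens, 16)   # coarse semantic tokens
level2 = merge_tokens(level1, 4)          # even higher-level tokens
multi_scale = np.concatenate([level1, level2], axis=0)  # 20 tokens instead of 64
```

For a video, the same merge would additionally group tokens across frames, so a fixed token budget covers both spatial detail and temporal extent.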
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Question Answering | MSVD-QA | Accuracy | 69.3 | Chat-UniVi-7B |
| Video Question Answering | MSVD-QA | Confidence Score | 3.7 | Chat-UniVi-7B |
| Video Question Answering | TGIF-QA | Accuracy | 69.0 | Chat-UniVi-7B |
| Video Question Answering | TGIF-QA | Confidence Score | 3.8 | Chat-UniVi-7B |
| Video Question Answering | MSRVTT-QA | Accuracy | 55.0 | Chat-UniVi-7B |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.1 | Chat-UniVi-7B |
| Video Question Answering | ActivityNet-QA | Accuracy | 46.4 | Chat-UniVi-13B |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.6 | Chat-UniVi-13B |
| Video Question Answering | ActivityNet-QA | Accuracy | 46.1 | Chat-UniVi |
| Video Question Answering | ActivityNet-QA | Confidence Score | 3.3 | Chat-UniVi |
| Question Answering | ScienceQA | Avg. Accuracy | 90.99 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Grades 1-6 | 91.19 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Grades 7-12 | 90.64 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Image Context | 88.05 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Language Science | 88.91 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Natural Science | 90.41 | Chat-UniVi-13B |
| Question Answering | ScienceQA | No Context | 90.94 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Social Science | 95.05 | Chat-UniVi-13B |
| Question Answering | ScienceQA | Text Context | 89.64 | Chat-UniVi-13B |
| Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 2.81 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 3.46 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 2.89 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 2.91 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 2.39 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VideoInstruct | Mean Score | 2.99 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Consistency | 2.36 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Contextual Understanding | 2.66 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Correctness of Information | 2.29 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Dense Captioning | 1.33 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Detail Orientation | 2.56 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Reasoning | 3.59 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Spatial Understanding | 2.36 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Temporal Understanding | 1.56 | Chat-UniVi |
| Video-based Generative Performance Benchmarking | VCGBench-Diverse | Mean Score | 2.29 | Chat-UniVi |